Theses
Permanent URI for this collection: https://uwspace.uwaterloo.ca/handle/10012/6
The theses in UWSpace are publicly accessible unless restricted due to publication or patent pending.
This collection includes a subset of theses submitted by graduates of the University of Waterloo as a partial requirement of a degree program at the Master's or PhD level. It includes all electronically submitted theses. (Electronic submission was optional from 1996 through 2006. Electronic submission became the default submission format in October 2006.)
This collection also includes a subset of UW theses that were scanned through the Theses Canada program. (The subset includes UW PhD theses from 1998 - 2002.)
Browse
Recent Submissions
Item type: Item
Functional Causal Mediation Analysis with Zero-inflated Count Data (University of Waterloo, 2026-04-01). Xu, Henan.

Causal mediation analysis decomposes the effect of an exposure on an outcome into a component operating directly and a component operating through an intermediate variable. In many biomedical and behavioural studies, the mediator evolves over time and is naturally represented as a function, while the outcome is a count with excess zeros. These features create methodological challenges for identification, estimation, and robustness assessment, particularly when mediator processes are sparsely and irregularly observed, and the outcome model is nonlinear due to zero inflation. This thesis develops a functional causal mediation framework for zero-inflated count outcomes, together with estimation procedures and robustness tools tailored to the functional setting.

Chapter 2 lays the conceptual and methodological foundations for functional mediation with zero-inflated counts. Within the potential outcomes framework, it defines the total effect and the natural direct and indirect effects when the mediator is a time-varying process observed sparsely and irregularly, and clarifies the identifying assumptions required for natural-effect decompositions. The chapter develops a simulation-based implementation of the mediation formula under function-on-scalar regression models for the mediator and functional zero-inflated count models for the outcome. The proposed framework is illustrated in a MIMIC-IV application among patients undergoing coronary artery bypass grafting, where the use of sex as a non-manipulable treatment is discussed, and a principal stratification framework is adopted to handle post-treatment selection into the surgical cohort. Postoperative processes of physiological measurements are studied as candidate mechanisms relating sex to subsequent rehospitalisation outcomes within the always-CABG principal stratum.
Chapter 3 addresses computational and inferential limitations of simulation-based mediation implementations by developing a marginal functional mediation approach built on the functional marginalised zero-inflated Poisson model. By modelling the marginal mean of the zero-inflated count outcome directly, the resulting framework yields closed-form expressions for total, natural direct, and natural indirect effects on both incidence-rate-ratio and risk-difference scales. This analytic structure supports fast computation, transparent decomposition of time-local contributions through inner-product representations, and delta-method standard error estimation. Extensive simulation studies across multiple sample sizes and mediator complexities demonstrate that the proposed estimators exhibit good performance in moderate to large samples. The methodology is applied to the Wisconsin Smokers' Health Study 2, evaluating whether time-varying craving during the first two weeks after cessation mediates the effect of varenicline versus nicotine patch on subsequent cigarette use, which is highly concentrated at zero.

Chapter 4 revisits causal interpretation and robustness when natural-effect identification assumptions are vulnerable in realistic observational settings. In particular, it addresses concerns about unmeasured baseline mediator–outcome confounding and exposure-induced mediator–outcome confounding. To assess the robustness of natural effect estimates under potential unmeasured baseline mediator–outcome confounding, we explore both a simulation-based Gaussian copula approach and a closed-form characterisation approach to sensitivity analysis. As natural effects are generally unidentifiable in the presence of post-treatment confounding, the chapter further develops interventional estimands that avoid cross-world potential outcomes and are more directly connected to policy-relevant mediator-distribution interventions.
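Under a log-linear marginal mean model with no exposure–mediator interaction, the closed-form decomposition described above reduces to inner products between the functional coefficient and the mediator mean difference, with effects multiplying on the incidence-rate-ratio scale. A stylized numpy sketch of that algebra (all coefficients and mediator trajectories below are hypothetical stand-ins for fitted estimates, not values from the thesis):

```python
import numpy as np

# Time grid for the functional mediator M(t) over two weeks (hypothetical)
t = np.linspace(0.0, 14.0, 141)
dt = t[1] - t[0]

# Hypothetical fitted quantities (stand-ins for model estimates):
beta_a = -0.30                      # direct log-IRR of the exposure A
beta_M = 0.05 * np.exp(-t / 7.0)    # functional coefficient beta_M(t)
m1 = 5.0 - 0.2 * t                  # E[M(t) | A = 1]
m0 = 5.0 - 0.1 * t                  # E[M(t) | A = 0]

# Inner product <beta_M, m1 - m0> approximated by a Riemann sum
inner = np.sum(beta_M * (m1 - m0)) * dt

nde_irr = np.exp(beta_a)            # natural direct effect (IRR scale)
nie_irr = np.exp(inner)             # natural indirect effect (IRR scale)
te_irr = nde_irr * nie_irr          # effects multiply on the IRR scale
```

The inner-product form also makes the "time-local contributions" transparent: the integrand `beta_M * (m1 - m0)` shows which parts of the mediator trajectory drive the indirect effect.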
These ideas are illustrated through the MIMIC-IV CABG analysis, providing a robustness-oriented perspective on conclusions drawn from functional mediators and zero-inflated rehospitalisation outcomes. Collectively, the thesis contributes a set of causal estimands, modelling tools, and inferential procedures for mediation problems in which mediators evolve over time and outcomes display zero inflation, and it demonstrates their practical value through simulation evidence and applications to large-scale electronic health records and intensive longitudinal trial data.

Item type: Item
Scalable Deep Learning for Individual Tree Species Classification from Cross-Platform LiDAR Point Clouds (University of Waterloo, 2026-04-01). Wang, Lanying.

Accurate individual tree species classification from point cloud data is an essential task with significant implications for forest inventory, biomass estimation, and carbon monitoring. Recent advancements in Light Detection and Ranging (LiDAR) technologies, such as airborne LiDAR, Unmanned Aerial Vehicle (UAV)-based LiDAR, and handheld mobile LiDAR, provide rich data sources for these applications. Additionally, the rapid rise of deep learning techniques has demonstrated considerable potential for enhancing the accuracy and efficiency of data interpretation tasks. However, effectively leveraging deep learning to utilize diverse LiDAR datasets for individual tree segmentation and species classification remains challenging. Moreover, deep learning methods typically require extensive annotated data, posing a critical challenge in forestry applications, where labelling individual trees in point clouds is particularly difficult, unlike the abundant large-scale datasets available in image-based domains.
This issue leads to three primary research questions: Firstly, how can individual tree segmentation be achieved more efficiently and reliably in complex forest environments characterized by crown overlap, occlusion, and varying point cloud densities? Secondly, can deep learning models effectively classify individual tree species using low-density LiDAR point clouds? Lastly, how can we enhance cross-platform generalization and enable efficient adaptation to newly introduced tree species with limited labelled samples?

To address these questions, this thesis presents three main contributions. First, it develops an interactive deep learning pipeline for individual tree segmentation from cross-platform laser-scanning data. By integrating user-guided prompts, such as bounding boxes or point clicks, with point cloud-based instance segmentation, the pipeline produces accurate and flexible tree delineations while reducing manual delineation time, thereby supporting the efficient preparation of tree-level training samples. Second, it proposes the Attribute-Aware Cross-Branch Transformer, tailored for tree species classification from low-density LiDAR data. The model jointly exploits geometric and radiometric attributes and is designed to learn discriminative, species-specific features under sparse and uneven point cloud conditions. Third, it investigates a transfer learning framework to enhance the generalization of tree species classification models across heterogeneous LiDAR platforms and to support rapid fine-tuning for unseen species. By pretraining on multi-platform datasets and adapting to new domains with limited labels, the framework improves scalability and data efficiency.
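As a toy illustration of the kind of tree-level descriptors that a segmented point cloud exposes ahead of classification, the following numpy sketch computes height, crown diameter, and point density for a synthetic tree; all shapes and thresholds here are illustrative, not taken from the thesis:

```python
import numpy as np

# Synthetic point cloud for one segmented tree: columns are (x, y, z) in metres.
# The cone-like shape and the 50%-height crown threshold are toy assumptions.
rng = np.random.default_rng(0)
n = 500
z = rng.uniform(0.0, 12.0, n)
r = (z / 12.0) * 2.0 * rng.uniform(0.0, 1.0, n)   # crown widens with height (toy)
theta = rng.uniform(0.0, 2.0 * np.pi, n)
pts = np.column_stack([r * np.cos(theta), r * np.sin(theta), z])

# Simple per-tree descriptors of the kind used before species classification:
height = pts[:, 2].max() - pts[:, 2].min()          # tree height
crown = pts[pts[:, 2] > 0.5 * pts[:, 2].max()]      # upper half treated as "crown"
crown_diameter = 2.0 * np.linalg.norm(crown[:, :2], axis=1).max()
density = n / max(height, 1e-9)                     # points per metre of height
```

Descriptors like these are also where point-density differences between airborne, UAV, and handheld platforms become visible, which motivates the cross-platform transfer learning question.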
These contributions provide a unified and scalable framework that integrates interactive segmentation, tree species classification, and transfer learning for forestry LiDAR applications, enabling automated, data-efficient forest monitoring in support of operational inventory and carbon-related assessment.

Item type: Item
Sovereignty, Rhetoric, and World Order: Woodrow Wilson’s Self-Determination (University of Waterloo, 2026-03-31). Lauer, Caleb.

This thesis elucidates Woodrow Wilson’s unique rendering of the concept of “self-determination,” examining how Wilson made the concept his own, and how this unique rendering has been obscured by certain conventions and characterizations that predominate in the scholarly literature. I demonstrate that Wilson’s self-determination was less sensational, more limited, and more instrumentalist than is typically acknowledged in the literature—while also being richer in articulation than prevailing interpretations suggest. I situate Wilson’s self-determination within Wilson’s larger persuasive project, a broader rhetorical and political framework which reflected who he was trying to persuade of what, showing that Wilson’s conceptualization of self-determination was inseparable from his efforts to persuade people of his interpretation of the First World War, of the peace settlement that followed, and of the League of Nations as the institutional embodiment of a new world order. This framework was prior to, superordinate to, and much more important to Wilson than was any standalone notion of self-determination. In this way, and contrary to standard accounts, I argue for an image of Wilson’s self-determination that was neither Wilson’s priority, nor summative of his worldview, and neither intrinsically democratic, nor, in its essence, about nationality.
And yet in the way Wilson deployed the concept within his larger persuasive project, I argue for an image of Wilson’s self-determination that remains a key to understanding the basis of his vision of a new world order—an order constructed upon the limitations intrinsic to the concept of sovereignty. In this regard, I show that Wilson’s largely unexamined theory of sovereignty is essential to understanding better the significance of his conceptualization of self-determination—not only did Wilson here employ the phrase “self-determination” much earlier than is recognized in the literature, but this earlier usage also offers a corrective to the often-misunderstood distinction between self-determination and the closely associated notions of “the consent of the governed” and “self-government,” and it illuminates his later views on sovereignty in relation to the League of Nations and his vision of world order.

Item type: Item
Analysis of Neural Networks with Physics Applications (University of Waterloo, 2026-03-30). Mohamed, Ahmed.

This thesis investigates core aspects of machine learning, spanning foundational studies on generalization phenomena in neural networks, novel architectural strategies for enhancing representation learning and classification performance, and high-accuracy predictive and inverse modeling of emerging nanoelectronic devices. Together, these studies highlight the significance of data and model structure, the impact of nonlinearity, and the potential of interpretable, generalizable machine learning methods for scientific and engineering applications. For generalization in neural networks, the thesis focuses on the phenomenon of grokking, a delayed generalization effect where models initially overfit but eventually learn to generalize well after extended training. Through a series of interconnected studies, this work proposes insights and practical tools to diagnose, forecast, and enhance generalization in modern machine learning systems.
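Two diagnostics commonly tracked in this line of work, weight entropy and activation sparsity, can be sketched in a few lines of numpy; the arrays below are synthetic toys standing in for trained models, not the thesis's experiments:

```python
import numpy as np

def weight_entropy(w, bins=32):
    """Shannon entropy (nats) of the empirical distribution of weight magnitudes.

    A memorizing network tends to have diffuse weights (high entropy); a network
    that has found structured, generalizing features is often more concentrated.
    """
    hist, _ = np.histogram(np.abs(w).ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def activation_sparsity(acts, eps=1e-6):
    """Fraction of (near-)zero activations in a layer's output."""
    return float(np.mean(np.abs(acts) < eps))

rng = np.random.default_rng(1)
w_memorizing = rng.normal(0.0, 1.0, (256, 256))      # diffuse weights (toy)
w_generalizing = np.zeros((256, 256))
w_generalizing[::16, ::16] = 3.0                     # few structured weights (toy)

acts = np.maximum(rng.normal(-1.0, 1.0, 10_000), 0.0)  # ReLU output: many exact zeros
```

Tracking such scalar summaries across training steps is one inexpensive way to look for the phase transition between memorization and generalization.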
The first part of the thesis examines grokking in modular arithmetic tasks, revealing how dropout-induced variance, embedding similarity, activation sparsity, and weight entropy evolve across training, and hence introduces diagnostic metrics to capture phase transitions between memorization and generalization. Further analysis shows that nonlinearity, network depth, and symmetry in data collectively modulate grokking behavior, linking model architecture to its capacity for structured generalization. Next, the thesis introduces a Branched Variational Autoencoder (BVAE), a hybrid architecture that integrates generative and discriminative objectives. By shaping latent representations through a supervised branch, the BVAE achieves improved class separability and interpretability on benchmark datasets, illustrating the potential of structured latent shaping for semi-supervised learning. Finally, the research extends to scientific machine learning, demonstrating how neural and ensemble models such as Random Forests can accelerate the modeling and inverse design of Carbon Nanotube Tunnel Field-Effect Transistors (CNT TFETs). By coupling physical insights with machine learning interpretability techniques, this work bridges the gap between theoretical ML and real-world scientific applications.

Item type: Item
Linear Scala is All You Need for Safe Static Memory and Alias Management (University of Waterloo, 2026-03-30). Pashaeehir, Amirhossein.

Rust has become one of the most popular languages for systems programming. This popularity is largely driven by Rust's ability to provide safe static memory management without garbage collection, eliminating GC-induced pauses and runtime overhead that can be difficult to predict and control. In addition, Rust enforces alias and mutability control through its ownership and borrowing discipline, enabling features such as fearless concurrency and stronger compiler optimizations while preserving memory safety.
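Rust's ownership-and-move discipline can be caricatured at runtime in Python. This is purely an illustrative analogy with hypothetical names (the whole point of the static approaches discussed here is that such violations are rejected at compile time, not caught at run time):

```python
class Box:
    """Toy runtime caricature of move semantics: a value with a single owner."""

    def __init__(self, value):
        self._value = value
        self._moved = False

    def move(self):
        """Transfer ownership; the old Box becomes unusable afterwards."""
        if self._moved:
            raise RuntimeError("use after move")
        self._moved = True
        new = Box(self._value)
        self._value = None
        return new

    def get(self):
        if self._moved:
            raise RuntimeError("use after move")
        return self._value

a = Box([1, 2, 3])
b = a.move()        # ownership moves from a to b; using a again is an error
```

In Rust (or in a linear type system) the equivalent of calling `a.get()` after the move simply does not type-check, which is what eliminates the runtime cost and the possibility of missing the bug.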
Scala Native brings Scala to systems-level targets by compiling to LLVM IR. However, it still relies on third-party garbage collectors for memory management and does not provide Rust-style static guarantees for safe memory management, aliasing, and mutability control. This thesis presents imem, a library that brings Rust-inspired ownership and borrow checking to Scala, and Scinear, a minimal compiler plugin that adds linear types to Scala and integrates them with capture checking and polymorphism. imem proves that, given Scala's type system, linearity is the only missing ingredient needed to implement most of Rust's ownership and borrowing discipline as a library rather than a dedicated language feature. imem provides linear Box values and immutable and mutable references, enforces ownership rules, and statically controls aliasing and mutability, following the Stacked Borrows model. In addition, imem offers optional runtime verification to detect potential safety violations when users apply workarounds to the static rules. To demonstrate practicality, the thesis develops a safe linked-list case study and compares imem against Rust, vanilla Scala, and linear Scala. The evaluation shows that imem matches Rust's level of expressiveness, so it can support list operations and iterators alongside statically enforcing ownership rules and controlling mutability.

Item type: Item
Considerations for the Design of a UD-NCF Composite Energy Absorbing Structure for Frontal and Oblique Crush Loading (University of Waterloo, 2026-03-27). Huang, Ningwei.

Growing concerns regarding climate change have prompted national and international regulatory agencies to implement increasingly strict regulations aimed at reducing carbon dioxide (CO₂) emissions. These regulations have driven automotive manufacturers to place greater emphasis on sustainability and improved fuel efficiency in vehicle development.
Owing to their high specific strength and stiffness, and superior energy absorption capability, carbon fiber-reinforced plastic (CFRP) composites are considered promising lightweight materials for vehicle frontal crash structures. Their widespread adoption in the automotive industry was previously limited due to high manufacturing costs and challenges in accurately predicting their response under impact loading. However, CFRP components manufactured via high-pressure resin transfer molding (HP-RTM) with highly reactive resins enable reduced production cycle times and, thus, adoption in automotive structures. Unidirectional non-crimp fabric (UD-NCF) reinforcements offer further advantages, including reduced manufacturing costs, high in-plane mechanical properties, and enhanced design flexibility. To meet safety requirements, vehicle structures must be designed to effectively absorb energy under various impact conditions to protect the occupants from injury. Previous studies have primarily focused on evaluating the impact performance and energy absorption characteristics of CFRP composite components under axial loading. Few studies have investigated the effects of oblique loading on the crush performance of composite structures, and those that have are mainly restricted to closed-profile tubes, which are difficult to manufacture using liquid composite molding technologies such as HP-RTM. To date, the crush performance of UD-NCF composite components under oblique loading has not been examined. Therefore, this thesis aims to design a UD-NCF composite frontal crush component capable of achieving progressive energy absorption under both axial and 30-degree oblique loading conditions. The design is limited to adhesively bonded double channel components as they can be readily fabricated using HP-RTM processes, while the scope of the study is intended to address several considerations for this design concept.
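The central metric in this kind of crush design is specific energy absorption (SEA): the energy absorbed, the integral of crush force over displacement, divided by the crushed mass. A toy numpy sketch with a hypothetical force-displacement trace (the trace, mass, and resulting numbers are illustrative, not measured data from the thesis):

```python
import numpy as np

# Hypothetical quasi-static crush response: force (kN) vs displacement (mm).
x_mm = np.linspace(0.0, 120.0, 241)
f_kn = 30.0 + 5.0 * np.sin(x_mm / 8.0)   # oscillatory progressive crushing (toy)

# E = integral of F dx, evaluated with the trapezoidal rule in SI units.
x_m = x_mm * 1e-3
f_n = f_kn * 1e3
energy_j = float(np.sum(0.5 * (f_n[1:] + f_n[:-1]) * np.diff(x_m)))

mass_kg = 0.85                                 # crushed specimen mass (hypothetical)
sea_kj_per_kg = (energy_j / 1e3) / mass_kg     # specific energy absorption
mean_force_kn = (energy_j / x_m[-1]) / 1e3     # mean crush force over the stroke
```

Stable progressive crushing shows up in such a trace as sustained force oscillations about a mean, whereas premature failure appears as a sharp load drop that truncates the integral.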
Firstly, the energy absorption capability and failure modes of UD-NCF composite single and double hat channel specimens with [0/±45/90]s and [±45/02]s stacking sequences under quasi-static oblique (i.e., 30-degree off-axis) loading were experimentally investigated to provide data for validation of an impact simulation model. For the single hat channel, specimens with a [0/±45/90]s layup achieved 0.78% higher total energy absorption and 11.2% higher specific energy absorption (SEA) than specimens with a [±45/02]s layup. Specimens with both stacking sequences exhibited lamina bending during the initial crushing stage, followed by premature failure. For the adhesively bonded double hat channel, the [±45/02]s specimens yielded 6.4% higher total energy absorption and 9.95% higher SEA than the [0/±45/90]s specimens due to their higher axial stiffness. The double hat channel configuration demonstrated significantly improved crush stability compared with single hat channels throughout the loading process, regardless of stacking sequence. Secondly, computer-aided engineering (CAE) impact simulation models were developed to predict the energy absorption capability of UD-NCF composite channels under quasi-static and dynamic crushing conditions. Simulation models for both single and double hat channel specimens were validated against the performed oblique crushing experiments and existing axial crush test data from the literature. The results showed that the CAE impact simulation model accurately predicted the crush performance for both single and double hat channel specimens under dynamic axial loading, while having reduced accuracy under quasi-static loading conditions. Lastly, the influence of channel cross-sectional geometry and laminate stacking sequence on the energy absorption capability of the UD-NCF composite channels under dynamic oblique loading was investigated using the validated simulation models.
Single and adhesively bonded double channels with five distinct geometries and six stacking sequences were considered in the study. All single channel geometries with a [0/±45/90]s stacking sequence exhibited similar SEA, which was the case for both axial and oblique dynamic loading. Under oblique loading, all double channel geometries with [0/±45/90]s and [0/±45/90/±30]s stacking sequences exhibited premature failure. The hat channel geometry consistently demonstrated stable progressive crushing, whereas the other geometries considered showed greater sensitivity to stacking sequence and loading angle. Across all stacking sequences considered, only channels with a [±45/02]s stacking sequence achieved stable crushing under both axial and oblique loading, while also providing the highest SEA values. The double hat channel with the [±45/02]s stacking sequence was identified as the most promising configuration for subsequent frontal crush structure design. Overall, this represents the first comprehensive assessment of the crush performance of UD-NCF composite components under oblique loading conditions. These findings contribute practical design guidelines for the future development of lightweight UD-NCF frontal crush structures in vehicles.

Item type: Item
From Asymptotic to Finite-Size Security in Decoy-State Quantum Key Distribution (University of Waterloo, 2026-03-24). Kamin, Lars.

Quantum Key Distribution (QKD) promises information-theoretic security, yet bridging the gap between theoretical proofs and practical implementations, specifically those operating with finite resources and imperfect devices against general coherent attacks, remains a critical challenge. This thesis develops a spectrum of efficient security proof techniques within the composable security framework, calculating key rates for both fixed- and variable-length protocols while accounting for realistic imperfections.
We begin by addressing detection setups through an extension of a squashing map, the flag-state squasher, used for reducing the infinite-dimensional Hilbert spaces of optical elements to finite dimensions. This extension accommodates arbitrary passive linear optical setups while allowing for the inclusion of detection inefficiencies and dark counts in the security analysis. Subsequently, we advance the analysis of decoy-state protocols and introduce two major improvements. First, we reformulate the decoy-state analysis to recover no-decoy key rates, tightening the optimization. Second, we derive a unified framework that performs the key rate optimization and decoy analysis in a single step. This enables the bounding of the relevant entropies with arbitrary precision in the finite-size regime and successfully recovers the Devetak-Winter formula in the asymptotic limit. Furthermore, we improve the security analysis for generic QKD protocols against independent and identically distributed (IID) collective attacks. Our refined analysis yields finite-size corrections proportional to detected rather than transmitted signals and, by developing sharper concentration inequalities, achieves significantly improved finite-size scaling. Finally, leveraging the marginal constrained entropy accumulation theorem (MEAT), we establish a flexible numerical Rényi security framework against coherent attacks for both fixed- and variable-length protocols. This approach consistently outperforms existing reference proof techniques, including those based on entropic uncertainty relations, providing significantly higher key rates for both qubit and practically relevant decoy-state protocols. Moreover, we present finite-size key rates for generic QKD protocols accounting for realistic intensity and phase imperfections. 
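As a point of reference for the asymptotic limit mentioned above: the Shor-Preskill rate for BB84, r = 1 - 2h(Q) with h the binary entropy and Q the quantum bit error rate, is a simple special case of the Devetak-Winter bound. A minimal sketch (this is textbook background, not the thesis's decoy-state or finite-size analysis):

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return float(-p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p))

def bb84_asymptotic_rate(qber):
    """Shor-Preskill asymptotic key rate r = 1 - 2*h(Q) for BB84.

    The two entropy terms account for error correction and privacy
    amplification; the rate hits zero near Q = 11%.
    """
    return 1.0 - 2.0 * h2(qber)

rates = {q: bb84_asymptotic_rate(q) for q in (0.01, 0.05, 0.11)}
```

Finite-size analyses of the kind developed in this thesis replace the idealized entropies in such formulas with bounds that hold at finite block length and under realistic device imperfections, which is where the key-rate penalties and the improved scaling arise.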
Overall, this thesis provides the necessary theoretical framework to bridge the gap between idealized models and experimental reality, offering a scalable path toward secure quantum communication under realistic conditions, as demonstrated by the application of these techniques in experimental collaborations.

Item type: Item
[UN]PREDICTABLE SUBURBIA: An Exploration of Rules, Representation, & Rigidity (University of Waterloo, 2026-03-23). Ma, Nhuy Cindy.

Suburbia presents itself as an uninspiring, homogenous landscape, where our personal lot lines define our boundaries of care. Characterized by detached houses with private gardens and fences, the controlled and uniform design of these spaces, which are rooted in historical policy, greatly limits potential for social and spatial complexity. As experts in charge of understanding the rules, guidelines, and best practices that dictate the design of urban zones, we often confront this entrenched reality, built through decades of regulatory frameworks. This thesis anchors itself within urbanism, exploring the boundaries and intersections between rules, guidelines, and suggestions. It tests various design methodologies that work within established frameworks of control to subvert suburban monotony and enable greater agency and complexity. Rather than rejecting urban rules outright, the research examines how both control and agency can coexist to produce varied and unexpected outcomes within a suburban context. Drawing upon the work of Michael Sorkin’s Local Code, Alex Lehnerer’s Grand Urban Rules, Ekim Tan’s Play the City, and Archizoom’s No Stop City, the thesis develops a novel, iterative design methodology combining analytical study with tests of agency and complexity. This method critically examines and reimagines urban rules through design experimentation aimed at uncovering new possibilities for suburban transformation.
This thesis offers both a theoretical critique of suburban spatial and social homogeneity and a practical methodology for designers to engage with and reshape suburban environments. By reframing suburbia as a space of controlled agency, this work encourages architectural and urban innovation within traditionally rigid, mono-programmatic landscapes. Thus, suburbia is positioned not as a fixed condition, but rather a mutable environment capable of supporting complexity and social diversity.

Item type: Item
Measuring the Weak Gravitational Lensing Signal from Cosmic Voids (University of Waterloo, 2026-03-23). Martin, Hunter.

The field of cosmology is currently in an era principally focused on statistical precision. To achieve greater precision, deeper and wider surveys are actively being developed and conducted to gather more and more information about the surrounding cosmos. Alternatively, there are efforts to develop new statistical probes to better utilize currently-existing data to obtain tighter constraints. Cosmic voids, as vast underdense regions, then represent an ideal candidate to complement the statistical information already extracted from the opposite density extremes: massive luminous galaxies and galaxy clusters. Voids have already seen some success in constraining cosmology through probes like the void size function and void-galaxy cross-correlations. This thesis introduces the matter distribution within cosmic voids as measured by weak gravitational lensing as a new probe that is significantly detectable within current and future data sets. The goal of this demonstration is to justify future efforts in extracting the cosmological information from this newfound signal. For data currently available, we make use of the large overlap of the Sloan Digital Sky Survey (SDSS) Baryon Oscillation Spectroscopic Survey (BOSS) and the Ultraviolet Near-Infrared Optical Northern Survey (UNIONS).
We measure void lensing around BOSS voids and find that we can detect the signal at 6.2 sigma significance, the most significant detection from spectroscopically-identified voids to date. We additionally are able to significantly detect differences in the void profile with void size between the larger half and smaller half of the void catalogue at 2.3 sigma. To help perform this measurement, we present and validate a novel method for computing the Gaussian component of the conventional weak lensing covariance, adapted for use with void studies. Comparing the void profile to a measurement of the void-galaxy cross-correlation to test the linearity of the relationship between mass and light, we find good visual agreement between the two, and a galaxy bias factor of 2.45 ± 0.36, consistent with other works. We additionally assist in future developments of the UNIONS data by running quality control tests for the future photometric redshift data sets. These future releases will provide additional data to make these detections stronger. For future data, we use the Flagship simulation of the Euclid survey to simulate the expected data from the Euclid VISible imager (VIS) and Near-Infrared Spectrometer and Photometer (NISP) instruments across an octant of the sky. This octant then roughly corresponds to an equivalent area of the planned second data release of the survey. We extend the methodology and covariance model to a lensing tomography setup by binning voids and sources along the line of sight. We then stack information along source bins. From this, we are able to detect the void lensing profiles with 12 sigma, 11 sigma, 7.1 sigma, and 4.7 sigma significance across the four different void redshift bins. Scaling the most significant result to the expected areas for the first and final data releases, we get 6.9 sigma and 21 sigma respectively.
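The area scaling used in forecasts of this kind follows from signal-to-noise growing as the square root of survey area, since noise averages down over the number of independent void-source pairs. A minimal sketch (the reference and target areas below are placeholders, not the actual survey footprints):

```python
import numpy as np

def scale_significance(sn_ref, area_ref_deg2, area_new_deg2):
    """Rescale a stacked detection significance assuming S/N grows as
    sqrt(area), i.e. shot noise averages down with the number of
    independent void-source pairs the survey footprint contains."""
    return sn_ref * np.sqrt(area_new_deg2 / area_ref_deg2)

# Hypothetical areas (deg^2): a 12-sigma detection over a reference footprint,
# rescaled to a smaller early release and a larger final release.
sn_early = scale_significance(12.0, 5000.0, 1700.0)
sn_final = scale_significance(12.0, 5000.0, 14000.0)
```

This square-root scaling is the simplest forecast; in practice changes in depth, source density, and void-catalogue completeness between releases modify it.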
We additionally find a 4.4 sigma difference between the lensing profiles of the smallest voids and the largest voids.

Item type: Item
Communities on-Track: A Spatial Reprogramming of Regional Railway Stations for Interinstitutional and Civic Exchange (University of Waterloo, 2026-03-23). Yuen, Hoi Man Neli.

As our society grows more mobile, public transport is becoming an increasingly important alternative to private transport. In Ontario, this shift is compounded by the decentralization of post-secondary education, accelerated by the expansion of satellite campuses and changing patterns of study following the COVID-19 pandemic. As a result, post-secondary students are becoming more reliant on public transport than ever before. However, in the North American context, public transit is often still seen as secondary to private transit, resulting in stations which are underutilized, and underperform socially and functionally. This work addresses the site of the regional railway station. Having once been central to the socio-economic development of towns and cities, this role has since diminished as the result of a fixation on network connectivity alone. In response, this thesis leverages the transformational developments in both sectors to propose a network-based strategy, where public transit and post-secondary systems are conceived of and developed in conjunction. By positioning stations as sites of intersection between mobility and knowledge production, the project frames them as spaces for civic exchange, where the rhythms of travel create opportunities for collective encounter. The implementation of design interventions at three stations along GO Transit’s Kitchener Line demonstrates how context-specific programming can reactivate stations as civic anchors.
Together, they offer a distributed model for linking mobility, learning, and community across the regional railway network, repositioning railway infrastructure as an active component of social life rather than a purely functional system.

Item type: Item
Decentralized Traffic Correlation Using Programmable Switches (University of Waterloo, 2026-03-19). Singh, Gurjot.

Attributing network attacks to their sources is challenging as adversaries employ proxy chains, virtual private networks, and anonymity infrastructures to obscure their origins. Traffic correlation techniques mitigate this challenge by linking flows observed at multiple network vantage points using invariant characteristics such as timing and packet volume. However, existing attack attribution systems largely rely on centralized architectures that aggregate flow features at dedicated correlators, introducing computational and communication overheads that hinder scalability in high-speed networks. This thesis discusses RevealNet, a decentralized framework for attack attribution that leverages P4-programmable switches to perform traffic correlation directly within the network fabric. RevealNet distributes feature extraction and correlation across cooperating networks, reducing dependence on centralized processing and minimizing telemetry offloading. Upon detection of a malicious flow, flow features are disseminated to participating switches, which locally correlate them against outgoing traffic using lightweight similarity metrics. To operate within the constraints of programmable data planes, RevealNet employs compact flow feature representations based on traffic aggregation matrices and sketching techniques designed for integer-only computation. The framework further incorporates heuristic optimizations that exploit temporal alignment and traffic-volume similarity to reduce correlation complexity and limit false positives.
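The aggregate-then-correlate step can be sketched with a toy time-binned representation; this is an illustrative stand-in for the traffic aggregation matrices described above, not RevealNet's actual on-switch format (and the similarity metric here uses floating point for clarity, whereas P4 targets restrict arithmetic to integers):

```python
import numpy as np

def aggregate(timestamps, sizes, n_bins, window):
    """Compact flow representation: per-time-bin packet counts and byte volumes.

    Integer outputs mirror the arithmetic available on P4 data planes; the
    binning scheme itself is a hypothetical simplification.
    """
    ts = np.asarray(timestamps, dtype=float)
    bins = np.minimum((ts / window * n_bins).astype(int), n_bins - 1)
    counts = np.bincount(bins, minlength=n_bins)
    volume = np.bincount(bins, weights=np.asarray(sizes, dtype=float),
                         minlength=n_bins).astype(int)
    return np.stack([counts, volume])

def cosine(a, b):
    """Similarity between two aggregated flows."""
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(2)
# A bursty "malicious" flow active in two short intervals, plus an unrelated flow.
ts_in = np.concatenate([rng.uniform(1.3, 2.4, 100), rng.uniform(6.3, 7.4, 100)])
sz = rng.integers(40, 1500, 200)
ingress = aggregate(ts_in, sz, n_bins=8, window=10.0)
egress = aggregate(ts_in + 0.05, sz, n_bins=8, window=10.0)   # same flow, small delay
other = aggregate(rng.uniform(0.0, 10.0, 200), rng.integers(40, 1500, 200), 8, 10.0)
```

Because the same flow retains its timing and volume shape across vantage points, its ingress and egress aggregates stay highly similar, while an unrelated flow with a different temporal profile scores much lower.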
Experimental evaluation conducted over a prototype of our framework using multiple real-world attack datasets demonstrates that RevealNet achieves attack attribution accuracy comparable to state-of-the-art centralized systems while significantly improving scalability. Notably, compact flow feature representations achieve accuracy comparable to complete flow representations, substantially reducing memory requirements without sacrificing attribution performance. Overall, RevealNet's distributed design reduces bandwidth overhead by up to 96% when deployed on a testbed consisting of 20 P4-enabled switches and enables programmable switches to correlate a significantly larger number of flows concurrently, demonstrating that attack attribution can be effectively decentralized within programmable network infrastructures.

Item type: Item, Investigation of Magneto-Optical and Photonic Properties of Plasmonic Tungsten Oxide and Alkali Tungsten Bronze Nanocrystals (University of Waterloo, 2026-03-13) Jaics, Gyorgy J.

Emerging quantum phenomena in multifunctional materials are reshaping the conceptual and technological foundations of modern condensed matter physics, driving the rapid adoption of plasmonic materials within the quantum, optoelectronic, and photonics industries. Plasmonic semiconductor nanocrystals, characterized by collective charge carrier oscillations that are intrinsically linked to their electronic structure, provide a powerful platform for exploring quantum and photonic functionalities beyond conventional noble-metal-based plasmonics. After introducing the fundamental principles of localized surface plasmon resonances (Chapter 1) and experimental methodologies (Chapter 2), in this dissertation I experimentally examined the electronic structure, magneto-optical properties, and photonic applications of oxygen-deficient, doped plasmonic semiconductor metal oxide nanocrystals (NCs), using magnetic circular dichroism (MCD) spectroscopy.
First, I investigated the electronic structure of compositionally and electronically complex semiconductor NCs, focusing on oxygen-deficient tungsten oxide (WO3-x) NCs. The motivation of this study (presented in Chapter 3) was to revisit and elucidate the electronic structure of plasmonic compound semiconductor nanostructures that have recently been reported to exhibit unique and promising plasmonic properties based solely on conventional optical absorption data. Unlike conventional optical absorption spectroscopy, MCD spectroscopy, which utilizes excitation by circularly polarized light in an external magnetic field, enables spectral specificity and sensitive detection of plasmon resonances. Using variable-field and variable-temperature MCD spectroscopy, I demonstrated that the broad optical absorption bands (visible-to-near-infrared region) of colloidal plasmonic WO3-x NCs originate from intraionic W5+ ligand-field transitions at higher energies (visible region) and free-carrier-related plasmonic absorption at lower energies (near-infrared region), which spectrally overlap despite fundamentally different electronic origins. The results of this study demonstrated that caution must be exercised when assigning the absorption spectra of complex semiconductor NCs, particularly those containing transition-metal ions, to LSPR. Consequently, a sizeable portion of the literature on plasmonic semiconductor NCs should be re-examined and its conclusions revisited. MCD spectroscopy is thereby also demonstrated to serve as an effective methodology for reliable assignment and detailed investigation of LSPR in NCs. Importantly, owing to their electronic band structure, plasmonic semiconductor NCs support the coexistence and interaction of plasmonic oscillations with other quasiparticles, enabling intrinsic (interface-free) interactions such as plasmon-exciton, plasmon-spin, plasmon-phonon, and plasmon-magnon coupling.
Owing to the non-resonant nature of plasmonic and interband (excitonic) absorption in doped plasmonic semiconductor NCs, realization and modulation of intrinsic plasmon-exciton coupling are challenging. In Chapter 4, I investigated the impact of NC geometry on intrinsic plasmon-exciton interactions, using colloidal Cs-doped non-stoichiometric tungsten oxide (Cs:WO3-x) hexagonal prisms. With the aid of variable-field and variable-temperature MCD spectroscopy, I observed that the NC aspect ratio, as a controlled geometric parameter, enables modulation of the excitonic Zeeman splitting mechanism in the presence of external magnetic fields. Specifically, while for low-aspect-ratio nanostructures (nanoplatelets) the splitting of excitonic states is dictated by the spin of localized carriers (anomalous Zeeman splitting), high-aspect-ratio nanostructures (nanorods) exhibit free-carrier-induced splitting (normal Zeeman splitting) of the NC excited states. The results of this study demonstrate that manipulation of the aspect ratio of degenerately doped semiconductor NCs can allow for unique control of their excitonic magneto-optical properties, providing promising opportunities for further fundamental investigations and potential applications of this phenomenon in quantum technologies. Owing to their highly tunable free carrier densities, and thus plasmon energies, plasmonic semiconductor NCs enable a plethora of technologically relevant photonic and optoelectronic applications. In this context, we investigated the applicability of plasmonic colloidal WO3-x and Cs:WO3-x NCs for near-infrared sensing in metal-semiconductor-metal (MSM) photodetector devices, as presented in Chapter 5. The NCs, as photoactive components, were drop-cast on the active region of the detector. In the presence of the NCs, significant enhancements of photoresponse (by up to a factor of ∼2.5) were observed.
The results of this work demonstrated the potential of a cost-effective and scalable method exploiting tailored plasmonic semiconductor NCs to improve the performance of NIR optoelectronic devices, such as enhanced speed and sensitivity of receivers in optical fiber communications or increased range and reliability of light detection for autonomous vehicles. Owing to their unique electronic band structures, plasmonic semiconductor nanocrystals can harness visible-NIR light to facilitate chemical reactions with high efficiency. Upon resonant photon absorption, their localized plasmon resonances generate strong electromagnetic near-field enhancement and energetic charge carriers, which enhance light absorption, foster charge carrier separation and injection, and promote surface reaction processes. This combination of optical and electronic effects allows for precise control over photocatalytic activity, making these nanocrystals a highly tunable platform for visible- and NIR-light-driven chemical transformations. In Chapter 6, I investigated the plasmonic photocatalytic activity of post-synthetically surface-modified WO3-x NCs, using Rhodamine 6G (Rh6G) dye as a model compound. In this study, we observed an approximately 3.3-fold improvement in the plasmonic photocatalytic activity of ligand-free WO3-x NCs, attributed to higher surface accessibility for the Rh6G dye. Through post-synthetic annealing in air at elevated temperatures (350-800°C), we modulated the oxygen deficiency (and thus the free carrier density and plasmon resonance) of the NCs. As a result of the high-temperature treatment, we observed sintering and large specific surface areas of the NCs, as evidenced by scanning electron micrographs and BET analysis, respectively. However, in nanostructures with large specific surface areas, adsorption processes are inevitable and can coexist with photocatalytic degradation processes, which, in the literature, is often not accurately accounted for.
In this study, we used a combination of electronic absorption spectroscopy, surface area analysis, and electrospray ionization mass spectrometry, and examined the coupling between adsorption and plasmonic catalytic activity. The results of this work show that the contribution from adsorption processes can be modulated via post-synthetic annealing while retaining coupling with the photocatalysis.

Item type: Item, An Investigation of the Effects of Interfaces on the Fracture Resistance of 3D Printed Biopolymer Nanocomposites (University of Waterloo, 2026-03-13) Patil, Haresh

Biopolymer-based, bone-inspired nanocomposites are potential alternatives to conventional allografts for reconstruction of segmental bone defects if engineered to be mechanically competent and osteoconductive. The high surface-area-to-volume ratio of nanoparticles helps enhance the mechanical properties and cell-material interactions of bone-inspired nanocomposites. Nanocomposites prepared by dispersing appropriate volume fractions of nanohydroxyapatite (nHA) into a resorbable/degradable biopolymer resin matrix can mimic the inorganic and organic phases of bone composition, respectively. Such nanocomposites, mixed with a suitable photoinitiator, can be used as 3D printing feedstock for fabricating patient-specific synthetic grafts. Direct ink writing (DIW) is an effective material extrusion 3D printing method that offers flexibility to fabricate complex parts from diverse materials, and programmable deposition of the extruded feedstock (raster) allows the user to manipulate the mechanical properties of a printed part. Free radical polymerization of the nanocomposite matrix upon exposure to ultraviolet (UV) light of appropriate intensity cures the deposited raster during DIW printing and also bonds the newly deposited raster to a previously cured raster.
Fracture resistance is an important mechanical attribute for bone substitutes in order to avoid catastrophic failure while enduring physiological loading after defect reconstruction. Natural bone achieves remarkable fracture resistance and mechanical properties through its hierarchically organized microstructure. Mimicry of such microstructure is a novel approach in 3D printing to enhance the mechanical properties of printed structures. This thesis reports on an experimental investigation of photocurable, bone-inspired nanocomposite biomaterials towards the goal of achieving robust DIW-printed structures. The aim was to enhance the fracture resistance of structures fabricated using these nanocomposites, and it was pursued by proposing and testing approaches inspired by bone. Nanocomposite rasters were deposited and simultaneously UV cured in concentric layers on the rotating mandrel bed of a custom-designed and custom-built DIW printer. Multilayer nanocomposite microstructures were achieved, partially mimicking the microstructure of lamellar bone. Free radical polymerization of the nanocomposite rasters resulted in detectable interfaces in the printed microstructures because of differences in crosslink density. Each printed microstructure revealed a distinct morphology of interfaces. The contributions of these interfaces and the resulting microstructures to the mechanical properties and fracture resistance of the nanocomposites were further evaluated against other printed microstructure configurations. The printed anisotropic nanocomposite microstructures showed higher fracture resistance than the isotropic cast control, with a marginal reduction in flexural strength and modulus. Fracture testing results indicated that weak interfaces in the printed microstructures dissipated a portion of the mechanical energy and contributed towards enhancing the fracture resistance of the nanocomposite, especially crack stability.
The fracture resistance of these nanocomposites can be tuned by altering the morphology of the interfaces, and therefore the microstructure, using DIW printing. Crosslink density contributes significantly to the mechanical properties of UV-curable resins. In another approach, the crosslink density of nanocomposite matrix compositions was altered by changing the composition and by additional functionalization. Biopolymer functionalization improved the crosslink density of the nanocomposites, which exhibited flexural properties within the flexural property range recommended for bone cement by the ISO 5833 standard. The higher crosslink density of the functionalized biopolymer improved resistance to crack growth initiation but induced brittle fracture behaviour. Conversely, the addition of a functional oligomer (tri-glycerol diacrylate, TGDA) to the non-functionalized biopolymer matrix functioned as a plasticizer in the crosslinked biopolymer network and enhanced the crack growth resistance of the 3D-printed nanocomposite threefold. The added oligomer also helped enhance the shape holding and the morphology of interfaces in both functionalized and non-functionalized biopolymer nanocomposites. Alteration of the crosslink density of the nanocomposite matrix significantly influenced the mechanical properties of the interfaces and the fracture resistance of DIW-printed structures. Finally, in an effort to further enhance fracture resistance, microstructures were printed using coextrusion of functionalized and non-functionalized biopolymer nanocomposites, principally to organize discrete mechanical phases in the microstructures in addition to the preexisting interfaces. The combination of functionalized and non-functionalized biopolymer nanocomposites with novel coextrusion printing significantly improved the fracture resistance of the brittle functionalized biopolymer nanocomposites, from single-point fracture toughness behaviour to rising resistance-curve behaviour.
High-magnification images of the fractured surfaces indicated that plastic deformation of the softer nanocomposite phases in the coextruded microstructures dissipated mechanical energy and enhanced the fracture resistance. However, interfaces were not detected at the core-shell intersection wall within the coextruded raster. The custom-built mandrel-bed DIW printer (SkelePrint), along with its tailored modifications, demonstrates strong potential for fabricating complex bioinspired concentric-layer structures with functionally graded properties. The findings from this thesis provide key insights into the role of interfacial bonding in DIW-printed structures and its influence on the mechanical performance of printed structures. These findings pave a path for designing robust, bone-mimicking nanocomposite grafts with tunable mechanical properties, advancing their applicability in bone tissue engineering.

Item type: Item, Characterizing Tele-optometry Users: Demographics, Refractive Error Profiles, and Visit Patterns Across a Multi-Site Private Practice (University of Waterloo, 2026-03-13) Rauniyar, Nutan

Introduction
Tele-optometry has emerged as a valuable model for delivering eye care remotely, enabling patients to receive vision assessments and consultations. While its use has expanded rapidly, especially during the COVID-19 pandemic, there remains a limited understanding of who accesses these services, the types of refractive error diagnosed remotely, and patient visit patterns over the years.

Purpose
The purpose of the study is to describe the population demographics, the characteristics of the refractive error conditions, and the visit patterns of individuals attending a multi-site private tele-optometry clinic in British Columbia (BC). While comprehensive clinical data, including ocular health assessments, was available, it is beyond the scope of this study.
Method
A retrospective descriptive analysis was conducted using de-identified patient data collected from a multi-site, private tele-optometry clinic in British Columbia (BC), Canada, from 2021 to March 2025. The data analyzed included demographics (age, sex, occupation, and geographical location), refractive error (sphere, cylinder, axis), pupillary distance, near addition, prism correction where applicable, and follow-up outcomes. Occupations of subjects were categorized based on National Occupational Classification (NOC) codes, and the geographical distribution of individuals was analyzed by mapping postal codes to provide a visual representation of service reach. The spherical equivalent (SE) was calculated for each eye to classify refractive error into emmetropia, myopia, and hyperopia, further subdivided into severity levels of low, moderate, and high. Astigmatism was categorized by the orientation of the cylinder axis as with the rule (WTR), against the rule (ATR), or oblique. Descriptive statistics and frequency analysis were used to describe the characteristics of the subjects, and a paired t-test was used to compare the refractive data across methods. All analyses were conducted using Excel Version 16.98.

Results
A total of 6,708 patients were seen across five private clinical practice locations in BC. The mean age was 45.06 ± 17.29 years, with 53.85% female and 46.05% male patients. Most individuals were working adults, with 17.7% characterized as professionals and 14% in trade-skilled jobs. Patients were distributed across all five clinic locations based in BC, with the highest number of individuals residing within BC, followed by Alberta, and smaller clusters in Saskatchewan, Newfoundland and Labrador, and the Yukon Territory. Myopia was the most prevalent type of refractive error at 50%, followed by emmetropia (28%) and hyperopia (21%).
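The spherical-equivalent classification described in the Method can be sketched in a few lines. This is a hedged illustration: SE = sphere + cylinder/2 is the standard definition, but the ±0.50 D emmetropia band and the WTR/ATR axis ranges used below are common conventions assumed here, not necessarily the study's exact cut-offs.

```python
# Spherical equivalent (SE) and refractive-error classification.
# The 0.50 D band and the axis ranges are assumed conventions,
# not necessarily the thresholds used in the study.

def spherical_equivalent(sphere, cylinder):
    """SE = sphere + half the cylinder power (both in diopters)."""
    return sphere + cylinder / 2.0

def classify_se(se, band=0.50):
    """Classify an eye by SE: myopia, emmetropia, or hyperopia."""
    if se <= -band:
        return "myopia"
    if se >= band:
        return "hyperopia"
    return "emmetropia"

def classify_astigmatism_axis(axis):
    """WTR: axis near 180 (or 0); ATR: axis near 90; else oblique."""
    if axis <= 30 or axis >= 150:
        return "WTR"
    if 60 <= axis <= 120:
        return "ATR"
    return "oblique"
```

For example, a prescription of sphere -2.00 D with cylinder -1.00 D gives SE = -2.50 D, classified as myopia.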
The difference in mean SE between the refractive error assessment methods was small (<0.15 D), indicating high agreement. With-the-rule (WTR) astigmatism was the most prevalent type. Follow-up visits within one year were consistently more common than return visits occurring after one year.

Conclusion
These findings indicate that tele-optometry is being used primarily for convenient, locally accessible care among working-age adults and provides reliable refractive assessments using a synchronous clinical workflow. Broader representation and the analysis of clinical ocular health outcomes are needed in future studies to further understand the role of tele-optometry in comprehensive eye care delivery.

Item type: Item, Development of a Coupled Hydro-Economic Model to Support Groundwater Irrigation Decisions (University of Waterloo, 2026-03-12) Tian, Boyao

This research develops an integrated hydro-economic modeling framework to support farm-level irrigation decision-making under hydrologic, economic, and climatic uncertainty. The model couples groundwater dynamics, including analytical representations of groundwater-surface water interactions, with crop yield response and economic valuation to assess trade-offs between agricultural profitability, water use, and long-term sustainability. Conditional Value-at-Risk (CVaR) is incorporated to evaluate downside risk and capture extreme events often overlooked by traditional risk assessment methods. The framework is applied to two contrasting agricultural systems: the High Plains Aquifer (U.S.) and the Saskatchewan River Basin (Canada), representing unconfined and confined aquifers under differing climatic and hydrologic conditions. The results demonstrate that moderate water use strategies often achieve the best balance between profitability and groundwater sustainability, while excessive pumping leads to significant streamflow depletion and reduced long-term benefits.
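The CVaR measure incorporated in the framework above can be illustrated with a generic empirical estimator (a sketch, not the thesis's specific formulation): CVaR at confidence level alpha is the mean of the worst (1 - alpha) fraction of loss outcomes, which is why it captures tail events that a simple mean or variance overlooks.

```python
# Empirical Conditional Value-at-Risk (CVaR) of a list of losses:
# the average of the largest (1 - alpha) fraction of outcomes.
# A generic textbook estimator, not the thesis's exact implementation.
import math

def cvar(losses, alpha=0.95):
    """Average of the worst (1 - alpha) share of losses (larger = worse)."""
    ordered = sorted(losses, reverse=True)
    k = max(1, math.ceil((1 - alpha) * len(ordered)))
    tail = ordered[:k]
    return sum(tail) / len(tail)
```

For losses 1 through 100 with alpha = 0.9, the estimator averages the ten largest losses, so CVaR exceeds both the mean loss and the 90th-percentile loss, reflecting its focus on downside risk.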
Multi-objective optimization using NSGA-II identifies Pareto-efficient solutions that balance land value, water depth, and streamflow impacts. The model’s simplicity and adaptability make it accessible to farmers, policymakers, and regulators, providing a practical decision-support tool without requiring intensive data or computational resources. Overall, this research contributes to advancing hydro-economic modeling, integrating risk assessment, and promoting sustainable groundwater irrigation management under increasing climate and market variability.

Item type: Item, Exploring and Visualizing Fact-Based Software Models to Improve Program Comprehension (University of Waterloo, 2026-03-12) Ferreira Toledo, Rafael

Software engineers dedicate significant time and effort to debugging, analyzing, and understanding large, complex software. Such systems can comprise millions of lines of code that implement the program behaviour. When working on such maintenance tasks, the engineer needs to examine the code involved to understand exactly how the program's behaviour is implemented before they can perform any changes or fixes. Depending on the complexity of the program behaviour, the engineer must navigate dozens of lines of code scattered across multiple files to comprehend a single instance of the analysis results. During this code navigation, they pose program comprehension questions that guide the building of a mental model of the program's behaviour. It is well known that answering such queries can be time-consuming, error-prone, and cognitively demanding. These risks and demands increase with the complexity of the software under study, for example, when analyzing a software product line (SPL), where an SPL represents a family of related software product variants (e.g., different models of cellphones or vehicles sold by the same company).
Many of the above complexities can be addressed by working with a model of the code, because models are abstractions that are generally smaller, simpler, and more amenable to automated analyses. A software fact-based model is a collection of program facts that reflect the properties and behaviour of a software system. Program facts include source-code entities (e.g., variables, functions), their attributes (e.g., names, source file), and their relationships (e.g., function calls, class inheritance). Program facts can be automatically extracted from source code with an enhanced parser, and the facts can be linked together into a fact-based model of the software system. The resulting collection of software facts represents the system's properties and behaviour as a graph model that can be managed and queried using graph database technologies. Graph database systems and their native features enable efficient storage, querying, and visualization of the software fact-based model. Software queries and analyses can be expressed using the database's query language. However, writing common queries from scratch can be repetitive and time-consuming, and, for large and complex queries, it can be error-prone. This thesis investigates whether fact-based software modelling and analysis can improve program comprehension of software systems, including variable systems. This thesis makes three contributions: (1) identifying the program-comprehension questions that software fact-based models can support, (2) designing a query interface that facilitates program-comprehension questions and supports incremental exploration of query results, and (3) developing an efficient visual encoding of the results of queries on an SPL model. We evaluated how well fact-based models can answer program-comprehension questions. Previous studies categorized program-comprehension questions but primarily focused on code-based questions rather than model-based questions.
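The fact-based model described above can be sketched as a toy triple store queried like a graph. The entity and relation names below are hypothetical illustrations, not the thesis's actual schema, and a real factbase would live in a graph database such as Neo4j rather than in memory.

```python
# Toy fact-based model: program facts as (entity, relation, entity)
# triples, queried with wildcard patterns like a graph database.
# Entity/relation names are hypothetical illustrations.

class FactBase:
    def __init__(self):
        self.facts = set()

    def add(self, src, rel, dst):
        """Record one program fact, e.g. ("main", "calls", "parse")."""
        self.facts.add((src, rel, dst))

    def query(self, src=None, rel=None, dst=None):
        """Return facts matching the pattern; None acts as a wildcard."""
        return [
            f for f in self.facts
            if (src is None or f[0] == src)
            and (rel is None or f[1] == rel)
            and (dst is None or f[2] == dst)
        ]

    def callers_of(self, fn):
        """A typical program-comprehension question: 'who calls fn?'"""
        return sorted(s for s, _, _ in self.query(rel="calls", dst=fn))
```

In a graph database the same question becomes a declarative pattern query (in Cypher, roughly a MATCH over a `calls` relationship), which is what motivates the query templates discussed next.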
We performed a literature review to identify program-comprehension questions that can be posed to fact-based models. We correlated engineers' information needs with the information that fact-based models supply through a comprehensive analysis of previous works on program-comprehension questions and graph visualization. Finally, we demonstrated that 38 program-comprehension questions could be answered by a fact-based model by expressing them as Cypher queries over a Neo4j factbase. Second, we studied how to improve the engineer's experience in understanding program facts through program-comprehension query templates and follow-up queries. We extended Neo4j Browser to support initial program-comprehension queries and follow-up queries over fact-based model elements, giving users greater control and precision in their exploration of the model. We conducted a user study comparing the use of our enhanced Neo4j Browser with a standard code editor, which showed significant gains in users' efficiency and reduced mental effort during program-comprehension tasks. Finally, we studied how to improve an engineer's comprehension of the variable results of a fact-based analysis of an SPL. Analyzing an SPL model produces variable results, where each result may apply to some product variants and not others (e.g., if the analysis refers to feature-specific code). Variable analysis results are typically represented by annotating each result with a presence condition (PC), where the PC is a propositional formula that represents the product(s) for which the result holds. Thus, interpreting the variable analysis results of an SPL model involves determining the program variant (or group of variants) that applies to specific results, which can be error-prone and cognitively demanding.
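The presence-condition machinery described above can be sketched in a few lines: a PC is a propositional formula over features, and a result is shown for a variant only if its PC evaluates to true under that variant's feature configuration. The feature names and the tuple encoding below are hypothetical, and this is not the filtering implementation used in the thesis's tool.

```python
# Evaluate a presence condition (PC), a propositional formula over
# features, against a variant's feature configuration. Feature names
# and formulas are hypothetical illustrations.

def eval_pc(pc, config):
    """pc: a feature name (str), or a tuple ('and'|'or', *subs) or ('not', sub).
    config: dict mapping feature name -> bool (absent features are False)."""
    if isinstance(pc, str):
        return config.get(pc, False)
    op, *subs = pc
    if op == "and":
        return all(eval_pc(s, config) for s in subs)
    if op == "or":
        return any(eval_pc(s, config) for s in subs)
    if op == "not":
        return not eval_pc(subs[0], config)
    raise ValueError(f"unknown operator: {op}")

def filter_results(results, config):
    """Keep only (result, pc) pairs whose PC holds for this variant."""
    return [r for r, pc in results if eval_pc(pc, config)]
```

For instance, a result annotated with the PC "BLUETOOTH and not CAMERA" is shown for a cellphone variant with Bluetooth but no camera, and hidden for one with both features, which is exactly the per-variant filtering that is tedious to do by hand.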
We developed ^Neo4j Browser, a modified version of Neo4j Browser that provides features for filtering analysis results based on the feature configurations of SPL variants and highlighting the results associated with each filter. ^Neo4j Browser helps users interpret variable results faster, more accurately, and with less mental effort.

Item type: Item, Scaling Two-Party Differentially Private Selection (University of Waterloo, 2026-03-12) Ni, Haoyan

We consider the problem of differentially private (DP) selection in the two-party setting. This problem can be solved with excellent utility guarantees in the central setting, but the distributed case is much less studied. Existing solutions use secure multi-party computation (MPC) techniques to simulate computation in the central model, which are not sufficiently scalable to large candidate sets. This work provides a new protocol for two-party DP selection that achieves sublinear runtime in the MPC phase. Our design lets each party locally trim the candidate set before participating in an MPC protocol. Based on this heuristic, we provide two variations, one of which reveals each party’s trimmed candidate set and one of which does not. We evaluate our method on public datasets based on review counts and location check-ins. The results demonstrate that the variant hiding the trimmed candidate sets outperforms the other variant in both utility and efficiency. Furthermore, our solution offers competitive utility relative to the traditional solution at a significantly lower computation cost in lower privacy regimes.

Item type: Item, Developments in Photon Absorption Remote Sensing Microscopy and Deep Learning–Based Virtual Histochemical Staining (University of Waterloo, 2026-03-11) Tweel, James

Histological staining remains the gold standard of diagnostic pathology, enabling visualization of tissue structure and cellular morphology.
However, traditional staining workflows are time-consuming, destructive, and chemically intensive, limiting the number of stains that can be applied to valuable biopsy samples. These processes also introduce delays, variability in stain quality, and high resource demands. To address these limitations, this thesis presents a label-free histology framework that combines Photon Absorption Remote Sensing (PARS) microscopy with deep learning–based virtual staining to replicate commonly used histochemical stains without altering or consuming the tissue. The first component of this work focuses on the development of an automated whole slide PARS system designed for imaging thin, transmissible tissue sections. The system captures sub-micron resolution radiative and non-radiative absorption contrasts using 266 nm UV excitation, targeting endogenous chromophores such as DNA and extracellular matrix components to reveal nuclear and connective tissue structures. Whole slide imaging is achieved through automated focusing, tiling, and contrast leveling, producing gigapixel-scale images directly comparable to standard hematoxylin and eosin (H&E) slides. The second component introduces a deep learning virtual staining pipeline based on the unpaired CycleGAN architecture, with direct comparison to the paired Pix2Pix model. These models are trained on one-to-one whole slide images of PARS data and chemically stained H&E slides. The first masked clinical concordance study is conducted using breast needle core biopsies, where board-certified pathologists independently diagnose and assess the virtual and real H&E slides. The study demonstrates substantial diagnostic agreement, validating the clinical viability of the PARS-based virtual staining approach. 
The final component expands the PARS imaging system through the integration of a secondary long-wave UV excitation wavelength (355 nm), enabling sensitivity to additional biomolecular absorbers and thereby expanding the captured label-free contrasts. The additional label-free contrast contributes to improved emulation of histochemical stains beyond H&E, including Masson’s Trichrome, Periodic acid–Schiff, and Jones methenamine silver. To further improve performance, a more advanced registration-guided GAN model (RegGAN) is adopted, outperforming both Pix2Pix and CycleGAN. The resulting whole slide virtual images closely match their ground truth counterparts in qualitative appearance, quantitative metrics, and masked pathology review. Together, this work presents a non-destructive histology pipeline capable of generating high-resolution, multi-stain images of commonly used stains without chemical labeling, representing a step toward integrating label-free microscopy and deep learning virtual staining into routine pathology workflows.

Item type: Item, Transport and Irreversible Retention of Hydrophobic Nanoparticles by Fluid-Fluid and Fluid-Solid Interfaces in Porous Media (University of Waterloo, 2026-03-06) Rahham, Youssra

Hydrophobic nanoparticle (NP) transport in porous media has implications for the aquifer transport and retention of a wide range of contaminants that infiltrate water resources and threaten human health as well as aquatic environments. Comprehension of NP transport and interactions with hydrophobic surfaces and interfaces (given their ubiquity in porous aquifers) is essential for groundwater remediation from organic contaminants, toxic engineered NPs, and nanoplastics. This research investigates the transport and attachment of hydrophobic NPs under varying physicochemical conditions in saturated and unsaturated porous media by integrating experimental observations across multiple scales, theoretical extended-DLVO predictions, and numerical modeling.
A non-toxic, negatively charged, hydrophobic model NP system, synthesized from ethyl cellulose (EC) and exhaustively characterized for colloidal stability and interfacial interactions, was employed to systematically explore NP interactions with fluid-fluid and solid-fluid interfaces. The upscaling capability of an advection-dispersion-retention continuum model was compared against that of a pore network model of irreversible NP attachment onto fluid interfaces in 3D columns packed with spherical glass beads, showing that the latter captures key pore-scale dynamics such as bypassed interfaces, slow-moving corner flows, and diffusion-dominated retention. Transport experiments in 2D microfluidic pore networks confirm that the dynamics of NP retention in unsaturated porous media depend not only on the saturation of the non-wetting phase, but also on its connectivity and the accessibility of immobile fluid-fluid interfaces. Experimental evidence demonstrates that ethyl cellulose nanoparticles (EC-NPs) irreversibly attach onto immobile fluid-fluid interfaces and are delayed in slow-moving zones owing to geometric effects. Similarly, hydrophobic solid-fluid interfaces represent permanent sinks for EC-NPs. The attraction between a hydrophobic particle and a hydrophobic solid surface may be strong enough for irreversible attachment to take place, even under conditions of strong electrostatic repulsion. The strength of this hydrophobic interaction between an EC-NP and a hydrophobic collector surface is demonstrated using octadecyltrichlorosilane-treated glass and quantified via systematic contact angle measurements. Under destabilizing ionic conditions, irreversible EC-NP aggregation results in the formation of a secondary porous structure within hydrophilic porous media, altering permeability and retention patterns. Both phenomena are inadequately captured by macroscopic breakthrough curve (BTC) analyses alone.
For example, attachment onto fluid-fluid and fluid-solid interfaces manifests itself in BTCs at low injection concentrations, whereas the opposite effect emerges in the presence of salt. This research advances the field by conducting transport experiments under carefully controlled conditions. The findings, supported by theoretical analysis and complementary experimental evidence, highlight key limitations in current modeling approaches and provide foundational experimental data that should advance the development and validation of numerical models of nano-colloid transport in porous media. Besides enhancing predictive capabilities for the fate of hydrophobic nanomaterials in the subsurface, this research informs risk assessment and the design of groundwater remediation strategies, both ex situ (e.g., NP filtration media) and in situ (e.g., permeable adsorptive barriers for fluorinated contaminant capture and oil spill cleanup).

Item type: Item , Soft Matter Templating for Fabrication of Hierarchical Cryogels (University of Waterloo, 2026-03-06) Amirieh, Estatira

Hierarchical cryogels are a promising class of lightweight, highly porous materials whose multiscale pore architecture can simultaneously enable rapid mass transport and high adsorption capacity, making them attractive for diverse applications. Numerous approaches have been introduced to produce hierarchical cryogels. However, these approaches are often processing-intensive, requiring multi-step templating, tightly controlled freezing protocols, or complex drying strategies that can limit scalability and restrict independent control over pore hierarchy. Moreover, most existing approaches rely on the fabrication of structured cryogels from gel-like precursors, which require high solid concentrations, thereby increasing density and compromising lightweight characteristics.
This work utilizes liquid streaming (templating), a technique recently introduced by our group, which facilitates the formation of hierarchical cellulose nanocrystal (CNC)-based cryogels through filamentary structuring of a liquid-like aqueous CNC suspension in an apolar medium. In this approach, an aqueous nanomaterial dispersion is injected into a surfactant-containing hexane bath to produce a filamentous all-liquid network, which is subsequently freeze-dried to yield a worm-like hierarchical cryogel. A central objective of this approach is to simplify the rheological requirements, broaden the range of extrudable materials, and decouple filament stability from bulk viscoelasticity. By controlling factors such as interfacial tension, interfacial rheological features, extrusion rate, and solid content, one can map the operational “printing window” for producing continuous, shape-persistent filaments even from low-viscosity fluids. Herein, key injection factors governing filament formation, including needle size, nanomaterial concentration, and injection pressure, are investigated to delineate the transition between stable filament formation and breakup behavior. It is also shown how these factors dictate the morphology (e.g., filament diameter) of the structured liquids. A process–structure map is developed to define operating windows that reliably produce filamentous all-liquid systems across a range of conditions, providing practical guidance for reproducible fabrication and architectural control. The resulting worm-like cryogels from the engineered filamentous all-liquid systems exhibit intrinsic hierarchical porosity, with macroporous inter-filament voids coupled with finer porosity on and within the filament structure. To evaluate the functional implications of this architecture, worm-like cryogels are compared against conventional bulk cryogel counterparts in oil absorption testing.
The worm-like cryogel demonstrates improved uptake performance, achieving a 22% increase in oil absorption efficiency relative to bulk structures. Overall, this thesis establishes liquid templating as an accessible and tunable route to CNC-based hierarchical cryogels and provides processing guidelines that link injection conditions to structure and absorption performance.