Theses
Permanent URI for this collection: https://uwspace.uwaterloo.ca/handle/10012/6
The theses in UWSpace are publicly accessible unless restricted due to publication or patent pending.
This collection includes a subset of theses submitted by graduates of the University of Waterloo as a partial requirement of a degree program at the Master's or PhD level. It includes all electronically submitted theses. (Electronic submission was optional from 1996 through 2006. Electronic submission became the default submission format in October 2006.)
This collection also includes a subset of UW theses that were scanned through the Theses Canada program. (The subset includes UW PhD theses from 1998–2002.)
Recent Submissions
Dissolvable Sugar-Based Untethered Magnetic Millimeter-Scale Robot for Blood Clot Removal (University of Waterloo, 2026-01-22). Sparkes, Sarah.

Thrombosis, or the formation of blood clots, is a potentially life-threatening condition that results in the complete or partial occlusion of a blood vessel. It remains one of the most prevalent causes of death worldwide. Current treatment approaches involve the intravenous administration of thrombolytic drugs, which can increase the risk of uncontrolled bleeding, or catheter-directed treatments, which may have limited access to hard-to-reach areas of the vasculature and can cause catheter-related injuries. Untethered magnetic robots present an alternative approach to thrombosis treatment that addresses these shortcomings. In this work, a rapidly dissolvable, millimeter-scale magnetic robot is proposed for the delivery of a thrombolytic drug for thrombus removal. The robot is composed of a sucrose-based material with embedded superparamagnetic iron oxide nanoparticles for magnetic control. Although assessment of the robot's actuation showed limited forward propulsion from its helical structure, the robot was able to withstand a maximum flow rate of approximately 72 mL/min, which is comparable to literature values for small-scale magnetic robots. The mean dissolution time of the sucrose structure was 4.65 minutes. The blood compatibility of the robot material was measured through the upregulation of platelet activation markers and found to improve with decreasing material concentration. The drug delivery mechanism consisted of a sealed cavity along the center of the robot to maximize thrombolytic load and avoid drug exposure to high temperatures during fabrication. Release of a placeholder fluorescent protein was found to be gradual across the entire dissolution of the robot. To ensure the thrombolytic agent was not compromised upon loading into the robot, in vitro incubation with human thrombi was performed.
The thrombolytic-loaded robot showed thrombus mass reduction similar to that of directly administered thrombolytic agent. Finally, the robot's functionality was validated using an ex vivo endovascular thrombosis model of the sheep iliac arteries. The robot was clearly visualized under x-ray fluoroscopy due to its embedded magnetic material, enabling its guidance to the ex vivo thrombus via an external rotating permanent magnet mounted on a robotic arm. Successful navigation through a vascular bifurcation was demonstrated, and the robot's mechanical action was shown to accelerate clot mass reduction. The effect of localized thrombolytic delivery on clot mass reduction in the ex vivo model was inconclusive. Overall, the proposed untethered and dissolvable robot would enable the localized delivery of thrombolytic agent to blood clots without the need for retrieval. This can lead to improved patient outcomes by reducing the risks of catheter-related injuries and of uncontrolled bleeding resulting from the systemic administration of thrombolytic agents.

Resource Management for Edge-Assisted Extended Reality (University of Waterloo, 2026-01-22). Pei, Yingying.

Extended reality (XR) enables immersive experiences by seamlessly merging the physical and digital worlds. Supporting such experiences requires real-time, high-quality rendering of virtual content to generate video frames, which is computationally intensive and poses a challenge for resource-constrained XR devices. To overcome this limitation, a promising approach is to offload rendering tasks to nearby edge servers with powerful computing resources. In an edge-assisted XR system, interdependent tasks, including video frame rendering, encoding, and transmission, need to be executed in a pipeline, which consumes substantial communication, computing, and caching resources.
The efficiency of network resource provisioning has a direct impact on users' quality of experience (QoE), which reflects a user's sense of presence and immersion while viewing virtual content and is measured by the weighted sum of visual quality, quality variation, and round-trip latency. Our objective is to efficiently manage multi-dimensional network resources for the XR service to improve user QoE under dynamic network conditions. The technical challenges are as follows: 1) given the spatiotemporally varying service demand caused by user mobility, how to proactively provision edge resources for the service while achieving satisfactory user QoE; 2) how to adaptively allocate edge resources to individual users to accommodate demand fluctuations caused by dynamic viewing behavior; and 3) in the presence of task dependencies in the pipeline, how to jointly coordinate task processing parameters (e.g., rendering quality, frame encoding type) to improve user QoE. In this thesis, we design efficient resource management schemes for an edge-assisted XR system that address these challenges. First, a mobility-aware resource provisioning scheme is proposed to enhance resource utilization while satisfying user QoE on a large timescale. Specifically, we present a mobility model tailored to XR users that captures both spatial movement and interaction features. We then estimate user-specific model parameters and adopt a sample average approximation method to model the relationship between user QoE and the consumption of communication and computing resources. A coordinate descent algorithm is designed to make resource reservation decisions, with a deep neural network providing an initial point that accelerates convergence. Simulation results demonstrate that, compared with benchmark schemes, the proposed resource provisioning scheme reduces network resource consumption while satisfying user QoE.
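The QoE metric described above, a weighted sum of visual quality, quality variation, and round-trip latency, can be sketched in a few lines. The weight values and scales below are illustrative assumptions for the sake of the example, not the calibrated parameters used in the thesis.

```python
# Minimal sketch of a weighted-sum QoE model (hypothetical weights).
def qoe(quality, prev_quality, rtt_ms, weights=(1.0, 0.5, 0.01)):
    w_quality, w_variation, w_latency = weights
    return (w_quality * quality
            - w_variation * abs(quality - prev_quality)  # quality variation penalty
            - w_latency * rtt_ms)                        # round-trip latency penalty

# A stable mid-quality stream can outscore a higher but oscillating one.
print(qoe(8, 8, 30))  # stable stream
print(qoe(9, 4, 30))  # oscillating stream
```

Under this form, penalizing quality variation is what discourages per-frame quality oscillation even when peak quality would otherwise be higher.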
Second, we develop an adaptive volumetric video caching and rendering scheme to enhance real-time user QoE by considering dynamic user viewing behavior. In particular, volumetric videos of different quality levels need to be cached, rendered, and delivered to XR devices for different viewing distances within a latency bound. Given limited resources for the service, we formulate a user QoE maximization problem to jointly optimize volumetric video caching and rendering decisions based on users' real-time locations and viewing distances. To solve this problem, we first design an online regularization-based optimization algorithm to obtain caching decisions. We then present a low-complexity binary search algorithm to determine the optimal rendering quality. Simulation results demonstrate that the proposed scheme achieves higher real-time user QoE than benchmark schemes. Third, we design a scheme for the joint selection of rendering quality and encoding type that accounts for the interdependency among edge processing tasks to enhance long-term user QoE. To cope with network dynamics, the rendering quality of frames can be adjusted dynamically, which in turn triggers intra-frame encoding and leads to a sudden transmission burst. To capture this task interdependency, we formulate a long-term QoE maximization problem under edge computing and communication resource constraints, which jointly selects the rendering quality and either intra- or inter-frame encoding for each frame. To solve this problem, we theoretically analyze the impact of per-frame decisions on long-term QoE and present an online algorithm for decision-making. Simulation results demonstrate that the proposed joint rendering quality and encoding type selection scheme further enhances resource utilization and long-term user QoE compared with benchmark schemes.
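A binary search over rendering quality of the kind mentioned above can be sketched as follows, assuming resource cost is monotonically non-decreasing in the quality level (which is what makes binary search applicable). The cost function and budget here are hypothetical, not the thesis's formulation.

```python
def max_feasible_quality(levels, cost, budget):
    """Return the highest quality level whose resource cost fits the budget.

    Assumes `levels` is sorted ascending and `cost` is non-decreasing
    in quality, so the feasible set is a prefix and binary search applies.
    """
    lo, hi, best = 0, len(levels) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if cost(levels[mid]) <= budget:
            best = levels[mid]   # feasible: try a higher quality
            lo = mid + 1
        else:
            hi = mid - 1         # infeasible: try a lower quality
    return best

# hypothetical linear cost of 2 units per quality level, budget of 7 units
print(max_feasible_quality([1, 2, 3, 4, 5], lambda q: 2 * q, 7))
```

Each probe evaluates the cost once, so the search needs only O(log n) evaluations over n quality levels.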
In summary, we have proposed a mobility-aware resource provisioning scheme, an adaptive volumetric video caching and rendering scheme, and a task dependency-aware rendering quality and encoding type selection scheme for an edge-assisted XR system. This research should provide useful insights for network operators seeking to deliver immersive XR services at low operational cost.

The Contributions of ESRP1 to the Functions of the Intestinal Epithelium (University of Waterloo, 2026-01-22). Francis, Jordan.

Epithelial Splicing Regulatory Proteins 1 and 2 (ESRP1 and ESRP2) are RNA-binding proteins expressed exclusively in epithelial cells. They direct a splicing program necessary for maintaining important epithelial cell characteristics, including cell-cell adhesion, anchorage to the basement membrane, and cell-cell communication. ESRP1 and 2 have been studied for their importance in development. The loss of ESRP1 causes a series of craniofacial defects, including cleft lip and cleft palate, while the loss of both ESRP1 and ESRP2 results in more severe versions of these defects, several epithelial organ formation defects, and a skin barrier defect that causes significant water loss. ESRP1 is known for its role in craniofacial development, epidermal barrier development, and cancer progression. However, its role in other epithelial organs where it is highly expressed, such as the large intestine, remains understudied. Mice with a hypomorphic mutation of Esrp1, termed triaka, exhibited decreased intestinal wound healing and increased intestinal permeability. We therefore hypothesized that ESRP1 contributes to intestinal homeostasis and intestinal barrier integrity by sustaining tight junction localization and intestinal cell proliferation. This thesis project investigated the functions of ESRP1 in maintaining intestinal homeostasis using mouse colon organoids and mouse colon organoid-derived monolayers as a model.
Upon the deletion of ESRP1 and subsequent mechanical dissociation of the organoids, we observed a significant decrease in organoid re-formation. Esrp1 KO organoids exhibited no change in cell proliferation. However, they did exhibit decreased expression of Lgr5, an intestinal stem cell marker and receptor for R-spondin. LGR5 helps maintain the stem cell niche by promoting the Wnt signalling cascade through binding to R-spondin, a Wnt agonist. Its downregulation in Esrp1 KO organoids thus suggests that ESRP1 helps sustain the intestinal stem cell niche by maintaining the response of the intestinal epithelium to the Wnt signalling cascade. As R-spondin is produced by subepithelial stromal cells, this would suggest that ESRP1 is necessary for proper epithelial-mesenchymal communication in the intestine, a role ultimately similar to its observed role in craniofacial development. In investigating the role of ESRP1 in maintaining intestinal barrier integrity, ZO-1 staining showed that tight junctions still assemble properly in Esrp1 KO organoids but become more diffuse in Esrp1 KO monolayers. The loss of ESRP1 resulted in a slight decrease in the barrier integrity of the organoid-derived monolayers. This contrasts with published findings in other cell lines, suggesting that the dependence of epithelial barrier integrity on ESRP1 may vary with the tissue and the chosen model of the epithelium.
These findings provide a solid foundation for further investigations into the role of ESRP1 in maintaining intestinal homeostasis, enabling future research to uncover its connection to pathological conditions such as inflammatory bowel disease.

Practically Efficient Protocols for Private Computation using Homomorphic Encryption (University of Waterloo, 2026-01-22). Akhavan Mahdavi, Rasoul.

Digital services have become an indispensable part of our daily lives, particularly services that interact with our most private and sensitive data. With the abundance of such services, users are left with a difficult choice: can I safely use digital services and products, or does doing so necessarily come at the cost of my privacy? Private computation techniques empower service providers to perform computation over private data without the need to observe the data. This not only provides privacy for clients while their data is in use but also reduces the risk of incidents such as data leaks for service providers. One commonly used tool for private computation is homomorphic encryption (HE), a form of encryption that allows computation on data in encrypted form. While homomorphic encryption in theory permits arbitrary computation over encrypted data, in practice a naive implementation of a desired functionality rarely yields a practical result. For example, one common obstacle when using homomorphic encryption is high computation time, along with large ciphertexts that incur high network costs. Communication and computation costs, however, are not the only metrics that need to be considered. In this work, we describe problems that arise when homomorphic encryption is used in applications and address these limitations by proposing new techniques and novel protocols.
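The core homomorphic property described above, computing on data without decrypting it, can be illustrated with textbook RSA, which is multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. The parameters below are tiny and insecure, chosen only to make the property visible; this is a generic illustration, not one of the schemes developed in the thesis.

```python
# Textbook RSA with toy, insecure parameters (illustration only).
p, q, e = 61, 53, 17
n = p * q                            # public modulus
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

c1, c2 = enc(7), enc(6)
product = dec(c1 * c2 % n)  # the product 7 * 6, computed without decrypting c1 or c2
print(product)  # 42
```

Modern HE schemes support both addition and multiplication on ciphertexts, which is what makes arbitrary (if costly) computation possible in principle.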
In these new constructions, we not only improve performance over prior work in terms of communication and computation costs but also address additional problems that arise in the deployment of these protocols. Throughout, we draw insights on how to design protocols that are applicable for developers, practitioners, and future researchers. For example, we enable homomorphic comparison of encrypted numbers with higher precision than previous work, using a novel representation of numbers that is better suited to homomorphic encryption. Using this and other building blocks, we propose efficient protocols for decision tree evaluation and private set intersection. Moreover, through our work on private information retrieval, we identify the challenges of using such a protocol in practice and propose novel protocols that are suited for deployment in real-world applications.

Bayesian Inference for Partial Differential Equations via Neural Network Surrogates (University of Waterloo, 2026-01-22). Zhen, Zihao.

Partial differential equations (PDEs) provide the fundamental framework for describing physical systems; yet, in many practical applications, these equations contain unknown parameters that must be inferred from experimental observations. Solving such inverse problems using traditional mesh-based numerical methods is often computationally intensive; furthermore, because these solvers cannot be easily differentiated with respect to model parameters, they create significant bottlenecks for gradient-based inference. To address these challenges, we train parameterized Physics-Informed Neural Networks (PINNs) for two distinct systems: the Allen–Cahn and Cahn–Hilliard (AC–CH) phase field equations and diffusion models for cyclic voltammetry (CV).
These surrogates demonstrate strong generalizability across continuous parameter spaces and serve as differentiable components for gradient-based Bayesian parameter estimation via the No-U-Turn Sampler (NUTS). This work verifies the feasibility of a unified PINN-surrogate Bayesian workflow for parameter estimation, offering a promising complement to existing methods for solving inverse problems with uncertainty quantification.

Associative Memory for Hyperdimensional Spatial Representations with Geodesic Flow Matching (University of Waterloo, 2026-01-22). Habashy, Karim.

The ability to recover clean patterns from noisy or partial inputs, known as associative memory, is a cornerstone of robust computation in both biological and artificial intelligence systems. While well established for discrete data, implementing this capability for continuous representations remains a challenge. Existing methods typically rely on discretizing the continuous domain into capacity-limited prototypes or performing explicit decoding to the spatial domain, which is often computationally expensive and biologically implausible. This thesis addresses this gap by reformulating cleanup as a generative transport problem entirely within the embedding space. Geodesic Flow Matching is introduced as a model that learns a continuous, time-dependent velocity field to transport corrupted representations back to the valid data manifold. Standard Euclidean flow matching is shown to be insufficient for high-dimensional normalized representations, as linear interpolants "cut through" the interior of the hypersphere, destroying the vector magnitude and phase relationships required for accurate decoding. By constraining transport dynamics to the intrinsic Riemannian geometry of the hypersphere, the proposed model preserves this structure even under severe corruption. The framework is validated using Spatial Semantic Pointers (SSPs), a biologically plausible encoding for continuous space.
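The geometric point above, that linear interpolants leave the hypersphere while geodesic ones stay on it, can be seen in a small sketch comparing straight-line interpolation (lerp) with spherical interpolation (slerp) between unit vectors. This is a generic illustration of the geometry, not the thesis's model code.

```python
import math

def lerp(a, b, t):
    # straight-line (Euclidean) interpolant: cuts through the sphere's interior
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

def slerp(a, b, t):
    # geodesic (great-circle) interpolant between unit vectors
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    theta = math.acos(dot)
    if theta < 1e-12:
        return list(a)
    s = math.sin(theta)
    return [(math.sin((1 - t) * theta) * x + math.sin(t * theta) * y) / s
            for x, y in zip(a, b)]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

a, b = [1.0, 0.0], [0.0, 1.0]
print(norm(lerp(a, b, 0.5)))   # ~0.707: the Euclidean midpoint has left the unit sphere
print(norm(slerp(a, b, 0.5)))  # 1.0: the geodesic midpoint stays on it
```

For representations whose decoding depends on unit magnitude, the shrinkage shown by the lerp midpoint is exactly the structure a spherical interpolant preserves.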
Benchmarks indicate that Geodesic Flow consistently outperforms Euclidean variants and classical baselines, particularly in high-noise regimes. The utility of the approach is further demonstrated through integration into a Spiking Neural SLAM system. As an online stabilizer for the path integrator, the Geodesic model prevents catastrophic drift, reducing path error by up to 72%. It also improves resource efficiency by 40%, allowing a neural population of 1,500 neurons to match the tracking accuracy of a baseline system using 2,500 neurons.

Short-Pulsed Laser Processing of Metal-Oxide Nanomaterials: Role of Defects in Properties, Nanojoining and Sintering (University of Waterloo, 2026-01-22). Soleimani, Maryam.

Metal-oxide nanomaterials are promising candidates for next-generation nanoelectronic and optoelectronic devices due to their tunable band structures, multifunctionality, and chemical stability. However, their practical deployment is limited by intrinsic challenges such as low plasticity, brittleness, poor conductivity, and difficulties in reliable integration. Central to these limitations is the role of point, line, and interfacial defects, which govern charge transport, plastic deformation, diffusion, and bonding at multiple length scales. Controlling and engineering these defects without degrading structural integrity is therefore essential for unlocking the full potential of metal-oxide nanomaterials. This thesis investigates ultrashort (femtosecond) and short (nanosecond) pulsed laser irradiation as versatile strategies for manipulating defect landscapes and interfacial behavior. The highly localized, non-equilibrium energy delivery of pulsed lasers enables two complementary pathways: (i) defect engineering to enhance intrinsic mechanical and electrical properties, and (ii) defect-assisted diffusion to promote nanojoining and rapid, low-temperature sintering.
Both nanosecond and femtosecond laser treatment can significantly influence the properties of individual CuO nanowires, but through fundamentally different mechanisms. Nanosecond laser pulses generate a moderate density of vacancies and dislocations that, with heat accumulation and partial annealing, reorganize into dislocation loops. This defect rearrangement converts a brittle, predominantly elastic response into elastic-plastic behavior, modestly increasing ductility and improving carrier mobility while maintaining structural stability. In contrast, femtosecond laser irradiation operates in a highly nonthermal regime, generating a supersaturated, nonequilibrium defect population comprising abundant vacancies and densely interconnected dislocation networks. The excess vacancies enhance point-defect mobility, facilitating dislocation climb and cross-slip, while the internal stress fields within the dislocation network reduce nucleation barriers and promote glide and multiplication. Collectively, these mechanisms drive a brittle-to-plastic transition, resulting in stable plastic flow, greater deformability, and improved fracture resistance. By enhancing both the plasticity and the electrical conductivity of CuO nanowires, laser treatment creates favorable conditions for single-nanowire device applications. It also serves as an effective pre-treatment step before integration, mitigating the intrinsic brittleness of metal-oxide nanowires and enabling reliable assembly and reuse in nanoscale systems. At the integration level, both lasers were used to fabricate functional nanodevices. Nanosecond laser treatment enabled precise cutting and rejoining of CuO nanowires by controlling defect network formation, leading to flexible, conductive CuO–CuO junctions suitable for strain-sensing applications.
Similarly, it facilitated the fabrication of robust CuO–ZnO p–n junctions for photodetector devices, where thermal effects improved interfacial diffusion and bonding strength. In contrast, femtosecond laser treatment promoted non-thermal nanojoining through a two-step mechanism: first inducing localized plasticity via defect formation, then achieving strong joints, without a heat-affected zone or phase transformation, through shot-peening-assisted bonding. To assess the role of defects in diffusion, defect-assisted sintering was investigated using nanosecond laser pre-treatment of TiO₂ nanopowders. Optimizing the laser parameters led to partial annealing of laser-generated dislocations and recrystallization into ultrafine grains. These microstructural modifications increased the density of high-angle grain boundaries, creating short-circuit diffusion pathways that enhanced mass transport during sintering. Consequently, efficient densification was achieved at 750 °C, approximately 250 °C lower than conventional thermal furnace sintering (~1,000 °C). In contrast, femtosecond laser pre-treated TiO₂ nanoparticles generated a high concentration of oxygen vacancies that reorganized into dislocation-coupled vacancy channels, facilitating pipe-diffusion-dominated mass transport. This rapid, nonthermal diffusion pathway enabled fast neck growth and densification within only 10 minutes at 650 °C, demonstrating the crucial role of laser-induced defects in accelerating diffusion and achieving rapid low-temperature sintering of metal oxides.
Overall, this work establishes pulsed-laser irradiation as a powerful platform for defect engineering in metal-oxide nanomaterials.

A Knowledge Representation for, and an Application to Requirements Elicitation of, Rhetorical Figures of Perfect Lexical Repetition (University of Waterloo, 2026-01-22). Wang, Yetian.

Rhetorical figures, such as rhyme and metaphor, shape human discourse by providing essential semantic and pragmatic information and by generating attentional effects, such as salience, aesthetic pleasure, and memorability, that enhance the receiver's attention. Ploke is the rhetorical figure of perfect lexical repetition: a word or phrase that repeats with the same form and meaning in a passage. Rhetorical figures, including plokes, are largely ignored in natural language processing (NLP) and artificial intelligence (AI). This thesis takes two steps towards enabling AIs to handle plokes as they occur in natural language. It first develops a knowledge representation model of the general concept of Ploke, in the form of an ontology that represents the classification of Ploke, the forms of plokes, and the neurocognitive affinities that affect attention. This ontology will help AIs to understand and generate plokes. The proposed ontology can represent the relevant knowledge of ploke and its subtypes; it can also represent their neurocognitive affinities by representing their relations to the various types of perfect lexical repetition, characterized by the positions in which the repetitions occur. Observing that rhetorical figures are used to enhance persuasive discourse, the thesis hypothesizes that a requirements elicitation interview that uses plokes is more effective than one that does not.
It then describes a test of this hypothesis in which interviews were conducted by a simulated AI elicitation bot that used plokes in half of its interviews and avoided them entirely in the other half. The experiment showed that the questions and statements conveyed by the bot in its ploke-using requirements elicitation interviews were easier for the interviewees to recognize, and more memorable to them, than those in its ploke-avoiding interviews.

Leak Detection and Localization in Water Distribution Networks (University of Waterloo, 2026-01-22). Green, Thomas.

Leaks in water distribution networks remain a significant challenge for utilities, resulting in substantial economic and environmental losses as well as health risks. Existing leak detection and localization approaches face several shortcomings, including (i) limited understanding of how algorithms generalize across different networks, (ii) limited adoption of empirical characterizations of dispersive wave behavior in water-filled pipes, and (iii) heavy dependence on cross-correlation methods for leak localization, which fail if leaks are not located on a direct sensor-to-sensor path. This thesis addresses these gaps with machine learning-driven leak detection and localization techniques based on hydrophone time-series data. First, I introduce structured frameworks for leak detection and leak localization algorithms, which define the key processing stages from signal collection to post-processing. To evaluate the ability of leak detection algorithms to generalize across different networks, I present a novel leak detection dataset collected from three real-world water distribution networks and propose two evaluation schemes: Cross-Domain F1 Scoring and Multi-Domain F1 Scoring.
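In the spirit of the Cross-Domain F1 scheme named above, the sketch below trains a detector on one network's data and scores its F1 on each other network, averaging over all ordered domain pairs. The toy threshold "classifier" and the two-network data are hypothetical stand-ins for illustration, not the thesis's models or dataset.

```python
def f1_score(y_true, y_pred):
    # harmonic mean of precision and recall for the positive (leak) class
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def cross_domain_f1(domains, train, predict):
    # train on each source domain, evaluate on every *other* domain
    scores = []
    for src, (xs, ys) in domains.items():
        model = train(xs, ys)
        for dst, (xt, yt) in domains.items():
            if dst != src:
                scores.append(f1_score(yt, [predict(model, x) for x in xt]))
    return sum(scores) / len(scores)

# toy domains: one scalar "band energy" feature per sample, label 1 = leak
domains = {
    "net_a": ([0.10, 0.20, 0.80, 0.90], [0, 0, 1, 1]),
    "net_b": ([0.15, 0.30, 0.70, 0.95], [0, 0, 1, 1]),
}
train = lambda xs, ys: (max(x for x, y in zip(xs, ys) if y == 0)
                        + min(x for x, y in zip(xs, ys) if y == 1)) / 2
predict = lambda threshold, x: int(x > threshold)
print(cross_domain_f1(domains, train, predict))
```

Scoring only on held-out networks is what distinguishes this from ordinary in-domain evaluation: a detector that overfits one network's acoustics scores poorly here even if its in-domain F1 is high.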
Using these schemes, over 33,000 leak detection models were evaluated by varying modeling parameters, revealing that certain transformation techniques and low-frequency energy-based features (e.g., 62–124 Hz energy vs. 0–500 Hz centroid) can yield up to a 37% higher mean cross-domain F1 score. Further, I found that when sufficient training data are available, convolutional neural networks generalize better than hand-crafted-feature-based algorithms, achieving a multi-domain F1 score of 0.87, compared to 0.72 for exhaustive feature selection and 0.50 for simple feature selection, when eight unique leak scenarios were included in the training data. Next, I characterize wave propagation in a controlled lab-scale system and experimentally demonstrate dispersive shell-borne surface waves traveling at approximately 291 m/s, waterborne plane waves at 350 m/s, and high-velocity ultrasonic waves at approximately 1,300 m/s. I show that analytical models that predict wave speed can be inaccurate by up to 16%, and that waves traveling along the shell wall exhibit dispersive behavior, which poses problems for traditional cross-correlation-based leak localization methods. The viscothermal wave equation is implemented using the finite difference method to explore how spectral features correlate with leak proximity; these findings motivate the use of spectral features such as energy and centroid for predicting leak proximity. I then propose a novel leak localization algorithm that produces a heat map describing the probability of leakage at each point in a pipe network. The algorithm achieves reliable leak localization results, even in leak scenarios where conventional cross-correlation cannot be used.
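For context, the conventional cross-correlation approach discussed above estimates the arrival-time difference of the leak signal between two sensors as the lag that maximizes their cross-correlation, then converts that lag to a position along the direct sensor-to-sensor path. The synthetic signal and sample counts below are illustrative assumptions.

```python
import random

def best_lag(x, y, max_lag):
    # lag (in samples) at which the cross-correlation of x and y peaks
    def corr(lag):
        return sum(x[i] * y[i + lag]
                   for i in range(len(x)) if 0 <= i + lag < len(y))
    return max(range(-max_lag, max_lag + 1), key=corr)

random.seed(0)
leak = [random.gauss(0.0, 1.0) for _ in range(400)]  # broadband leak noise
sensor_a = leak[:]
sensor_b = [0.0] * 7 + leak[:-7]  # same signal, arriving 7 samples later

lag = best_lag(sensor_a, sensor_b, 20)
print(lag)  # 7
# With sensor spacing L, wave speed c, and sample period dt, the leak lies at
# distance d = (L - c * lag * dt) / 2 from sensor A along the direct path.
```

The conversion assumes a single, known, non-dispersive wave speed and a leak on the direct path between the sensors, which is precisely where the dispersion results and off-path scenarios above cause this method to break down.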
Calibration is shown to improve leak proximity regression performance by more than 53%, and the approach reliably localizes leaks to within 3.66 m across leak scenarios not included in its training data, including scenarios where traditional cross-correlation-based methods cannot be used. Overall, my thesis contributes new datasets, quantitative evaluation methods, numerical modeling, insights into wave behavior, and learning-based algorithms that together advance the development of deployable and generalizable leak detection and localization systems.

Liminal Habitats: An Investigation of Stormwater Management Facilities in Urban and Suburban Kitchener-Waterloo (University of Waterloo, 2026-01-22). Ruest, Liahm.

Stormwater Management Ponds (SWMPs) are a tool to protect neighbourhoods from floods and to collect pollutants before they enter the natural environment. Despite being infrastructure, these facilities inadvertently become habitat for an array of taxa, notably macroinvertebrates. My research has three primary goals: to understand how the language used in municipal documents reflects how SWMPs are viewed by the municipal governments in the Kitchener-Waterloo region; to understand the broader implications of SWMP research; and to investigate the drivers of biodiversity in SWMPs. First, I confirm that the municipalities in the Kitchener-Waterloo region see SWMPs predominantly as pieces of infrastructure rather than habitat. Second, I found that SWMPs harbour levels of biodiversity similar to those of control ponds; the majority of the literature focuses on single-taxa investigations, predominantly of odonates and plant diversity. Third, I investigated biodiversity in SWMPs in the Kitchener-Waterloo area, examining the attributes of the ponds and discovering that turbidity and the surrounding land cover affect biodiversity within them.
Additionally, I found that facility type affects biodiversity: engineered wetlands harboured higher diversity and evenness of macroinvertebrate communities. Comparing Kitchener-Waterloo's SWMPs to other lentic systems across the province, I found that their macroinvertebrate communities are similar. Based on my research, I recommend implementing engineered wetlands in place of traditional wet ponds in urbanized areas. I also recommend that municipalities incorporate biodiversity initiatives in their design manuals for SWMP infrastructure. Finally, I advocate for a re-signification of the SWMP and acknowledgement that SWMPs provide habitat for macroinvertebrates in urban areas.

Use of atmospheric pressure spatial chemical vapor deposition to create spatially variant metal oxide semiconductor films for use in gas sensing arrays (University of Waterloo, 2026-01-21). Saini, Agosh Singh.

Manufacturing gas sensor arrays is a key roadblock to commercially viable electronic nose systems, as such arrays require large numbers of unique sensors. Atmospheric-pressure spatial chemical vapor deposition (APSCVD) is a fabrication method that can lower manufacturing costs. In this thesis, APSCVD is used to create gradients of sensing materials, which are then used to create an array of sensors with unique physical properties. The materials explored using APSCVD are SnO₂ thickness gradients, SnO₂/Cu₂O heterojunction gradients, and zinc-tin-oxide composition gradients. These materials are created using a combination of a stainless steel atmospheric-pressure spatial atomic layer deposition reactor head and a custom APSCVD reactor head designed to create metal-oxide-semiconductor thin films with physical property gradients.
The custom APSCVD reactor head implements a substrate-reactor spacing gradient to achieve physical property gradients, building on previous work showing that tilting a stainless steel reactor head produces a thickness gradient [1], [2]. The heterojunction gradient consists of a uniform Cu₂O layer with a thickness of ~103 nm and a SnO₂ layer with a thickness gradient from ~22 nm to ~12 nm, measured using ellipsometry; the ellipsometry thickness measurements show an R² value of 0.95. Energy-dispersive x-ray spectroscopy measurements of the composition gradient film show the tin-to-zinc ratio ranging from 0.86 to 0.21, with an R² value of 0.96. The fabricated gradient films are converted to sensors using photolithography: interdigitated electrodes are fabricated on the top surface, and chips with 8 sensors each are placed on chip carriers. A custom gas sensor testing system was created to run experiments continuously and generate response data. The test system consists of control software for heating, an Arduino-based relay system for recording up to 8 sensors at a time, and mass flow controllers that adjust automatically to cycle through different experiments and analytes. Ethanol, isopropyl alcohol, acetone, and water are used as analytes in this thesis. The recorded data show that APSCVD can be used to create functional gas sensors with thickness, heterojunction, and composition gradients. The composition gradient exhibits a response-direction inversion, with resistance increasing at room temperature and decreasing at 200 °C. Additionally, the heterojunction gradient shows a parabolically varying response across the film.
Principal component analysis of the heterojunction gradient sensor data shows that combining multiple sensors improves selectivity relative to individual sensors, as reflected by an increase in silhouette score from -0.02 to 0.38, corresponding to a transition from overlapping to distinct response clustering.

Item type: Item , Recovery and Reuse of Nanomaterials from Radically Polymerizable Thermoset Nanocomposites; Towards A Circular Economy (University of Waterloo, 2026-01-21) Rezaei, Zahra

The widespread adoption of thermoset nanocomposites has created significant end-of-life management challenges due to their permanent crosslinked networks, which resist conventional recycling methods and trap valuable nanomaterials within non-degradable matrices. This work presents a proof-of-concept study of a new approach to achieving a circular economy for thermoset nanocomposites: recovering and reusing nanomaterials by incorporating cleavable comonomers into the polymer matrix, enabling controlled matrix degradation and nanofiller recovery at end-of-life. Carbon nanotubes (CNTs) were selected as the nanofiller for this study due to their widespread use in nanocomposites and growing industrial significance, and a styrene/divinylbenzene (DVB) thermoset matrix was chosen as a model matrix for its chemical compatibility with CNTs. To enable controlled degradation at end-of-life and nanofiller recovery, comonomer additives that can install cleavable bonds into the matrix’s polymer network were systematically evaluated.
Several candidates were investigated, including a cyclic ketene acetal (CKA; specifically 2-methylene-1,3-dioxepane, MDO), which underwent hydrolysis too rapidly and exhibited an unwanted ring-retaining side reaction, making it impractical, and thionolactones (specifically dibenzo[c,e]oxepine-5(7H)-thione, DOT, and 2-(isopropylthio)dibenzo[c,e]oxepine-5(7H)-thione, 2SiPrDOT), which were limited by the monomers’ solubility in the styrene/DVB system. Through this screening process, 2SiPrDOT was selected as the most suitable option, offering both chemical stability during processing and sufficient solubility in the system. Comprehensive characterization of the primary nanocomposites using thermal gravimetric analysis (TGA), differential scanning calorimetry (DSC), electrical resistivity measurements, and hardness testing confirmed that 2SiPrDOT incorporation did not significantly alter the thermal, electrical, or mechanical properties of the material, preserving the high-performance characteristics essential for practical applications. The thermoset matrix was then deconstructed through nucleophilic degradation, allowing recovery of finely distributed CNTs from the crosslinked network. Analysis of the recovered CNTs using energy-dispersive X-ray spectroscopy (EDX), transmission electron microscopy (TEM), and Raman spectroscopy revealed no significant changes in the nanofiller’s structure or surface chemistry, demonstrating the gentle nature of the recovery process. The recovered CNTs (68.7% yield) were subsequently re-embedded into a fresh styrene/DVB matrix and polymerized. Characterization of these secondary nanocomposites with the same techniques showed properties comparable to the primary nanocomposites, confirming that nanofiller functionality is retained through the recovery and reuse cycle.
This research demonstrates that strategic incorporation of cleavable comonomers into thermoset matrices offers a viable pathway toward circularity for high-performance nanocomposites. By enabling controlled matrix deconstruction while preserving nanomaterial quality, this approach addresses both the environmental concerns associated with nanocomposite waste and the economic imperative to reclaim valuable nanomaterials. The demonstrated success with the styrene/DVB system suggests broader applicability of this methodology. As a general radical ring-opening polymerization strategy, the approach could be extended to other vinyl-based thermosets and diverse nanofillers, offering a promising foundation for developing next-generation recyclable composites across multiple industrial sectors.

Item type: Item , Categories as a Foundation for both Learning and Reasoning (University of Waterloo, 2026-01-21) Shaw, Nolan

This thesis explores two distinct research topics, both applying category theory to machine learning. The first topic discusses Vector Symbolic Architectures (VSAs), which are built to perform symbolic reasoning in high-dimensional vector spaces. I present the first attempt at formalising VSAs with category theory, along with a brief literature survey demonstrating that the topic is currently unexplored. I discuss some desiderata for VSA models, then describe an initial formalisation that covers two of the three desiderata. My formalisation focuses on two of the three primary components of a VSA, binding and bundling, and presents a proof of why element-wise operations constitute the ideal means of performing them. The work extends beyond vectors to any co-presheaves with the desired properties; for example, GHRR representations are captured by this generalisation. The second line of work discusses, and expands upon, recent work by Milewski on the construction of "pre-lenses."
This work is motivated by pre-established formalisations of supervised machine learning. From the perspective of category theory, pre-lenses are interesting because they unify the category Para, or Learn, with its dual co-Para, or co-Learn. From a computer science perspective, pre-lenses are interesting because they enable programmers to build neural networks with vanilla function composition, and they unify various network features by leveraging the fact that they are profunctors. I replicate Milewski's code, extend it to non-synthetic data (MNIST), implement re-parameterisations, and describe generative models as dual to discriminative models by way of pre-lenses. This work involved creating a simple dataloader to read in external files, randomising the order in which inputs are presented during learning, and fixing some bugs that did not manifest when training on the very small dataset used by Milewski.

Item type: Item , Duped by Dream Sellers: A Case Study of Student Immobility, Precarity, and Profit in Northern Cyprus (University of Waterloo, 2026-01-21) Lariani, Aicha

Young migrants arrive in Northern Cyprus seeking opportunity and safety through international higher education. Instead, they find themselves in a state of continuous precarity as financial benefactors sustaining the complex legal liminalities of a de facto state. What happens when systemic marketing of an affordable, internationally recognized education and work opportunities targets individuals from countries experiencing active war, extreme poverty, and political unrest? With safety concerns in their home countries, low-ranking passports, and limited international options, student migrants continue to arrive in Northern Cyprus despite its difficult living conditions. Drawing on 29 qualitative interviews, this thesis examines how student migration both responds to and sustains the political and economic structures of Northern Cyprus.
It shows how education, labour, and legality intertwine to produce a system that depends on students’ presence and their restricted mobility. By situating student migration within the political economy of an unrecognized state, this thesis contributes to empirical research on the governance of mobility and the production of precarity for non-elite, de facto refugee students, facilitated through higher education and its institutions.

Item type: Item , A Game of Urban Resilience: Playing the Social-Ecological System of a Rapidly Developing Caledon, Ontario, Canada (University of Waterloo, 2026-01-21) Zheng, Catherine

Considerable land in the Town of Caledon, located in Ontario, Canada, is protected by the Greenbelt, including sections of the Niagara Escarpment and the Oak Ridges Moraine. Caledon is also subject to an expected rapid population increase, which comes at the expense of cultural landscapes and ecosystem functions. As a rural town with agricultural industries and conservation authorities facing provincial urban development pressures, Caledon presents a case study for engaging with complex and interconnected social, ecological, and planning problems. Through the design of a serious game as a tool for community education and engagement in urban planning, this thesis investigates the relationship between sprawling urban form, ecological illiteracy, and the growing acceptance of environmental degradation. A board game modelled on Caledon’s social-ecological systems, titled Paving Paradise, has been developed to generate dialogue and address the central conflict of urban sprawl, and it positions social-ecological urbanism (SEU) frameworks as a solution. SEU is a method of urbanization that applies concepts of systems resilience to urban design, where social-ecological systems are maintained and supported through social institutions and built environments to enhance a city’s resilience.
Paving Paradise functions as a speculative planning model in which urban form is shaped by top-down policy but transformed by community resilience, adaptation, and ecological stewardship. Designed for community members, the game assigns players distinct roles and asks them to balance individual objectives with collective success to build a socially and ecologically resilient Caledon. Through play, participants engage in dialogue that fosters empathy and encourages new perspectives on the many dimensions of resilient urban growth. Games can serve as a medium for communicating and linking complex ideas in accessible ways, giving agency to individuals not formally trained in the many disciplines that shape their community. The designed game leverages the educational and engaging properties of game mechanics, communicating social-ecological systems while simultaneously fostering discussion, negotiation, and collaboration. Within this research, mapping is used to synthesize and extract Caledon’s existing social-ecological conditions, and SEU design proposals are illustrated and applied to a Caledon neighbourhood to show improved human-nature connections, ultimately forming the narrative and logistical foundation of the game. The game’s core mechanics are adapted from an existing tile-laying board game centered on territorial expansion, further informed by game theory and refined through multiple rounds of playtesting and participant interviews. Paving Paradise creates conditions for players to imagine and collaborate on a shared landscape, becoming an effective tool for collective learning and social mobilization. Paving Paradise does not aim to provide solutions or design guidelines.
Rather, it simplifies complex and interlinked ideas of policy, urban form, and social-ecological systems to offer a platform for engaging willing participants, in and out of Caledon, in creating and nurturing a resilient urban environment.

Item type: Item , Modelling of a Small Electric Aircraft Pipistrel Velis Electro (University of Waterloo, 2026-01-21) Dasari Murugappa, Lekha

Transportation electrification has become an active area of research and development in academia and industry, with a strong focus on decarbonizing the sector to move toward a more sustainable environment. As an important player in the global sustainable transportation movement, the aviation industry is also witnessing accelerated efforts toward electrification. This transition comes with many challenges in terms of battery performance, aircraft flight range, and operational safety. The development of comprehensive simulation models that replicate the behavior of an actual aircraft is therefore essential for studying the system’s overall performance. Such models provide invaluable insights into battery health, methods to extend range, and ways to improve flight missions for more efficient battery usage. This thesis develops a mathematical model of the aircraft propeller and a simulation model of the electric powertrain consisting of the battery pack, inverter, and motor. The aircraft under study is the Pipistrel Velis Electro, a two-seater, type-certified, fully electric aircraft. Two methods are proposed to model the propeller behavior: one based on the aircraft equations of motion and the other based on the motor power command. Both methods compute the thrust, motor rotational speed, and load torque for each phase of the flight using different input sets, and these outputs are supplied to the powertrain model. Two modes of operation are considered for the powertrain: an autopilot flight mode and a pilot-controlled mode, with phase detection between powered and glide phases.
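The power-command route can be sketched with only the elementary relations that propulsive power equals thrust times airspeed (T·V = η·P) and shaft power equals torque times rotational speed (Q·ω = P). All numbers below are hypothetical, and the constant propeller efficiency is an assumption for illustration, not the thesis's actual propeller model:

```python
import math

def propeller_outputs(shaft_power_w, airspeed_ms, rpm, efficiency=0.8):
    """Back out thrust, rotational speed, and load torque from a motor
    power command. Sketch only: T = eta*P/V and Q = P/omega, with a
    hypothetical constant propeller efficiency."""
    omega = rpm * 2.0 * math.pi / 60.0                  # rad/s
    thrust = efficiency * shaft_power_w / airspeed_ms   # N, from T*V = eta*P
    torque = shaft_power_w / omega                      # N*m, from Q*omega = P
    return thrust, omega, torque

# Illustrative climb-phase command: 50 kW at 35 m/s and 2300 RPM.
thrust, omega, torque = propeller_outputs(50e3, 35.0, 2300)
print(f"thrust ~ {thrust:.0f} N, torque ~ {torque:.0f} N*m")  # ~1143 N, ~208 N*m
```

Per-phase outputs like these are what the abstract describes being handed to the powertrain model.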
The simulations are validated by comparing the results with actual Velis Electro flight data obtained from the Waterloo Wellington Flight Center.

Item type: Item , JPEG-Inspired Encoding for Deep Learning (University of Waterloo, 2026-01-21) Salamah, Ahmed Hussein Abdallah Mohamed

JPEG is the dominant standard for storing and transmitting digital images, while Deep Neural Networks (DNNs) have become the preeminent method for automated image understanding. This dissertation investigates how these two ubiquitous technologies can be synergistically integrated to enhance the performance of DNNs. JPEG was originally engineered for the Human Visual System (HVS), and its default parameters are not optimized for DNNs, which process visual information differently. This suboptimality, stemming from JPEG’s default implementation, is not a fundamental limitation but rather an opportunity to adapt its core components, especially the non-linear quantization stage, for DNNs. This research addresses the suboptimality by first optimizing the trade-off between compression rate and classification accuracy, and second by introducing a learnable, end-to-end differentiable JPEG layer whose quantization parameters are jointly trained with the underlying DNN. The dissertation then demonstrates that this principle of a learnable, JPEG-inspired transformation extends beyond compression, offering a novel way to address challenges in related domains such as knowledge distillation (KD), where large 'teacher' models often overfit the training set. This overfitting causes them to generate overconfident, near one-hot probability vectors that serve as poor supervisory signals for the student model, motivating novel approaches to information transfer. The dissertation addresses these issues by systematically revisiting the relationship between JPEG encoding and deep learning.
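The core obstacle a differentiable JPEG layer must overcome is that standard quantization rounds, and rounding has zero gradient almost everywhere. One common workaround is a smooth "soft staircase" that approaches hard rounding as a temperature goes to zero; the sketch below uses a tanh staircase with hypothetical coefficients and quantization steps, and is an illustration of the general idea rather than the dissertation's actual parameterization:

```python
import numpy as np

def hard_quant(c, q):
    """Standard JPEG quantization: round coefficient / step."""
    return np.round(c / q)

def soft_quant(c, q, T=0.1):
    """Differentiable surrogate: a tanh soft staircase that approaches
    np.round(c / q) as the temperature T -> 0. Sketch only."""
    x = c / q
    f = np.floor(x)
    return f + 0.5 + 0.5 * np.tanh((x - f - 0.5) / T)

coeffs = np.array([13.7, -4.2, 0.9])   # hypothetical DCT coefficients
q = np.array([16.0, 11.0, 10.0])       # hypothetical quantization steps

print(hard_quant(coeffs, q))           # quantized levels: 1, -0, 0
print(soft_quant(coeffs, q, T=1e-3))   # nearly identical at low temperature
# Unlike np.round, the surrogate has a usable gradient (finite difference):
eps = 1e-4
g = (soft_quant(coeffs + eps, q, T=0.5) - soft_quant(coeffs - eps, q, T=0.5)) / (2 * eps)
print(g)   # nonzero everywhere, so the step sizes q can be trained end-to-end
```

Because the surrogate is smooth in both the coefficients and the steps, quantization parameters can sit inside the network and receive gradients like any other weights.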
It charts a logical progression from adapting JPEG externally for DNNs, to integrating it internally as a learnable network component, and finally to repurposing its core principles to amplify knowledge transfer. This progressive framework is methodically developed and empirically substantiated through three interconnected contributions:

- Optimizing Compression for DNNs. To improve the interaction between standard JPEG and pre-trained DNNs, this work first reframes compression from a human-centric "rate-distortion" problem to a DNN-centric "rate-accuracy" one. This is achieved by introducing the Sensitivity Weighted Error (SWE), a novel distortion measure derived from a DNN’s loss sensitivity to frequency-domain perturbations, where higher sensitivity in a frequency band indicates its greater importance for the DNN’s decision-making. The SWE guides the OptS algorithm to generate model-specific JPEG quantization tables. This approach produces fully compliant JPEGs optimized for DNN consumption, demonstrably improving the rate-accuracy trade-off by increasing accuracy by up to 2.12% at the same rate, or enabling rate reductions of up to 67.84% with no loss of model accuracy.

- Integrating a Differentiable JPEG Layer into the DNN Architecture. Building on this, the next contribution integrates the codec into the network architecture itself via the JPEG-Inspired Deep Learning (JPEG-DL) framework, which introduces a novel, end-to-end differentiable JPEG layer. By replacing JPEG's standard hard quantization with a differentiable alternative, this layer's parameters are jointly optimized with the network's weights. This transforms the JPEG pipeline from a static pre-processor into a dynamic, learnable component, significantly improving model accuracy (by an average of 7% on fine-grained classification tasks with only 128 additional trainable parameters) and enhancing robustness against adversarial attacks.

- Amplifying Knowledge Transfer via JPEG-Inspired Perturbation.
Finally, the differentiable layer is repurposed to address the "overconfident teacher" problem in KD by perturbing teacher inputs to force softer, more informative predictions. Crucially, this method requires no retraining or modification of the fixed teacher model, ensuring its practical utility with proprietary or deployed networks. The investigation begins with Coded Knowledge Distillation (CKD), a practical heuristic that uses adaptive JPEG compression to perturb teacher inputs and soften their overconfident predictions. While effective, this approach prompted a search for a more principled theoretical foundation, leading to Generalized Coded Knowledge Distillation (GCKD), a framework that establishes maximization of the teacher's Conditional Mutual Information (CMI) as the core objective. However, directly optimizing CMI on a per-input basis is computationally prohibitive. This efficiency challenge is resolved in the culminating synthesis, Differentiable JPEG-based Input Perturbation (DJIP). DJIP operationalizes the GCKD theory by deploying the trainable differentiable JPEG layer as a fast, learnable, amortized operator. Instead of performing a slow, per-input optimization search, the layer is trained once to automatically generate CMI-maximizing perturbations, making the process highly efficient. This approach demonstrably generates richer supervisory signals, boosting student model accuracy by up to 4.11%. In conclusion, this dissertation demonstrates that the relationship between JPEG and DNNs can be systematically revisited to create a powerful synergy. By progressing from adaptation to integration and synthesis, this work transforms the suboptimal default interaction of JPEG and DNNs into a versatile architectural tool. The research delivers a suite of methods that not only improve the performance of DNNs on compressed images but also offer a theoretically grounded solution to a key challenge in knowledge distillation.
By demonstrating that legacy codecs can be repurposed to enhance model accuracy, efficiency, and knowledge transfer, this work reframes the role of classical codecs, proposing JPEG-inspired encoding as a principled foundation for the integration of classical compression and deep learning.

Item type: Item , Field-Theoretic Simulations of Binary Blends of Complementary Diblock Copolymers (University of Waterloo, 2026-01-21) Willis, James

The phase behavior of binary blends of AB diblock copolymers of compositions f and 1 − f is examined using field-theoretic simulations. Highly asymmetric compositions (i.e., f ≈ 0) behave like homopolymer blends, macrophase separating into coexisting A- and B-rich phases as the segregation is increased, whereas more symmetric diblocks (i.e., f ≈ 0.5) microphase separate into an ordered lamellar phase. In self-consistent field theory, these behaviors are separated by a Lifshitz critical point at f = 0.2113. However, the lower critical dimension of the Lifshitz point is believed to be four, which implies that it should be destroyed by fluctuations. Consistent with this, it is found to transform into a tricritical point. Furthermore, the highly swollen lamellar phase near the mean-field Lifshitz point disorders into a bicontinuous microemulsion (BμE), consisting of large, interpenetrating A- and B-rich microdomains. A BμE has been previously reported in ternary blends of AB diblock copolymers with the parent A- and B-type homopolymers, but in that system the homopolymers have a tendency to macrophase separate. Our alternative system for creating a BμE is free of this macrophase separation.

Item type: Item , Mass Timber High-Rises: Integrating Form, Structure, and Dwelling Typologies (University of Waterloo, 2026-01-21) Babalola, Oluwatobiloba Oluwaseun

This thesis explores mass timber not only as a sustainable material, but as a spatial and conceptual framework for reimagining vertical urban housing.
It treats mass timber as massing, a modular and volumetric system that organizes structure, form, and inhabitation through stacking, subtraction, and spatial play. Moving beyond material or structural efficiency, the project frames mass timber, through a grid- and module-based kit of parts, as both constraint and opportunity: an architectural language for adaptable, community-oriented high-rise housing that responds to the environmental and social challenges of urban living. Drawing inspiration from Adrian Wong’s explorations of modular systems and spatial adaptability, the research adopts a process of modular arrangement, akin to assembling and rearranging blocks, in which modular volumes are layered and reconfigured to generate diverse typologies and shared communal spaces. The project asks: how can the modular logic of mass timber inspire new forms of high-rise housing that balance environmental responsibility with social and spatial richness? The study focuses on how a repetitive volumetric modular unit can be transformed into lively, varied living environments through deliberate acts of aggregation and void-making via subtractive and additive massing. In addressing Canada’s housing crisis and the global demand for low-carbon, rapidly deployable construction, this thesis positions mass timber’s prefabricated modularity as a key strategy for delivering affordable, efficient, low-embodied-carbon housing that also opens diverse spatial possibilities. Its lightweight nature reduces on-site labor, and the capacity for off-site fabrication enables faster assembly, minimal waste, and lower emissions compared to conventional concrete or steel systems.
Through digital modeling and speculative design studies using Autodesk Revit, the research develops a catalogue of spatial strategies that demonstrate how mass timber’s modular volume can act as both structure and a medium for spatial play, producing architecture that is sustainable, adaptable, and deeply human, uniting environmental performance with expressive form and social value.

Item type: Item , Decoding QAnon: Building an Adaptive Alternative Reality at the Crossroads of American Conspiracism, Cultic Commodification, and Schizogenic Hyperreality (University of Waterloo, 2026-01-21) Martin, Chris

QAnon has grown beyond a single conspiracy theory to become a self-perpetuating conspiracist alternative reality, one whose impact on the American political and cultural landscape will long outlive the influence of its cryptic figurehead. As bizarre as the practices of QAnon and its decoding rituals may seem, this dissertation argues that QAnon is a reflection of the techno-cultural milieu of its creation, an emergent consequence of the intersection of three key techno-cultural trends: America’s deeply entrenched cultural tradition of conspiracist narrativization, the commodification of culture under neoliberalism, and the predatory affordances of corporate media platforms optimized for the attention economy. Drawing on an array of interdisciplinary research and discursive examples taken directly from the QAnon community, this dissertation presents a framework that explains QAnon’s viral success within the American techno-cultural context and offers insight into the ongoing renaissance in hyper-individualistic reactionary conspiracism that QAnon has catalyzed. Only by understanding how these three trends have mutually reinforced and influenced one another can we begin to understand QAnon’s uniquely protean narrative structure and decipher the symbolic map of cultural dysfunction it represents.