Mechanical and Mechatronics Engineering
Permanent URI for this collection: https://uwspace.uwaterloo.ca/handle/10012/9912
This is the collection for the University of Waterloo's Department of Mechanical and Mechatronics Engineering.
Research outputs are organized by type (e.g., Master's Thesis, Article, Conference Paper).
Waterloo faculty, students, and staff can contact us or visit the UWSpace guide to learn more about depositing their research.
Recent Submissions
Use of atmospheric pressure spatial chemical vapor deposition to create spatially variant metal oxide semiconductor films for use in gas sensing arrays (University of Waterloo, 2026-01-21). Saini, Agosh Singh.

Manufacturing gas sensor arrays is a key roadblock to commercially viable electronic nose systems, as sensor arrays require large numbers of unique sensors. Atmospheric-pressure spatial chemical vapor deposition (APSCVD) is a fabrication method that can lower manufacturing costs. In this thesis, APSCVD is used to create gradients of sensing materials, which are then used to create an array of sensors with unique physical properties. The materials explored are SnO₂ thickness gradients, SnO₂/Cu₂O heterojunction gradients, and zinc-tin-oxide composition gradients. These materials are created using a combination of a stainless steel atmospheric-pressure spatial atomic layer deposition reactor head and a custom APSCVD reactor head designed to create metal-oxide-semiconductor thin films with physical property gradients. The custom APSCVD reactor head implements a substrate-reactor spacing gradient to achieve physical property gradients, building on previous work showing that tilting a stainless steel reactor head produces a thickness gradient [1], [2]. The heterojunction gradient consists of a uniform Cu₂O layer with a thickness of ~103 nm and a SnO₂ layer with a thickness gradient from ~22 nm to ~12 nm, measured using ellipsometry. The ellipsometry thickness measurements show an R² value of 0.95. Energy-dispersive x-ray spectroscopy measurements of the composition gradient film show the tin-to-zinc ratio ranging from 0.86 to 0.21 with an R² value of 0.96. The fabricated gradient films are converted to sensors using photolithography: interdigitated electrodes are fabricated on the top surface, and chips with 8 sensors are placed on chip carriers.
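The linearity of these gradients is summarized by the R² values above. A minimal sketch of such a least-squares line fit and its R² follows; the position and thickness numbers are hypothetical illustrations, not the thesis's measured data.

```python
# Least-squares line fit and R^2 for a property gradient along a substrate.
# The position/thickness values below are illustrative, not measured data.

def linear_fit_r2(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Hypothetical SnO2 thickness readings (nm) across a 40 mm substrate:
positions = [0, 10, 20, 30, 40]             # mm
thickness = [22.0, 19.4, 17.1, 14.3, 12.2]  # nm
slope, intercept, r2 = linear_fit_r2(positions, thickness)
print(f"slope = {slope:.3f} nm/mm, R^2 = {r2:.3f}")
```

An R² near 1 indicates the deposited property varies almost linearly with position, which is what makes a single tilted or spaced reactor head useful for producing an array of distinct sensors in one pass.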
A custom gas sensor testing system is created to continuously run experiments and generate response data. The test system consists of control software for heating, an Arduino-based relay for recording up to 8 sensors at a time, and mass flow controllers that automatically adjust to cycle through different experiments and analytes. Ethanol, isopropyl alcohol, acetone, and water are used as analytes. The recorded data show that APSCVD can be used to create functional gas sensors with thickness, heterojunction, and composition gradients. The composition gradient exhibits a response-direction inversion: resistance increases at room temperature and decreases at 200 °C. Additionally, the heterojunction gradient shows a parabolically varying response across the film. Principal component analysis of the heterojunction gradient sensor data shows that combining multiple sensors improves selectivity relative to individual sensors, as reflected by an increase in silhouette score from -0.02 to 0.38, corresponding to a transition from overlapping to distinct response clustering.

Robust 4D Millimeter-Wave Radar Perception in Adversarial Environments (University of Waterloo, 2026-01-21). Liu, Zhenan.

This thesis investigates the robustness of 4D mmWave radar perception for autonomous driving, emphasizing real-time, point-cloud-based object detection in adverse and enclosed environments. Unlike conventional radar studies that rely on range-Doppler or heatmap representations, this work uses the native 4D radar point cloud as the sole sensing modality. This design enhances compatibility with modern 3D perception architectures, reduces computational overhead, and enables seamless integration within existing autonomous driving stacks.
The study begins with a comprehensive analysis of perception sensing modalities (camera, lidar, and radar) to contextualize their relative strengths, limitations, and degradation mechanisms under visibility-challenged conditions. A system-level characterization of 4D radar measurements is presented, highlighting their unique spatio-temporal properties, the preprocessing pipeline, and the effects of dust, multipath interference, and metallic reflections in operational environments. Two complementary perception pipelines are developed. The first, a model-driven approach, integrates adaptive noise filtering, unsupervised clustering, and rule-based 3D classification. It demonstrates strong real-time performance in harsh indoor environments but reveals a limitation inherent to Doppler-reliant sensing: the inability to detect fully static pedestrians. The second, a learning-based framework, adapts lidar-style 3D detectors through a radar pillar feature encoder, enabling effective pretraining on public datasets and fine-tuning on custom indoor scenarios. The fine-tuned model achieves a substantial gain in pedestrian detection accuracy, confirming the advantage of data-driven radar perception. Together, these results establish a unified and robust framework for standalone 4D mmWave radar perception, illustrating both its feasibility and its remaining challenges toward deployment in safety-critical autonomous and industrial applications.

Efficient Learning for Large Language Models (University of Waterloo, 2026-01-20). Rajabzadeh, Hossein.

Artificial Intelligence (AI) systems have become indispensable across domains such as healthcare, finance, robotics, and scientific discovery. At the heart of this revolution, Large Language Models (LLMs) have emerged as the central paradigm, demonstrating remarkable reasoning, generalization, and multi-domain adaptability.
However, their exponential growth in scale introduces severe computational bottlenecks in training, fine-tuning, and inference, limiting accessibility, sustainability, and real-world deployment. This dissertation advances the efficiency of LLMs across all lifecycle stages by introducing a suite of five frameworks that significantly reduce compute, memory, and latency costs with minimal or no loss in accuracy. First, Quantized Dynamic Low-Rank Adaptation (QDyLoRA) enables memory-efficient fine-tuning across multiple LoRA ranks in a single training pass, achieving performance competitive with QLoRA while reducing GPU memory usage by up to 65% and supporting flexible rank selection at inference time. Second, Sorted-LoRA introduces a stochastic-depth-aware fine-tuning framework that co-trains multiple sub-models of varying depths within a single cycle. On LLaMA2-7B, it produces sub-models up to 40% smaller that retain over 98% task accuracy, with the largest variant even surpassing the base model by +0.34%. Third, LoRA-Drop accelerates autoregressive inference by dynamically substituting computationally redundant layers with lightweight low-rank modules during decoding. It delivers up to 2.6× faster decoding and a 50% reduction in KV-cache memory with less than 0.5% degradation in accuracy, offering latency-aware adaptability for real-world deployment. Fourth, EchoAtt exploits redundancy in attention maps by sharing attention matrices among similar layers. On TinyLLaMA-1.1B, it achieves 15% faster inference, 25% faster training, and a 4% parameter reduction while improving zero-shot accuracy, highlighting that structural compression can enhance rather than degrade model generalization. Finally, ECHO-LLaMA introduces cross-layer Key-Value (KV) and Query-Key (QK) sharing to reduce redundant attention computation.
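The low-rank idea shared by QDyLoRA and LoRA-Drop can be sketched in plain Python: a frozen weight matrix W of size d×k is adapted by adding a rank-r product B·A, so only r·(d+k) parameters are trained instead of d·k. All dimensions below are illustrative assumptions, not values from the dissertation.

```python
# LoRA sketch: the frozen weight W stays fixed; only the rank-r factors
# A (r x k) and B (d x r) are trained, and the adapted layer uses W + B @ A.
# Dimensions and initialization scale are illustrative.
import random

def matmul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

d, k, r = 64, 64, 4                      # hypothetical layer size and rank
random.seed(0)
W = [[random.gauss(0, 0.02) for _ in range(k)] for _ in range(d)]  # frozen
B = [[0.0] * r for _ in range(d)]        # B = 0 so W + BA == W at start
A = [[random.gauss(0, 0.02) for _ in range(k)] for _ in range(r)]

delta = matmul(B, A)                     # rank-r update, shape d x k
full_params = d * k                      # parameters in a dense update
lora_params = r * (d + k)                # parameters actually trained
print(f"trainable: {lora_params} vs dense update: {full_params}")
```

With these toy dimensions the trainable count drops from 4096 to 512, an 87.5% reduction, which is the mechanism behind the memory savings the frameworks above exploit.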
This approach achieves up to 77% higher token-per-second throughput during training, 16% higher Model FLOPs Utilization (MFU), and 7% higher test-time throughput, while preserving language modeling performance. On the mechanical-domain RoboEval benchmark, ECHO-CodeLLaMA-7B boosts average accuracy from 62.15% to 63.01% with only 50% KV sharing, confirming its robustness in domain adaptation. Together, these contributions form a coherent research program on the efficiency of large-scale Transformers. They demonstrate that intelligently exploiting representational redundancy (through quantization, low-rank structure, cross-layer sharing, and adaptive computation) can yield substantial compute savings with minimal trade-offs.

Effects of Stabilizing Binder on the Formability, Microstructure, and Mechanical Performance of Wet Compression Molded Unidirectional Non-Crimp Fabric Composites (University of Waterloo, 2026-01-16). Miranda Portela, Renan.

Wet Compression Molding (WCM) with highly reactive resins is a manufacturing process capable of high-volume production that has recently gained interest in the automotive industry as an alternative to traditional methods for producing structural components. These components are subject to high loads and may experience impact loads during service; therefore, to achieve the desired mechanical properties and performance requirements, they may require several layers and a significant amount of resin. For typical WCM processes utilizing molds with deep cavities, resin management can be challenging because the fabric stack may drape prematurely under the mass of the resin; binder-stabilized fabric can overcome this problem by enhancing the fabric bending stiffness.
While the influence of stabilizing binder on the permeability of various fabrics and the flow characteristics of different resins has been previously studied, its impact on void formation and mechanical performance is less understood. This study uses physical experiments to examine the effects of stabilizing binders on the intra-ply draping mechanisms of wet, unidirectional, non-crimp fabric (UD-NCF), as well as on the microstructure and mechanical performance of a UD-NCF composite fabricated via WCM from PX35-UD300 carbon fiber fabric and EPIKOTE 06150 snap-cure epoxy resin. The objectives are to investigate the influence of the stabilizing binder on the formability of infiltrated carbon-fiber UD-NCF (including membrane behaviour, bending, and compaction), to examine its effects on the microstructure and mechanical performance of UD-NCF composites manufactured via WCM, and to assess its impact on energy-absorption performance. For Objective 1, the UD-NCF carbon fiber was characterized through a series of physical experiments: an infiltrated bias-extension test setup was used to analyze the membrane mechanism, a rheometer bending test setup to examine the bending mechanism, and a punch-to-plate setup to study the compaction mechanism. Fabric infiltration was found to influence the membrane and bending behaviour by reducing the friction between the carbon fiber and the stitching yarns, which decreased the membrane and bending stiffness by up to 30%. However, impregnation had no significant impact on the compaction response due to the low friction of the carbon fibers.
In contrast to fabric impregnation, pre-activation of the stabilizing binder was found to affect all three draping mechanisms by increasing fiber/fiber and fiber/yarn friction, thereby increasing membrane stiffness by up to 100% and bending stiffness by up to 50%. For Objective 2, flat UD-NCF composite panels were fabricated by WCM to examine how the stabilizing binder and its state, as well as the application of vacuum to the mold and its duration, influence void formation and mechanical properties. The use of binder-stabilized fabrics decreased the void content of WCM parts by up to 70%, likely due to reduced relative layer movement and lower air entrapment. The void size decreased further when a vacuum was applied to the mold for more than 20 seconds, partially removing air inside the mold; this reduction in void size led to an increase in interlaminar shear strength. Additionally, applying a vacuum enhanced preform compaction, resulting in more consistent panel thickness and a higher fiber volume fraction (FVF). For Objective 3, UD-NCF composite hat channels were fabricated by WCM to examine the influence of the stabilizing binder and vacuum application on energy absorption during axial crush experiments. The use of binder-stabilized fabric and vacuum increased energy absorption; however, the increase was not statistically significant, possibly due to the high FVF of the components. The WCM hat channels achieved energy absorption levels comparable to those of similar hat channels comprising the same constituents and ply stacking sequence but fabricated by high-pressure resin transfer molding (HP-RTM) in a previous study, and they showed a similar brittle fracture failure mode. The main results of this investigation include a new dataset on the viscous draping mechanisms of binder-stabilized UD-NCF.
Additionally, mechanical tests provided strong evidence of the influence of the stabilizing binder on the mechanical performance of the UD-NCF composites. These results indicate the feasibility of using the WCM process as an alternative to HP-RTM in the manufacturing of structural components.

Development Workflow Generation Methodology Applied to a Propulsion Supervisory Controller for Battery Electric Vehicles (University of Waterloo, 2026-01-16). Rofiq, Henri.

The increasing integration of software in modern vehicles has transformed the automotive industry, enabling advanced functionality across the domains of safety, performance, and user experience. However, the design and development of vehicle control systems is a complex process that requires familiarity with specialized tools and validation practices. These skills are typically not taught at university; this thesis therefore presents a comprehensive methodology for generating and implementing a control logic development workflow. The methodology is demonstrated through its successful application to the design of a Propulsion Supervisory Controller (PSC) for deployment in a Cadillac LYRIQ, developed as part of the EcoCAR EV Challenge (EVC). The proposed workflow provides a structured approach to software tool and hardware selection, requirements generation, software design principles, testing strategies, and codebase maintenance. Applying this workflow produced the UWAFT controls development methodology, which uses the MathWorks (MATLAB/Simulink) toolchain and Speedgoat hardware; the team developed software following a "pipes and filters", layered, component-based control architecture. UWAFT employed Agile-hybrid principles for the comprehensive development of requirements, which originate from supplier documentation, team goals, and safety analyses.
Finally, software was integrated using version control via Git, with an emphasis on comprehensive verification, including extensive "X-in-the-loop" (XIL) testing. Application of this methodology enabled UWAFT to achieve consistent, high-quality software development under resource constraints, leading to successful deployment and validation of vehicle control features such as torque management and directional control. The generated software also contributed to success at year-end competitions, where the PCM team achieved a 3rd-place finish. Beyond technical outcomes, the workflow improved collaboration, documentation, and onboarding within the student team, bridging the gap between academic learning and industry-standard experience. An assessment of limitations and areas for future improvement is presented, including enhanced CI/CD automation, cross-project integration, and adaptation of the workflow for internal combustion architectures. Overall, this research contributes a modular and educationally valuable framework that can be adopted by student design teams and research groups to produce reliable automotive control software.

Influence of Boundary Conditions on the Sheared Edge Fracture Limits of a 3rd Generation Advanced High Strength Steel (University of Waterloo, 2026-01-12). Advaith Narayanan.

A fundamental trade-off between strength and ductility exists in advanced high strength steels (AHSS), particularly for sheared edge splitting in automotive forming operations. The widely used ISO 16630 conical hole expansion test for edge stretchability is known to be a poor representation of the in-plane deformation modes that are the primary source of edge splitting in stamping, leading to an overestimation of formability in virtual tryouts.
Additionally, virtual experiments rely on a single input fracture strain value to predict edge cracking in stamped parts, disregarding the effects of deformation mode and element size. An efficient and reliable modeling approach for edge failure is required that does not demand simulation of the shear cutting process. The present work addresses these challenges through four interrelated tasks aimed at developing guidelines for efficiently characterizing anisotropic plasticity behavior and edge fracture limits, to support reliable experimental assessment and finite-element modelling of sheared edge fracture in practical forming applications. Efficient strategies for anisotropic plasticity characterization of sheet materials are needed to accurately simulate the various tensile edge stretching modes, which range from splitting without necking to potential localization before fracture. To this end, baseline plasticity characterization of four approximately pressure-independent sheet materials with varying ductility and anisotropy (the aluminum alloys AA5182-O and AA7075-T6, and the steels DC04 and 980GEN3) was performed using uniaxial tensile tests in multiple orientations. Using digital image correlation (DIC), the area strain at the neck center was monitored to measure the flow stress response to strain levels more than twice the uniform elongation, with the added advantage of probing anisotropic hardening effects. A hybrid inverse analysis procedure was further developed and applied to notch tensile tests to obtain the major stress under plane strain tension while constraining the minor-to-major principal stress ratio to remain near 1:2. Anisotropic yield functions were subsequently calibrated using data from a range of stress states, with emphasis on plane strain tension.
The calibrated yield functions and hardening responses were shown to accurately reproduce both the local and global behavior in flat punch hole expansion tests, which activate a wide range of tensile-dominated stress states. Flat punch hole expansion simulations using yield functions calibrated without plane strain data consistently deviated from the DIC in-plane strain magnitudes, with absolute differences of up to 15% for DC04 steel. The proposed methods provide general guidelines for efficient calibration of anisotropic constitutive models for approximately pressure-independent materials that remain accurate to large deformation levels. Next, the mechanics of the conical hole expansion test were examined to assess the role of necking and anisotropy and to develop methodologies for fracture strain estimation. Finite-element (FE) models of the test were created in LS-DYNA software for two AHSS grades with differing plastic strain anisotropies using hexahedral solid elements. An analysis of through-thickness stress and strain gradients from the numerical models revealed that localization is suppressed until a hole expansion ratio (HER) of 200%, with the outer hole edge exhibiting a proportional uniaxial tensile stress state. Any non-uniformity in hole shape or thickness around the circumference of the extruded hole was found to be a manifestation of the tensile plastic strain anisotropy distribution, not necking. The HER was found to be suboptimal for quantifying edge stretchability, since the inner hole edge undergoes a non-linear strain path transitioning from compression to uniaxial tension. Furthermore, when the HER was used as a fracture metric, the local outer hole edge element strains from FE simulations were underpredicted, with absolute differences of up to 10%.
An analytical technique was proposed to obtain the local major fracture strain from conical hole expansion using the outer hole diameter measured at the crack location, with the equivalent failure strain then obtained using plastic work equivalence. The strains obtained using the proposed method were in excellent agreement with the elemental strains from numerical models, with a maximum difference of 4% for the highly anisotropic CP800, confirming the method's suitability for fracture strain measurement. Subsequently, a novel four-point fixture and specimen geometry that promotes failure under in-plane bending was developed to characterize the uniaxial fracture limits of moderate-ductility materials. The in-plane bending mode is also representative of edge splitting at peripheral regions of stamped parts. Techniques to detect the onset of fracture and accurately measure the edge strains from the in-plane bend tests were proposed that are applicable to a wide range of material ductilities. The uniaxial fracture strain measured in the in-plane bend test conducted with a machined edge was found to agree closely with the conical hole expansion true fracture strain of 0.68 for the 3rd-generation advanced high strength steel 980GEN3. The in-plane bend test also showed promise for plastic strain anisotropy characterization under uniaxial tension and compression to strain levels much larger than the material's uniform elongation. A gauge height-to-thickness ratio of 4.0 or lower is recommended as a specimen design guideline to mitigate buckling, based on a comprehensive experimental study conducted on multiple materials and thicknesses. Finally, the influence of loading conditions on the sheared edge fracture limits of 980GEN3 steel punched with a 5.0 mm hole and 12% clearance was investigated using five different test methods that imposed different stress and strain gradients in the vicinity of the sheared edge.
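The diameter-based fracture strain estimate described above can be illustrated with a short sketch. Assuming the outer-edge major strain equals the hoop strain of the expanding hole, the true strain is ln(d_f/d_0), while the conventional hole expansion ratio is (d_f − d_0)/d_0; the diameters below are hypothetical, chosen only so the strain lands near the 0.68 value reported for 980GEN3.

```python
import math

def hole_expansion_metrics(d0_mm, df_mm):
    """Hole expansion ratio and true (logarithmic) hoop strain at the edge.

    Assumes the outer-edge major strain equals the hoop strain of the
    expanding hole; the diameters passed in are illustrative, not thesis data.
    """
    her = (df_mm - d0_mm) / d0_mm        # often reported as a percentage
    eps_major = math.log(df_mm / d0_mm)  # true major (hoop) strain
    return her, eps_major

her, eps = hole_expansion_metrics(10.0, 19.7)  # hypothetical diameters
print(f"HER = {her:.0%}, true major strain = {eps:.3f}")
```

This also shows why HER and true strain are not interchangeable metrics: HER grows linearly with diameter while the logarithmic strain saturates, so the two rank materials differently at large expansions.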
A convergent fracture strain value of approximately 0.30 was observed across the in-plane edge fracture tests, with the conical hole expansion test exhibiting a higher strain of 0.45 due to out-of-plane deformation and fracture being defined at through-thickness cracking. Differences in fracture strains between the in-plane tests were also magnified by the choice of DIC lengthscale, or virtual strain gauge length, reflecting each test's varying sensitivity to DIC strain averaging. Global stretchability metrics were proposed for each deformation mode, enabling edge crack assessment in industrial applications without the need for DIC. The global edge stretch metrics were also found to inform the appropriate choice of DIC lengthscale for design and FE modelling. FE simulations of the edge fracture tests were then conducted using multiple mesh sizes, revealing that a boundary condition dependence can also manifest in simulations, with the added influence of lengthscale sensitivity. The predicted major strains at the experimental fracture instant varied with mesh size, suggesting that a single strain value may be insufficient to describe edge fracture. The elemental thinning strain showed reduced dependence on mesh size, making it a more reliable parameter for assessing edge fracture in simulations. Importantly, the simulations indicated that the edge fracture strain cannot be represented by a unique value but is rather a function of the imposed loading condition. The in-plane stretching mode exhibited the lowest engineering thinning strain limit of 8.8%, making it the critical deformation mode for edge crack initiation in 980GEN3 steel. A key outcome of this work is a quantitative understanding of the effects of boundary conditions and lengthscale on edge fracture limits. Prediction of sheared edge fracture must account for both the imposed loading and the numerical lengthscale, with thinning strain offering a more robust metric for use in simulations.
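The engineering thinning strain limit quoted above can be related to its true-strain counterpart through the standard definitions e = (t₀ − t)/t₀ and ε = ln(t₀/t) = −ln(1 − e); the conversion below is a generic sketch of that relation, not a calculation from the thesis.

```python
import math

def true_thinning_strain(eng_thinning):
    """Convert engineering thinning strain e = (t0 - t)/t0
    to true thinning strain ln(t0/t) = -ln(1 - e)."""
    return -math.log(1.0 - eng_thinning)

# The 8.8% engineering thinning limit reported for in-plane stretching:
e_limit = 0.088
print(f"true thinning strain = {true_thinning_strain(e_limit):.4f}")
```

At small thinning levels the two measures nearly coincide (here roughly 0.088 vs 0.092), but the distinction matters when comparing against simulation outputs that report true strains.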
The developed methodologies provide practical and efficient guidelines that can be implemented in industrial environments for edge crack assessment and prediction in stamping simulations.

Smart Light Therapy Glasses for Sleep, Cognition, and Mental Wellness (University of Waterloo, 2025-12-23). Tang, Lucas.

For the past century, people have gradually transitioned to spending their time indoors where, compared with the outdoors, light levels are significantly lower during the day and higher at night. This lighting mismatch has been associated with health problems such as rising depression rates and sleep issues. As such, bright-light therapy has been established as a treatment for mood and sleep disorders. However, existing devices face major barriers to adoption: light boxes require people to stay in one place for a long time, and wearable products often lack social acceptability. As a result, the research literature has been constrained to short interventions with limited exploration of dose, duration, and individualized understanding of response to light. This thesis presents the design, engineering, and clinical evaluation of a pair of smart light therapy glasses (Lumos glasses) developed to improve convenience, social acceptability, and comfort. The hardware provides the foundational infrastructure for an intelligent, data-driven approach to personalized circadian health. The glasses use a nanotechnology lens with wavelength-selective reflection, which allows key light therapy components to be compacted into a classic glasses shape. A hardware-software platform was developed featuring calibrated light therapy optical systems, on-device sensors for reliable wear detection, melanopic ambient light detection, an FCC-approved Bluetooth module with a custom antenna, and a compatible cloud-connected mobile app. More than 100 units were manufactured and deployed in a double-blind, randomized, placebo-controlled crossover clinical trial.
Participants receiving bright light showed significant improvements in sleep disturbance (PROMIS) and psychomotor vigilance (PVT) relative to an active dim-light control. Stratified analyses revealed that participants with darker eyes generally exhibited greater improvements under bright light than under the dim-light control. In contrast, mood (Montgomery-Åsberg Depression Rating Scale) and working memory (word-pair recall) reached statistical significance only after accounting for eye color. Exploratory models also showed that daily use and exposure to ambient light were correlated with improved outcomes, while higher baseline severity strongly predicted room for change. Age and sex contributed smaller, secondary effects. For working memory, dark-eyed participants showed significant Phase 1 gains under bright light, while other subgroups demonstrated positive associations between daily usage and recall performance. These findings indicate that adherence and individual pigmentation influence optimal light dosage. This work demonstrates that mobile, sensor-driven light therapy can overcome long-standing adherence barriers and enable in-depth research into dose-response and personalization. By combining engineering innovation with clinical validation, the Lumos Smart Glasses provide a foundation for next-generation circadian health technologies that are practical, effective, and scalable.

On the origins of prompt features in time-resolved laser-induced incandescence measurements of metal and carbonaceous nanoparticles (University of Waterloo, 2025-12-22). Robinson-Enebeli, Stephen.

Synthetic nanoparticles have become highly beneficial in many applications, including catalytic conversion, enhancing the functionality of electronic devices, targeted drug delivery in medicine, and purifying water through the removal of bacteria and heavy metals.
Nanoparticles are often produced through gas-phase synthesis, where they form in a bath gas, resulting in a nanoparticle aerosol. Such aerosols are also unintentionally emitted by processes such as welding or combustion. The benefits and negative impacts of nanoaerosols depend significantly on their properties, such as particle size and concentration. Laser- and optical-based characterization techniques can provide such information in an in situ and time-resolved manner. Time-resolved laser-induced incandescence (TiRe-LII) is a widely used laser-based diagnostic for soot characterization and is increasingly being applied to non-carbonaceous nanoparticles. The technique involves heating nanoparticles in an aerosol to incandescent temperatures with a laser and recording their radiative emissions with a photodetector as they cool to the temperature of the bath gas. Nanoparticle properties such as size and concentration are obtained from temporally and spectrally resolved measurements through inference techniques that regress a TiRe-LII instrument model to the data. Accurate and reliable inference of nanoaerosol properties relies on the robustness of the instrument model. Unfortunately, previously developed models do not fully describe experimental observations, and the reported discrepancies need to be reconciled to improve fundamental understanding, modeling capabilities, and ultimately measurements. These discrepancies include effects previously termed excessive absorption and anomalous cooling: excessive absorption is observed when measured particle temperatures exceed model predictions, while anomalous cooling occurs when the measured particle cooling rate, immediately following the peak-temperature phase, is faster than the model predicts.
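Anomalous cooling is diagnosed by comparing a measured decay against a modeled one. As a purely illustrative toy (not the TiRe-LII instrument model, which resolves conduction, evaporation, and radiation in detail), a lumped-capacitance particle cooling toward the bath gas follows T(t) = T_g + (T₀ − T_g)·exp(−t/τ), with the time constant τ growing with particle diameter in the free-molecular conduction regime (heat capacity scales with volume, heat loss with surface area); all temperatures and time constants below are hypothetical.

```python
import math

def cooling_curve(T0, Tg, tau_ns, t_ns):
    """Toy exponential cooling of a laser-heated particle toward the bath gas.

    Illustrative only: real TiRe-LII models resolve conduction, evaporation,
    and radiation; tau here is a single lumped time constant that scales
    with particle size.
    """
    return Tg + (T0 - Tg) * math.exp(-t_ns / tau_ns)

T0, Tg = 3200.0, 300.0                       # K, hypothetical peak and bath
for d_nm, tau in [(10, 40.0), (30, 120.0)]:  # tau ~ diameter (toy scaling)
    T_200 = cooling_curve(T0, Tg, tau, 200.0)
    print(f"d = {d_nm} nm: T(200 ns) = {T_200:.0f} K")
```

Because larger particles cool more slowly in this picture, a measured decay that is faster than the size-consistent prediction is exactly the kind of data-model mismatch the thesis labels anomalous cooling.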
This thesis addresses these reported data-model discrepancies, providing insight into laser-nanoparticle interactions in the context of TiRe-LII and the impact of certain experimental conditions. In particular, it is shown that for metal nanoparticle aggregates, radiative properties are enhanced compared to isolated metal nanoparticles, and laser energy absorption becomes spatially nonuniform within aggregates. Under laser heating, the primary nanoparticles may melt; subsequently, the aggregates may partially sinter or coalesce, further altering their radiative properties as a function of time, phenomena that do not occur in the case of soot. Furthermore, the existence of nanoparticles of different sizes within the aerosol and spatial energy variations across the irradiating laser sheet influence the data in ways that are not accounted for in current TiRe-LII instrument models. The investigations combined theoretical modeling and experimental work. The modeling utilized several light absorption models to explore the light-matter interactions between nanoaerosols and the electromagnetic field of the laser. The experimental component employed calibrated, time- and spectrally-resolved detection techniques to observe the radiative emissions from irradiated particles within the aerosol. The findings of this thesis contribute to a deeper understanding of laser-nanoparticle interactions and open new avenues for further research in this area.

Predicting ACL Injuries Using Machine Learning Models and Tibial Anatomical Predictors (University of Waterloo, 2025-12-18). Cheng-Hao, Kao.

The tibial slope and the tibial depth are well-established risk factors for Anterior Cruciate Ligament (ACL) injury. As machine learning (ML) continues to progress, it has become an increasingly reliable tool for clinical screening and risk factor analysis.
This thesis aims to develop and validate an explainable prognostic ML model to predict ACL injury outcomes from these Tibial Anatomical Features (TAFs) and to identify the most predictive features among these parameters. A dataset comprising Coronal Tibial Slope (CTS), Medial Tibial Slope (MTS), Lateral Tibial Slope (LTS), Medial Tibial Depth (MTD), and sex was constructed using MRI scans taken from 104 subjects (44 males: 22 injured, 22 uninjured; 60 females: 27 injured, 33 uninjured). Two distinct ML pipelines were developed: a self-developed pipeline (including K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Random Forest (RF), XGBoost, CatBoost, Multi-Layer Perceptron (MLP), and TabNet) and an advanced AutoGluon pipeline (including XGBoost, LightGBM, CatBoost, TabPFN, TabM, TabICL, MITRA, and their weighted ensembles). Both were designed as end-to-end pipelines to process the dataset and output predictions with integrated feature importance explanations. Empirically, the AutoGluon pipeline demonstrated superior performance and training-time efficiency. The recommended F2-tuned standard ensemble achieved an F2-score of 0.736 on the validation set. On the test set, it demonstrated a balanced accuracy of 0.955, F1-score of 0.952, F2-score of 0.980, ROC AUC of 1.000, precision of 0.909, and recall of 1.000. A full-dataset model, the F2-tuned full-dataset ensemble refitted on the entire dataset for clinical deployment, achieved a validation F2-score of 0.813. The global feature importance analyses, performed via SHapley Additive exPlanations (SHAP), established the descending order of influences as MTD, LTS, MTS, CTS, and sex. In summary, the study recommends two versions of the F2-tuned prognostic models, one being a standard ensemble model and the other a full-dataset ensemble. The former, which demonstrated moderately high predictive power, was designed for subsequent research comparison.
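The reported F1 of 0.952 and F2 of 0.980 follow directly from the stated test-set precision (0.909) and recall (1.000). As a standalone arithmetic check (not part of the thesis pipeline), the general F-beta formula, which with beta = 2 weights recall four times as heavily as precision and thus suits screening tasks where a missed injury is costlier than a false alarm:

```python
def f_beta(precision: float, recall: float, beta: float = 1.0) -> float:
    """F-beta = (1 + b^2) * P * R / (b^2 * P + R)."""
    b2 = beta ** 2
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

p, r = 0.909, 1.000                      # test-set precision and recall reported above
print(round(f_beta(p, r, beta=1), 3))    # F1, ~0.952
print(round(f_beta(p, r, beta=2), 3))    # F2, ~0.980
```

Both values round to the figures quoted in the abstract, which is why tuning for F2 rather than F1 favors the high-recall operating point described.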
The latter, without access to the original held-out test set, is constructed for maximum robustness and generalization in real-life clinical deployment. Global feature importance analyses of the standard ensemble identified decreased MTD, along with increased LTS and MTS, as the most contributive features for ACL injury. These models serve both as feature attribution tools and as clinical screening tools, and are intended to be integrated into clinical practice as explainable machines to assist clinicians in predicting the likelihood of ACL injury.

Tool Wear Modeling for Application to Gear Shaping (University of Waterloo, 2025-12-11) Kropp, Alexander

Tool wear has a major impact on the productivity, economics, and sustainability of metal machining operations. While the topic of tool wear has been studied extensively for conventional metal cutting processes, like milling, turning, and drilling, there is a scarcity of studies in the literature on modeling and predicting tool wear in gear machining operations. Gears, on the other hand, are essential components for a vast array of engineered systems, such as automotive, aerospace, and other transportation vehicles, robotics, automation, and general machinery. This thesis develops a framework for the study and prediction of tool wear in the gear shaping operation. Gear shaping is among the most versatile methods of cutting gears. In-house experiments were designed and performed to replicate the gear shaping process on a 5-axis milling machine. The kinematics were modeled, and custom NC code was generated and validated using polygon subtractions to ensure that a gear shaping cutter would accurately produce a gear workpiece.
The testing conditions were designed considering the physical capabilities of the machine tool and digital simulations of the gear cutting operation via ShapePro software (previously developed at the University of Waterloo), which predicts the kinematics, chip geometry, and cutting forces. Specimen gears were produced from AISI 1215 mild steel using HSS (high-speed steel) cutter material. The cutting edges and flanks of the tool were imaged throughout the tests to monitor the development of wear. In the envelope of cutting speeds and feed rates that could be tested, only minimal wear was observed, and this was concentrated at the cutting tooth tips and corners, consistent with the prediction from ShapePro that these regions are subject to the largest chip thickness and longest distance of workpiece material cut. Nevertheless, these tests have demonstrated the proof-of-concept for performing shaping tests and progressively imaging the tool edges for wear. Future tests should focus on performing the cuts at more aggressive speeds and feeds to induce discernible wear. To facilitate rapid characterization of tool wear for different workpiece and tool material pairs, an analogy testing method was also developed as an interrupted cutting operation on a lathe, designed to mimic cutting conditions similar to those in shaping. With this approach, a broader set of cutting speeds and feed rates could be explored in the machining of a similar kind of material (cold rolled 1020 steel). In this case, using HSS tooling brought practical limitations (e.g., built-up edge); thus, the tooling material was changed to carbide. Nevertheless, a reasonable variation of cutting conditions could be implemented, and tool wear progression data could be documented as a function of cutting distance (and time).
This data has informed the development of a progression-based tool wear model for predicting flank wear in interrupted cutting, which is presented as a new contribution in this thesis. Established tool life models, such as the Taylor and Colding models, can only predict the cutting time required to reach a single value of tool flank wear, whereas the proposed model can be used to predict when different values of tool wear would be reached without requiring re-calibration. More importantly, the proposed model can be integrated inside a time-domain simulation of an interrupted cutting operation, like gear shaping, to predict and update the wear state along each node on the tool edge, which classical tool life models cannot achieve, as they are ‘tuned’ to predict the cutting time until a preset wear is reached. The proposed model and the Taylor and Colding models were benchmarked against the experimental wear data collected across 12 different cutting conditions, and they all performed comparably in predicting tool life, with average prediction errors of roughly 21–24%, thus indicating some confidence in the proposed new model. Future research should focus on collecting further data, both in gear shaping and analogy orthogonal testing experiments, to further ensure repeatability of the data, and on validating that a tool wear model calibrated using the lower cost and faster analogy experiments can indeed predict the distribution of tool wear progression in the much more complex gear shaping operation.
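For context, the classical Taylor relation that the proposed model is benchmarked against has the form V * T^n = C, and its single-criterion nature is visible in how it is calibrated: two (speed, life) points fix n and C, after which it returns one life per speed. The numbers below are invented for illustration and are not data from the thesis experiments.

```python
import math

def fit_taylor(v1, t1, v2, t2):
    """Calibrate Taylor exponent n and constant C from two (speed, life) points."""
    n = math.log(v1 / v2) / math.log(t2 / t1)
    c = v1 * t1 ** n
    return n, c

def life_at_speed(v, n, c):
    """Tool life (min) until the single preset flank-wear criterion is reached."""
    return (c / v) ** (1.0 / n)

# Hypothetical calibration points: 40 min of life at 100 m/min, 10 min at 150 m/min.
n, c = fit_taylor(100.0, 40.0, 150.0, 10.0)
print(f"n = {n:.3f}, C = {c:.1f}")
print(f"predicted life at 120 m/min: {life_at_speed(120.0, n, c):.1f} min")
```

Because the calibration bakes a single wear criterion into C, predicting a different wear level requires re-calibration; the progression-based model above is proposed precisely to avoid that limitation.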
Furthermore, it is advised to expand the wear model to predict uncertainty bounds along with the tool wear values themselves, and to expand the study into the machining of different kinds of metals, with different tool substrates and coatings.

Evaluating Face Mask Efficiency on Children and Adults (University of Waterloo, 2025-12-05) El Khayri, Hicham

Small infectious aerosols have been a major vector for the spread of diseases such as COVID-19 and influenza. During the recent global pandemic, masking played a key role in reducing airborne transmission, although significant variability has been noted in the ability of different mask types to limit pathogen-laden aerosol dispersion and inhalation. Moreover, there remain significant gaps in the literature regarding mask performance for children. The aim of this thesis is to characterize mask inward protection efficiency for both children and adults, and source control efficiency for children in the 0.2 μm to 1 μm particle size range. An approach based on the conservation of mass guided the experimental methodology used to estimate mask filtration characteristics. For all tested masks, the material filtration efficiency was measured to be at or near 100%, whereas fitted filtration efficiencies for both source control and personal protection were significantly lower. This disparity underscores the highly degrading effects of leakage from gaps at the mask-face interface. Inward protection efficiencies of N95, KN95, and surgical masks, donned regularly and using the tie and tuck method, were estimated on a medium NIOSH adult head form. The tested N95 barrier provided the greatest protection, followed by the KN95 respirator, while the surgical masks offered the least protection. Use of the tie and tuck method for surgical masks yielded only a small, statistically insignificant improvement in inward protection compared to regularly worn variants.
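In its simplest steady-state form, the conservation-of-mass approach mentioned above reduces to comparing particle concentrations on the two sides of the barrier. The concentrations below are invented placeholders, not measurements from the thesis; they only illustrate why a near-100% material efficiency can coexist with a much lower fitted efficiency when face-seal leaks admit unfiltered air.

```python
def protection_efficiency(c_ambient: float, c_inside: float) -> float:
    """Inward protection efficiency in percent: 1 - C_inside / C_ambient."""
    return 100.0 * (1.0 - c_inside / c_ambient)

# Hypothetical particle concentrations (particles/cm^3), not thesis data:
print(protection_efficiency(1000.0, 5.0))    # tight seal  -> high efficiency
print(protection_efficiency(1000.0, 400.0))  # leaky fit   -> low efficiency
```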
Incorporating results from broader literature, mean inward protection efficiency ranges of [67.9%, 100%] and [12.5%, 79.6%] were determined for the N95 and regularly worn surgical masks, respectively. Both inward protection and apparent filtration efficiencies of adult, modified adult, and child variants of the KN95, CA-N95, and surgical mask, as well as the N95 respirator, were estimated on a child manikin. Results further underscore the critical importance of proper fit to mask performance. Child-sized respirators provided higher source control and personal protection compared to other barriers tested. In contrast, the adult-sized surgical mask, which exhibited a loose fit on the child manikin, demonstrated poor performance in both metrics due to the highly degrading effects of leakage. Overall, where both variants were available, adult-sized masks demonstrated markedly reduced fitted efficiencies on the child manikin relative to child-sized variants, attributed to larger gaps at the mask-face interface. Flow visualization of air exhaled through the tested barriers qualitatively corroborated these findings, revealing substantially reduced leakage for child-sized variants compared to adult-sized equivalents. Given the increased sensitivity of children to mask breathing resistance, pressure differentials measured across masks donned on the child head form provided relative indicators of breathability. Results demonstrated that masks with similar filtration efficiency can exhibit significant differences in breathability. For example, the child-sized CA-N95 achieved equal or greater fitted filtration efficiency while consistently maintaining lower pressure drops compared to the child KN95. In fact, KN95 respirators showed differential pressures greater than or equal to those of all other tested masks. For a given mask type, better fit was associated with higher differential pressures.
However, across different mask designs, higher filtration efficiency did not necessarily compromise breathability.

Performance Analysis and Optimization of Hybrid Edge-Cloud Architectures for Real-Time Robotics Applications (University of Waterloo, 2025-12-03) Salehpoor, Mahdis

This thesis explores the optimization of distributed robotic perception systems for real-time applications such as autonomous navigation, smart surveillance, and multi-agent coordination. These systems require fast processing of high-frequency sensor data under tight latency and reliability constraints. Edge computing offers low-latency inference and privacy, while cloud computing provides centralized scalability and resource efficiency, but suffers from transmission delays and network dependency. Critically, the bandwidth required to transmit raw video streams and LiDAR pointcloud messages to the cloud is often infeasible, necessitating edge-side preprocessing. To address these challenges, this work proposes and evaluates a hybrid edge–cloud architecture in which each sensor node that is equipped with its own LiDAR and camera performs background removal and motion detection locally at the edge, while the remaining perception tasks are offloaded to the cloud. This design reduces bandwidth usage and enables real-time responsiveness under constrained conditions. While edge modules cannot perform full object classification, they provide fast "reflex-like" responses that enable event filtering, alert triggering, and resource prioritization, reserving the more computationally intensive object recognition for cloud processing. LiDAR processing was parallelized using Intel TBB and spatial chunking, achieving up to 2× speedups across 1–32 cores. For camera-based perception, Frame Differencing, GMM, and Dense Optical Flow were tested.
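Of these, frame differencing is the simplest: only pixels whose intensity changes beyond a threshold between consecutive frames are flagged, so a static background never needs to be transmitted. A minimal NumPy sketch of the idea, with frame size and threshold chosen for illustration rather than taken from the thesis configuration:

```python
import numpy as np

def motion_mask(prev: np.ndarray, curr: np.ndarray, thresh: int = 25) -> np.ndarray:
    """Boolean mask of pixels that changed by more than `thresh` gray levels."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))  # avoid uint8 wraparound
    return diff > thresh

rng = np.random.default_rng(1)
prev = rng.integers(0, 50, size=(120, 160), dtype=np.uint8)  # static low-intensity scene
curr = prev.copy()
curr[40:80, 60:100] = 200                                    # a bright "moving object"

mask = motion_mask(prev, curr)
fraction = mask.mean()
print(f"pixels flagged as motion: {fraction:.1%}")
```

Only the flagged region (here a 40 × 40 pixel patch of a 120 × 160 frame) would be forwarded to the cloud, which is the mechanism behind the bandwidth savings reported next.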
Frame Differencing proved most effective for edge deployment, achieving 100% message reliability and 80.9% bandwidth savings with a 20.8 ms average processing time. Scalability tests showed that a 64-core system can support up to 75 nodes at 10 Hz or 50 nodes at 20 Hz with no message loss while meeting real-time constraints. Communication protocol testing revealed latencies ranging from 0.24 ms (ROS) to 2,589 ms (ZeroMQ over WiFi), setting architectural limits for cloud offloading. Finally, cost analysis showed that a 64-core cloud instance could replace 50 edge devices ($44,950 upfront), offering cost-effective scalability in suitable deployments. This work delivers: (1) empirical benchmarks that reveal how LiDAR and camera perception scale under parallel processing, (2) motion-based filtering techniques that significantly reduce bandwidth without sacrificing accuracy, (3) real-world measurements of communication protocol latency under 5G and Wi-Fi, and (4) practical deployment guidelines for hybrid edge–cloud robotic systems.

Development of High Strength Aluminum Alloys for Directed Energy Deposition Additive Manufacturing (University of Waterloo, 2025-11-26) Waqar, Taha

Among additive manufacturing (AM) techniques, directed energy deposition (DED) is of particular interest for structural Al alloys, as it combines relatively fast cooling rates with the flexibility to repair or build large-scale geometries. The localized thermal cycling inherent to the DED process influences solidification behavior, grain refinement, and precipitate evolution for high-strength age-hardenable Al alloys such as Al 7075, which in turn governs the mechanical performance. These capabilities position DED as a promising pathway for expanding the use of high-strength heat-treatable aluminum alloys in aerospace and automotive applications where a good strength-to-weight ratio is crucial.
However, Al 7075 tends to crack during solidification and possesses a limited service temperature range. This research explores the tailoring of an existing Al 7075 composition and the development of novel Al alloy compositions for DED-AM processes. In the initial phase of the research, laser-directed energy deposition of Al 7075 wire feedstock enhanced with TiC nanoparticles to promote grain refinement was investigated. It was found that the combination of high laser power (3400 W), low travel speed (400 mm/min), and low wire feed speed (400 mm/min) reduced lack-of-fusion defects and cracking within the multilayer prints. However, substantial evaporation during printing led to a reduced amount of Mg- and Zn-bearing phases in the as-printed samples. It was shown that the direct-aged sample heated for 5 hours was of comparable hardness to the T6 (solution heat treated and then artificially aged) sample (115 HV0.5), which highlights the presence of solute trapping in the as-printed material. To compare the behavior of the same Al 7075 + TiC wire feedstock under arc-based solidification conditions, the research continued to investigate the microstructural evolution and mechanical response of Al 7075 reinforced with TiC nanoparticles processed via arc-based DED, with a particular focus on aging behavior. Grain refinement was primarily attributed to heterogeneous nucleation and grain boundary pinning by TiC clusters. Moreover, TiC inoculants influenced solute redistribution, driving segregation of Mg and Cr, which in turn altered the precipitation behavior during aging. Heat-treated samples revealed the co-formation of MgZn₂ strengthening precipitates and the E-phase (Al₁₈Mg₃Cr₂), with the latter contributing to the heterogeneous distribution of precipitates.
These findings highlight both the benefits and challenges of TiC inoculation in tailoring microstructure and age-hardening response in arc-DED processed Al 7075 alloys. The second phase of the research presents the design and evaluation of a novel Al-Ce-Mg alloy tailored for wire arc-DED. The objective was to overcome the limitations of conventional high-strength aluminum alloys, which suffer from solidification cracking, volatile element loss, and poor thermal stability at elevated temperatures. Alloy selection was guided by CALPHAD simulations, leading to the identification of a near-eutectic Al-10Ce-9Mg composition. Thin-wall structures were fabricated, and porosity was quantified using micro-computed tomography, supported by high-speed imaging that revealed oxide-film entrapment as the dominant cause of porosity. The solidified microstructure consisted of α-Al, eutectic, and primary Al₁₁Ce₃ phases, as well as the β-AlMg phase, which contributed to both strength and thermal stability. Compression testing demonstrated high room-temperature strength but brittle failure. At elevated temperatures, however, the alloy retained superior strength compared to the conventional precipitation-strengthened Al 7075 alloy, even after extended thermal exposure. This observation was attributed to the stability of Al-Ce intermetallics. Incorporation of Sc into Al-Ce-Mg alloys can provide a dual strengthening and thermal stabilizing effect. Therefore, in the final phase of the research, an Al-8Ce-8Mg-0.2Sc alloy was developed. Laser surface remelting was employed to replicate AM-like conditions, producing a refined bimodal grain structure and fragmenting coarse Al₁₁Ce₃ networks into discontinuous, blocky morphologies. Compared to the as-cast state, the remelted alloy exhibited increased hardness (from 114.5 HV1 as-cast to 133 HV1), aided by refined grains and secondary phases such as Al₁₁Ce₃ and Mg₂Si.
Direct aging produced an irregular hardening response, with peak hardness achieved at 375 °C for 1 h due to the precipitation of coherent Al₃Sc nanoprecipitates. Long-term thermal exposure at 200 °C for up to 1000 hours showed negligible hardness loss and minimal coarsening of Ce-bearing intermetallics. Strengthening contributions were dominated by Al₃Sc precipitation, supported by solid-solution strengthening, grain refinement, dislocation hardening, and stable Al₁₁Ce₃ dispersoids.

OPTIMIZATION OF BATTERY-FREE WATER LEAK DETECTORS (University of Waterloo, 2025-11-18) OGINNI, ADETOUN

Battery-free water leak sensors offer a sustainable solution for real-time leak detection by harvesting energy from water-triggered reactions to power communication modules such as BLE (Bluetooth Low Energy). A key challenge for their practical use is ensuring reliable and rapid activation of the BLE electronics under varying conditions. This work investigates how material loading and sensor design parameters, such as water inlet size and elevation, influence activation time, current output, and structural stability. In this study, the influence of powder mass loading, water inlet size, and sensor elevation on activation time and electrical output was systematically investigated. Among the different mass loadings tested, the 400 mg configuration consistently demonstrated superior performance, achieving both shorter activation times and higher current output compared to other loadings. This optimal behaviour is attributed to a favourable balance in packing density, which improves conductivity and current generation without impeding water penetration. Design parameters such as inlet size and sensor elevation were also found to significantly affect wetting dynamics and activation timing. Further validation in natural water conditions confirmed the robustness of the 400 mg configuration, showing consistent BLE activation across 25 test samples.
Mechanical drop tests revealed that lower mass loadings (e.g., 300 mg) resulted in pellet instability and performance degradation, while 400 mg maintained structural integrity. Overall, the results highlight 400 mg mass loading in combination with optimized structural design as the most effective configuration for reliable BLE activation. These findings provide critical insights for advancing battery-free water leak sensors toward real-world applications in leak monitoring and water damage prevention.

Influence of the Coating on the Radiative and Conductive Heat Transfer of 22MnB5 Steel in Hot Stamping (University of Waterloo, 2025-11-03) Bhattacharya, Ardhendu

In hot stamping of Al-Si coated 22MnB5 steel, the heat transfer coefficient (HTC) during quenching is critical for determining the microstructure and mechanical properties of the formed part. Additionally, the radiative properties elucidate how the surface transforms as the steel is heated before quenching. Knowledge of the surface transformations is paramount for understanding the damage caused by the molten Al-Si coating to ceramic rollers in a production environment. This work investigates the effect of the coating on the HTC during quenching and explores the link between radiative properties and surface state changes, including the melting of the Al-Si coating and oxide layer growth. Experiments were performed using a hydraulic press fitted with cooled dies to study the impact of interfacial pressure, coating weight, and dwell time on the HTC. The HTC increased with interfacial pressure, before saturating between 6 and 10 MPa. Specimens with higher coating weights had lower HTCs, which was corroborated by a higher arithmetic roughness for specimens with higher coating weights. Furnace dwell time did not significantly affect the HTC or the roughness of the specimen.
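For readers unfamiliar with HTC estimation, the quantity can be illustrated with a deliberately simplified lumped-capacitance model (valid only when the Biot number is small), in which the blank's temperature excess over the die decays exponentially at a rate set by the HTC. All numbers below (blank mass, area, heat capacity, die temperature) are invented illustrative values, and the thesis itself relies on press-quench experiments rather than this toy model.

```python
import numpy as np

m, c, A = 0.5, 650.0, 0.04  # blank mass (kg), specific heat (J/(kg K)), contact area (m^2): assumed
T_die = 80.0                # die surface temperature, degC (assumed)
h_true = 3500.0             # HTC, W/(m^2 K): assumed order of magnitude for die quenching

t = np.linspace(0.0, 10.0, 100)                                  # time, s
T = T_die + (850.0 - T_die) * np.exp(-h_true * A * t / (m * c))  # synthetic cooling curve

# Recover h from the log-linear decay of the temperature excess:
slope = np.polyfit(t, np.log(T - T_die), 1)[0]
h_est = -slope * m * c / A
print(f"estimated HTC: {h_est:.0f} W/(m^2 K)")
```

The same log-linear idea underlies why cooling curves measured at different interfacial pressures map directly to the pressure-dependent HTC trends described above.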
Ex situ reflectance measurements of hot stamped specimens revealed minima and maxima between 200 and 1000 nm, due to thin film interference. Wave optics analysis on the reflectance spectra suggested that the oxide layer grew with dwell time. This was confirmed using high-resolution scanning electron microscopy, wherein the measured oxide layer thicknesses were within 50 nm of the estimated oxide layer thicknesses. Additional samples were heated in a muffle furnace for between three and sixty minutes. Wave optics analysis on the reflectance spectra suggested that the oxide layer grew parabolically, as per Wagner’s law. Microscopy measurements revealed that the interdiffusion layer grew linearly simultaneously with the oxide layer. In situ specular reflectance measurements of specimens during heating were performed using a laser-driven light source. The specular reflectance peaked twice; the first peak was attributed to initial coating liquefaction, and the second peak was attributed to subsequent intermetallic reactions. In situ measurements performed on specimens coated with Thermoboost® and iron nitrate revealed a significantly lower specular reflectance peak and higher heating rates.

Remote Object Pose Estimation for Agile Grasping: Leveraging Cloud Computing through Wireless Communication (University of Waterloo, 2025-09-30) Zamozhskyi, Oleksii

As industry transitions toward Industry 4.0, the demand for agile robotic systems capable of vision-guided manipulation is rapidly increasing. However, the computational limitations of onboard hardware make it challenging to support advanced perception pipelines, particularly those based on deep learning. Offloading perception to the cloud presents a promising alternative but introduces latency and reliability challenges that can compromise the real-time performance required for closed-loop robotic control.
This thesis presents a robotic grasping system capable of agile, 6D pose-aware manipulation of moving objects by offloading perception to a remote inference server. RGB-D data is continuously streamed over a wireless link to the server, where a deep learning model estimates the object's 6D pose. The estimated pose is then sent back to the robot, which uses it to generate a trajectory for executing the grasp. The system was evaluated on a conveyor-based pick-and-place task under four different wireless network types: Wi-Fi at 60 GHz, Wi-Fi 5 at 5 GHz, 5G NSA at 24 GHz, and 5G NSA at 3.5 GHz. A total of 392 trials were conducted to analyze grasping success rates and the impact of network latency and reliability on performance. The results demonstrate the feasibility of performing agile, closed-loop robotic grasping with cloud-offloaded 6D pose estimation over wireless networks. They also reveal limitations of current wireless infrastructure and deep learning models. The findings suggest that lower-latency, more reliable communication, along with more intelligent local control strategies and faster, generalizing models, are required for production deployment.

Optimal Sensor Placement and Movement in Data Assimilation (University of Waterloo, 2025-09-23) Reshetar, Oleksandr

Designing optimal locations for stationary sensors or optimal trajectories for moving sensors within a constrained sensor budget is crucial for Data Assimilation (DA) to reconstruct dynamical systems, such as ocean models or weather forecasts. Commonly used Lagrangian sensors for collecting observational data from the dynamical system may fail to capture important information due to their passive trajectory following the pathlines. This could lead to sensor clustering and undersampling of information-rich regions.
The first goal of this thesis research is to study the performance of Lagrangian sensors, and potential improvements to it, under a typical DA method called the Azouani-Olson-Titi (AOT) nudging algorithm. Two dynamical systems were used as testbeds: the one-dimensional Kuramoto-Sivashinsky equation (1D KSE) and the two-dimensional turbulent Navier-Stokes equations (2D NSE). Computational experiments showed that, depending on the Stokes number of the sensors (e.g., from $St = 0$ for ideal Lagrangian sensors to $St = 1$ for realistic Lagrangian sensors with inertia), the clustering of the sensors degrades the performance of DA. However, introducing random perturbations to the ideal and realistic Lagrangian trajectories can achieve faster convergence for DA, and thus more effective reconstruction than their unperturbed counterparts. These observations suggest that ideal Lagrangian and inertia sensors may not be optimal, as even a simple random perturbation provides improvement. Therefore, it is reasonable to expect a better sensor movement strategy to achieve better convergence of DA. The second goal is to propose an optimal sensor movement strategy that directs sensors toward information-rich regions of the flow. This is achieved by maximizing the convergence rate of the AOT algorithm, potentially yielding fast and effective reconstruction of a dynamical system. The thesis demonstrates that directed sensors can outperform both Lagrangian and inertia sensors based on AOT reconstruction, particularly in a sparse sensor scenario.
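The nudging idea behind the AOT algorithm can be illustrated on the Lorenz-63 system, used here as a far simpler stand-in for the 1D KSE and 2D NSE testbeds: the assimilating copy is relaxed toward the reference solution only through an observed component, yet the full state converges. The gain mu, the time step, and the initial states below are arbitrary choices for this toy twin experiment, not parameters from the thesis.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 system."""
    x, y, z = s
    return np.array([sigma * (y - x),
                     x * (rho - z) - y,
                     x * y - beta * z])

dt, mu = 0.002, 50.0
truth = np.array([1.0, 1.0, 1.0])     # reference ("true") trajectory
assim = np.array([8.0, -5.0, 20.0])   # badly initialized assimilating copy

for _ in range(20000):                # 40 time units of forward Euler
    rhs = lorenz(assim)
    rhs[0] -= mu * (assim[0] - truth[0])   # nudge only the observed x-component
    truth = truth + dt * lorenz(truth)
    assim = assim + dt * rhs

err = float(np.linalg.norm(truth - assim))
print(f"state error after nudging: {err:.2e}")
```

The sensor-placement question studied in the thesis is, loosely, which components (or spatial locations) to feed into such a nudging term so that this convergence is as fast as possible.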
These findings contribute to the DA community by (i) showing that ideal Lagrangian and inertia sensors are not necessarily good sensor movement strategies for reconstructing dynamical systems, and (ii) suggesting a novel strategy for planning optimal trajectories for moving sensors that maximizes the convergence rate of the sequential DA.

A Novel Framework for Performance-Based Fire Safety in the National Building Code of Canada (University of Waterloo, 2025-09-22) Calder, Keith

The National Building Code of Canada (NBC) has traditionally applied a prescriptive-based framework to regulate fire and life safety in building design. While such a framework offers ease of enforcement, it limits innovative design. This framework transitioned in 2005 to be objective-based to provide greater design flexibility through clarity of intent, stopping short of a full shift to a performance-based framework due to concerns over challenges encountered by other jurisdictions during such transitions. Twenty years later, the anticipated benefits of the shift to an objective-based framework have not been fully realized due to the lack of quantitative performance criteria and the reliance on primarily prescriptive and legacy-based acceptable solutions to define the acceptable level of performance for alternative designs. Key factors in the development of the NBC that have limited design flexibility and ultimately its shift from a prescriptive to a performance-based framework are the “test of time” and “absence-based inference” methods traditionally used to confirm performance. These factors, together with generational amnesia, status quo bias, shifting baselines of performance, and the limited integration of fire science and fire engineering, have resulted in fire safety design being governed primarily by the application of regulations rather than by fire safety engineering principles.
As the NBC becomes further entrenched in tradition, the design flexibility required to address the evolving challenges of modern construction and emerging environmental risks becomes increasingly limited. To address these challenges, this thesis proposes a risk-based framework to facilitate the transition of the NBC from its current objective-based structure to a fully performance-based one. The proposed framework includes a historical analysis sub-framework to identify the rationale underlying the existing acceptable solution requirements, and a technical reconciliation sub-framework to align those requirements with current fire science and fire engineering principles, incorporating risk assessment methodologies to better quantify performance. It also includes a sub-framework, based on a modified IRCC hierarchy, that integrates the information identified and updated through the other sub-frameworks, establishing a clear and quantified link between the acceptable solutions and the societal objectives they are intended to achieve. By addressing factors in the development of the NBC that have limited its shift from prescriptive-based to performance-based, such as status quo bias, generational amnesia, shifting baselines of performance, and the limited integration of fire science and fire engineering, the proposed framework provides a structured approach that supports innovative design solutions while also achieving societal safety goals. The efficacy of this framework is demonstrated through detailed case studies that review acceptable solutions regulating office occupant load, exit width, building size and type of construction, and spatial separation. 
Historical analyses of these acceptable solutions identified the passive ventilation basis for the office occupant load factors, the influence of military experience on minimum exit widths, the limitation of building size to align with fire service capability, and the conservative nature of the spatial separation requirements due to limited test data, among other notable details. Given the outdated and nontechnical nature of these findings, recommendations were made for their modification through technical reconciliation with current test data, fire science, and fire engineering concepts. In particular, the finding that the flame front factor included in the spatial separation requirements may no longer be necessary is based on the results of a detailed risk-based analysis of incident radiant heat at a distance using an exterior venting flame model. The levels of the modified IRCC hierarchy were populated with information established from the historical analysis and updated through technical reconciliation for each case study, providing key basis information and quantified performance criteria within a risk-based structure that can be used to form a new and fully performance-based framework for the NBC. The results of this study are relevant to various stakeholders by supporting designers and building officials through clarification of intent and definition of measurable performance, and by providing building code development organizations with a tool for regulatory reform that enhances the clarity, consistency, and adaptability of the fire and life safety requirements in the NBC. The results also provide the technical basis information necessary to facilitate a transition in the practice of fire and life safety design from the strict application of prescriptive regulations toward an engineering-based approach. 
Additional research is recommended to evaluate the efficacy of the proposed framework through its application to other fire and life safety requirements in the NBC, to other aspects of the NBC, and to building codes in other jurisdictions. Further research should also focus on using the framework to quantify the technical risk basis necessary to support decision-making within a broader legal and regulatory context, thereby contributing to the continued advancement toward a performance-based NBC.

Item type: Item , Optimizing Weld Quality in High Stacking Ratio Automotive Joints: Integrated Experimental Design and Machine Learning Benchmarking with Limited Datasets(University of Waterloo, 2025-09-22) Habib, Hasan

Ensuring crashworthiness in automotive body-in-white (BIW) structures requires reliable resistance spot welds meeting AWS D8.1 guidelines, which mandate a minimum of 20% nugget penetration into thin sheets. However, this conventional criterion based solely on nugget penetration is inadequate for high stacking ratio (HSR) joints increasingly used with advanced high-strength steels (AHSS). This research quantifies the relationship between nugget penetration and mechanical strength in dissimilar multi-sheet AHSS joints with thickness ratios ≥5:1 and develops machine learning (ML) based parameter optimization models to predict the process parameters for optimal weld joints. Systematic experimentation investigated three-sheet lap joints with thicknesses ranging from 0.65 to 2.0 mm and tensile strengths varying from 280 to 2100 MPa. A comprehensive design of experiments approach combining Box-Behnken Design (BBD) and Latin Hypercube Sampling (LHS) was implemented to optimize six welding process parameters across 80 conditions.
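A Latin Hypercube design like the one described above can be generated with SciPy's quasi-Monte Carlo module: each of the six parameter axes is divided into n strata and each stratum is sampled exactly once. The parameter names, ranges, and sample count below are hypothetical placeholders, not the study's actual design:

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical bounds for six RSW process parameters; the actual
# parameter set and ranges in the study may differ.
names = ["current_kA", "weld_time_ms", "force_kN",
         "pulses", "cool_time_ms", "hold_time_ms"]
lower = [6.0, 100.0, 2.0, 1.0, 10.0, 50.0]
upper = [12.0, 400.0, 5.0, 3.0, 60.0, 200.0]

sampler = qmc.LatinHypercube(d=6, seed=42)
unit = sampler.random(n=40)             # 40 conditions in the unit hypercube
design = qmc.scale(unit, lower, upper)  # map to physical units
```

Because every column is stratified, even a small run count covers each parameter's full range, which is why LHS is often paired with a structured design such as BBD when experiments are expensive.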
Mechanical testing, including tensile shear strength (TSS) and cross tension strength (CTS), alongside microstructural characterization, revealed that joints without visible nugget penetration into the thin top sheet could achieve mechanical strengths comparable to those of fully penetrated joints. Interrupted welding experiments confirmed that the bonding observed in high-strength joints without nugget penetration was due to either diffusion bonding or localized brazing. SEM and EDS analysis distinguished two distinct fusion interfaces: complete fusion zones with full nugget penetration and brazed interfaces, each exhibiting unique diffusion mechanisms. To extend these experimental insights, six supervised machine learning algorithms were developed and trained to predict nugget dimensions using process parameters and engineered features based on physical process relationships. Gradient boosting provided the highest predictive accuracy with R² values of 0.948 for maximum nugget width and 0.903 for nugget penetration, reducing prediction errors to 13% compared to 30% from the Minitab statistical tool. Shapley additive explanation (SHAP) analysis identified welding current as the dominant process parameter, while interactions among current, weld time per pulse, and electrode force proved critical for joint formation. Model-guided inverse prediction enabled dual-objective parameter optimization, with experimental validation confirming predicted outcomes within target tolerances. The findings demonstrated that conventional acceptance criteria based solely on nugget penetration were inadequate for evaluating joint quality in complex dissimilar multi-sheet RSW assemblies and highlighted the need to quantitatively assess interfacial bonding mechanisms.
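The engineered-feature modeling approach described above can be sketched with scikit-learn's gradient boosting regressor. Everything here is illustrative, not the study's dataset or model: the parameter ranges, the I²·t resistive heat-input feature, and the toy nugget-width response are assumptions standing in for the experimental data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 200
current = rng.uniform(6, 12, n)     # welding current, kA (assumed range)
time_ms = rng.uniform(100, 400, n)  # weld time per pulse, ms
force = rng.uniform(2, 5, n)        # electrode force, kN

# Physics-inspired engineered feature: resistive (Joule) heat input
# scales with I^2 * t.
heat_input = current**2 * time_ms

# Synthetic "nugget width" target for illustration only.
width = 0.08 * np.sqrt(heat_input) - 0.3 * force + rng.normal(0, 0.2, n)

X = np.column_stack([current, time_ms, force, heat_input])
X_tr, X_te, y_tr, y_te = train_test_split(X, width, random_state=0)

model = GradientBoostingRegressor(n_estimators=300, max_depth=3,
                                  random_state=0)
model.fit(X_tr, y_tr)
r2 = r2_score(y_te, model.predict(X_te))
```

On a real dataset, `model.feature_importances_` (or a SHAP analysis on the fitted model) would indicate whether the engineered heat-input term dominates the raw parameters, mirroring the SHAP findings reported above.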
The validated machine learning framework provided accurate, interpretable parameter optimization and offered a scalable pathway for broader industrial applications.

Item type: Item , Functionally Graded Additive Manufacturing of Inconel 625 and CuCrZr Alloys(University of Waterloo, 2025-09-18) Zardoshtian, Ali

Significant advancements over the past decade have transformed metal additive manufacturing from a prototyping tool into a full-fledged production process. These developments have enabled the use of lighter, stronger, and more cost-effective additively manufactured components in the aerospace, automotive, and energy industries. As qualification efforts progress, research is increasingly focused on advanced capabilities such as combining multiple alloys within a single build to create functionally graded structures, eliminating the need for additional joints. In that regard, Functionally Graded Additive Manufacturing (FGAM) is a layer-by-layer process that varies composition and/or microstructure within a component to achieve locally tailored properties. A new class of FGAM structures combining the highly heat-conductive CuCrZr alloy with the Inconel 625 superalloy has gained considerable attention for aerospace applications, leveraging the former's high heat dissipation and the latter's excellent mechanical properties. This can be done through the Laser Directed Energy Deposition (L-DED) technique; however, the implementation remains a material-processing challenge due to the pronounced thermophysical mismatch between the two alloys. This dissertation provides a comprehensive investigation into the FGAM of IN625-CuCrZr alloys, encompassing process parameter optimization, gradient path development, and microstructural and defect formation analysis through advanced characterization, CALPHAD-based thermodynamic simulations, and finite element modeling.
In that regard, process parameters have been optimized from the single-track to the multilayer scale, and their effect on the microstructure has been studied, with particular focus on the CuCrZr alloy, where a significant gap existed in the literature. Further, FGAM of IN625-CuCrZr has been demonstrated for two geometries, a thin wall and a cuboid, incorporating both sharp and gradual compositional transitions. Sharp transitions led to delamination at the interface, while gradual transitions resulted in structurally sound builds. In the gradual transition zone, the presence of a metastable miscibility gap between the liquids of the two alloys led to the formation of distinct Cu-lean and Cu-rich phases in the microstructure, a phenomenon predicted through CALPHAD-based thermodynamic simulations. The formation of solidification cracking in the gradient region of the cuboid geometry was further investigated using Kou's cracking susceptibility criterion. In support of these findings, a multi-step numerical investigation of heat transfer in both thin-wall and cuboid geometries was conducted using finite element analysis. First, a hybrid statistical–numerical thermal model was developed and implemented at the single-track scale through user-defined subroutines (DFLUX, USDFLD, and FILM) in Abaqus. This model enabled high-fidelity prediction of melt pool geometry and thermal history and was validated against experimental melt pool dimensions and in-situ thermocouple measurements. Subsequently, the validated heat source model was used to simulate the thermal behavior during FGAM processing of both geometries. The thermal simulations highlighted the critical role of geometry in cooling rates and temperature distributions, providing deeper insight into cracking behavior and into how geometry-dependent thermal histories influence microstructure and defect formation during FGAM of IN625-CuCrZr alloys.
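Kou's criterion ranks solidification-cracking susceptibility by the maximum steepness |dT/d√fs| of the temperature vs. square-root-of-solid-fraction curve near the end of solidification: a steeper final drop means a longer vulnerable mushy zone. A minimal numerical sketch on a toy solidification curve follows; the partition coefficient, temperatures, and evaluation window are assumed for illustration, not the IN625-CuCrZr data:

```python
import numpy as np

def kou_index(T, fs, fs_window=(0.87, 0.94)):
    """Kou's solidification-cracking susceptibility index:
    max |dT/d(sqrt(fs))| evaluated near the end of solidification
    (the exact fs window varies in the literature)."""
    root_fs = np.sqrt(fs)
    dT = np.gradient(T, root_fs)        # derivative w.r.t. sqrt(fs)
    mask = (fs >= fs_window[0]) & (fs <= fs_window[1])
    return np.max(np.abs(dT[mask]))

# Illustrative (not measured) solidification path: temperature drops
# steeply as the last liquid solidifies, which is exactly what drives
# a high Kou index.
fs = np.linspace(0.01, 0.99, 200)       # solid fraction
k = 0.3                                 # assumed partition coefficient
T = 1350 - 100 * (1 - (1 - fs)**k)      # toy T(fs) curve, deg C
index = kou_index(T, fs)
```

In practice, T(fs) would come from a Scheil simulation of each gradient composition (e.g., the CALPHAD calculations mentioned above), and compositions with the highest index would be flagged as crack-prone transition steps.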
Overall, this work establishes a robust experimental–computational framework for FGAM of dissimilar alloys using the L-DED process. It introduces a scalable strategy for depositing functionally graded IN625–CuCrZr structures with controlled transitions and minimized defects. The modeling and characterization approaches developed here can be extended to other material systems, while the insights into the miscibility gap, solidification behavior, and cracking mechanisms lay the groundwork for future microstructure design and process control in metal additive manufacturing.