Civil and Environmental Engineering

Permanent URI for this collection: https://uwspace.uwaterloo.ca/handle/10012/9906

This is the collection for the University of Waterloo's Department of Civil and Environmental Engineering.

Research outputs are organized by type (e.g., Master Thesis, Article, Conference Paper).

Waterloo faculty, students, and staff can contact us or visit the UWSpace guide to learn more about depositing their research.

Recent Submissions

  • Item
    Removal of microplastics during drinking water treatment: Linking theory to practice to advance risk management
    (University of Waterloo, 2024-10-22) Gomes, Alice
Microplastics (MPs) have emerged in the past decade as widespread contaminants that are harmful to human and ecosystem health. While their removal from water may be similar to that of other particulate contaminants, characterizing that removal is complicated because MPs can undergo weathering, photolysis, and microbial degradation in the natural environment, resulting in the presence of functional groups (e.g., carbonyl, hydroxyl) on their surfaces, which may affect their removal during drinking water treatment. Given that studies using seeded polystyrene (PS) microspheres/MPs as surrogates for oocysts have shown good (but sometimes variable) removals through conventional drinking water treatment composed of coagulation, flocculation, and sedimentation (CFS) followed by filtration, MPs are likely to be well removed in optimized conventional drinking water treatment plants. While many studies have focused on the removal of larger (i.e., >50 µm) microplastics, investigations of the removal of smaller (<10 μm) microplastics by drinking water treatment processes have been limited largely to case studies in which the foundational mechanisms necessary for maximizing treatment performance have only been superficially investigated, if at all. To address this gap, this study focused on whether MPs removal by conventional chemical pretreatment (i.e., coagulation, flocculation, and sedimentation) with alum aligns with the removal of other particles, including Cryptosporidium oocysts, for which particle destabilization is essential for removal. The study aimed to advance knowledge through three main objectives: (1) characterize MPs removal by CFS with different particle destabilization mechanisms and compare it to that of other important particulate contaminants (i.e., Cryptosporidium spp. oocysts), (2) evaluate the effect of particle size on MPs removal by CFS, and (3) assess the influence of weathering on MPs removal by CFS. To evaluate MPs removal by chemical pretreatment reliant on (1) adsorption and charge neutralization and (2) enmeshment in precipitate (i.e., sweep flocculation) as particle destabilization mechanisms, bench-scale investigations of alum-based CFS (i.e., jar tests) were conducted with synthetic water using pristine and weathered PS microplastics of 1, 5, and 10 μm diameter. Several synthetic raw water matrices were explored to identify scenarios in which both particle destabilization mechanisms were clearly discerned. The final synthetic raw water was composed of deionized water spiked with sodium carbonate and kaolin (70 NTU) at pH 7.0. To demonstrate that MPs removal by CFS aligns with coagulation theory, sixteen alum doses between 0 and 38.8 mg/L were used to evaluate MPs removal by CFS. Turbidity reduction was also evaluated, and zeta potential was analyzed to identify maximal particle destabilization. MPs removal increased with particle size, aligning with gravitational settling theory. MPs removal during CFS with optimized particle destabilization was generally consistent with reported removals of other particles, including Cryptosporidium spp. oocysts, during optimized chemical pretreatment, thereby suggesting that similar risk management approaches may be relevant to MPs. Notably, differences in the removal of pristine and weathered MPs by CFS were not significant under the conditions investigated, suggesting that weathering does not affect MPs removal when particle destabilization by coagulant addition is optimized.
This study bridges the gap between the theories of conventional drinking water treatment and concerns regarding the potential passage of MPs through drinking water treatment plants, demonstrating that MPs can be removed in the same manner as other colloidal particles using conventional chemical pretreatment and—by well-recognized theory-based extension—physico-chemical filtration.
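The size dependence reported above follows from gravitational settling theory. As a rough illustration (a minimal sketch with typical literature values, not figures from the thesis), Stokes' law can be used to compare terminal settling velocities for the three PS particle sizes studied:

```python
# Hypothetical illustration (values assumed, not taken from the thesis):
# Stokes' law settling velocity v = g * (rho_p - rho_w) * d^2 / (18 * mu)
# for polystyrene spheres of the three sizes studied.

G = 9.81        # gravitational acceleration, m/s^2
RHO_P = 1050.0  # density of polystyrene, kg/m^3 (typical literature value)
RHO_W = 998.0   # density of water at ~20 degC, kg/m^3
MU = 1.0e-3     # dynamic viscosity of water, Pa*s

def stokes_velocity(d_m: float) -> float:
    """Terminal settling velocity (m/s) of a sphere of diameter d_m (m)."""
    return G * (RHO_P - RHO_W) * d_m ** 2 / (18 * MU)

for d_um in (1, 5, 10):
    print(f"{d_um:>2} um sphere: {stokes_velocity(d_um * 1e-6):.2e} m/s")
```

The hundred-fold difference in settling velocity between 1 and 10 μm spheres is consistent with the observed increase in removal with particle size.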
  • Item
    Phosphorus Legacies and Water Quality Trajectories Across Canada
    (University of Waterloo, 2024-10-15) Malik, Lamisa
Phosphorus (P) pollution in freshwater is a critical environmental issue, primarily driven by agricultural runoff, wastewater discharge, and industrial effluents. Across Canada, lakes such as Lake Erie and Lake Winnipeg experience severe and persistent algal blooms driven mainly by excess phosphorus loading. Excessive phosphorus loading leads to eutrophication, which causes harmful algal blooms and hypoxia that disrupt aquatic life, reduce biodiversity, and impair water quality, making human consumption and recreational activities unsafe. Despite policies aimed at reducing phosphorus loading, such as improved farming practices and wastewater treatment upgrades, we have not seen a marked decrease in riverine loads. Phosphorus management goals often fall short due to the persistence of legacies – phosphorus that has accumulated in soils and sediments over decades of agricultural applications – which continues to be released into water bodies for decades after initial application. Despite recognition of the existence and significant regional and global impact of legacy P on water quality and aquatic ecosystems, our understanding of the magnitude and spatial distribution of these P stores remains limited. Understanding legacy P stores and their contributors is crucial for efficiently managing water quality, highlighting the importance of studying these factors to develop more effective and sustainable management strategies. The central theme of this thesis is the exploration of the phosphorus legacy across various landscapes. My work has three objectives. First, I explore phosphorus legacies and water quality trajectories across the Lake Erie basin. Second, I quantify various legacy P stores and evaluate their current and future impacts on water quality. Third, I quantify phosphorus accumulation across Canada. In the first objective, I develop a comprehensive phosphorus budget for the Lake Erie Basin, a 60,000 km² transboundary region between the U.S. and Canada, by collecting, harmonizing, and synthesizing agricultural, climate, and population data. The phosphorus inputs included fertilizer, livestock manure, human waste, detergents, and atmospheric deposition, while outputs focused on crop and pasture uptake, covering a historical period from 1930 to 2016. The budget allowed us to calculate excess phosphorus as the phosphorus surplus, defined as the difference between P inputs and non-hydrological exports. A random forest model was then employed to describe in-stream phosphorus export as a function of cumulative P surplus and streamflow. The results indicated a significant accumulation of legacy P in the watersheds of the Lake Erie Basin. Notably, higher legacy P accumulation corresponded strongly with greater manure inputs (R²=0.46, p < 0.05), whereas fertilizer inputs showed a weaker relationship. For the second objective, I model the long-term nutrient dynamics of phosphorus across 45 watersheds in the Lake Erie basin using the ELEMeNT-P model. This aimed to quantify legacy phosphorus accumulation and depletion across different landscape compartments, including soils, landfills, reservoirs, and riparian zones, and to assess the potential for phosphorus load reductions under future management scenarios. The model sought to identify key legacy phosphorus pools and explore the feasibility of achieving significant reductions in phosphorus loading, with results indicating that 40% reductions are attainable only through aggressive management efforts.
For the last objective, I develop a high-resolution phosphorus budget dataset for Canada, spanning the years 1961 to 2021, at both county and 250-meter spatial scales. This dataset aimed to capture phosphorus inputs from fertilizers, manure, and domestic waste, along with phosphorus uptake by crops and pastureland, across all ten provinces. With this dataset, I aim to better understand the state and progress of phosphorus management across space and time. The results reveal significant variation in P surplus attributable to differences in land use and management practices. The highest surpluses were observed in southern Ontario and Quebec, with approximately 50 kilotons in 2021, contributing to an accumulation of over 2 teratons of phosphorus over the past 60 years.
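As a hypothetical sketch of the modelling step described above (a random forest relating in-stream P export to cumulative P surplus and streamflow), the following uses scikit-learn with synthetic placeholder data; the column names and data-generating process are illustrative assumptions only:

```python
# Minimal, hypothetical sketch: random forest relating in-stream P export
# to cumulative P surplus and streamflow. All data are synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "cumulative_p_surplus": rng.uniform(0, 5000, n),  # kg P/ha (synthetic)
    "streamflow": rng.lognormal(2.0, 0.5, n),         # mm/yr (synthetic)
})
# Synthetic target just to make the example runnable.
df["p_export"] = (0.002 * df["cumulative_p_surplus"]
                  * np.log1p(df["streamflow"])
                  + rng.normal(0, 1, n))

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(df[["cumulative_p_surplus", "streamflow"]], df["p_export"])
print(dict(zip(model.feature_names_in_, model.feature_importances_)))
```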
  • Item
    Non-Stationary Stochastic Modelling of Climate Hazards for Risk and Reliability Assessment
    (University of Waterloo, 2024-10-01) Bhadra, Rituraj
This thesis presents methodologies for studying the effects of climate change on natural hazards. The thesis is structured around three key aspects: first, the stochastic modelling of non-stationary hazards; second, the modelling of concrete degradation in a changing climate; and third, the economic risk evaluation associated with these non-stationary hazards. The initial focus of this thesis is on applying a non-stationary stochastic process to model the increasing frequency and intensity of climate-driven hazards in Canada. The early chapters provide an overview of the effects of climate change in Canada. To understand the trends and projections of various climatic variables, such as temperature, precipitation, and wind speed, recent studies and reports from Environment and Climate Change Canada, along with other relevant literature, are examined, and analyses are performed on model outputs from the Coupled Model Intercomparison Project Phase 6 (CMIP6). The overview highlights the growing occurrence and severity of climate hazards, including hurricanes, droughts, wildfires, and heatwaves, as supported by other independent studies. In light of such analyses, this study demonstrates the inadequacy of traditional stationary models for future predictions and risk assessments, thereby advocating for a shift to non-stationary frameworks. The thesis provides a robust theoretical foundation for non-stationary hazard modelling using stochastic process models. Traditional extreme value analysis (EVA) typically assumes stationarity. However, this assumption is invalidated by gradual changes in the frequency and intensity of climate-driven hazards. This research proposes methodologies to model climatic hazards using a non-stationary stochastic shock process, specifically the non-homogeneous Poisson process (NHPP), to derive the maximum value distributions over any finite period, rather than being restricted to annual maxima. These models account for changes in the underlying processes over time, providing a more accurate representation of climate-driven hazards by incorporating time-varying parameters that reflect the dynamic nature of climatic extremes. By integrating stochasticity and temporal variability, these stochastic process models offer a robust framework for predicting the future occurrence and intensity of climate-driven hazards. The proposed methods are demonstrated through the estimation of maximum value distributions for precipitation events using the CMIP6 multi-model ensemble data, with an analysis of inter-model variability. Furthermore, the thesis presents a case study on modelling heatwaves to illustrate the application of these models to climatic data, particularly for events where the asymptotic assumptions of extreme value theory do not hold. Climate change will not only influence the loads and hazards on infrastructure, but it will also exacerbate the degradation processes of structures due to harsher climatic conditions such as higher temperatures and increased humidity. To model these effects on the degradation of concrete bridges, simulations of physico-chemical concrete degradation processes were conducted. Based on the simulation results, non-stationary Markov transition probabilities were estimated for several key locations in Canada under various Shared Socioeconomic Pathway (SSP) scenarios. The final chapter of the thesis addresses the economic aspects of climate-driven hazards.
It includes derivations to estimate various statistics of damage costs, such as the mean, variance, moments, and distribution, resulting from a non-stationary hazard process. Analytical results were derived for several cases, such as loss magnitudes that are identically or non-identically distributed, and losses with or without discounting to address the effect of time in evaluating net present losses. This analysis offers valuable information for policymakers, engineers, and scientists involved in climate adaptation and mitigation efforts.
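A minimal sketch of the shock-process idea described above, assuming an illustrative linearly increasing event rate and a Gumbel magnitude distribution (neither taken from the thesis): for an NHPP with intensity λ(t) and i.i.d. event magnitudes with CDF F, the maximum over [0, T] has CDF P(M_T ≤ x) = exp(−Λ(T)(1 − F(x))), where Λ(T) = ∫₀ᵀ λ(t) dt.

```python
# Hedged sketch (assumed forms, not the thesis's fitted models) of the
# maximum-value distribution under a non-homogeneous Poisson shock process.
import numpy as np
from scipy import stats
from scipy.integrate import quad

def rate(t):
    """Illustrative non-stationary event rate (events/yr), rising with time."""
    return 2.0 + 0.05 * t

F = stats.gumbel_r(loc=50.0, scale=10.0).cdf  # assumed magnitude CDF (mm)

def max_cdf(x, T):
    """P(maximum event magnitude over [0, T] <= x)."""
    big_lambda, _ = quad(rate, 0.0, T)        # integrated rate Lambda(T)
    return np.exp(-big_lambda * (1.0 - F(x)))

for T in (10, 50):
    xs = np.linspace(50, 150, 2001)
    q99 = xs[np.searchsorted(max_cdf(xs, T), 0.99)]
    print(f"T={T:>2} yr: 99th-percentile maximum = {q99:.1f} mm")
```

Because Λ(T) grows faster than linearly when λ(t) increases in time, the quantiles of the maximum shift upward with the planning horizon, which is exactly the behaviour a stationary model misses.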
  • Item
    A Stochastic Framework for Urban Flood Hazard Assessment: Integrating SWMM and HEC-RAS Models to Address Watershed and Climate Uncertainties
    (University of Waterloo, 2024-09-25) Abedin, Sayed Joinal Hossain
Urbanization significantly alters natural hydrological processes, leading to increased flood risks in urban areas. The potential damages caused by flooding in urban areas are widely recognized, making it crucial for urban residents to be well-informed about flood risks to mitigate potential losses. Flood maps serve as essential tools in this regard, providing valuable information that aids in effective planning, risk assessment, and decision-making. Despite floods being the most common natural disaster in Canada, many Canadians still lack access to high-quality, up-to-date flood maps. The occurrence of recent major flood events across the country has sparked renewed interest among government officials and stakeholders in launching new flood mapping initiatives. These projects are critical for enhancing flood risk management across communities. Traditional flood hazard mapping methods, based on deterministic approaches, often fail to account for the complexities and uncertainties inherent in urban flood dynamics, especially under rapidly changing climate conditions. Uncertainty affects every stage of flood mapping, influencing accuracy and reliability. Recognizing this, recent studies advocate for stochastic approaches to explicitly incorporate these uncertainties. However, there is a lack of industry-standard tools that allow for a convenient and comprehensive analysis of uncertainty, making it challenging to routinely incorporate uncertainty into flood hazard assessments in practice. This underscores the need for a robust framework to model flood uncertainty. While various models have been proposed to address the uncertainty, many remain conceptual or lack the necessary automation. Although no model is "perfect", the Storm Water Management Model (SWMM) and the Hydrologic Engineering Center’s River Analysis System (HEC-RAS) are widely used for urban hydrology and channel hydraulics modeling, respectively, due to their robust physics-based approaches. Both SWMM and HEC-RAS models have been enhanced with commercial and open-source extensions, built on algorithms written in various programming languages, to improve their utility, particularly for automating workflows to handle complex urban flood scenarios. While SWMM has more robust extensions, most available HEC-RAS extensions are designed for one-dimensional (1D) steady-state models, which lack the complexity needed for accurate urban flood modeling. The release of HEC-RAS 6.0, which allows for two-dimensional (2D) unsteady flow modeling and incorporates structures like bridges and weirs, marks a significant advancement for urban flood modeling. The current research was motivated by the perceived benefits of designing such extensions for automating workflows in recent versions of SWMM and HEC-RAS, as well as automating the coupling of these two models in a stochastic framework to facilitate the integration of uncertainty into existing flood hazard mapping workflows. This thesis introduces the SWMM-RASpy framework, a novel automated stochastic tool built using the open-source Python programming language. SWMM-RASpy integrates SWMM's detailed urban hydrologic capabilities, such as dual-drainage modeling, with HEC-RAS's 2D unsteady hydraulic modeling, coupled with stochastic simulations through Latin Hypercube Sampling (LHS) to analyze the uncertainty in flood hazard mapping. The framework was demonstrated on the Cooksville Creek watershed, a highly urbanized area in Mississauga, Ontario, known for its susceptibility to flooding.
An entropy map was successfully produced for the case study site, which better reflects the uncertainty of flooding and could be used to develop tailored flood planning and preparedness strategies for different zones within the site. This thesis also presents a detailed application of the SWMM-RASpy framework to assess flood hazards, with a specific focus on topography-based hydraulic uncertainties in the watershed, particularly surface roughness variability, which affects pedestrian safety during flood events. The study highlights that traditional hazard models, which focus mainly on residential buildings, do not adequately account for the risks to pedestrians, who account for a significant share of fatalities in flood events, especially in densely populated urban areas with high mobility. Three flood hazard metrics were developed and used to evaluate the flood risks to pedestrians given the uncertainty surrounding surface roughness: FHM1, based on inundation depth; FHM2, combining depth and velocity; and FHM3, incorporating depth, velocity, and duration. Key findings from the assessment indicate that surface roughness significantly affects pedestrian hazard estimation across the floodplain, making it a critical factor in flood hazard management. The FHM2 metric, which incorporates depth and velocity, was found to be highly sensitive to roughness variation, potentially leading to the misclassification of hazardous zones as safe and vice versa. The inclusion of velocity in the hazard assessment, while improving accuracy, also increased variability, emphasizing the need for a balanced approach in flood risk evaluations. In contrast, the FHM3 metric, which includes flooding duration, showed minimal sensitivity to surface roughness uncertainty. The research also suggests that confidence maps, produced as part of the analysis and accounting for estimated uncertainties surrounding the hazard metrics propagated from surface roughness, can offer a more reliable alternative to traditional deterministic hazard maps. Lastly, this analysis emphasizes the importance of combining grid-level and zonal-level analyses for a more comprehensive understanding of flood hazards at different scales, thereby supporting more robust flood risk assessments. This thesis extends the application of the SWMM-RASpy framework to assess the impacts of climate change on flood hazards within the Cooksville Creek watershed. It examines how projected changes in rainfall intensity from Global Climate Models (GCMs) affect flood risks, particularly for urban buildings, as well as the importance of incorporating uncertainties from these projections into flood hazard assessments. The same hazard metrics used for pedestrian hazard assessment, FHM1, FHM2, and FHM3, were used to evaluate building hazards. The study predicts a significant increase in flood hazards within the watershed, with a substantial expansion of inundation areas affecting up to 40% more buildings when uncertainties are considered. The analysis shows that without considering uncertainties, FHM1 and FHM3 predict a higher number of damaged buildings than FHM2, with FHM1 predicting the highest number of affected buildings. This suggests that relying solely on FHM1 to estimate building hazards may be sufficient in similar climate change scenarios, although further investigations are needed.
However, when uncertainties are included, FHM2 shows a greater increase in the number of buildings at risk compared to FHM1 and FHM3, due to the larger uncertainty associated with velocity versus depth and duration. This underscores the need to incorporate uncertainty into flood hazard assessments to ensure a more comprehensive understanding of potential future damages. Overall, this study has made significant contributions to the field of urban flood hazard assessment by developing a robust method for incorporating and analyzing uncertainties, thereby supporting more effective flood management and resilience planning. Future research should apply the SWMM-RASpy framework to other watersheds and investigate additional hydrologic and hydraulic variables to further improve flood risk assessments.
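As an illustration of the LHS step at the core of the stochastic framework, the sketch below draws Manning's roughness samples with SciPy's qmc module; the bounds and run count are assumptions, and in the actual framework each sample would parameterize one coupled SWMM/HEC-RAS simulation:

```python
# Minimal sketch of Latin Hypercube Sampling for stochastic model runs,
# here sampling Manning's roughness. Bounds are illustrative assumptions.
from scipy.stats import qmc

N_RUNS = 100
sampler = qmc.LatinHypercube(d=1, seed=42)
unit_samples = sampler.random(n=N_RUNS)            # uniform on [0, 1)
roughness = qmc.scale(unit_samples, l_bounds=[0.02], u_bounds=[0.15])

for i, n_val in enumerate(roughness[:3, 0], start=1):
    # Each value would drive one coupled SWMM/HEC-RAS run in the framework;
    # here we only print the sampled inputs.
    print(f"run {i}: Manning's n = {n_val:.4f}")
```

Compared with plain Monte Carlo, LHS stratifies each parameter's range so that far fewer (expensive) hydraulic runs are needed to cover the input space.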
  • Item
    Atmospheric Emissions Associated with the Use of Biogas in Ontario
    (University of Waterloo, 2024-09-24) Bindas, Savannah
This study aims to quantify the atmospheric emissions associated with an energy-from-waste transition in Ontario. Specifically, it explores the emissions from using livestock and food waste to produce biogas as a source of renewable natural gas. Biogas is one potential “closed loop” solution to waste management; however, there is potential for additional emissions associated with transitioning to anaerobic digestion as a waste management strategy. Each step along the anaerobic digestion process, from the transportation of feedstock to the storage of post-processed digestate, can release both greenhouse gas (GHG) and air pollutant emissions. Here, we quantified the net effects on emissions of a biogas transition in Ontario. We evaluated scenarios using up to 100% of available waste feedstocks in the province and compared these emissions to conventional manure management and landfilling. We found that emissions from current manure management strategies dominated GHG emissions, and, as more manure was utilized in the biogas system, there was a drastic decrease in corresponding emissions of CH4, N2O, and CO2, all of which are potent GHGs. All scenarios showed emission reductions compared to the traditional practice. By the 75% biogas scenario, GHG emissions associated with the biogas process were balanced by the potential offsets from avoided synthetic fertilizer production, leading to negligible net GHG emissions from this system. In the 100% scenario, we observed that SOx, VOC, NH3, and PM2.5 emissions were increasingly offset by emission savings from avoided natural gas and synthetic fertilizer production. The important exceptions were the significant NH3 and PM2.5 emissions associated with conventional manure management. Due to this, we did not see net emission savings from the biogas scenarios until the 100% run. These results highlight the atmospheric impacts of conventional waste management and demonstrate the potential for anaerobic digestion to mitigate these emissions.
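As a hypothetical sketch of the net-emissions accounting this kind of study performs, the following tallies CO2-equivalents using IPCC AR5 100-year global warming potentials; all emission masses are placeholders, not results from this work:

```python
# Hypothetical net-GHG tally in CO2-equivalents for a waste-to-biogas
# scenario, using IPCC AR5 100-yr global warming potentials. All mass
# values are placeholders, not results from this study.
GWP = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}

def co2e(emissions_t):
    """Convert a {gas: tonnes} dict to tonnes CO2e."""
    return sum(GWP[gas] * t for gas, t in emissions_t.items())

biogas_process = {"CO2": 1200.0, "CH4": 30.0, "N2O": 1.0}   # placeholder
avoided_manure = {"CH4": 90.0, "N2O": 4.0}                  # placeholder
avoided_fertilizer = {"CO2": 800.0, "N2O": 2.0}             # placeholder

net = co2e(biogas_process) - co2e(avoided_manure) - co2e(avoided_fertilizer)
print(f"net emissions: {net:,.0f} t CO2e")
```

Because CH4 and N2O carry large GWP multipliers, even modest avoided manure-management emissions can offset substantial CO2 from the biogas process itself.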
  • Item
Desorption of Per- and Polyfluoroalkyl Substances from Powdered and Colloidal Activated Carbon
    (University of Waterloo, 2024-09-24) Uramowski, Andrew
Per- and polyfluoroalkyl substances (PFAS) are a group of synthetic chemicals with unique heat-resistant properties, leading to their usage in aqueous film-forming foams (AFFF) for fighting fuel-based fires at airports and military bases. The application of AFFF at these facilities has led to the contamination of groundwater with PFAS, which can threaten the safety of nearby drinking water, agricultural, and industrial supply wells, as well as downgradient surface water bodies. Drinking water supplies contaminated with PFAS can result in human exposure, which has been linked to developmental, immunological, endocrine, and cardiovascular disorders, and cancer. Since PFAS are resistant to biodegradation and traditional destruction technologies, current remediation efforts are focused on immobilizing PFAS using adsorptive processes that sequester PFAS from the aqueous phase, concentrating them on an adsorbent medium. The injection of activated carbon (AC) particulate amendments into the subsurface has been suggested as a promising technique for the in situ immobilization of plumes of PFAS and to protect downgradient receptors. To predict the long-term performance of these AC barriers, a thorough understanding of adsorption and desorption processes is required. The objective of this research was to investigate the desorption behaviour of three PFAS (perfluorooctane sulfonic acid (PFOS), perfluorooctanoic acid (PFOA), and perfluorobutane sulfonic acid (PFBS)) from a powdered AC (PAC) and a colloidal AC (CAC). The research focused specifically on assessing whether desorption of PFOS, PFOA, and PFBS on these two AC materials was hysteretic. PFAS adsorption and desorption kinetic experiments using PAC or CAC were completed to determine the contact time required to reach near-equilibrium conditions. Adsorption experiments with PAC utilized the bottle-point method, and desorption experiments used a sequential desorption methodology where the aqueous phase of desorption reactors was replaced with an adsorbate-free solution. Adsorption of PFAS by CAC was also investigated using the bottle-point method; however, desorption experiments were conducted using two different methods: (1) a sub-sampling methodology where aliquots of slurry from a well-mixed adsorption isotherm bottle were diluted to initiate desorption, and (2) a whole-bottle dilution method where the entire contents of adsorption reactors were diluted to larger volumes to initiate desorption. The results indicated that for experiments utilizing PAC, adsorption and desorption equilibrium was established for all compounds within 72 h. Desorption of PFOS, PFOA, and PFBS from PAC did not demonstrate hysteresis since all desorption data were contained within the 95% adsorption prediction band. For experiments using CAC, adsorption equilibrium was established by 120 h for all compounds, while desorption equilibrium was established by 120 h for PFOS, and 72 h for PFOA and PFBS. Desorption data using the sub-sampling method for PFOS, PFOA or PFBS and CAC were below and outside of the 95% adsorption prediction band. It was concluded that unrepresentative sub-sampling of CAC slurry occurred in the desorption step using this method. When the whole-bottle dilution method was adopted for PFOS, desorption data were within the 95% adsorption prediction band, indicating no evidence of hysteresis under the experimental conditions used.
Since mass removal at each desorption step was extremely small compared to the sorbed fraction, desorption data did not reach aqueous equilibrium concentrations near the method detection limit. The absence of hysteretic behaviour in this research demonstrates that sorption processes are reversible over the concentration ranges explored. This reversibility implies that PFAS sorbed within AC barriers can be released when the groundwater concentration is decreased, either due to temporal heterogeneity in concentration profiles or the depletion of the source zone.
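A minimal sketch of the isotherm-based hysteresis screen described above, assuming a Freundlich isotherm and synthetic data (the thesis's actual isotherm model and 95% prediction band are not reproduced here):

```python
# Hedged sketch: fit a Freundlich isotherm (q = Kf * C**(1/n)) to synthetic
# adsorption data in log space, then check how desorption points sit
# relative to the adsorption fit. A real analysis would use a proper 95%
# prediction band; all data here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
C_ads = np.logspace(-1, 2, 15)                                # aqueous conc., ug/L
q_ads = 50 * C_ads**0.7 * rng.lognormal(0, 0.05, C_ads.size)  # sorbed conc., ug/g

# Linear fit in log space: log q = log Kf + (1/n) log C
slope, intercept = np.polyfit(np.log10(C_ads), np.log10(q_ads), 1)
print(f"Freundlich fit: Kf = {10**intercept:.1f}, 1/n = {slope:.2f}")

# Hysteresis screen: do desorption points deviate from the adsorption fit?
C_des, q_des = np.array([0.5, 2.0, 8.0]), np.array([32.0, 85.0, 230.0])
resid = np.log10(q_des) - (intercept + slope * np.log10(C_des))
print("log-residuals of desorption points:", np.round(resid, 3))
```

Desorption points falling within the scatter of the adsorption fit (here, small log-residuals) are the signature of non-hysteretic, reversible sorption.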
  • Item
    Spatio-Temporal Analysis of Roundabout Traffic Crashes in the Region of Waterloo
    (University of Waterloo, 2024-09-18) Miyake, Ryoto
Roundabouts are increasingly implemented as safer alternatives to stop-controlled and signalised intersections, with the goal of reducing the severity and frequency of traffic crashes. The safety performance of roundabouts, however, is influenced by their geometric design, and the effects of geometric design variables on safety can vary across different countries and regions. Despite this, there is limited research on these safety impacts within the Canadian context. This study addresses this gap by using data from the Region of Waterloo, Ontario, to develop a safety performance function (SPF) using a negative binomial regression model. The model identified significant geometric design variables affecting collision frequency, such as inscribed circle diameter (ICD), entry angle, entry lane width, and number of entry lanes. The findings suggest that the safety impacts of geometric design in Canada may differ from those observed in other countries, highlighting the need for region-specific SPFs. Additionally, in areas where roundabouts are relatively new, it is expected that the safety performance of roundabouts may fluctuate over time and across different locations. However, spatio-temporal variations in roundabout safety have not been extensively studied. To fill this gap, a spatio-temporal analysis was conducted using Bayesian hierarchical models to capture spatial and temporal variations in collision frequency. The results reveal significant spatial autocorrelation, while no strong temporal patterns or novelty effect were detected within the scope of the data and modelling approach used in this analysis. This research advances the understanding of how geometric design and spatio-temporal factors influence roundabout safety, providing important insights for the planning and design of roundabouts. Moreover, it is pioneering in its application of spatio-temporal interaction effects in road safety analysis, demonstrating the potential for this approach in future studies.
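A hypothetical sketch of fitting a negative binomial SPF of this kind with statsmodels; the predictors mirror the geometric variables named above, but the data and coefficients are synthetic placeholders:

```python
# Minimal, hypothetical negative binomial SPF. Variable names mirror the
# geometric features mentioned; the data are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 120
X = pd.DataFrame({
    "log_aadt": rng.normal(9.5, 0.4, n),      # log annual average daily traffic
    "icd_m": rng.uniform(25, 60, n),          # inscribed circle diameter (m)
    "entry_angle_deg": rng.uniform(10, 40, n),
    "n_entry_lanes": rng.integers(1, 3, n),
})
mu = np.exp(-6 + 0.8 * X["log_aadt"] + 0.01 * X["icd_m"])
y = rng.poisson(mu * rng.gamma(2.0, 0.5, n))  # overdispersed crash counts

model = sm.GLM(y, sm.add_constant(X),
               family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(model.summary().tables[1])
```

The negative binomial family is the usual choice here because crash counts are overdispersed (variance exceeds the mean), which a Poisson SPF would understate.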
  • Item
    Quantifying gas phase chlorine compounds in sewer system headspaces
    (University of Waterloo, 2024-09-18) Sun, Xiaoyu
Keywords: chlorine; chloramine; chlorine gas sensor; mass transfer; deterministic model
  • Item
    Deep Learning-Based Probabilistic Hierarchical Reconciliation for Hydrological and Water Resources Forecasting
    (University of Waterloo, 2024-09-10) Jahangir, Mohammad Sina
Accurate, probabilistic, and consistent forecasts at different timescales (e.g., daily, weekly, monthly) are important for effective water resources management. Considering the different timescales together as a hierarchical structure, there is no guarantee that when forecast models are developed independently for each timescale in the hierarchy, they will result in consistent forecasts. For example, there is no guarantee that one- to seven-day-ahead forecasts from one model will sum to a weekly forecast from another model. Significant efforts have been made in the time-series forecasting community over the last two decades to solve this problem, resulting in the development of temporal hierarchical reconciliation (THR) methods. Until recently, THR methods had yet to be explored for hydrological and water resources forecasting. The main goal of this research is to introduce THR to the field of hydrological and water resources forecasting and to merge it with the latest advancements in deep learning (DL) to provide researchers and practitioners with a state-of-the-art model that can be used to produce accurate, probabilistic, and consistent multi-timescale forecasts. To achieve this goal, this research follows three interconnected objectives, each including a main contribution to the field of DL-based hydrological forecasting. In the first main contribution of this research, the potential of THR to produce accurate and consistent hydrological forecasts was verified for the first time in hydrology through a large-scale precipitation forecasting experiment using 84 catchments across Canada. Three THR methods were coupled with three popular time-series forecasting models (exponential time-series smoothing, artificial neural network, and seasonal auto-regressive integrated moving average) for annual precipitation forecasting at monthly (12 steps ahead), bi-monthly (6 steps ahead), quarterly (4 steps ahead), 4-monthly (3 steps ahead), semi-annual (2 steps ahead), and annual (1 step ahead) timescales. It was confirmed that not only does utilizing THR guarantee forecast consistency across all timescales, but it can also improve forecast accuracy. DL models are increasingly being used for hydrological modeling, particularly for lumped simulation, due to their ability to capture complex non-linear relationships within hydrological data as well as their efficiency in deployment. Likewise, the application of DL for hydrological forecasting has gained momentum recently. DL models can extract complex patterns from meteorological forcing data (e.g., precipitation) to forecast future streamflow, often leading to forecasts that are more accurate than current conceptual models. However, due to uncertainty in the phenomena affecting hydrological processes, it is necessary to develop accurate probabilistic forecast models to provide insights for informed water management decisions. In the second main contribution of this research, two novel state-of-the-art sequence-to-sequence probabilistic DL (PDL) models were developed, tested, and applied for short-term (one to seven days ahead) streamflow forecasting in over 75 catchments with varied hydrometeorological properties across both the continental United States (CONUS) and Canada. The two designed models, namely a quantile-based encoder-decoder and a conditional variational auto-encoder (CVAE), showed superior performance compared to the benchmark long short-term memory (LSTM) network in terms of forecast accuracy and reliability.
Specifically, CVAE, a generative DL model that can estimate magnitudes of different sources of uncertainty (e.g., aleatoric, epistemic), proved to be effective in producing reliable forecasts for longer forecast lead times (three to seven days ahead). Given the introduction of THR to the field of hydrological forecasting through the first main contribution, there is no guidance on how to couple THR with the latest advancements in DL, especially PDL, to produce accurate and consistent probabilistic hydrological forecasts. Furthermore, existing methods for combining THR with DL models, particularly PDL models, suffer from several limitations. Firstly, almost all approaches treat THR as a post-processing step. Secondly, existing THR methods often lack adaptability, meaning they are unable to adjust properly to changing data distributions or new information. Finally, there is limited research on implementing probabilistic THR, a crucial aspect for making probabilistic forecasts consistent. As the third main contribution, a hierarchical DL model (HDL) was introduced where THR was integrated directly into the DL model. Specifically, a custom THR layer was developed that can be combined with any DL model, much like an LSTM layer or a linear layer, to produce the proposed HDL. This integrated approach (via the new THR layer) allows any DL model to leverage temporal information across multiple timescales during training, perform probabilistic THR, and be efficient for real-time application. Furthermore, the proposed HDL is based on auto-regressive normalizing flows, a state-of-the-art generative DL model that is more flexible than CVAE in that it can non-parametrically estimate the probability distribution of the target variable (e.g., streamflow). HDL was tested on more than 400 catchments across CONUS for weekly streamflow forecasting at daily (seven steps ahead) and weekly (one step ahead) timescales. The performance of HDL was benchmarked against LSTM variants. HDL produced forecasts that had substantially higher accuracy than the LSTM variants and simultaneously generated consistent forecasts at both daily and weekly timescales, without the need for post-processing (as in the vast majority of THR methods). The implementation of THR as a neural network layer allows it to be seamlessly combined with other DL layers. For example, the new THR layer can be coupled with physics-based differentiable routing layers for multi-timescale distributed hydrological forecasting. It is expected that HDL will serve as a benchmark upon which future THR methods will be compared for streamflow forecasting. Furthermore, given the generality of the approach, HDL can be used for forecasting other important variables within hydrology (e.g., soil moisture) and water resources (e.g., urban water demand), as well as other disciplines, such as renewable energy (e.g., solar power).
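To make the reconciliation idea concrete, here is a minimal sketch of standard OLS temporal reconciliation for a daily/weekly hierarchy (a generic textbook projection, not the proposed THR layer itself); the forecast numbers are placeholders:

```python
# Minimal sketch of temporal hierarchical reconciliation (THR) for a
# daily/weekly hierarchy via the standard OLS projection
#   y_tilde = S (S^T S)^{-1} S^T y_hat,
# where S maps the 7 daily (bottom-level) values to the full hierarchy.
import numpy as np

# Summing matrix: first row aggregates the week, identity keeps the days.
S = np.vstack([np.ones((1, 7)), np.eye(7)])

# Incoherent base forecasts: [weekly, day1..day7] (e.g., streamflow, m^3/s).
y_hat = np.array([75.0, 9.0, 10.0, 11.0, 10.5, 10.0, 9.5, 9.0])

P = np.linalg.solve(S.T @ S, S.T)   # OLS reconciliation projection
y_tilde = S @ (P @ y_hat)

print("reconciled daily :", np.round(y_tilde[1:], 2))
print("reconciled weekly:", round(y_tilde[0], 2),
      "= sum of days:", round(y_tilde[1:].sum(), 2))
```

The base forecasts above are incoherent (the daily values sum to 69, not 75); after projection, the weekly value equals the sum of the daily values by construction, which is the consistency property THR guarantees.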
  • Item
    Fatigue and Fracture Behaviour of Steel Wire-Arc Additively Manufactured Structural Materials
    (University of Waterloo, 2024-09-04) Lee, Jun Seo
Increasing demand for automation has influenced many industries to find ways to integrate it into their markets. Within the civil sector, the integration of automation into the various stages of a project unlocks new opportunities that were previously difficult to achieve. Architectural designs that previously appeared unachievable due to high planning and manufacturing costs can now be realized with automation and robotics embedded in the early stages of project development. The wire arc additive manufacturing (WAAM) process is an additive manufacturing (AM) process that allows efficient fabrication of structural elements. This process, also referred to as gas-metal arc additive manufacturing (GMAAM), uses directed energy deposition (DED) to create components. Specific to the WAAM process, a metal wire is fed into an electric arc, and then welded into a designed shape. For structural steel fabricators, this automated technology could allow for the reduction of supply chains, part inventories, and scrap waste, and will help improve the digitalization of the fabrication process. Moreover, the WAAM process allows the fabrication of customized connection nodes and unique structural shapes that are difficult to achieve with conventional subtractive manufacturing. Despite the many potential advantages of the WAAM process, research is needed for WAAM structural steel to be used in the civil engineering sector. Mechanical properties such as the elastic modulus, yield strength (YS), and the ultimate tensile strength (UTS) of WAAM material should be tested and validated. In addition, WAAM steel can have a very rough and wavy surface due to the additive manufacturing process. The rough surface can cause stress concentration zones within the material that may affect its fatigue performance. Although this can be mitigated by post-processing steps such as surface milling, it is important to study its properties in its as-built state, as milling is an additional fabrication step, which adds time and cost and may not be necessary for some projects and applications. This thesis aims to explore the material properties, fracture toughness, and fatigue behaviour of WAAM steel components. Through experimental testing, mechanical properties such as the elastic modulus, yield strength, and ultimate tensile strength are determined. Further test results include Charpy V-notch impact tests to determine fracture toughness, as well as tests to determine crack propagation properties. Lastly, the experimental program included testing the fatigue behaviour of WAAM steel for both the smooth and rough (as-fabricated) specimens. The experimental program also examined the effects of weld direction by including tests on specimens oriented both perpendicular and parallel to the weld. The fatigue data collected from the experimental program was used to plot a stress-life (S-N) curve for WAAM steel. The data was then statistically analyzed and compared to current codes such as CSA S6 and S16. It was found that the fatigue behaviour of the WAAM steel was dependent on the weld direction. The specimens oriented parallel to the weld showed behaviour similar to CSA Detail Category B. Specimens oriented perpendicular to the weld showed behaviour similar to CSA Detail Category E. A metallurgical study of the WAAM steel showed that its microstructure resembles that of welded steel components.
Within the microstructure, the grain sizes and boundaries indicated differences between the as-deposited zones and the reheated zones. The reheated zones, where the addition of new layers disturbed the microstructure, consisted of finer grains expected to exhibit greater toughness. Lastly, a linear elastic fracture mechanics (LEFM) model was used to predict the fatigue lives of the WAAM steel fatigue specimens in the perpendicular- and parallel-to-weld orientations. The model was able to predict the fatigue behaviour of the WAAM steel specimens, but the results were greatly dependent on the assumed surface stress concentration factors, Kt. More research is needed to obtain Kt values that will enable greater accuracy in determining the fatigue behaviour of WAAM steel.
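As a rough sketch of the LEFM approach described above, the following integrates the Paris crack-growth law da/dN = C(ΔK)^m with an assumed surface stress concentration factor Kt; all constants are illustrative placeholders, not the thesis's calibrated values:

```python
# Hedged LEFM sketch: integrate the Paris law da/dN = C * (dK)**m with
# dK = Kt * dS * Y * sqrt(pi * a) from an initial to a critical crack
# depth. All constants are assumed placeholders.
import numpy as np

C, m = 2.0e-12, 3.0            # Paris constants (m/cycle, MPa*sqrt(m) units)
Y = 1.12                       # geometry factor, edge crack (assumed constant)
KT = 1.8                       # assumed surface stress concentration factor
DS = 100.0                     # applied stress range, MPa
a, a_crit = 0.10e-3, 10.0e-3   # initial and critical crack depths, m

n_cycles, da = 0.0, 1.0e-6
while a < a_crit:
    dK = KT * DS * Y * np.sqrt(np.pi * a)   # stress intensity range, MPa*sqrt(m)
    a += da
    n_cycles += da / (C * dK ** m)          # dN = da / (C * dK**m)

print(f"predicted life: {n_cycles:.3e} cycles")
```

Because ΔK scales linearly with Kt and life scales roughly with Kt^(-m), a modest error in the assumed surface stress concentration factor shifts the predicted life by a large factor, which is consistent with the sensitivity noted above.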
  • Item
    Development of Improved Methods to Establish Toughness Requirements for North American Steel Highway Bridges
    (University of Waterloo, 2024-09-04) Chien, Michelle
Brittle fracture is a major concern to structural engineers as it can have significant consequences in terms of safety and cost. Although modern-day occurrences are rare, it is well known that they can occur without warning and may lead to the sudden closure of a bridge, loss of service, expensive repairs, and/or loss of property or life. In Canada, steel bridge fracture is a more significant concern due to the harsh climate present throughout much of the country, which, if the toughness properties are improperly specified, is sufficient to put many steels on the lower shelf of the toughness-temperature curve. The provisions for avoidance of brittle fracture in various bridge design codes vary in complexity. The existing Canadian CSA standards take a fairly simplistic approach to design against brittle fracture, using design tables that have two temperature zones. Depending on the minimum mean daily temperature of the location of interest, one can determine the Charpy V-notch (CVN) testing requirements for the grade of steel. However, it is known that temperature is not the only factor that plays a role in the fracture behaviour of steels. Other factors influencing fracture, such as plate thickness, crack size, demand-to-capacity ratio, and considerations related to traffic, are currently neglected. It is generally known, for example, that thin plates (e.g., less than 12.5-19.0 mm in bridge applications) are less susceptible to brittle fracture, due to the rolling reduction ratio at the mill. However, for the same steel grade (with a small distinction between base and weld metal), the same CVN requirements are applicable to a wide range of plate thicknesses (i.e., from the minimum allowed for corrosion considerations up to 100 mm). The existing CSA standards also assign responsibility for identifying fracture-critical members (FCMs) to the design engineer, though regulations on how to identify them are limited and vague, leaving much to engineering judgement. A comparison of brittle fracture design provisions around the world reveals that more sophisticated approaches have been developed in terms of modelling and understanding brittle fracture in existing and new bridges than the ones currently in use in North America. One of these more involved methods is the fracture mechanics method in the European EN 1993-1-10 standard, which allows factors such as plate thickness, crack size, and strain rate to be considered. This standard also gives designers the option of using a simplified method or a much more involved, fracture mechanics-based approach. While the current Canadian brittle fracture provisions generally appear to be meeting the needs of the code users, two issues are noteworthy. The first, which has already been alluded to, is that the North American provisions offer less flexibility and guidance for handling unusual situations than the Eurocode methods. The ‘one size fits all’ approach in the Canadian design standards may not be optimal and may result in structures being overdesigned or under-designed, leading to inefficiencies in safety and cost. This highlights the need for answering questions regarding the feasibility of allowing reduced toughness requirements for bridges fabricated with thinner plates or experiencing lower traffic volumes or demand-to-capacity ratios. The second issue is that few studies can be found in the literature around the world attempting to assess the level of reliability against brittle fracture provided by any of the existing design provisions.
The lack of a probabilistic assessment of brittle fracture risk in Canada and the few studies globally highlights a gap in the current understanding and implementation of these design standards. This thesis includes a literature review on: 1) factors affecting material toughness, 2) common methods of evaluating toughness, 3) North American and European brittle fracture provisions, and 4) previous work on design code calibration and reliability analysis for steel structures subject to various failure modes, including brittle fracture. A comparison of the North American and European design provisions using the example of a typical steel-concrete composite highway bridge is then presented. For this case study, it was found that North American codes are typically more conservative than the Eurocode for bridge elements made with thinner plates and less conservative for elements made with thicker plates. Following this, the fracture mechanics-based European brittle fracture limit state is then evaluated in a probabilistic framework using Monte Carlo Simulation (MCS). In order to do this, statistical distributions are established for the various input parameters; in particular, statistical models for the live traffic load and temperature are developed. Prior to application of the model, a calibration step is performed to establish a design crack depth. Sensitivity studies are then performed where key input parameters are varied to examine how the failure probability is affected by variations in each parameter. The work is then cast in a time-dependent reliability framework, using historical temperature and traffic data, to determine the failure probability with temperature and traffic loading fluctuating on a time scale throughout the year. This time-dependent model is then used to assess the reliability level provided by the current Canadian brittle fracture provisions. Given a certain plate thickness, crack size, load level, and geographical temperature data, the annual probability of failure and annual reliability index, β, are obtained. The obtained reliability indices are compared with a target reliability index to assess the extent to which the Canadian provisions provide consistent and adequate levels of reliability against brittle fracture. On the basis of the results, the North American brittle fracture design provisions are critically assessed, and new design tools from these probabilistic studies are presented. Lastly, opportunities for improvement in the existing Canadian standards and areas warranting further study are highlighted.
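A hypothetical Monte Carlo sketch of a brittle-fracture limit state of the kind evaluated above (failure when the applied stress intensity exceeds the material toughness); the distributions and parameters are illustrative assumptions, not the thesis's calibrated models:

```python
# Hypothetical Monte Carlo sketch: failure when applied stress intensity
# K_applied exceeds material toughness K_mat. All distributions and
# parameters are illustrative placeholders.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2024)
N = 1_000_000

# Applied K for an edge crack (geometry factor Y = 1.12), with random
# stress and crack depth (both assumed lognormal).
stress = rng.lognormal(mean=np.log(110.0), sigma=0.15, size=N)  # MPa
a = rng.lognormal(mean=np.log(0.010), sigma=0.30, size=N)       # m
K_applied = 1.12 * stress * np.sqrt(np.pi * a)                  # MPa*sqrt(m)

# Material toughness at the service temperature (assumed lognormal).
K_mat = rng.lognormal(mean=np.log(60.0), sigma=0.25, size=N)    # MPa*sqrt(m)

p_f = np.mean(K_applied > K_mat)
print(f"P_f = {p_f:.2e}, reliability index beta = {-norm.ppf(p_f):.2f}")
```

In a time-dependent version of this analysis, the toughness and load distributions would themselves vary with seasonal temperature and traffic, and the annual failure probability would be accumulated over those fluctuations.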
  • Item
    Costs and Benefits of Building Airtightness Improvements for Air Pollution Exposure and Human Health
    (University of Waterloo, 2024-08-29) Salehi, Amir Reza
Air pollution is the largest global environmental health threat; fine particulate matter (PM₂.₅) alone, as the most harmful air pollutant, is associated with millions of premature deaths each year. Most studies focus on the impacts of changes in outdoor air PM₂.₅ on human health. This overlooks the fact that most exposure to PM₂.₅ of outdoor origin occurs indoors, as people tend to spend most of their time indoors, and that the pollution infiltrates the building envelope. Specifically, Americans spend almost 70% of their time in their homes, and approximately 50% of outdoor-origin PM₂.₅ health burdens are due to residential exposure. To better understand the effect of this infiltration on human health, and to explore opportunities for improvement, this study examines the health impact of enhancing building airtightness, particularly in single-family homes where approximately 75% of the U.S. population resides. This thesis conducts a historical study on modeled daily average PM₂.₅ levels from 1980 to 2010 in the United States to examine the national and spatial costs and benefits associated with improving the airtightness of these homes to mitigate the health effects of air pollution. To achieve this, an integrated modeling framework was developed, which incorporates mass balance modeling, health impact modeling, and economic modeling. This framework was used to establish baseline and alternative levels of exposure to outdoor-origin PM₂.₅ across the historic building stock in the contiguous United States under the current state and post-intervention state. Subsequently, it evaluates the health benefits and retrofit costs associated with improving airtightness levels. The primary scenarios evaluated involve enhancing building air sealing to meet the standards mandated by the International Energy Conservation Code (IECC) 2018. Additionally, secondary scenarios of 20%, 25%, 40%, and 60% air leakage reductions were also considered. This study analyzes the benefits of improvements across three distinct building age groups. The results reveal that enhancing the airtightness of single-family homes up to IECC 2018 mandates would require interventions costing approximately $105 ($102, $107) billion nationally. However, they could save about 44,611 (29,831, 58,905) lives annually and deliver annual health benefits valued up to $356 ($45, $1,173) billion in 2020 USD. The result is an annual net benefit of approximately $251 (-$62, $1,067) billion in 2020 USD, in the intervention year. This study also indicates that older homes, particularly those constructed before 1940, exhibit the greatest reductions in indoor PM₂.₅ levels from outdoor sources. These homes demonstrate a potential annual benefit of $55 ($7, $193) billion in 2020 USD and 7,104 (4,655, 9,832) lives saved, translating to about $3,066 ($390, $10,759) in 2020 USD in benefits per resident annually. On a per-house basis, the cost of improvements in these older homes averages $1,686 ($1,616, $1,756) in 2020 USD, while the net benefit per resident can reach up to $2,263 (-$442, $9,825) in 2020 USD in the year of intervention. Significant spatial variability in benefits exists, with the greatest impacts observed in the eastern U.S. due to higher regional pollution levels and leakier homes. Further, there is uncertainty associated with model parameters, particularly with the health response to exposure. Despite these uncertainties, most interventions studied show large mean net benefits.
These findings strongly suggest that targeted enhancement of building airtightness can substantially benefit public health and should be considered by decision-makers when designing building standards or developing retrofit plans.
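The exposure modelling described above rests on a standard steady-state indoor mass balance; as a minimal sketch with generic parameter values (not the study's housing-stock inputs), the infiltration factor F_inf = P·a/(a + k) gives the indoor fraction of outdoor-origin PM2.5:

```python
# Minimal sketch of the steady-state indoor mass balance:
#   F_inf = P * a / (a + k)
# where a is the air exchange rate (1/h), P the envelope penetration
# factor, and k the indoor deposition rate. Values are generic assumptions.
P = 0.8      # penetration factor (-)
K_DEP = 0.4  # deposition rate (1/h)

def indoor_pm25(c_out: float, ach: float) -> float:
    """Indoor concentration (ug/m^3) of outdoor-origin PM2.5."""
    f_inf = P * ach / (ach + K_DEP)
    return f_inf * c_out

c_out = 12.0  # outdoor PM2.5, ug/m^3
for label, ach in [("leaky (pre-1940)", 1.0), ("air-sealed (IECC 2018)", 0.3)]:
    print(f"{label:<24} F_inf = {P * ach / (ach + K_DEP):.2f}, "
          f"indoor = {indoor_pm25(c_out, ach):.1f} ug/m^3")
```

Lowering the air exchange rate through air sealing reduces F_inf, which is the mechanism by which airtightness retrofits cut indoor exposure to outdoor-origin PM2.5.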
  • Item
    An Investigation of Factors Affecting the Adsorption of Per- and Polyfluoroalkyl Substances (PFAS) on Colloidal Activated Carbon (CAC): Implications for In-situ Immobilization of PFAS
    (University of Waterloo, 2024-08-28) Gilak Hakimabadi, Seyfollah
The immobilization of per- and polyfluoroalkyl substances (PFAS) by colloidal activated carbon (CAC) barriers has been proposed as a potential in-situ method to mitigate the transport of plumes of PFAS in the subsurface. However, if PFAS are continuously released from a source zone, adsorptive sites on CAC will eventually become saturated, at which point breakthrough of PFAS from a barrier will occur. To predict the long-term effectiveness of CAC barriers, it is important to investigate the factors that may affect the adsorption of PFAS on CAC. The objective of this research is to investigate some of these factors by answering the following questions: (1) How do co-contaminants, aquifer materials, and typical groundwater constituents affect the adsorption of PFAS by CAC?; and (2) How does reducing the particle size of activated carbons (ACs) affect their physico-chemical properties and ability to adsorb PFAS? To address the first research question, the adsorption of seven anionic PFAS on a polymer-stabilized CAC (i.e., PlumeStop®) and a polymer-free CAC was investigated using batch experiments (Chapter 3). The research employed synthetic solutions consisting of one PFAS, 1 mM of sodium bicarbonate (NaHCO3), and inorganic and organic solutes, including Na+, Cl-, Ca2+, dissolved organic carbon (DOC), diethylene glycol butyl ether (DGBE), trichloroethylene (TCE), benzene, 1,4-dioxane, and ethanol. It was observed that the affinity of PFAS to CACs was in the following order: PFOS > 6:2 FTS > PFHxS > PFOA > PFBS > PFPeA > PFBA. This result indicates that hydrophobic interaction was the predominant adsorption mechanism and that hydrophilic compounds such as PFBA and PFPeA will break through CAC barriers first. The partition coefficient Kd for the adsorption of PFAS on the polymer-stabilized CAC was 1.3–3.5 times smaller than the Kd for the adsorption of PFAS on the polymer-free CAC, suggesting that the polymers decreased the adsorption, presumably due to competition. Thus, the PFAS adsorption capacity of PlumeStop CAC barriers could increase once the polymers are biodegraded and/or desorbed. The affinity of PFOS and PFOA to CAC increased when the ionic strength of the solution increased from 1 to 100 mM, or when the concentration of Ca2+ increased from 0 to 2 mM. In contrast, less mass of PFOS and PFOA was adsorbed in the presence of 1–20 mgC/L Suwannee River fulvic acid, which represented dissolved organic carbon, or in the presence of 10–100 mg/L DGBE, which is a major component in some aqueous film-forming foam (AFFF) formulations. Therefore, information on the occurrence of DGBE and other glycol ethers in AFFF-impacted groundwater is needed to assess if the effect of these species on CAC barrier performance is appreciable. The presence of 0.5–4.8 mg/L benzene or 0.5–8 mg/L TCE, the co-contaminants that may commingle with PFAS at AFFF-impacted sites, diminished PFOS adsorption but had no effect on, or slightly enhanced, PFOA adsorption. When the initial concentration of TCE was 8 mg/L, the Kd (514 ± 240 L/g) for the adsorption of PFOS was approximately 20 times lower than that in the TCE-free system (Kd = 9,579 ± 829 L/g). Therefore, the effect of TCE and benzene may depend on the type of PFAS. To gain insight into the effect of aquifer materials and water chemistry, the adsorption of PFOS, PFOA, and PFBS on CAC was investigated in the presence of six aquifer materials.
Further, the removal of five PFAS (PFOS, PFOA, PFHxA, PFHxS, and 6:2 FTS) from six actual groundwater samples was studied (Chapter 4). Under the experimental conditions employed, the mass of PFBS, PFOA, and PFOS removed from the solution in the presence of CAC and aquifer materials was 2 to 4 orders of magnitude greater than the mass removed when only aquifer materials were present. It was also observed that the presence of aquifer materials did not appreciably affect the adsorption of PFBS, PFOA, and PFOS on CAC. In the experiments with six actual groundwater samples, the affinity of the studied PFAS to CAC was in the following order: PFOS > 6:2 FTS > PFOA ~ PFHxS > PFHxA, except for two instances of 6:2 FTS being the compound removed to the greatest extent. The adsorption affinity trend among the studied PFAS is consistent with the adsorption being driven by the hydrophobic effect. Principal component analyses (PCA) of the results obtained from the experiments with aquifer materials demonstrated that the correlation between the partition coefficient Kd for each PFAS and Ca2+ and DOC was the opposite of the correlations observed in Chapter 3. In the groundwater experiments, the correlation between Kd for each PFAS and ionic strength and Ca2+ was also the opposite of the correlations observed in Chapter 3. These opposite effects were hypothesized to be due to a complex interplay among various parameters affecting the adsorption of PFAS on CAC, which may confound the effect of each parameter. The results of this study indicate that laboratory experiments designed to evaluate the retention of PFAS in a CAC barrier should employ site-specific groundwater and aquifer materials. To address the second research question, four commercial ACs (three granular and one powdered) were pulverized by grinding and micromilling to create powdered activated carbons (PACs) and CACs, and the adsorption of PFBS, PFOA, and PFOS on these adsorbents (11 in total) was investigated (Chapter 5). All three PFAS were adsorbed less by CACs (d50 = 1.2–2.5 μm) than by their parent PACs (d50 = 12–107 μm). A detailed characterization of the properties (surface area, micropore and mesopore volumes, pHpzc, and surface elemental composition) of these adsorbents suggests that the reduced adsorption capacity of CACs was likely the result of AC oxidation during milling, which decreased surface hydrophobicity. Granular activated carbons (GACs, 425–1,700 μm) adsorbed less PFAS than PACs and CACs, partly due to the slow rate of adsorption. Of all ACs, the materials made from wood possessed the greatest surface area and porosity but adsorbed PFAS the least. The repulsion between the negatively charged surface of these wood-based ACs (pHpzc = 5.1) and the negatively charged headgroups of PFBS, PFOA, and PFOS molecules was identified as the dominant factor that inhibited adsorption. The results of this study suggest that the adsorption kinetic advantage of CACs may be achieved at the expense of reduced adsorption affinity and that the role of electrostatic interaction between PFAS and AC should be considered when selecting AC for PFAS treatment applications.
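For reference, the partition coefficient Kd reported throughout is obtained from batch data as Kd = q/C = (C0 − C)·V/(m·C); a minimal sketch with placeholder numbers chosen only to land near the magnitudes quoted above:

```python
# Minimal sketch of a batch partition coefficient:
#   Kd = q / C = (C0 - C) * V / (m * C)
# with C0 and C the initial and equilibrium aqueous concentrations, V the
# solution volume and m the adsorbent mass. Numbers are placeholders.
def kd(c0_ugL: float, c_ugL: float, v_L: float, m_g: float) -> float:
    """Partition coefficient Kd in L/g."""
    q = (c0_ugL - c_ugL) * v_L / m_g  # sorbed concentration, ug/g
    return q / c_ugL

# e.g., 100 ug/L PFOS reduced to 0.5 ug/L by 2 mg of CAC in 100 mL:
print(f"Kd = {kd(100.0, 0.5, 0.1, 0.002):,.0f} L/g")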
  • Item
    Self-prestressing Iron-based Shape Memory Alloy (Fe-SMA) Epoxy Composite for Active Reinforced Concrete Shear Strengthening
    (University of Waterloo, 2024-08-23) Pinargote Torres, Johanna
In Canada, the rapid deterioration of aging reinforced concrete (RC) structures has become a continuing issue, with more than 40% of bridges being over 40 years old and 38% in poor or fair condition, necessitating billions of dollars for rehabilitation (Cusson & Isgor, 2004; Lafleur, 2023). The loss of strength and stiffness in RC bridge structures can be attributed to age and exposure, and it has been exacerbated by increased freight weights, heavier traffic, extreme freeze/thaw cycles, and climate change. Shear is a particularly concerning RC failure mode because of its brittle and abrupt nature; hence, various shear-strengthening mechanisms have been developed. Most of these mechanisms involve fiber-reinforced polymers (FRP) and are passive, acting only after the structure experiences damage. Active (prestressing) mechanisms have gained attention due to their ability to act immediately after application, reducing crack widths and propagation. However, implementing shear prestressing is complex, often requiring expensive and impractical large jacking equipment. Smart materials such as iron-based shape memory alloys (Fe-SMAs) have the potential to enable cost-efficient and simple shear strengthening and retrofitting techniques. Fe-SMAs exhibit a thermomechanical property known as the shape memory effect (SME), which allows the material to return to its undeformed shape after reaching an activation temperature, achievable through resistive heating. If the material is restrained, the Fe-SMA can self-prestress an element without the need for jacking tools. This project presents an experimental study on the shear-strengthening feasibility and capacity of a near-surface bonded (NSB) active Fe-SMA epoxy composite. The composite consists of U-bent strips embedded into grooves filled with epoxy. After the epoxy cures, the Fe-SMA strips are heated to at least 180 °C with an electric current to self-prestress the concrete. Three shear-critical RC beams were cast: one served as a control, and the other two were shear strengthened. Two Fe-SMA ratios were assessed: 0.05% and 0.1%. The strengthened beams exhibited an increase in strength of about 27%, along with reduced crack widths and stirrup stresses. The NSB Fe-SMA strips interrupt the formation and widening of diagonal cracks; however, increasing their ratio does not necessarily increase shear strength. A dense NSB Fe-SMA-to-concrete interface weakens the stirrup plane, creating horizontal cracks running along the top face of the beam (at the ends of the Fe-SMA U-wrapped strips) in the compression region and causing separation of the side concrete cover. Additional insights into the active shear strengthening are provided by two FEA parametric studies in Vector2 that evaluate prestress level and Fe-SMA ratio, examining the load-displacement response, crack widths, and reinforcement stresses of shear-critical RC specimens.
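As a rough, hypothetical illustration of the self-prestressing mechanism described above, the force a restrained strip delivers after activation can be approximated as its recovery stress times its cross-sectional area. The recovery stress and strip dimensions below are assumed values for the sketch, not parameters measured in this study:

```python
# Minimal sketch: axial prestressing force from one restrained Fe-SMA strip
# after resistive heating. All values are illustrative assumptions.
recovery_stress_MPa = 350.0                      # assumed recovery stress after activation
strip_width_mm, strip_thickness_mm = 20.0, 1.5   # assumed strip cross-section
area_mm2 = strip_width_mm * strip_thickness_mm
force_kN = recovery_stress_MPa * area_mm2 / 1000.0  # MPa * mm^2 = N, then to kN
print(f"Self-prestressing force per strip: {force_kN:.1f} kN")  # ~10.5 kN
```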
  • Item
    Investigation of the Interrelationships between Orthophosphate Corrosion Inhibitors, Monochloramine Residual, Biofilm Development, and Nitrification in Chloraminated Drinking Water Distribution Systems
    (University of Waterloo, 2024-08-21) Badawy, Mahmoud
Lead contamination in drinking water distribution systems (DWDS) caused by pipe corrosion is a human health concern. Orthophosphate, used to control corrosion, forms passivating films that limit lead release. At the same time, it may enhance biofilm growth, monochloramine decay, and nitrification potential, since phosphorus is an essential nutrient for microorganisms. However, information on these effects in previous studies is limited and contradictory, which may be attributable to variations in nutrient limitations in the waters used across those studies. Specifically, the addition of phosphate may enhance microbiological growth in phosphorus-limited water, yet most previous studies did not examine phosphorus limitation in the water employed in their experiments. Moreover, biofilm growth and monochloramine breakdown have rarely been tracked concurrently, which may be key to understanding how orthophosphate affects monochloramine decay. Furthermore, there is a lack of research on the effect of phosphate on nitrification in real-world DWDS; hence, further research is needed. The main goal of this thesis was to investigate the effect of orthophosphate on biofilm development by assessing microbiological growth, biofilm formation potential, and metabolic activity, in addition to monitoring the effects of orthophosphate on monochloramine decay and nitrogenous compounds. These parameters were monitored simultaneously in both the presence and absence of orthophosphate to facilitate a more comprehensive understanding of its effects. This objective was achieved primarily through experiments with bench-scale flow-through model distribution systems (MDSs) and additional laboratory batch tests using treated water from a Great Lakes utility. In the first phase of this study, initial batch tests indicated that the test water used throughout the thesis is phosphorus-limited. Subsequently, in a 3-month experiment with four MDSs fed with chloraminated water (2 mg Cl₂/L) and orthophosphate doses of 0 to 4 mg PO₄³⁻/L, increasing the orthophosphate dose was found to enhance biofilm growth and monochloramine decay (measured as total chlorine) in the MDSs, with the largest increases between 1 and 2 mg PO₄³⁻/L. A positive relationship between biofilm microbiological growth and the total chlorine decay coefficients indicates that the higher monochloramine decay following orthophosphate addition is attributable to increased microbial activity. In the second phase, the impacts of monochloramine doses of 2 and 3 mg Cl₂/L were explored with and without 2 mg PO₄³⁻/L of orthophosphate over 108 days. The presence of orthophosphate enhanced both biofilm growth and development and the rate of monochloramine degradation, as observed in the first phase. Increasing the monochloramine dose from 2 to 3 mg Cl₂/L slightly reduced microbiological growth and noticeably decreased first-order monochloramine decay coefficients (measured as total chlorine). Despite this reduction, free ammonia levels increased with the higher monochloramine dose because more total ammonia was present. A strong correlation was also noted between total chlorine decay coefficients and biofilm microbiological parameters. Additionally, orthophosphate increased the genetic diversity within biofilm communities, whereas increasing the monochloramine dose noticeably reduced genetic diversity.
In the third phase, the effect of residence time (6 days and 12 days) on monochloramine decay in the presence and absence of orthophosphate (2 mg PO₄³⁻/L) was studied using two MDSs. The longer residence time of 12 days led to higher microbial activity, monochloramine decay coefficients (measured as total chlorine), and nitrite formation compared with the shorter residence time of 6 days. Additionally, orthophosphate enhanced microbiological growth, monochloramine decomposition, and nitrite formation at the 12-day site, whereas its impact was less pronounced and became evident only after day 62 at the 6-day site. First-order total chlorine decay coefficients and nitrite concentrations remained stable throughout the experiment at the 6-day residence time. At the 12-day residence time, however, monochloramine decay progressively increased over time, accompanied by a rise in nitrite formation by the end of the experiment. Links between monochloramine decay and biofilm microbiological parameters were again noted. These correlations suggest that the increase in monochloramine decomposition, whether resulting from increased residence time or from orthophosphate addition, was largely driven by microbiological growth and activity. In the fourth phase, the findings from the previous phases were evaluated with another phosphorus-limited water source of different water chemistry. A water source selected through batch testing of several candidates and the reference water from earlier phases were chloraminated at 2 mg Cl₂/L and tested in four MDSs: two fed with the reference water (one control and one with 2 mg PO₄³⁻/L) and two fed with the selected water source (one control and one with 2 mg PO₄³⁻/L). Orthophosphate had similar effects on both water sources with respect to biofilm growth and development and monochloramine decomposition, indicating that the results obtained in the previous phases may be valid for other phosphorus-limited water sources, even those with different chemical compositions. In the final phase, a batch-test study was conducted on samples from a full-scale DWDS that employs monochloramine and orthophosphate, to assess monochloramine decay and nitrification potential. This study was compared with an earlier study conducted before the introduction of orthophosphate, which used samples from the same DWDS sampling sites and identical batch-testing procedures. Monochloramine decomposition due to microbiological processes was higher after orthophosphate addition at points farther along the DWDS, where residence times are longer. Nitrite formation during batch tests on samples collected far from the distribution system's entrance was also greater after orthophosphate addition, indicating a higher nitrification potential. Monochloramine decay due to chemical processes was similar before and after orthophosphate addition. In conclusion, orthophosphate promoted biofilm formation, genetic diversity, and nitrification potential, which in turn increased monochloramine decay. To mitigate these effects, the thesis recommends strategies including decreasing orthophosphate dosages, increasing monochloramine dosages, and shortening residence times, while closely monitoring water quality parameters, especially nitrification indicators.
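Because each phase reports first-order decay coefficients for total chlorine, a minimal sketch of how such a coefficient can be fitted may be helpful: assuming C(t) = C0 exp(-kt), k is the negative slope of ln C against time. The measurement series below is hypothetical, not thesis data:

```python
# Minimal sketch: fit a first-order total chlorine decay coefficient by
# linear regression on ln(C). Values are hypothetical, not thesis data.
import numpy as np

t_days = np.array([0.0, 1.0, 2.0, 4.0, 6.0])             # sampling times
total_cl_mg_L = np.array([2.0, 1.70, 1.45, 1.05, 0.76])  # total chlorine

slope, intercept = np.polyfit(t_days, np.log(total_cl_mg_L), 1)
k = -slope  # first-order decay coefficient (1/day)
print(f"k = {k:.3f} per day")  # ~0.16 per day for these assumed values
```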
  • Item
    Direct Method of Generating Floor Response Spectra for Structures Considering Soil-Structure Interaction
    (University of Waterloo, 2024-08-21) Li, Yue
Floor Response Spectra (FRS) are crucial for the seismic design and safety assessment of structures, systems, and components (SSCs) in nuclear power facilities. Generating accurate FRS requires considering soil-structure interaction (SSI) effects, especially for structures with flexible foundations and spatially varying ground motions. This thesis presents a comprehensive approach to address these challenges in the context of nuclear power plant (NPP) structures, with a specific focus on the applicability of the method to Small Modular Reactors (SMRs). The main contributions are as follows: (1) A direct spectra-to-spectra method is extended for efficient FRS generation in multi-supported structures, incorporating SSI effects through a substructure approach. This method converts Foundation Input Response Spectra (FIRS) into Foundation Level Input Response Spectra (FLIRS) using analytically derived transfer matrices based on soil stiffness and structural modal information. It accommodates both flexible and rigid foundations under varying seismic inputs, eliminating intermediate spectrum-compatible time history generation and full system reanalysis when properties change. (2) A numerical example of a 3-DOF structure with two structural nodes and one foundation node supported by a generalized soil spring is presented to verify the proposed method. Both the theoretical formulation and the numerical simulation confirmed the equivalence of seismic responses between the coupled soil-structure system under FIRS excitations and the decoupled structure under FLIRS inputs, validating the theoretical rigour of replacing FIRS with FLIRS in the analysis. (3) A comprehensive methodology for evaluating dynamic soil stiffness matrices is presented, utilizing the relationship between dynamic flexibility and stiffness matrices. The method applies sinusoidal excitations to calculate steady-state response amplitudes and phase lags, from which the real and imaginary parts of the dynamic response are derived. A progressive validation strategy is employed, systematically validating the method from simple lumped-mass systems to continuous-mass systems and then to complex 3D half-space soil models, ensuring its reliability across various scenarios. This approach provides a robust and versatile tool for characterizing dynamic soil stiffness properties across various structural complexities. The method can significantly reduce dependence on specialized “black-box” software such as ACS SASSI, thus enhancing the accessibility and efficiency of seismic analysis for nuclear facilities. (4) The proposed direct method was applied to a multi-supported structure with SSI taken into account. The FRS from the proposed method show excellent agreement with “benchmark” time history results, particularly in the horizontal directions, with errors consistently below 5%. Two seismic input scenarios, fully correlated and fully independent excitations at multiple supports, are explored, showcasing the method’s versatility. Some discrepancies in the vertical direction are attributed to limitations of the vertical tRS, indicating an area for future refinement. (5) Parametric studies investigate the influence of site conditions and of internal-external structure coupling element stiffness on FRS. Evaluated across the site classes of the ASCE 7-10 standard, the method demonstrates robust performance for most site types. The study reveals optimal stiffness ranges affecting FRS peaks.
It also identifies energy transfer patterns between structural components as the stiffness of the connecting elements changes, offering insights for nuclear facility design. The methodologies developed in this thesis advance the state of the art in seismic analysis and design of nuclear structures, particularly SMRs. By addressing the complex challenges posed by multi-support excitations and SSI, this research provides a foundation for safer and more economical designs of nuclear facilities.
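As an illustration of the flexibility-to-stiffness step in contribution (3), the complex dynamic stiffness at one excitation frequency can be recovered from the measured steady-state amplitude ratio and phase lag. The scalar sketch below uses hypothetical values and simplifies the matrix-valued procedure described above:

```python
# Minimal sketch: apply F(t) = F0*sin(w*t), measure the steady-state
# displacement amplitude u0 and its phase lag phi behind the force, then
# K(w) = (F0/u0) * exp(i*phi). Values are illustrative assumptions.
import cmath

F0 = 1.0e3    # force amplitude (N)
u0 = 2.0e-6   # measured displacement amplitude (m)
phi = 0.15    # measured phase lag (rad)

K = (F0 / u0) * cmath.exp(1j * phi)
print(f"Re K = {K.real:.3e} N/m, Im K = {K.imag:.3e} N/m")
```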
  • Item
    Leading Pedestrian Intervals at Intersections in Proximity to Schools: An Evaluation of Safety and Effectiveness
    (University of Waterloo, 2024-08-21) Kaur, Mavjot
Pedestrians encounter substantial risks on roadways, particularly at signalized intersections where their exposure to traffic is unavoidable. A predominant cause of pedestrian crashes at these intersections is drivers' failure to yield while making turning maneuvers. One effective countermeasure proposed to mitigate this issue is the Leading Pedestrian Interval (LPI), which gives pedestrians a walk signal during the all-red phase preceding the green signal for parallel vehicular traffic. This head start allows pedestrians to establish themselves in the crosswalk, enhancing their visibility to drivers. Despite extensive research demonstrating the effectiveness of LPIs, there is a lack of research on their safety effect when implemented at signalized intersections near schools and on the factors influencing their effectiveness. Many existing guidelines and safety programs recommend LPI implementation near schools as part of comprehensive pedestrian safety strategies, highlighting the need to evaluate these recommendations. The main goal of this research is to assess the effectiveness of implementing LPIs at signalized intersections near schools. Additionally, the study analyzes in detail the factors that influence pedestrian crash risk in this context. The case study was conducted using ten years of crash data from thirty-three signalized intersections in the Region of Waterloo. An Empirical Bayes (EB) before-and-after analysis was used to evaluate the impact of LPIs on pedestrian safety. Comprehensive data collection efforts were made to gather the information needed to develop the Safety Performance Function (SPF) models. The collected data comprised detailed records of traffic volumes, pedestrian counts, roadway and intersection characteristics, and historical crashes. Two SPF models were selected using different traffic exposure variables: one used Estimated Daily Traffic (EDT), while the other employed a surrogate exposure measure, the number of legs with commercial entries/exits or residential driveways within 50 meters of the intersection (NCE). Several key factors were identified that increase pedestrian crash risk at intersections, including residential and commercial land use, the presence of commercial entries/exits or residential driveways, longer crosswalks, non-conventional crosswalk markings, and higher pedestrian and vehicular volumes. In contrast, certain factors, such as the presence of slip lanes and missing sidewalks, were found to be associated with fewer pedestrian crashes. The effectiveness of LPIs was evaluated using Crash Modification Factors (CMFs). The results suggest a 26.8% reduction in pedestrian-vehicle crashes at treated intersections. The effectiveness of LPIs improves significantly under certain conditions: LPIs are more effective at intersections with high pedestrian and vehicle volumes and at those with complete pedestrian infrastructure, such as sidewalks on all approaches and non-conventional crosswalk markings; conversely, LPIs are less effective at intersections with slip lanes. Calibrating the model significantly enhanced estimate accuracy and further reduced the CMF to 0.65, underscoring the importance of proper calibration to accurately measure the true impact of pedestrian safety measures.
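For context, the core of an EB before-and-after estimate is a weighted blend of the SPF prediction and the site's observed before-period count, projected to the after period. The sketch below uses hypothetical counts and an assumed negative binomial overdispersion parameter, and omits the variance corrections a full EB study would include:

```python
# Minimal sketch of the Empirical Bayes before-and-after logic.
# All counts and parameters are hypothetical.
def eb_expected(spf_pred, observed, overdispersion_k):
    """EB estimate of before-period crashes: weighted blend of SPF and observation."""
    w = 1.0 / (1.0 + overdispersion_k * spf_pred)  # EB weight on the SPF prediction
    return w * spf_pred + (1.0 - w) * observed

spf_before, obs_before, k = 6.0, 9.0, 0.3  # SPF-predicted and observed before-period crashes
eb_before = eb_expected(spf_before, obs_before, k)
expected_after = eb_before * 1.1           # adjust for after-period exposure (assumed ratio)
obs_after = 6.0
print(f"CMF estimate: {obs_after / expected_after:.2f}")  # ~0.69
```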
  • Item
    A Route-Choice Model for Predicting Pedestrian Behaviour and Violations
    (University of Waterloo, 2024-08-19) Lehmann Skelton, Christopher
Pedestrians exhibit diverse behaviours, including crossing violations. Traditionally, the development of behavioural models has been divided into route choice and crossing behaviour: route choice models are stochastic and focused on crowd dynamics, while crossing behaviour models are probabilistic or deterministic and focused on local-level behaviours. Although route choice and crossing behaviour are often addressed separately, they are inherently related. This research proposes a new pedestrian simulation model in which pedestrians navigate through an intersection or mid-block environment modelled as a grid. Each cell is assigned a cost that varies over time based on the presence of nearby vehicle traffic or changes to signal indications. Each pedestrian perceives the costs in the environment uniquely, depending on personal preferences such as desired crossing gap or comfort with committing a violation, and seeks to minimize their total path cost; pedestrians who are more comfortable committing violations perceive a lower cost for doing so. This approach integrates crossing behaviour with route choice and models the trade-offs of engaging in a particular behaviour. The proposed model was calibrated using video data and applied to three case studies: a stop-controlled intersection, a mid-block crossing, and two crosswalks along the minor approach of a signalized intersection. The model simulates the trade-offs between walking on different surfaces, as well as the trade-off between waiting for a gap in traffic and diverting to the nearest designated crosswalk. In the third case study, the model successfully reproduced the proportion of pedestrians crossing against the signal at the north leg crosswalk but did not reproduce the proportion of violations at the south leg crosswalk, which crosses a private access. Further investigation should be undertaken into the causes of this and other differences.
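A minimal sketch of the cost-minimizing path search underlying such a model is given below. It assumes a plain Dijkstra search over a time-frozen grid, with a single scalar violation-aversion factor standing in for the richer per-pedestrian preferences described above; the grid costs and violation mask are hypothetical:

```python
# Minimal sketch: each pedestrian minimizes total perceived path cost over a
# grid; cells flagged as violations cost more for violation-averse pedestrians.
import heapq

def min_cost_path(costs, violation, start, goal, violation_aversion):
    """Dijkstra over a 2D grid of perceived cell costs; returns the minimum cost."""
    rows, cols = len(costs), len(costs[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                step = costs[nr][nc] * (violation_aversion if violation[nr][nc] else 1.0)
                if d + step < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = d + step
                    heapq.heappush(pq, (d + step, (nr, nc)))
    return float("inf")

costs = [[1, 1, 5], [1, 9, 5], [1, 1, 1]]  # hypothetical cell costs at one instant
violation = [[False, False, True], [False, True, True], [False, False, False]]
print(min_cost_path(costs, violation, (0, 0), (2, 2), violation_aversion=3.0))  # 4.0
```

In the full model the cell costs would be re-evaluated over time as vehicles approach and signal indications change, rather than held fixed as in this sketch.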
  • Item
    Development of a Risk Ranking System for Prioritizing Asset Maintenance Decisions
    (University of Waterloo, 2024-08-19) Ayyamperumal, Cibi Chakravarthy
This thesis presents a modified approach for prioritizing asset maintenance decisions by evaluating the overall risk rating of engineering systems. Traditional methods rely on subjective assessments by experts, which can lead to inconsistency in risk prioritization and thus require improvement. The thesis performs a comparative analysis of existing risk prioritization techniques to understand the challenges in ranking systems. The Analytic Hierarchy Process (AHP) and fuzzy logic are proposed as the basis of a risk ranking system. AHP is employed to compute the weights of multiple criteria, providing a structured framework for decision-making and enabling systematic prioritization of system components. The Mamdani Fuzzy Inference System is integrated to manage the inherent uncertainty and imprecision in assigning ranks. The proposed AHP-Fuzzy model is applied to a plant aging management problem in the nuclear industry involving various maintenance tasks, demonstrating its effectiveness in decision-making. The risk rating distribution and sensitivity analyses are studied. The results indicate that integrating AHP and fuzzy logic improves decision-making through effective risk prioritization. The thesis contributes to the field of engineering management by providing practical, actionable strategies for enhancing risk management practices to ensure the safety of engineering systems.
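To make the AHP weighting step concrete, criteria weights can be derived from a Saaty-scale pairwise comparison matrix. The sketch below uses the common geometric-mean approximation of the principal eigenvector with a hypothetical 3x3 matrix; the thesis's actual criteria and the downstream Mamdani fuzzy stage are not reproduced here:

```python
# Minimal sketch: AHP criteria weights via the row geometric-mean
# approximation. The pairwise comparison matrix is hypothetical.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],    # e.g., safety vs. cost vs. downtime
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])   # Saaty-scale judgments (reciprocal matrix)

gm = A.prod(axis=1) ** (1.0 / A.shape[0])  # row geometric means
weights = gm / gm.sum()                    # normalize to sum to 1
print(weights.round(3))                    # e.g., [0.648 0.23 0.122]
```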
  • Item
    Traffic Conflict-based Road Safety Analysis: Data Requirements and Evaluation of Safety Countermeasures
    (University of Waterloo, 2024-08-16) Keung, Jessica May Ting
Driven by the vision of eliminating road fatalities, Vision Zero initiatives have been widely adopted by cities around the world, with significant investments of resources in various safety programs and countermeasures. Conflict-based traffic safety analysis is a burgeoning field, but many studies have failed to address the important questions of how much data should be collected to make credible safety-related inferences and of how the effectiveness of safety countermeasures can be quantified using conflict data. In this thesis research, a comprehensive framework based on power analysis is first proposed to determine the minimum sample size required for a conflict analysis study. Two case studies illustrate how power analysis can be conducted for different types of conflict analysis study specifications, using the corresponding statistical tests. Power analysis is a well-established statistical tool used in many scientific fields to determine an appropriate sample size. It combines the significance criterion (α), statistical power (1-β), and effect size (ES) so that the sample is large enough to protect investigators from Type I and Type II errors at the conventional levels of 95% confidence and 80% power, respectively. The minimum sample size is also the optimal sample size because it minimizes the observation period while maintaining acceptable protection against Type I and Type II errors. A case study is then conducted to assess the safety benefits of three Vision Zero safety countermeasures using data from the City of Toronto. By applying a combination of case-control and cross-sectional studies, the research quantifies the safety effects of three commonly applied Vision Zero countermeasures: Leading Pedestrian Interval (LPI), No Right Turn On Red (NRTOR), and installation of a dedicated Bicycle Lane (BL). Traffic interactions between vehicles and vulnerable road users (VRUs) were extracted using a video data processing platform, and two surrogate measures of safety, post-encroachment time (PET) and conflict speed, were obtained and used to classify conflicts into severity levels. A comparative analysis using mixed-effects negative binomial regression was conducted to quantify the impacts of the different treatments on the frequency of traffic conflicts under specific road weather and traffic conditions. The results show that these three countermeasures can effectively reduce the frequency of high-risk and moderate-risk traffic conflicts, moderated by various factors including traffic exposure, weather and environmental conditions, and accessible pedestrian signals (APS). These findings can help road safety engineers and decision makers make better-informed decisions on their road safety initiatives and projects.
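As a minimal illustration of the power-analysis step, the minimum sample size for one common study specification, a two-sample comparison of mean conflict rates, can be solved with standard tools. The effect size below is an assumed input, not a finding of the thesis, and other study specifications would call for the corresponding test:

```python
# Minimal sketch: minimum sample size for a two-sample t-test at the
# conventional alpha = 0.05 and power = 0.80. Effect size is assumed.
from statsmodels.stats.power import TTestIndPower

n = TTestIndPower().solve_power(effect_size=0.5,  # assumed medium effect (Cohen's d)
                                alpha=0.05,       # Type I error rate
                                power=0.80)       # 1 - Type II error rate
print(f"Minimum sample size per group: {n:.1f}")  # ~63.8 per group
```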