Dependable Decision Making for Autonomous Vehicles Considering Perception Uncertainties

Date

2024-10-17

Advisor

Khajepour, Amir
Czarnecki, Krzysztof

Publisher

University of Waterloo

Abstract

Autonomous driving systems (ADSs), leveraging recent advances in learning algorithms, have demonstrated significant potential to enhance traffic safety. However, in dynamic service environments, a crucial challenge in ADS safety evaluation is managing the performance uncertainties inherent in these black-box learning algorithms. Among the ADS functional modules, the decision-making module is responsible for interpreting sensory results and determining vehicle maneuvers; developing an uncertainty-aware decision-making module is therefore critical to ensuring ADS driving safety. Building such a module requires a comprehensive approach that identifies the origins of learning-algorithm uncertainties within the ADS and clarifies the vehicle-level hazards they may cause. Through the associated risk assessment, the identified uncertainties can then guide ADS safety design priorities and pinpoint uncertainty quantification requirements. Finally, the quantified uncertainties and their propagation effects through the ADS must be integrated into the decision-making module to deliver more dependable decisions. Existing ADS safety research, however, lacks a procedure that connects qualitative uncertainty understanding to quantitative decision-making evidence. This thesis presents a systematic approach that first qualitatively identifies uncertainties, then quantifies them, and finally incorporates them into the ADS decision-making process to enhance driving safety.

The thesis develops three main components for constructing a dependable, uncertainty-aware decision-making module. The first part introduces a sequential ADS safety analysis that combines a Hazard and Operability Study (HAZOP) with System-Theoretic Process Analysis (STPA) to understand the causal relationships and effects of learning-algorithm uncertainties within a complex autonomous vehicle system. This analysis also supports the generation of combinatorial test cases for simulation-based verification, and a detailed real-world case study demonstrates the effectiveness of the proposed safety analysis method. The second part formulates an uncertainty quantification problem based on the analysis results, using High Definition (HD) maps and Polynomial Chaos Expansion (PCE) for statistical analysis. The focus is pedestrian position uncertainty from the perception module; simulation and real-world testing show that PCE achieves promising accuracy under dynamic environmental conditions. The third part investigates the system-level propagation of the quantified uncertainty using a Dynamic Bayesian Network (DBN) and integrates the uncertainties into the decision-making process through an Influence Diagram (ID) model. By updating the utility functions in the ID, the proposed DBN-ID method improves safety performance when encountering unexpected pedestrian behaviors in simulation and changing weather conditions in real-world testing datasets.
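
To illustrate the second part's uncertainty quantification, the following is a minimal sketch of regression-based (non-intrusive) PCE in Python. Everything here is assumed for illustration: the toy position_error model, the standard-normal germ, and the expansion order are placeholders, not the thesis's actual formulation.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermevander

# Hypothetical black-box: pedestrian position error as a function of a
# standard-normal germ xi (a stand-in for perception noise; toy model,
# not from the thesis).
def position_error(xi):
    return 0.3 * xi + 0.05 * (xi**2 - 1.0)

order = 4
rng = np.random.default_rng(0)
xi = rng.standard_normal(2000)          # samples of the germ
y = position_error(xi)                  # corresponding model outputs

# Regression-based PCE: least-squares fit of probabilists' Hermite
# polynomials He_0..He_order, which are orthogonal under the
# standard normal measure.
Psi = hermevander(xi, order)            # design matrix [He_n(xi_i)]
coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# Output statistics follow directly from the coefficients, since
# E[He_m * He_n] = n! for m == n and 0 otherwise.
facts = np.array([factorial(n) for n in range(order + 1)], dtype=float)
mean = coeffs[0]
variance = np.sum(coeffs[1:] ** 2 * facts[1:])
print(f"PCE mean = {mean:.4f}, variance = {variance:.4f}")
```

For this toy model the exact coefficients are c1 = 0.3 and c2 = 0.05, so the fitted variance should approach 0.3^2 * 1! + 0.05^2 * 2! = 0.095; the appeal of PCE is that such statistics come from the coefficients alone, without further sampling.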
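
The third part's DBN-ID integration can likewise be sketched as one Bayesian filtering step followed by expected-utility maximization over actions. The two-state pedestrian model, transition matrix, observation likelihoods, and utility table below are illustrative placeholders, not values from the thesis.

```python
import numpy as np

# States: 0 = pedestrian stays on curb, 1 = pedestrian crosses.
# All numbers are illustrative, not thesis values.
transition = np.array([[0.9, 0.1],      # P(x_t | x_{t-1})
                       [0.2, 0.8]])
belief = np.array([0.7, 0.3])           # prior belief over states

def dbn_update(belief, likelihood):
    """One DBN filtering step: predict with the transition model,
    then reweight by the perception likelihood P(z_t | x_t)."""
    predicted = belief @ transition
    posterior = predicted * likelihood
    return posterior / posterior.sum()

# Utility table U(action, state): braking is safe but costs comfort.
utility = np.array([[ 1.0, -10.0],      # proceed
                    [-0.5,   0.5]])     # brake
actions = ["proceed", "brake"]

def decide(belief):
    """ID-style decision: choose the action with maximum expected
    utility under the current belief."""
    expected = utility @ belief
    return actions[int(np.argmax(expected))], expected

# Confident perception: observation strongly indicates 'stays on curb'.
b1 = dbn_update(belief, likelihood=np.array([0.9, 0.1]))
print(decide(b1))   # high P(curb) -> 'proceed'

# Degraded perception (e.g., rain): a nearly flat likelihood leaves
# residual crossing probability, and braking wins.
b2 = dbn_update(belief, likelihood=np.array([0.55, 0.45]))
print(decide(b2))
```

The sketch shows the mechanism the abstract describes: as perception uncertainty grows, the filtered belief retains more probability on the hazardous state, and the expected-utility comparison shifts the decision toward the safer maneuver.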

Keywords

uncertainty handling, autonomous vehicle safety, safe decision making, autonomous driving
