Adversarial Black-Box Testing of Self-Driving Vehicle Algorithms with Reinforcement Learning
Date
2024-12-17
Authors
Advisor
Crowley, Mark
Publisher
University of Waterloo
Abstract
When we hear the terms autonomous vehicle or ego vehicle, we imagine a safe, self-driving machine. This designation is premature for today's self-driving cars: their automation is still being refined, and their safety is still being studied and continuously improved.
Most vehicles currently on public roads are not fully autonomous: the automation they offer still requires the driver to be ready to take control of the vehicle in an emergency. By contrast, for a vehicle to operate with full autonomy and be approved by regulators for public roads, its driving algorithm must anticipate the situations that can arise on the road and respond to them through its decision-making process.
Stress-testing an ego vehicle in the physical world before it becomes available to ordinary drivers is inherently constrained. To broaden the range of scenarios in which autonomous vehicles can be stress-tested, manufacturers of autonomous vehicles and their algorithms can benefit from stress-testing in a virtual setting.
In the work presented here, instead of improving the safety of the autonomous vehicle by training it as a white box with the algorithm exposed by the manufacturer, we introduce an adversarial virtual environment in which accidents are generated algorithmically. This allows the provider or manufacturer of the autonomous vehicle and its algorithm to stress-test it in a black-box fashion, without exposing the underlying technology (the ego-vehicle algorithm).
Using reinforcement learning, a single attacker vehicle or a group of attacker vehicles is trained to reproduce collisions in a virtual environment. The generated cases range from simple to difficult, mimicking real-world settings.
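As a rough illustration of this setup, the sketch below wraps a 2D driving simulator so that the learned policy is rewarded for causing a collision rather than avoiding one. The choice of highway-env as the simulator, PPO from Stable-Baselines3 as the learning algorithm, and the reward values are assumptions made for the sketch, not details taken from the thesis.

```python
import gymnasium as gym
import highway_env  # noqa: F401  (registers the 2D driving environments)
from stable_baselines3 import PPO


class CollisionRewardWrapper(gym.Wrapper):
    """Reward the learning (attacker) vehicle for provoking a collision.

    Assumes the simulator reports collisions via info["crashed"], as
    highway-env does; the thesis does not prescribe a specific simulator.
    """

    def step(self, action):
        obs, _, terminated, truncated, info = self.env.step(action)
        # Sparse adversarial reward: +1 when a crash occurs, small step cost otherwise.
        reward = 1.0 if info.get("crashed", False) else -0.01
        return obs, reward, terminated, truncated, info


# The controlled vehicle plays the attacker; the black-box target vehicle
# would be substituted for one of the other vehicles in the scene.
env = CollisionRewardWrapper(gym.make("highway-v0"))
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=20_000)  # step budget is illustrative only
```

A multi-agent attacker would extend the same idea by giving several controlled vehicles a shared collision reward.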
In this adversarial black-box testing of autonomous vehicles, the target vehicle is a representation of the autonomous vehicle defined by its action space, kinematics, and behaviour; it stands in for the vehicle referred to as autonomous throughout this work. This representation mimics the autonomous vehicle's actions and decision-making within the 2D setting of the chosen virtual environment, subject to the constraints that environment imposes.
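To make the notions of action space and kinematics concrete, the following is a minimal sketch of a 2D kinematic bicycle model together with a discrete meta-action set of the kind common 2D driving simulators expose. The wheelbase, time step, and action values are illustrative assumptions, not parameters of the thesis's chosen environment.

```python
import numpy as np
from dataclasses import dataclass


@dataclass
class VehicleState:
    x: float        # longitudinal position [m]
    y: float        # lateral position [m]
    heading: float  # yaw angle [rad]
    speed: float    # longitudinal speed [m/s]


def kinematic_bicycle_step(state: VehicleState, acceleration: float,
                           steering: float, dt: float = 0.1,
                           wheelbase: float = 2.5) -> VehicleState:
    """One step of a 2D kinematic bicycle model (parameters are illustrative)."""
    # Slip angle at the centre of gravity, assumed midway between the axles.
    beta = np.arctan(0.5 * np.tan(steering))
    x = state.x + state.speed * np.cos(state.heading + beta) * dt
    y = state.y + state.speed * np.sin(state.heading + beta) * dt
    heading = state.heading + (state.speed / (wheelbase / 2)) * np.sin(beta) * dt
    return VehicleState(x, y, heading, state.speed + acceleration * dt)


# A discrete meta-action space of the kind such simulators expose
# (the concrete actions of the chosen environment are not reproduced here):
ACTIONS = {
    0: (0.0, 0.0),   # idle: keep speed and heading
    1: (2.0, 0.0),   # accelerate
    2: (-4.0, 0.0),  # brake
    3: (0.0, 0.2),   # steer left
    4: (0.0, -0.2),  # steer right
}
```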
This work also addresses the challenge of realism: it compares the likelihood of the generated car accidents against real-life cases. To do this, we study historical data on road accidents. These real situations are compared with selectively generated instances, and each successful match is recorded and described by constructing a narrative for the generated situation, thereby narrowing the gap between virtual and real accidents on the road.
It is important to note that reproducing road situations in a virtual environment and preparing the ego vehicle for them can be an economical approach: training the model requires mainly computation time, which scales with the number of generated episodes, thereby reducing the cost of testing or pre-testing. It also makes it possible to recreate scenarios that are difficult to stage in real, physical space. Testing vehicles that run different algorithms can help identify which pattern or algorithm suits a specific environment, road type, and number of actors on the road. This work seeks to promote and recommend the use of virtual environments to improve the automotive industry.