Abstract
Ensuring safety and achieving human-level driving performance remain
challenges for autonomous vehicles, especially in safety-critical
situations. As a key component of artificial intelligence, reinforcement
learning has shown great potential in many complex tasks; however, its
lack of safety guarantees limits its real-world applicability. Hence,
further advancing reinforcement learning,
especially from the safety perspective, is of great importance for
autonomous driving. As revealed by cognitive neuroscientists, the
amygdala of the brain can elicit defensive responses against threats or
hazards, which is crucial for survival in and adaptation to risky
environments. Drawing inspiration from this scientific discovery, we
present a fear-neuro-inspired reinforcement learning framework to
realize safe autonomous driving through modeling the amygdala
functionality. This technique helps an agent learn defensive behaviors
and make safe decisions with fewer safety violations. In experimental
tests, we show that the proposed approach enables the autonomous driving
agent to outperform baseline agents and to perform comparably to 30
certified human drivers across various safety-critical scenarios. The
results demonstrate the feasibility and effectiveness of our framework
while also highlighting the crucial role of modeling amygdala function
when applying reinforcement learning to safety-critical autonomous
driving domains.