
Safe Driving Adversarial Trajectory Can Mislead: Towards More Stealthy Adversarial Attack Against Autonomous Driving Prediction Module

Abstract

The prediction module, powered by deep learning models, is a fundamental component of high-level Autonomous Vehicles (AVs). Because the module's prediction accuracy directly influences AV driving behavior, ensuring its security is paramount. However, only a limited number of studies have explored the adversarial robustness of prediction modules. Furthermore, existing methods still generate adversarial trajectories that deviate significantly from human driving behavior. These deviations can easily be flagged as hazardous by AVs' anomaly detection models and therefore cannot effectively evaluate or reflect the robustness of the prediction modules. To bridge this gap, we propose a stealthier and more effective optimization-based attack method. Specifically, we reformulate the optimization problem using Lagrangian relaxation and design a Frenet-based objective function along with a distinct constraint space. We conduct extensive evaluations on two popular prediction models and two benchmark datasets. Our results show that our attack is highly effective, achieving attack success rates above 87% and outperforming all baseline attacks. Moreover, our attack method significantly improves the stealthiness of adversarial trajectories while guaranteeing adherence to physical constraints. Our attack is also robust to noise from upstream modules, transferable across trajectory prediction models, and highly realizable. Lastly, to verify its effectiveness in real-world applications, we conduct further simulation evaluations using a production-grade simulator. These simulations reveal that the adversarial trajectories we create can convincingly induce AVs to initiate hard braking.
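To illustrate the general idea of Lagrangian relaxation in this setting (this is a minimal, hypothetical sketch, not the paper's actual formulation or code), a constrained attack of the form "maximize prediction error subject to a stealthiness bound on the perturbation" can be relaxed into a single penalized objective that is optimized by gradient ascent. The predictor, the stealthiness measure, and all names below are illustrative assumptions.

import torch

def predictor(history):
    # Stand-in for a trajectory prediction model: constant-velocity extrapolation
    # of the last observed step (purely illustrative).
    return history[-1] + (history[-1] - history[-2])

def lagrangian_attack(history, true_future, lam=1.0, lr=0.01, steps=200):
    # history: (T, 2) tensor of past positions; true_future: (2,) ground-truth next position.
    delta = torch.zeros_like(history, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        pred = predictor(history + delta)
        # Adversarial term: push the model's prediction away from the true future.
        adv_loss = -torch.norm(pred - true_future)
        # Stealthiness penalty (Lagrangian relaxation of the constraint that the
        # perturbed trajectory stays close to the original one).
        stealth_cost = torch.norm(delta)
        loss = adv_loss + lam * stealth_cost
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (history + delta).detach()

# Example usage with a toy straight-line trajectory:
# history = torch.tensor([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
# adv_history = lagrangian_attack(history, true_future=torch.tensor([3.0, 0.0]))

The paper's actual method additionally works in Frenet (road-aligned) coordinates and uses a richer constraint space to keep the perturbed trajectory consistent with human driving behavior and physical limits; the sketch above only shows the penalty-based relaxation pattern.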

Article

Publication date

2024-11-25

Last modified

2024-12-11