Investigating Vulnerabilities in Autonomous Vehicle Perception Algorithms

Speaker: Saif Eddin Jabari

Location: 370 Jay Street

Date: Monday, May 19, 2025

Autonomous vehicles (AVs) rely on deep neural networks (DNNs) for critical tasks such as environment perception—identifying traffic signs, pedestrians, and lane markings—and control decisions, including braking, acceleration, and lane changing. However, DNNs are vulnerable to adversarial attacks, including structured perturbations of inputs at inference time and poisoned training samples, both of which can degrade performance. This presentation begins with an overview of adversarial training, highlighting the impact of input dimensionality on the vulnerability of DNNs to such attacks. I will then share our recent findings exploring the hypothesis that DNNs learn approximately linear relationships between inputs and outputs. This conjecture is central to developing both adversarial attacks and defense strategies in machine learning security. The final part of the presentation focuses on recent work that uses error-correcting codes to safeguard DNN-based classifiers.
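
To make the notion of a structured input perturbation concrete, the sketch below implements the classic Fast Gradient Sign Method (FGSM) against a toy classifier; the single linear layer, input dimensions, and step size epsilon are illustrative assumptions, not the speaker's actual setup. The linear model also echoes the talk's hypothesis that DNNs behave approximately linearly between inputs and outputs.

    # Minimal FGSM sketch, assuming PyTorch. The toy linear model and all
    # parameters here are illustrative, not the speaker's actual setup.
    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)

    # Toy stand-in for a perception model: one linear layer, which also
    # mirrors the approximate-linearity hypothesis discussed in the talk.
    model = torch.nn.Linear(in_features=32, out_features=4)

    x = torch.randn(1, 32, requires_grad=True)  # clean input
    y = torch.tensor([2])                       # true class label

    # Compute the loss on the clean input and backpropagate to get
    # the gradient of the loss with respect to the input.
    loss = F.cross_entropy(model(x), y)
    loss.backward()

    # FGSM: take one step in the direction that increases the loss,
    # bounded in the L-infinity norm by epsilon.
    epsilon = 0.1
    x_adv = (x + epsilon * x.grad.sign()).detach()

    print("clean prediction:", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())

On this randomly initialized toy model the prediction may or may not flip; against a trained network, a perturbation of this form with a small epsilon is often enough to change the predicted class, which is what makes such attacks a practical concern for AV perception.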