Among all applications of AI, I believe cybersecurity is the most crucial aspect for autonomous cars. A single incident affecting human lives could disrupt not only the victimized company but the whole industry, and it stands to reason that people would immediately lose trust in such technologies.
Take, for example, the incident in which a self-driving car struck and killed a woman crossing a street in Arizona. The case caused a stir, and researchers brought renewed attention to the security of the AI systems implemented in autonomous cars.
AI security, in general, became a topic for discussion following the release of the first papers on adversarial attacks — inputs designed to fool AI systems — in 2014. Two years later, researchers moved from theory to practice and started to apply theoretical attacks to real solutions. Since then, the AI systems of autonomous cars have become one of the areas of greatest interest due to their criticality.
Taking all of these concerns into consideration, I believe the reputation of autonomous cars will be smashed to smithereens if incidents surrounding their cybersecurity don't stop. Let's take a look at some AI-driven components of autonomous cars that could be attacked; they represent just a small sample of the overall attack surface.
Image Recognition Systems
Image recognition systems detect road signs. However, it was publicly demonstrated in 2016 that they can be fooled with the help of special stickers and graffiti, and follow-up research in 2017 made these attacks more reliable. The so-called adversarial examples are images that visually belong to one class yet are wrongly identified by AI systems as belonging to another, such as a car detecting a right turn instead of a stop sign.
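To make the idea concrete, here is a minimal sketch of an adversarial perturbation against a toy linear classifier. The classifier, class names and numbers are all illustrative, not from any real system; real attacks target deep vision models, but the same principle applies: many tiny, imperceptible per-pixel changes add up to flip the prediction.

```python
import numpy as np

# Hypothetical toy "road sign" classifier: a positive linear score
# means "stop"; anything else is read as "right turn".
rng = np.random.default_rng(0)
dim = 3072                          # e.g. a flattened 32x32 RGB image

w = rng.normal(size=dim)            # the model's weights

def predict(x):
    return "stop" if x @ w > 0 else "right turn"

# A clean input the model confidently classifies as "stop"
x = w / np.linalg.norm(w)

# FGSM-style step: nudge every pixel against the sign of the score's
# gradient (for a linear model, the gradient is simply w). Each pixel
# moves by at most eps, yet in high dimensions the effect accumulates
# enough to flip the prediction.
eps = 0.05
x_adv = x - eps * np.sign(w)
```

The perturbation is bounded by `eps` per pixel, which is why adversarial examples can look unchanged to a human while completely changing the model's answer.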
Object Detection Systems
One might say that autonomous cars don’t need to recognize road signs and that they should gather this information from other channels in the future, such as encrypted communication from a server that is aware of all road rules.
Unfortunately, deploying such systems worldwide would take too much time. In addition, even if road signs become unnecessary, cars will still have to detect other cars and pedestrians effectively. That's where object detection and semantic segmentation algorithms come into play, yet both are vulnerable to adversarial attacks, just like any deep learning algorithm.
Researchers at the University of Central Florida analyzed the possibility of hiding cars from detectors by camouflaging them. The idea of bypassing object detectors was not new; it was first presented in the Houdini attack. Still, this research is notable for describing a practical case of targeting self-driving cars.
Semantic Segmentation Systems
Semantic segmentation is an AI task that assigns a class to every pixel of an image, allowing cars to find the boundaries of surrounding objects. Researchers at the University of Oxford released a study demonstrating practical attacks on semantic segmentation systems, and such attacks can pose a real danger to human lives.
Researchers at Keen Security Lab found that some autonomous cars can be fooled just by placing several stickers on the road that create a “fake lane.” During the study, the car’s autonomous system recognized the stickers as a continuation of the lane it was originally in, causing the car to switch into a lane that could have had oncoming traffic.
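The fake-lane idea can be illustrated with a deliberately naive lane follower that fits a straight line through detected lane-marking points. All coordinates and the detector itself are hypothetical, not taken from the Keen Security Lab study; the point is only that a handful of extra "sticker" detections can drag the fitted lane toward oncoming traffic.

```python
import numpy as np

# Genuine lane markings: a straight, centred lane for 20 m ahead
# (x = metres ahead of the car, y = lateral offset in metres)
true_marks_x = np.linspace(0, 20, 10)
true_marks_y = np.zeros(10)

# Three hypothetical stickers placed to look like the lane bending left
sticker_x = np.array([22.0, 24.0, 26.0])
sticker_y = np.array([0.8, 1.6, 2.4])

# The naive lane follower fits one line through everything it "detects"
x = np.concatenate([true_marks_x, sticker_x])
y = np.concatenate([true_marks_y, sticker_y])
slope, intercept = np.polyfit(x, y, 1)

# Where the car would aim 30 m out: noticeably into the adjacent lane
steer_offset_30m = slope * 30 + intercept
```

Without the stickers the fit is perfectly straight; with them, the car's aim point drifts laterally by more than a metre, which is the essence of why a few well-placed markings on the road surface are dangerous.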
Voice Recognition Systems
Unfortunately, visual perception is not the only attack vector against autonomous cars. Vehicles can also be fooled through audio by feeding malformed commands to their voice interfaces. A group of security researchers at Zhejiang University in China invented DolphinAttack, an acoustic technique that silently delivers malicious, inaudible commands to voice recognition systems. Hackers could also broadcast an advertising message containing an adversarial attack over the radio and create chaos on roads full of cars driving in autonomous mode.
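The principle behind such silent commands can be sketched in a few lines of signal processing. This is a simplified model, with illustrative frequencies and gain values rather than the parameters from the DolphinAttack paper: a voice command is amplitude-modulated onto an ultrasonic carrier that humans cannot hear, and the microphone's slight nonlinearity demodulates it back into the audible band.

```python
import numpy as np

fs = 192_000                        # sample rate high enough for ultrasound
t = np.arange(0, 0.05, 1 / fs)      # 50 ms of signal

baseband = np.sin(2 * np.pi * 400 * t)     # stand-in for a voice command
carrier = np.sin(2 * np.pi * 30_000 * t)   # 30 kHz carrier: inaudible

# Amplitude modulation shifts the command above the audible range
transmitted = (1 + 0.5 * baseband) * carrier

# A real microphone is slightly nonlinear; a quadratic term (gain is
# illustrative) acts as an unintended demodulator
mic = transmitted + 0.1 * transmitted ** 2

# Crude low-pass filter (< 1 kHz) modelling the audio chain's filtering
spec = np.fft.rfft(mic)
freqs = np.fft.rfftfreq(len(mic), 1 / fs)
spec[freqs > 1_000] = 0
recovered = np.fft.irfft(spec, n=len(mic))

# The audible output correlates strongly with the "voice command",
# even though the transmitted signal contained no audible content
corr = np.corrcoef(recovered - recovered.mean(), baseband)[0, 1]
```

The transmitted waveform has essentially no energy below 1 kHz, yet the demodulated output closely tracks the original command, which is why a human bystander hears nothing while the voice assistant hears an instruction.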
LiDAR Systems
The majority of autonomous cars carry a special sensor called LiDAR, which stands for Light Detection and Ranging. It measures the distance to surrounding objects and cannot be deactivated at a driver's will, so it serves as the car's main perception tool.
In July 2019, researchers at the University of Michigan released the first paper on practical attacks against a LiDAR system. The approach was similar to common methods for attacking deep learning models, but with modifications specific to LiDAR functionality, such as the use of laser diodes to inject adversarial signals and the constraints imposed by the sensor's post-processing.
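A toy sketch shows why injected returns are dangerous. The point cloud, the obstacle rule and all coordinates below are hypothetical stand-ins, not the detector from the Michigan paper; real attacks must additionally satisfy the sensor's timing and post-processing constraints mentioned above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Benign scan: road-surface returns around the car
# (x = metres ahead, y = metres left, z = metres up)
ground = np.column_stack([
    rng.uniform(2, 40, 500),
    rng.uniform(-10, 10, 500),
    rng.uniform(-0.2, 0.1, 500),      # all points near the road surface
])

def obstacle_ahead(points, min_hits=20):
    """Naive rule: flag an obstacle if enough above-ground points
    cluster in the lane directly ahead."""
    in_lane = ((points[:, 0] > 5) & (points[:, 0] < 15)
               & (np.abs(points[:, 1]) < 1.5) & (points[:, 2] > 0.3))
    return in_lane.sum() >= min_hits

# Attacker's timed laser pulses register as ~60 fake returns shaped
# like a vehicle 10 m ahead
fake = np.column_stack([
    rng.uniform(9.5, 10.5, 60),
    rng.uniform(-0.8, 0.8, 60),
    rng.uniform(0.4, 1.2, 60),
])
spoofed = np.vstack([ground, fake])
```

The benign scan triggers nothing, while the spoofed cloud makes the car "see" a phantom vehicle and brake for it; the converse attack, removing genuine returns, is equally plausible and harder to notice.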
What Should Be Done?
From my experience, autonomous cars, like any other internet of things (IoT) solution, shouldn't ship with software or hardware vulnerabilities. More importantly, you have to ensure that the algorithms on the back end are secure. While many teams take care of hardware, software and wireless security, insufficient attention is usually paid to the security of the algorithms themselves. Machine learning technologies such as deep learning should be tested against all AI threats, including adversarial examples, poisoning, privacy attacks and backdoors.
An increasing number of security-related articles on the topic of self-driving cars are being published. Since autonomous vehicles can be responsible for protecting the lives of our nearest and dearest, we all should take responsibility for helping the industry and push vendors to solve these issues before the cars are actually released.
All credits to Forbes