Humans can navigate safely using only two eyes and two ears to sense their surroundings. Sensor packages for autonomous vehicles are far more complicated. To determine the state of the road ahead, they often rely on radar, LiDAR, ultrasonic sensors, cameras, or some combination of these.
Humans are pretty cunning and hard to fool, but our robot-driving pals are less resilient. According to some researchers, LiDAR sensors can be manipulated to conceal obstacles, tricking driverless cars into collisions, or worse.
Where Did It Go?
The name “LiDAR” refers to a light-based analogue of radar technology, though unlike radar, it’s still written more often as an acronym than as a plain word. The system emits laser pulses and collects the light reflected back from the surroundings. Since pulses from more distant objects take longer to return to the sensor, the sensor can estimate the distance to the objects around it. LiDAR is often regarded as the ideal sensor for automated driving, thanks to its superior accuracy and reliability at detecting objects in vehicle environments compared to radar. It also provides extremely detailed depth information that simply isn’t possible with a standard 2D camera.
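For the curious, the range math is about as simple as it gets: the pulse covers the sensor-to-target distance twice, so the range is half the round-trip time multiplied by the speed of light. Here's a quick sketch of that calculation (the function name and example numbers are purely illustrative):

```python
# Speed of light in meters per second
C = 299_792_458

def range_from_round_trip(round_trip_seconds):
    """Estimate target range from a laser pulse's round-trip time.

    The pulse covers the sensor-to-target distance twice, so the
    one-way range is half the total path length.
    """
    return C * round_trip_seconds / 2

# An echo arriving 200 nanoseconds after the pulse fired
# corresponds to a target roughly 30 meters away.
print(range_from_round_trip(200e-9))  # ~29.98
```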
A new research paper has demonstrated an adversarial method of tricking LiDAR sensors. The method uses a laser to selectively hide certain objects from being “seen” by the LiDAR sensor. The paper calls this a “Physical Removal Attack,” or PRA.
The theory of the attack relies on the way LiDAR sensors work. Typically, these sensors prioritize stronger reflections over weaker ones, so a powerful signal sent by an attacker will be preferred over a weaker genuine reflection from the environment. LiDAR sensors, and the autonomous driving frameworks that sit atop them, also typically discard detections closer than a certain minimum distance from the sensor, a threshold on the order of 50 mm to 1000 mm.
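To get a feel for that second behavior, here's a rough sketch of the kind of minimum-range cut a sensor driver or point cloud pipeline might apply. The threshold value and data layout are assumptions for illustration, not taken from any particular sensor:

```python
MIN_RANGE_M = 0.9  # assumed cutoff; real sensors land roughly between 0.05 m and 1 m

def drop_close_returns(points):
    """Discard returns closer to the sensor than the minimum valid range.

    Each point is an (x, y, z) tuple in meters, relative to the sensor.
    """
    kept = []
    for x, y, z in points:
        if (x * x + y * y + z * z) ** 0.5 >= MIN_RANGE_M:
            kept.append((x, y, z))
    return kept

cloud = [(0.3, 0.1, 0.0),   # implausibly close: dropped
         (12.0, 1.5, 0.2)]  # a genuine obstacle: kept
print(drop_close_returns(cloud))
```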
The attack works by firing infrared laser pulses that mimic the real echoes the victim LiDAR expects to receive. The pulses are synchronized with the victim sensor's firing times, which lets the attacker control where the spoofed points appear to land. Because the spoofed pulses are far brighter than the genuine echoes reflected from an object in the sensor's field of view, the sensor typically reports the spoofed returns and ignores the real ones. On its own, this would hide the obstacle but create a spoofed object very close to the sensor. However, since many LiDAR sensors discard excessively close echoes, the sensor will likely throw the spoofed returns away entirely, and if it doesn't, the filtering software running on its point cloud output probably will. The net effect is that the LiDAR reports no valid point cloud data in an area where it should be picking up an obstacle.
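Putting the two behaviors together, the removal attack can be captured in a toy simulation. Everything below (the beam layout, intensities, and ranges) is invented for illustration rather than taken from the paper, but it shows the mechanism: the bright spoofed return wins the strongest-return contest, then falls under the minimum-range cut, leaving the affected beams with no reported point at all:

```python
MIN_RANGE_M = 0.9  # assumed minimum valid range, as in the sketch above

# Per beam: a list of (range_m, intensity) candidate returns.
# Beams 2-4 also receive a bright spoofed return at about 0.5 m.
beams = {
    0: [(14.8, 30)],
    1: [(14.9, 31)],
    2: [(15.0, 32), (0.5, 255)],  # genuine pedestrian echo + spoof
    3: [(15.1, 33), (0.5, 255)],
    4: [(15.0, 32), (0.5, 255)],
    5: [(14.9, 30)],
}

reported = {}
for beam, returns in beams.items():
    # Strongest-return mode: keep only the highest-intensity echo.
    rng, _intensity = max(returns, key=lambda r: r[1])
    # Minimum-range cut: drop returns that are implausibly close.
    if rng >= MIN_RANGE_M:
        reported[beam] = rng

print(reported)  # beams 2-4 vanish, leaving a gap where the obstacle should be
```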
The attack requires some knowledge, but is surprisingly practical to pull off. An attacker only needs to research the particular type of LiDAR used on a target vehicle to whip up a suitable spoofing apparatus. The attack even works when the false echoes are fired at the LiDAR from an angle, such as from the side of the road.
This has dangerous implications for autonomous driving systems that rely on LiDAR sensor data. The technique could allow an adversary to hide obstacles from an autonomous car. Pedestrians at a crosswalk could be hidden from the LiDAR, as could stopped cars at a traffic light. If the autonomous car doesn't “see” an obstacle ahead, it may simply drive straight into it. With this technique, it's harder to hide objects close to the sensor than those farther away, but hiding an object for even a few seconds might leave an autonomous vehicle with too little time to stop once it finally detects the obstacle.
Outside of erasing objects from a LiDAR's view, other spoofing attacks are possible too. Earlier work by researchers has involved tricking LiDAR sensors into seeing phantom objects. This is remarkably simple to achieve: an attacker only needs to transmit laser pulses toward a victim LiDAR, timed to indicate a wall or other obstacle ahead.
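In point cloud terms, the phantom attack is the inverse of the removal attack: rather than suppressing returns, the spoofed pulses are timed so that extra points appear at a chosen range. A toy illustration follows; the geometry and values are made up rather than drawn from that earlier work:

```python
def phantom_wall_points(distance_m, half_width_m=1.0, n_points=11):
    """Generate fake returns describing a flat 'wall' straight ahead.

    Timing spoofed pulses to arrive at the right delay for each beam
    makes the victim sensor report points like these.
    """
    step = 2 * half_width_m / (n_points - 1)
    return [(distance_m, round(-half_width_m + i * step, 2), 0.0)
            for i in range(n_points)]

# A phantom obstacle 8 meters directly ahead of the sensor.
print(phantom_wall_points(8.0))
```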
The research team notes that there are some defenses against this technique. The attack tends to carve out an angular slice of the LiDAR's reported point cloud, and detecting that gap can indicate a removal attack may be taking place. Alternatively, there are methods that compare shadows against those expected to be cast by objects detected (or not detected) in the LiDAR point cloud.
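The gap-detection idea lends itself to a simple sketch: bin the returns by azimuth and flag any suspiciously wide run of empty sectors, assuming a scene that normally produces returns all the way around. The bin width and threshold below are assumptions for illustration, not values from the paper:

```python
import math

SECTOR_DEG = 2          # assumed angular bin width
SUSPICIOUS_SECTORS = 3  # assumed: 6 degrees or more with no returns gets flagged

def longest_empty_run(points):
    """Length of the longest run of consecutive azimuth sectors with no returns."""
    hits = [False] * (360 // SECTOR_DEG)
    for x, y, _z in points:
        azimuth = math.degrees(math.atan2(y, x)) % 360
        hits[int(azimuth) // SECTOR_DEG] = True
    longest = run = 0
    for occupied in hits:
        run = 0 if occupied else run + 1
        longest = max(longest, run)
    return longest

def looks_like_removal_attack(points):
    return longest_empty_run(points) >= SUSPICIOUS_SECTORS
```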
Overall, protecting against spoofing attacks could become important as self-driving cars become more mainstream. At the same time, it’s important to contemplate what is and isn’t realistic to defend against. For example, human drivers are susceptible to crashing when their cars are hit with eggs or rocks thrown from an overpass. Automakers didn’t engineer advanced anti-rock lasers and super-wipers to clear egg smears. Instead, laws are enforced to discourage these attacks. It may simply be a matter of extending similar enforcement to bad actors running around with complicated laser gear on the side of the highway. In all likelihood, a certain amount of both approaches will be necessary.