Could a hacker pull off a "Speed"-style attack? You bet.


In the 1994 action thriller Speed, an extortionist bomber rigs a city bus to explode if its speed drops below 50 mph.

Could something similar happen with a modern car or a bus, or even a futuristic self-driving vehicle? 

Yes, that scenario is “very reasonable,” said Steve Povolny, head of advanced threat research for security firm McAfee. 

“Any vehicle that serves up an externally accessible network connection, whether via infotainment, a mobile device via Bluetooth, or third-party apps or systems like OnStar, has the potential to be compromised remotely,” Povolny said in an email to Fierce Electronics.

“This could lead to full remote code execution, including the ability to interact with and modify [electronic] traffic on the vehicle, such as via the CAN bus [controller area network bus], and control physical components including steering, acceleration/braking and any other ECU [electronic control unit]-managed system,” Povolny said.
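
Part of what makes CAN bus access so powerful is that classic CAN frames carry no sender authentication, so any node on the bus can emit a frame with any identifier. The sketch below illustrates the idea using the python-can library against a Linux virtual CAN interface; the arbitration ID and payload layout are invented for illustration, since real IDs and encodings vary by manufacturer.

```python
# Minimal sketch: on an unauthenticated CAN segment, any node can send or
# read any frame. Requires python-can and a Linux virtual CAN interface
# (vcan0). The 0x244 ID and payload below are hypothetical.
import can

bus = can.interface.Bus(channel="vcan0", bustype="socketcan")

# Forge a frame that claims to be a wheel-speed report (hypothetical ID/format).
spoofed = can.Message(
    arbitration_id=0x244,           # hypothetical "vehicle speed" ID
    data=[0x00, 0x00, 0x21, 0x15],  # hypothetical encoding of a speed value
    is_extended_id=False,
)
bus.send(spoofed)

# Any node can also passively read every frame on the segment.
frame = bus.recv(timeout=1.0)
if frame is not None:
    print(f"ID=0x{frame.arbitration_id:03X} data={frame.data.hex()}")
```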

“With a solid vulnerability, an attacker could pull off a Speed-style attack,” Povolny said. That would give a hacker a way to deploy ransomware and threaten disaster unless a payment is made from a mobile device within a time limit. The ransomware could be delivered “right to the user’s phone or the browser on the infotainment system,” Povolny said.

Thankfully, the reward for attacking a vehicle equipped with current technology is not as high as other ways for a hacker to make money via ransomware, such as an attack on hospital records. “The motivation for [attacking] vehicles may be lower at the moment,” Povolny said. “But that shouldn’t detract from our focus on the real possibility.”

Povolny briefly mentioned the Speed attack as part of a virtual address on Wednesday during Fierce Electronics’ AutonomousTech Innovation Week. He used the address to urge more research into keeping future vehicles secure. “We have to stay proactive, encouraging research and secure development,” he said.

“As we look five to 10 years in the future, there is a point where the driver is removed and attacks like this will be relevant,” he told the audience. “Ransomware would be incredibly powerful.”

Doctoring a 35 mph speed sign to read 85

Also in his presentation, Povolny described McAfee threat research released in February that was widely disseminated in the security world. In that example, Tesla vehicles equipped with Mobileye cameras misread a 35 mph highway sign that had been doctored with a 2-in. strip of black tape. The cars took the limit to be 85 mph and began speeding up automatically before a human driver took control. (Mobileye has since made advancements in its camera systems and AI, and Tesla had earlier decided to make its own cameras, Povolny noted.)
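
A toy example can show why a strip of tape is enough to fool a vision system. The sketch below is not McAfee's experiment or the Mobileye model; it is a deliberately simple nearest-template digit reader on 5x3 bitmaps, where "taping" a short vertical strip onto a 3 makes its bitmap closer to an 8, so a "35" sign reads as "85."

```python
# Toy illustration of the taped-sign attack: a nearest-template classifier
# on 5x3 digit bitmaps. A 2-pixel "tape strip" flips a 3 into an 8.
import numpy as np

THREE = np.array([[1,1,1],
                  [0,0,1],
                  [1,1,1],
                  [0,0,1],
                  [1,1,1]])

EIGHT = np.array([[1,1,1],
                  [1,0,1],
                  [1,1,1],
                  [1,0,1],
                  [1,1,1]])

TEMPLATES = {"3": THREE, "8": EIGHT}

def classify(img):
    # Nearest template by Hamming distance (count of differing pixels).
    return min(TEMPLATES, key=lambda d: int(np.sum(TEMPLATES[d] != img)))

sign = THREE.copy()
print("clean sign digit reads as:", classify(sign))   # -> 3

# "Tape attack": add a 2-pixel vertical strip on the left edge.
sign[1, 0] = 1
sign[3, 0] = 1
print("taped sign digit reads as:", classify(sign))   # -> 8
```

A human sees an oddly smudged 3; the classifier, which only measures pixel distance to its templates, now sees a perfect 8.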

A human driver would have recognized that the sign didn’t look like it read 85 mph, Povolny asserted. However, a NASA safety official who viewed the presentation said she could have been confused by the doctored sign. Even so, a human would make further observations and stay at a slower speed, said Misty Davies, deputy project manager for NASA airspace operations and safety. “I would probably realize quickly that the road is usually 35 mph or was too narrow or I can’t see with children around,” she said in a panel discussion as part of the virtual event. “It’s my responsibility to be a safe driver.”

Povolny noted that defense-in-depth principles never allow a single point of security failure. “There’s always a check or balance,” he said. However, he said those additional layers of checks and balances may be isolated from one another in futuristic autonomous machines and won’t communicate with each other easily.

“It’s a can of worms to open up ways to do defense in depth,” he added during a panel discussion as part of the virtual event.
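
One concrete form such a check could take, applied to the taped-sign example: never let a single sensor set the target speed. The sketch below is an illustrative assumption, not a real ADAS design; it cross-checks a camera-read speed limit against an independent map-database value and a plausibility ceiling before a cruise controller would accept it, with all thresholds invented for the example.

```python
# Minimal sketch of a defense-in-depth cross-check: a camera-read speed
# limit is validated against an independent source before use. Thresholds
# and sources are illustrative assumptions.

MAX_POSTED_LIMIT_MPH = 85   # assumed physical ceiling for posted limits
MAX_DISAGREEMENT_MPH = 10   # assumed tolerance between sources

def accepted_speed_limit(camera_mph: int, map_db_mph: int) -> int:
    """Return the limit the controller may use, preferring the conservative value."""
    if camera_mph > MAX_POSTED_LIMIT_MPH:
        return map_db_mph                       # camera reading is implausible
    if abs(camera_mph - map_db_mph) > MAX_DISAGREEMENT_MPH:
        # Independent sources disagree sharply: fail safe to the lower one
        # and (in a real system) flag the event for driver attention.
        return min(camera_mph, map_db_mph)
    return camera_mph

# The taped 35 mph sign read as 85 mph; the map database still says 35.
print(accepted_speed_limit(camera_mph=85, map_db_mph=35))  # -> 35
```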

Povolny and others on the panel said AI and ML systems are already advancing safety and security in cars. There are now about 40 neural networks operating inside some Tesla vehicles, powering features such as rain-sensing wipers and adaptive headlights, said Phil Magney, founder and president of VSI Labs, a research and advisory firm on connected and automated vehicles.

“We have learned with AI trained for lane detection that there’s much better performance with AI,” he said.

A big concern for emerging autonomous vehicles is how they will detect the center line or the edges of a roadway. Many experts now believe the best way to ensure future safety with self-driving vehicles is to add roadside sensors that communicate hazards and conditions, such as pedestrians, to vehicles over V2X (vehicle-to-everything) communications.
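
To make the V2X idea concrete: infrastructure sensors broadcast hazards the car’s own cameras may not see. Real deployments use standardized formats such as SAE J2735 messages over DSRC or C-V2X radios; the field names and JSON encoding in the sketch below are simplified stand-ins, not the actual standard.

```python
# Simplified stand-in for a roadside-to-vehicle (V2X) hazard alert.
# Field names and JSON encoding are illustrative assumptions.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class RoadsideHazardAlert:
    sender_id: str     # roadside unit identifier (hypothetical)
    hazard_type: str   # e.g. "pedestrian", "ice", "stalled_vehicle"
    latitude: float
    longitude: float
    timestamp: float   # epoch seconds

alert = RoadsideHazardAlert(
    sender_id="rsu-042",
    hazard_type="pedestrian",
    latitude=44.9778,
    longitude=-93.2650,
    timestamp=time.time(),
)

# Payload as it might be serialized for the radio link. In real systems
# V2X messages are cryptographically signed, and a receiving vehicle would
# verify the sender before slowing or alerting the driver.
print(json.dumps(asdict(alert)))
```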

Even so, Povolny said the effectiveness of AI will depend heavily on training AI systems with diverse data sets. “The emergence of AI and ML to power sensors on board modern vehicles is absolutely essential,” he said. “Security is vastly improved, but still today there’s a lack of understanding among the data scientists and developers creating this technology of the type of data and input these systems are trained on.”

Racial bias shown in facial recognition systems helps reinforce the need to expand data sets beyond where they are today, he said. “In autonomous vehicles, if you are only training on traffic signs and stop signs you see, and 99% of the time that’s the same thing, that’s what the model will predict. Developers have to realize the need for much more diverse training models,” he said.
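
Povolny’s 99% point can be shown with a toy numbers-only example. The figures below are invented for illustration: when one sign dominates training data, a model can minimize training error simply by always predicting that sign, and it then fails on the rare cases that matter.

```python
# Toy illustration of skewed training data: a majority-class "model"
# scores ~99% on training data but only ~50% on a balanced real-world mix.
import numpy as np

rng = np.random.default_rng(0)
train_labels = rng.choice(["speed_35", "stop"], size=10_000, p=[0.99, 0.01])

# "Model": always predict whatever sign dominated training.
values, counts = np.unique(train_labels, return_counts=True)
majority = values[np.argmax(counts)]

print("training accuracy of always predicting", majority, ":",
      np.mean(train_labels == majority))                     # ~0.99

# On a balanced mix of signs, the same model is right only half the time.
test_labels = rng.choice(["speed_35", "stop"], size=1_000, p=[0.5, 0.5])
print("balanced test accuracy:", np.mean(test_labels == majority))  # ~0.50
```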

Povolny and other experts appeared during the virtual AutonomousTech Innovation Week on December 16. The entire three-day event can be viewed on demand for free.