Is Cyber Security Good Enough For a World Of Driverless Cars?

Until both the online and offline worlds are safer, it might be worth sticking to your local minicab firm for the time being.

Lyft’s recent announcement of a partnership with Boston-based self-driving start-up nuTonomy, which aims to eventually put “thousands” of on-demand self-driving cars on the road, follows hard on the heels of the company’s plan to work with Waymo on autonomous technology. The new partnership will begin with a pilot scheme in Boston in the coming months, in which Lyft users will be able to hail a driverless vehicle through the existing app.

But is this inevitably the future? Or should we be less hasty in assuming that traditional cars with drivers will be consigned to the past? For all that we have embraced technology, most of us plug ourselves into the ‘Internet of Things’ trusting that both we and our devices are protected.

We expect, and the law demands, that drivers take lessons and pass a driving test, hold insurance and drive a road-worthy vehicle. But will we simply assume the same level of safety for driverless cars, trusting in the machines and the systems that back them up?

In January 2008 a tram in Lodz, Poland, veered to the left despite the driver steering right, and its cars came skidding off the rails; mercifully, no one was killed. The cause was not driver error or a technical fault, but a tech-minded teenager who had built a remote transmitter capable of switching the track junctions. Having spent months studying the rail system, the fourteen-year-old had turned the city’s tram network into his own toy train set, with real drivers and passengers aboard. He derailed four trams, simply because he could.

This attack on a city’s infrastructure is just one of many examples of our appetite for connected technology outstripping the precautions needed to secure it. If, in the not-too-distant future, even one in twenty vehicles on the road is driverless, that is roughly 120,000 cars in London alone that could be accessed remotely: by another fourteen-year-old looking for ‘lolz’, or, next time, by a criminal gang or a hostile government agency.

People already say they prefer the reassurance of a human driver, and that is before they entertain the idea that their mode of transport could be hacked and used as a life-sized remote-controlled car, whether for fun or for something far deadlier.

Artificial intelligence researchers also warn that, beyond hacking the cars themselves, attackers could tamper with what the cars ‘see’. A subtly altered stop sign or traffic signal could leave a car failing to slow down or stop at a junction or red light.

According to an article in Quartz, a few changes to a road sign as ‘seen’ by a self-driving car could stop the vehicle from behaving as programmed. Researchers from OpenAI and Google have said they have “created images that reliably fool neural network classifiers when viewed from varied scales and perspectives. This challenges a claim from last week that self-driving cars would be hard to trick maliciously.”
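To give a flavour of how such ‘fooling images’ are made, the sketch below uses the fast gradient sign method (FGSM), one of the earliest and simplest adversarial-example techniques. It is an illustrative sketch in Python with PyTorch, not the specific method behind the OpenAI and Google results quoted above, and the model, `sign_tensor` and `sign_class_index` names are placeholders.

```python
# Illustrative sketch: fast gradient sign method (FGSM). This is NOT the
# exact technique from the OpenAI/Google work cited above; the classifier
# and inputs here are stand-ins for illustration only.
import torch
import torch.nn.functional as F
from torchvision import models

# Stand-in image classifier; any pretrained model would do for this sketch.
model = models.resnet18(pretrained=True).eval()

def fgsm_attack(image, true_label, epsilon=0.01):
    """Return a copy of `image` perturbed to mislead the classifier.

    image: tensor of shape (1, 3, H, W) with values in [0, 1]
    true_label: tensor of shape (1,) holding the correct class index
    epsilon: maximum per-pixel change (kept small so the edit stays subtle)
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel a tiny step in the direction that increases the
    # loss, i.e. away from the correct answer.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage sketch (placeholder names): a perturbed image that looks unchanged
# to a human may be classified as something else entirely by the network.
# adv = fgsm_attack(sign_tensor, torch.tensor([sign_class_index]))
# print(model(adv).argmax(dim=1))  # may no longer be the original class
```

The unsettling point is how small epsilon can be: the perturbed sign looks identical to a human passenger, yet the car’s classifier can be confidently wrong.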

The recent terror attacks in Europe have involved vehicles being driven into crowds; a remotely hijacked driverless car could put the same weapon in an attacker’s hands without anyone at the wheel. Until both the online and offline worlds are safer, it might be worth sticking to your local minicab firm for the time being.