The Department for Transport in the UK recently stated that it wants to see fully autonomous cars tested on British roads by 2021. Remarkably, this expectation was set out after a fatal crash involving an autonomous vehicle in Arizona, USA.
A legacy system risk
The communications infrastructure used in cars today (known as a Controller Area Network, or CAN) was designed back in the 1980s. It was developed for exchanging information between different microcontrollers. Essentially, what we have is a peer-to-peer network – and an old one at that.
The main issue here is that these networks weren’t built with security in mind, as it wasn’t a key concern back then. As time has progressed, new functionality has been layered on top of existing functions, all connected to the CAN. The CAN offers no access control, authentication or other security features, potentially leaving vehicle systems open to criminals.
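To see why, it helps to look at what a classic CAN frame actually contains. The sketch below packs a frame in the 16-byte layout used by Linux’s SocketCAN interface (the 0x2A0 “unlock” identifier is purely hypothetical, chosen for illustration). The point is what the frame *lacks*: there is no field for a sender address, signature or message authentication code, so any node on the bus can emit any identifier.

```python
import struct

def make_can_frame(can_id: int, data: bytes) -> bytes:
    """Pack a classic CAN 2.0A frame in the 16-byte SocketCAN layout:
    a 4-byte identifier, a 1-byte data length code (DLC), 3 bytes of
    padding, then up to 8 data bytes. Note what is absent: no sender
    identity, no signature, no MAC - receivers filter on the identifier
    alone and cannot tell who transmitted the frame."""
    if not 0 <= can_id <= 0x7FF:
        raise ValueError("classic CAN identifiers are 11 bits")
    if len(data) > 8:
        raise ValueError("classic CAN payloads are at most 8 bytes")
    return struct.pack("<IB3x8s", can_id, len(data), data.ljust(8, b"\x00"))

# A forged command (hypothetical 'unlock doors' ID 0x2A0) is
# byte-for-byte indistinguishable from a legitimate one.
frame = make_can_frame(0x2A0, b"\x01")
```

Because the identifier doubles as both the message type and the bus-arbitration priority, retrofitting authentication is hard: there is simply nowhere in the frame to put it without breaking every existing device on the bus.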
While no malicious real-world attacks have been executed this way, researchers have proven it possible. In 2015, two researchers remotely took control of a Jeep Cherokee, with a journalist at the wheel, and drove it off the road. As a result of this flaw, 1.4 million vehicles were recalled.
It’s an example of emerging technologies being layered on top of old infrastructure, without fully considering the security implications.
Potential security issues don’t just lie in the underlying communications network of the car itself; there’s also the possibility of an attacker infecting a driver’s smartphone and hijacking any apps they use to control functions on the car – for example, to lock and unlock it.
Cars need humans too
Following the fatal incident in Arizona last year, experts predicted that it would be many years before autonomous cars replace human drivers. The reality is, I don’t think driverless cars will or should ever replace human drivers in the way we currently imagine. In the future, I think everyone will continue to have a private car – but it will be self-driving.
Ultimately, how we implement the technology – whether as private vehicles or as a co-ordinated public transport system – is for society to decide, but I do not believe either approach should remove the human element from driving.
The risks of delegating control
I think people are becoming more apprehensive about driverless cars, and rightly so: safety is paramount. Historically, driving has always been an aspect of life where human control has been essential, so the idea of watching a film or sleeping while a car transports us feels understandably ‘wrong’ to many people.
There are various levels of autonomy with self-driving cars – ranging from add-on features such as parking assistance through to completely driverless cars. A ‘grey area’ lies between the two, where the driver has very little to do, but has responsibility for the vehicle and might need to take control at some point. In the latter scenario, there’s a danger that the driver may switch off because they don’t feel required to be in full control and might be unable to regain control in an emergency.
Driverless car fatalities have shown that there is a very real danger with autonomous vehicles, and it’s reasonable to question whether it is wise to resume testing them so soon after such an incident.
As recent stories surrounding autonomous car testing have demonstrated, there are concerns about pedestrian and driver safety that society needs to tackle before driverless cars are launched.
There’s also a moral or ethical issue to consider. Christian Wolmar raised the issue of ‘the Holborn problem’: if driverless cars are programmed to stop when they sense a pedestrian, what happens when they’re confronted with a mass of people milling across a busy road? Will they wait all day? Or will they be programmed to operate with a lower safety bar? And if, in the lead-up to an accident, the car must choose between protecting pedestrians and protecting its passenger, how will it choose? A car isn’t able to make moral decisions on its own.
Ethics aside, in terms of cybersecurity, it’s important to remember that nothing is 100 percent secure. Just like housework, security is never ‘done’ – you have to keep vacuuming and dusting, because the dirt will be back next week.
This same logic applies to securing the increasingly advanced technology in cars today. A recent audit by Kaspersky of connected car devices revealed several security issues, including the option to manipulate signals from the tyre monitoring system and, alarmingly, the ability to open vehicle doors via the alarm system.
Collaborating for a secure future
Undoubtedly, autonomous cars – like the Uber you’ve just ordered – are just around the corner for motorists. So what does this mean for the automotive industry? ‘Safety first’ is not just a buzzword – it’s essential to consider at the design stage. Relying solely on software updates to patch over risks in the car’s hardware will not work in the era of IoT, when absolutely everything is digital and connected. Devices need to be secure by design, from the point of manufacture.
Collaboration between smart car manufacturers and cybersecurity experts is essential to establish standards in this emerging discipline. For example, Kaspersky collaborated with manufacturer AVL to create a Secure Communication Unit (SCU) to secure communications between car parts, and between connected cars and infrastructure. This is true secure-by-design software, built to protect people both on and off the road.
There are still many unanswered questions and unconsidered scenarios that regulators, the industry and society as a whole need to address before we can start to consider loosening the reins and bringing autonomous cars safely to our roads.
This article was first published in March 2019, SC Magazine.