Usually, the person at fault in a car accident can be determined relatively quickly. Whoever failed to check, stop or signal is generally responsible. But what if no one is at the wheel at the time of the accident? What if the car is in control? With the integration of automated vehicles well underway, a new grey area has emerged: who is at fault if an autonomous car crashes, and how should that be determined?
The next generation of driverless vehicles will likely be commercially available around 2020, but car manufacturers are not waiting for the law to "catch up".
The recent, tragic death of a Tesla vehicle driver in autopilot mode has highlighted the risks associated with automated vehicles and the complex question of liability.
Is it the manufacturer whose testing was not rigorous enough? Or is it the human who failed to override the system when it came to the crunch? What about the supplier of the sensors?
And now, with Volvo stepping up to accept legal liability for its vehicles when in autonomous mode, the question becomes: is this the new legal norm?
Driverless cars have been science fiction material and Hollywood fodder for decades. The much-loved VW Beetle Herbie in the 1969 film The Love Bug and KITT in the 1982 TV series Knight Rider were ahead of their time.
And now it seems that automated vehicles are not in the distant future - they’re coming to a road near you.
So what are they exactly?
A driverless car (often called a self-driving car, an automated car or an autonomous vehicle) is a robotic vehicle that’s designed to work without a human operator.
To qualify as fully autonomous, the vehicle must be able to navigate without human intervention to a predetermined destination over roads that haven’t been adapted for its use.
Companies like Google, Tesla, Audi, Ford, BMW and General Motors are all diving head first into the driverless car innovation space.
According to the chairman of the International Organisation for Road Accident Prevention, more than 90% of road accidents are caused by human error.
Driverless vehicles have been created to reduce or even eliminate accidents caused by human error.
As the technology develops and becomes more sophisticated, automated vehicles are expected to become safer than human-driven vehicles.
Other benefits include increased mobility for disabled people and the elderly and a significant reduction in congestion on our busy roads.
And what about the chance to enjoy a better and more productive lifestyle? When we don’t need our hands on the wheel or our eyes on the road during long distance travel, there’s a whole lot of opportunity for podcasts/chatting/working/sleeping.
Of course, as the percentage of automated vehicles increases, the number of accidents that result from system failure (rather than human error) will inevitably rise.
Others, including Diana Bowman from the University of Michigan, have raised the possibility of normal cars crashing into ‘lighter’ automated vehicles and the likelihood that autonomous cars would challenge our understanding of who was liable in a motor vehicle accident.
So when crashes occur, what is the legal position on liability?
Typically, when a car crashes, the human driver is at fault (unless a vehicle defect was involved). With automated vehicles, the situation is more complex.
Blame for accidents will fall on one or more of the following: the car manufacturer, hardware or software providers, mapping companies or (sometimes) the "driver", for failing to realise the car was running into trouble and to intervene in time to avoid damage, injury or death.
So how do we figure out who is responsible, especially with hybrid car models and different levels of automation involved?
Well, the answer’s not that simple. It may depend on what type of automated vehicle is the subject of the incident and what level of awareness the human operator has when they’re in the car.
Autopilot features may lull people into a false sense of security if they think the car can drive itself with no user input required. There have been a number of cases where drivers have blamed accidents on Tesla's Autopilot system, only for Tesla to produce logs showing that the system was not engaged.
In July 2016, after a crash in Pennsylvania, Elon Musk tweeted to say “Onboard vehicle logs show Autopilot was turned off in Pennsylvania crash. Moreover, crash would not have occurred if it was on.”
Ryan Calo, Assistant Professor of Law at the University of Washington, has said that if drivers are deemed to be aware of the risk, companies like Tesla may be let off the liability hook.
However, according to Jacob Fuest, Head of the Automotive Innovation Center at Allianz, the basic legal principle of strict liability will endure, which means the person in charge of the car is liable both for individual mistakes and for defects of the vehicle.
If insurers and manufacturers then have to battle it out to decide whether a vehicle malfunction occurred, a data event recorder (the equivalent of an aircraft's black box) may step in to save the day and resolve the dispute.
According to Connecticut lawyer, Andrew Garza, future liability cases will likely be handled the same way as today. Speaking with 2025AD, he said that most liability issues will be resolved by these data recorders which act like a logbook, storing relevant information to shed light on what went on in the decisive seconds before the crash.
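The "logbook" idea behind these data event recorders can be sketched as a fixed-capacity ring buffer that always retains the most recent samples, so the decisive seconds before a crash are preserved while older data is discarded. This is a hypothetical illustration of the data structure only, not any manufacturer's actual recorder format:

```python
from collections import deque


class EventDataRecorder:
    """Illustrative black-box recorder: a ring buffer that keeps only
    the most recent samples, so the moments leading up to an incident
    are always available."""

    def __init__(self, capacity: int):
        # deque with maxlen silently evicts the oldest sample when full
        self._buffer = deque(maxlen=capacity)

    def record(self, timestamp: float, channel: str, value):
        # channel names here (e.g. "autopilot_engaged") are invented
        # for illustration
        self._buffer.append((timestamp, channel, value))

    def dump(self):
        """Return everything still in the buffer, oldest first."""
        return list(self._buffer)


# Tiny capacity of 3 for demonstration; a real recorder would hold
# thousands of samples covering the final seconds before a crash.
edr = EventDataRecorder(capacity=3)
edr.record(0.0, "autopilot_engaged", True)
edr.record(1.0, "speed_kmh", 88)
edr.record(2.0, "brake_pressed", False)
edr.record(3.0, "speed_kmh", 92)  # oldest sample (t=0.0) is evicted
print(edr.dump())
```

After the fourth sample arrives, the earliest one is gone: only the last three readings remain, which is exactly the property that lets investigators reconstruct what happened just before impact without storing an entire journey.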
Whilst the US is shifting more towards the prospect of manufacturer liability, recent reports by the National Transport Commission (NTC) suggest Australia could be headed in a different direction, where the human driver or "operator" is strictly liable.
To navigate the legal liability web, Australian lawmakers will first need to ensure that the proper testing, introduction and operation of automated vehicles (and the potential associated risks) are reflected in the national legislative framework.
In May 2016, recognising that Australia’s laws are not really ready for driverless vehicles, the NTC issued a discussion paper where it made a number of recommendations to safeguard the public and consider the liability position.
One of those recommendations was that lawmakers may wish to consider whether liability for crashes caused by a failure of the automated driving system should be borne by the registered operator of the vehicle, the individual operating the vehicle at the time or the company that manufactures, supplies, installs and/or maintains the automated driving system.
More recently, in November 2016, the NTC released a further report suggesting that the question of liability would be clarified if we can establish whether human monitoring of the automated driving system constitutes control of an automated vehicle and recognise the automated driving system entity in the road rules.
According to the NTC, several factors complicate liability when it comes to automated cars.
Establishing a new, national legal framework to deal with automated vehicles (and the different levels of automation) could be an important step in avoiding confusion around liability.
In the meantime, we need to be vigilant about how we define the terms “driver” and “control” to wade through the muddy waters of legal liability.
2020 is not far away, and the law has a small window of time to catch up.
The law typically moves at a slow and conservative pace but the tech world and its futuristic cars wait for no one.