If anything defines America, it’s innovation. We’re constantly pushing for safer, more effective products to make life easier.
Self-driving, “automated” vehicles are rolling out of the lab and onto our streets. These futuristic modes of transport promise to free up our time and make roads safer.
But are self-driving vehicles really going to create fewer accidents and injuries? Or will they simply generate a new set of problems involving driver inattention, experimental design failures, and inadequate testing?
1 in 4 Driverless Cars by 2030
Before we know it, we’ll be sipping coffee, browsing the morning news, and catching up on emails during our commute to work – all in the privacy of our own vehicles. Researchers estimate that in just 12 years, one out of every four U.S. cars could be self-driving.
Google has been a major player in autonomous vehicle development, with manufacturers like Audi, Mercedes, Volvo, and Tesla quickly following suit. Their self-driving cars use cameras, radar, and laser sensors to collect data about the vehicle’s surroundings. Software then combines this sensor data with digital maps to decide how the car should respond.
Is Autonomous Car Testing on Public Roads Safe?
Designers anticipate several benefits of autonomous vehicle technology, including enhanced public road safety, increased traffic efficiency, and reduced carbon emissions.
But innovation inherently involves risk.
Just last month, the driver of a Tesla Model X electric SUV died in a tragic crash while the car was on Autopilot.
According to Tesla, the car gave the driver several audio and visual warnings before crashing into a highway barrier in Northern California and bursting into flames. The driver didn’t respond to the warnings, and sensor data shows his hands were off the wheel for at least six seconds before the collision.
Driverless vehicles are expected to enhance public road safety, but perfecting the fully-automated vehicle will take years. Until then, vehicle manufacturers will be testing experimental versions on our streets.
And everyone’s safety is at risk, whether they’re operating a self-driving test vehicle, driving a traditional car, walking, or cycling.
People injured in car accidents have the right to hold those at fault responsible. And while not all aspects of liability lawsuits will change, here are five ways self-driving cars could affect liability cases:
#1. More Potentially Liable Parties
Consider the recent death of 49-year-old Elaine Herzberg, who was struck and killed by a self-driving Uber test car while walking her bike across a road in Tempe, Arizona.
Uber’s Light Detection and Ranging (LiDAR) sensors created a 360-degree map of the Volvo XC90 SUV’s surroundings. In addition, the car was outfitted with stereo cameras and radar sensors to detect objects around the vehicle.
Each of these sensors should have detected Herzberg crossing the street, and instructed the car to avoid hitting her. The fact that she was walking a bicycle should have increased her visibility. What went wrong, and who is ultimately responsible?
Numerous parties could be liable for injuries caused by a driverless car, including:
- Vehicle manufacturers
- Radar, laser, camera, and/or sensor manufacturers
- Third-party software manufacturers
- Installation companies
National Transportation Safety Board (NTSB) investigators are currently examining the Uber case. Uber settled with the family just two weeks after the accident, so there will be no trial. But it will be interesting to learn what the investigation reveals.
#2. Shift Toward Manufacturer Liability
Cases involving operator liability will largely rely on National Highway Traffic Safety Administration (NHTSA) automation levels. In May 2013, the NHTSA released a “Preliminary Statement of Policy Concerning Automated Vehicles” that separated vehicle automation into five levels ranging from Level 0 (no automation) to Level 4 (full self-driving automation).
In general, the higher a vehicle’s level of automation, the more liability shifts away from the driver and toward the manufacturer.
In the future, the “drivers” of driverless cars may have no liability at all in accidents.
#3. Greater Post-Sale Manufacturer Responsibilities
The unavoidable issue of software upgrades presents a whole new question around warnings associated with post-sale manufacturer responsibilities.
Think of it like expected or “planned” recalls. Autonomous vehicles will need regular software upgrades. If manufacturers fail to warn owners of the importance of these upgrades, fail to provide upgrades promptly after sufficient testing, or fail to provide quality installation, injured vehicle owners may be able to file a claim for damages based on a failure to warn.
#4. Unclear Terminology
Initially, the terminology used by manufacturers could be the focus of liability lawsuits. Test cars are often Level 2 or Level 3, meaning the driver must be somewhat involved in the operation of the vehicle.
For example, in Level 2 (combined function automation) cars, the driver is still responsible for “monitoring the roadway and safe operation and is expected to be available for control at all times and on short notice.”
What exactly is “short notice”? If the driver doesn’t respond in time, an injured party could argue a design defect – that the car should have been designed to give more advance warning.
In addition, if the manufacturer claims that the driver will have “adequate notice” when he needs to take over the wheel, and the driver gets a two-second warning before a collision, the car owner could potentially file a claim for damages involving misrepresentation.
#5. New Evidence Requires Broad Spectrum of Expertise
Much as airplane crash investigations rely on the “black box,” product liability cases involving driverless cars are going to rely heavily on data collected by the vehicle itself. Interior camera footage of the driver, steering wheel and pedal sensor data, and external sensor data will be vital to demonstrating how the operator and the vehicle were behaving at the time of the accident.
This presents issues with expert witnesses and investigators.
Accident reconstruction experts may not understand how radar equipment or software components function, so self-driving vehicle accidents are likely to prove challenging for investigators. New fields of expertise will be necessary to prove liability in these types of cases.
It may be decades before car manufacturers work out all the kinks, but that doesn’t mean they won’t put the cars on the road. It will take time to sort out these issues and establish the liability legislation that will help keep our roads safe.