On Tuesday, September 25, at Google’s Mountain View headquarters, California Governor Jerry Brown signed a law legalizing the testing of driverless cars on public roads. While California is not the first state to permit driverless cars, the step is significant because the state’s large consumer market often drives innovation in the automobile industry.

Automakers have been developing driverless technology for some time. Nonetheless, Google was the driving force in pushing through this new legislation. Google has already developed a fleet of autonomous vehicles, and its employees have been using them to commute to work. Google’s driverless technology uses radar and laser sensors to react to changing road conditions. So far, Google’s seven-car test fleet has logged 1,000 miles without human intervention, and over 140,000 miles with only sporadic human operation. The only accident Google reported was a rear-end collision caused by another car at a stoplight.

Nevada and Florida also permit driverless cars on the road, but impose more restrictions on their use. For example, Nevada requires special license plates for autonomous vehicles and limits their hours of use. California’s law has its own restrictions, however. Notably, a licensed human driver must be in the driver’s seat to take control of the vehicle if needed. The bill punts on specific limitations on the technology, requiring the Department of Motor Vehicles to develop regulations by January 1, 2015.

The Department of Transportation is also concerned with the safety of new vehicle technologies. The National Highway Traffic Safety Administration (NHTSA) has recently launched a road test of vehicle-to-vehicle communication systems in Ann Arbor, MI. Over this year-long trial, nearly 3,000 vehicles equipped with Wi-Fi-based systems that let them communicate with one another will demonstrate the software’s ability to avert safety risks in real time.

Google claims that its driverless vehicle technology, like the vehicle-to-vehicle communication system, will reduce automobile accidents and allow for less congestion on the roads. Additionally, Google co-founder Sergey Brin notes that the driverless car could allow drivers to be more productive during their commutes, create new avenues of transportation for people with disabilities, and reduce pollution through lighter-weight vehicles that require fewer safety features. While these benefits are certainly alluring, California’s law currently requires a driver behind the wheel who is alert and ready to take over driving the vehicle if necessary. Therefore, the driverless car is not yet the state’s solution to driving while texting or intoxicated.

Though driverless cars were not explicitly illegal before California’s new law, this measure eliminates some legal uncertainty surrounding use of the new technology. Nonetheless, a good deal of legal uncertainty still remains. For example, automobile insurance will undoubtedly become more complicated as insurers determine whether to hold the driver or the manufacturer liable for accidents involving driverless vehicles. In light of product liability concerns, California’s new law requires autonomous vehicle manufacturers to hold a $5 million insurance policy in order to test the cars on public roads.

Another legal concern involves the data the vehicle collects as it operates. It is unclear to whom that data belongs. Courts and legislatures will no doubt have to grapple with these liability and privacy concerns as driverless technology becomes more widespread.

However the legal issues surrounding this technology are settled, California’s embrace of the driverless car signals that it is here to stay.

Danielle Barav

One Response to California Allows Public Testing of Driverless Cars, but Legal and Safety Concerns Remain

  1. Brad Edmondson says:

    I find this area fascinating because it brings new risks and new opportunities to road travel. On the one hand, I think we will see a clear difference in safety results if and when human drivers can start to rely on indefatigable (though not unerring) computers to prevent accidents. Like power steering and anti-lock brakes, this technology will likely yield a demonstrable improvement in overall safety.

    At the same time, however, it may expose us to new types of risks, or at least risks new to road transport. The more reliance we place on driverless or human-augmented driving, the more we risk that those systems may go awry, either accidentally or intentionally.

    First, inadvertent computer response: we do not yet have the ability to design computer systems with the sort of sanity check that is hardwired into (most) humans. When confronted with significant new situations or new sensory inputs, humans are not great at adapting, but they tend to be better at it than machines. It may be worth the trade-off, but we should expect the computers to make mistakes and cause accidents.

    Second, intentional manipulation of computer response by a non-occupant: driverless systems that read inputs from their surroundings, and especially those that communicate with other vehicles, will be vulnerable to intentional manipulation of those sensory and digital inputs. Will attackers (in the digital sense) be able to take over driverless systems and tell the car what to do? Will they be able to cause accidents, kidnap or ransom occupants, or create a barricade of driverless cars to block pursuing police? The strength and weakness of computers is that they do what they are told; since malicious software can wreak havoc with anything its host computer controls, if that computer controls four wheels and an engine, those will be at the malware’s disposal.

    This doesn’t mean we shouldn’t proceed. The ultimate question is not necessarily whether this sort of risk exists but whether running the risk is worth it. The CEOs of every major company and the President of the United States all use phones and email that are not perfectly secure, because they add value thought to be greater than their risk. As in those other areas, with driverless cars we should evaluate the risk on the one hand and the added value on the other, make a judgment call about whether they are worth it (in general or in specific situations; the President reportedly uses specially designed and secured smartphone software), and proceed accordingly.

    In my view, the things to remember with digital risk are (1) that it exists in all digital systems, even though it is not necessarily perceptible at first; and (2) that it is not a binary, all-or-nothing issue. When evaluating a system, we just need to keep all of its concomitant risks in mind as we make judgments about its likely costs and benefits.