What Happens When Google's Robot Car Kills?

NEW YORK (TheStreet) -- Google (GOOGL) recently reported 11 minor accidents with its self-driving cars over six years and nearly two million miles driven -- and the company says that none of the scrapes were its fault. That sounds like a solid safety record. But when the first person gets killed because of a decision the Google robot car makes, who will be held responsible for that death?

Should they become popular (and if Carl Icahn and others are right, they will), robot cars will face this scenario many times over: there are more than 5.5 million car accidents in the U.S. every year, resulting in nearly 33,000 deaths, according to the Insurance Institute for Highway Safety.

Consider the following scenario: The car in front is stopping too quickly. The self-driving car is blocked from changing lanes by other traffic. And a car is approaching fast from behind, without enough room to stop. What decision does the robot car make? Does it plow into the car in front? Does it brake and let the car behind hit it? Does it protect its passenger even if it knows its decision may be fatal for other people? If so, which driver does it choose to take the brunt of the hit? What if one of the vehicles has many passengers? What if one of the vehicles is a school bus?

Then the question becomes: If there is a fatality and the robot car independently made the decision, who is liable? Is it the "driver" of the robot car? Is it the robot car itself? Is it the manufacturer of the robot car? Is it the company that owns the robot car, if it is a corporate car? Is it the software developer who wrote the program that governed the decision?

There is really no question of whether a robot-caused fatality will happen. It already has. The first recorded death of a human by a robot was in 1979, when Robert Williams, an assembly-line worker at a Ford (F) plant, was killed instantly by a robot arm as the two worked side by side. His family was awarded $10 million in 1983 as a result of the accident. There have been other fatalities since, and there are hundreds of robot-related accidents every year.

None of the accidents involving the Google robot car, Google has said, were caused by the robot car. Robot cars will be safer: They will not be drinking, texting, eating or snoozing while driving. I would personally trust a robot car more than a human driver.

Others, when first confronted with a self-driving car, may not feel the same. In the movie "I, Robot," you can see the shock and fear when the character played by Will Smith switches his car from automatic to manual; that reaction is likely to be common. (And here's a scary scene in which the robot cars actually attack.) Google, with its motto "don't be evil," has been testing for years and is likely struggling with these questions. But I am not so sure about the other robot car manufacturers. Will they design their self-driving cars to make the same set of decisions?

A secondary question would be: Did the robot car make the right decision?

The scenarios faced by the car will be very similar to the classic moral dilemma known as the trolley problem. A runaway trolley is rushing down a set of railway tracks. Ahead, on the same tracks, five people are tied up and unable to move. You are standing next to a lever; if you pull it, the trolley switches to another set of tracks. But there is another person on that alternate track, one who is not aware of the danger. You have two options: (a) do nothing and let the trolley kill the five people on the current track, or (b) pull the lever, shifting the trolley to the side track, where it will kill one person. Which is the correct choice?

Given the various scenarios, one can conceive of a situation where the car has to put its own driver at risk to save others. Do we as drivers need to know beforehand the priority and logic of the car? Should we expect the same behavior from all robot cars so we can react appropriately? Will this mean new standards for how robot cars must behave in given situations?
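To make that question of "priority and logic" concrete, here is a minimal, purely hypothetical sketch in Python of what an explicit crash-priority policy might look like. The `CrashOption` fields, the `choose_maneuver` function and every number in it are invented for illustration; they do not reflect how Google or any manufacturer actually programs its cars.

```python
# Hypothetical sketch only -- NOT how Google or any carmaker programs its cars.
# It illustrates that a car's "priority and logic," once written down, becomes
# an explicit, auditable policy that regulators and drivers could inspect.

from dataclasses import dataclass

@dataclass
class CrashOption:
    """One possible maneuver in an unavoidable-collision scenario (invented example)."""
    name: str
    expected_harm_inside: float    # estimated harm to the car's own occupants
    expected_harm_outside: float   # estimated harm to people in other vehicles
    involves_school_bus: bool = False

def choose_maneuver(options, protect_occupants_weight=1.0):
    """Pick the option with the lowest weighted expected harm.

    A weight above 1.0 favors the car's own passengers; below 1.0 favors
    minimizing total harm regardless of who is inside. This one number is
    exactly the kind of choice that could be standardized and disclosed.
    """
    def harm(opt):
        score = (protect_occupants_weight * opt.expected_harm_inside
                 + opt.expected_harm_outside)
        if opt.involves_school_bus:   # example of a hard-coded societal priority
            score += 10.0
        return score
    return min(options, key=harm)

if __name__ == "__main__":
    options = [
        CrashOption("brake hard and take the rear impact", 0.6, 0.3),
        CrashOption("swerve into the adjacent car", 0.2, 0.8),
        CrashOption("swerve toward the school bus", 0.1, 0.5, involves_school_bus=True),
    ]
    print(choose_maneuver(options).name)  # -> "brake hard and take the rear impact"
```

Two cars that differ only in that single weight would behave very differently in the same emergency, which is why the question of whether all robot cars should follow the same standardized logic matters.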

This is an incredibly hard problem to solve, and there may not be a right answer. The law is going to have a hard time figuring out how to rule in these cases. In the Williams case above, the court ruled against the maker of the robot to the tune of $10 million. During the transition period, there will be a real question of whether juries will favor a robot car's judgment or a human's judgment in an accident involving both.

It is not only the law; many industries are going to struggle through the transition. What happens to individual auto insurance when individuals are no longer responsible for accidents? With individuals not making the decisions, will the companies that make the robot cars take on the liability for accidents? There could be pressure to dismantle laws that require auto insurance, and that entire revenue stream could dry up, replaced perhaps by auto manufacturers self-insuring their liability.

We may need ethicists, car companies and transport authorities to start meeting to lay out the ground rules and make this transition easier.

 

This article is commentary by an independent contributor. At the time of publication, the author held no positions in the stocks mentioned.
