Regarding the Cars of Tomorrow
With the rise of autonomous technology, it was only a matter of time before someone set out to build a self-driving car. With multiple companies now pursuing a car that operates itself with no input from its passengers, very real questions are being asked about the safety, effectiveness, and ethics of these computer-controlled vehicles.
Are Self-Driving Cars Safe?
The most prominent developer of self-driving cars, Google Inc., has been publicly testing its prototypes since 2009. The vehicles were first vetted on complex public city streets in 2012, and in 2015 a new fully automated prototype was built and sent out for road tests. The project, now named Waymo, has logged a combined 2 million miles of real-world self-driving. In total, Waymo cars have been involved in 14 non-fatal accidents since 2014.
For comparison, in 2009 there were 185 reported crashes per 100 million miles driven. After adjusting for unreported crashes and the large discrepancy in total miles driven, a Virginia Tech study found that human-operated vehicles are involved in 4.2 crashes per million miles, compared with only 3.2 crashes per million miles for Waymo vehicles.
It’s highly unlikely that Google cars will be the only automated vehicles on the roads of tomorrow, and the safety of these vehicles could vary wildly by manufacturer. There are very few laws on the books for self-driving cars, with only a handful of states even addressing their use on public roads. For these vehicles to be reliably safer than human-operated automobiles, laws will need to set standards for software and hardware testing, road tests, and accountability across the U.S. How these laws are implemented moving forward will determine just how safe the average autonomous vehicle will be.
The Ethical Dilemma
Machines don’t think for themselves; every action is decided by their programming. This raises enormous ethical questions that designers have so far been unable to answer. If a self-driving car is forced into a position where a fatal accident is inevitable and it must choose between two different individuals, how does it decide who lives and who dies? What if one is a child? These are choices designers will have to account for when programming their vehicles for use on the road. Even more disconcerting is the knowledge that, as a passenger in a self-driving car, this decision is entirely out of your hands. Loss of life in a self-driving car rests, in essence, with its designers and programmers.
How does litigation proceed in the event of a horrific accident? If your family is killed in a crash caused by a self-driving car, whom do you tell your lawyer to hold accountable? It’s difficult to say the driver is at fault, since they had no control over the vehicle at the time of the crash. It’s equally challenging to blame the designers themselves: if they made their best effort to ensure the vehicle was safe, and extreme circumstances beyond their control still ended in the loss of human life, it becomes difficult to say just who is to blame.
Ease the Transition
Before self-driving cars can find a place in the day-to-day lives of the American people, these concerns will need to be addressed with laws, regulations, and guidelines. It has become very clear that the presence of these vehicles on the road is unavoidable. Ensuring procedures are in place before the unthinkable occurs will help alleviate many of the concerns about the automated vehicles of tomorrow.