The Associated Press reported on the number of accidents that autonomous cars have been in since September 2014, when California officially issued permits for companies to test autonomous cars on public roads. At first glance, the accident rate is alarmingly high: four of the roughly 50 cars currently on the road have been in accidents, including three of Google's vehicles and one belonging to Delphi, an accident rate significantly higher than is typical for vehicles driven by humans. This sounds bad, but if you look at what actually happened, it's nothing to worry about at all.
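To see why the raw numbers look so bad, here is a back-of-the-envelope comparison, sketched in Python. The accident count and fleet size come from the AP report; the national baseline figures and the time window are rough, assumed-for-illustration values, not data from the report.

```python
# Back-of-the-envelope comparison of accident rates (illustrative figures only).

AUTONOMOUS_ACCIDENTS = 4     # from the AP report
AUTONOMOUS_FLEET = 50        # cars permitted in California, per the report
MONTHS_ON_ROAD = 8           # Sept 2014 to May 2015, approximate

# Assumed national baseline, for illustration: very roughly 6 million
# police-reported crashes per year across ~250 million registered U.S. vehicles.
US_CRASHES_PER_YEAR = 6_000_000
US_REGISTERED_VEHICLES = 250_000_000

# Accidents per vehicle-year for each population.
autonomous_rate = AUTONOMOUS_ACCIDENTS / AUTONOMOUS_FLEET / (MONTHS_ON_ROAD / 12)
human_rate = US_CRASHES_PER_YEAR / US_REGISTERED_VEHICLES

print(f"Autonomous fleet: {autonomous_rate:.3f} accidents per vehicle-year")
print(f"Typical vehicle:  {human_rate:.3f} accidents per vehicle-year")
print(f"Ratio: {autonomous_rate / human_rate:.1f}x")
```

Per vehicle-year is a crude yardstick; accidents per mile driven would be the fairer comparison, but the report doesn't give fleet mileage, which is part of why the headline rate is misleading.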

So, why is this nothing to worry about? Let's look at the facts from the AP report: two of the accidents happened while the cars were in control; in the other two, the person who must still be behind the wheel was driving. That makes the latter two just car accidents, not autonomous car accidents. Google and Delphi said their cars were not at fault in any of the accidents, all of which the companies described as minor.

In other words, someone else crashed into the autonomous car: a human was at fault, which suggests that the autonomous aspect is irrelevant. We don't know this for sure, of course, and it's possible that the car being autonomous contributed in some way to the accidents, but Google doesn't seem to think so. The company described the accidents in a statement as "minor fender-benders, light damage, no injuries, so far caused by human error and inattention."

This just emphasizes one of the reasons why autonomous cars are so important: they're better drivers than we are. They're always paying attention, and they never get tired, distracted, or bored. Having said that, like any robotic system that depends on a lot of complicated hardware and software working together, autonomous cars are vulnerable to errors, and even if such an accident hasn't happened yet, one eventually will.

Let's assume, for the sake of argument, that one of these accidents was actually caused by an autonomous car driving in autonomous mode. How would that change things?

The fantastic thing about robotic cars is that they're recording what's going on around them, as well as what they're thinking, all the time. After an accident, engineers could replay what happened in detail and trace the chain of logic that led the car to the decision that caused the accident. The specific cause could then be identified, and, more than likely, engineers could develop a way of making sure that the car never has that accident again. Furthermore, the fix could be propagated almost instantly to every other autonomous car, making them all that much safer.
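To make that concrete, here is a minimal sketch of the idea in Python: an always-on "black box" that records each sensor snapshot alongside the decision the car made and its rationale, so the sequence can be replayed step by step afterward. This is purely illustrative; the class, field names, and sample values are hypothetical, not any vendor's actual telemetry format.

```python
from dataclasses import dataclass
import time

@dataclass
class Event:
    """One timestamped record: what the car sensed and what it decided."""
    timestamp: float
    sensor_snapshot: dict   # e.g. {"lidar_obstacle_m": 12.4, "speed_mps": 8.9}
    decision: str           # e.g. "brake", "continue", "yield"
    rationale: str          # the rule or model output behind the decision

class BlackBoxLog:
    """Hypothetical sketch of an always-on decision log."""

    def __init__(self):
        self.events: list[Event] = []

    def record(self, sensors: dict, decision: str, rationale: str) -> None:
        """Append a snapshot of the car's perception and reasoning."""
        self.events.append(Event(time.time(), sensors, decision, rationale))

    def replay(self) -> None:
        """Step through the chain of logic that preceded an incident."""
        for e in self.events:
            print(f"[{e.timestamp:.3f}] saw {e.sensor_snapshot} "
                  f"-> chose '{e.decision}' because {e.rationale}")

# Usage: record a couple of decisions, then replay them after the fact.
log = BlackBoxLog()
log.record({"lidar_obstacle_m": 30.0, "speed_mps": 10.0}, "continue",
           "obstacle beyond braking envelope")
log.record({"lidar_obstacle_m": 6.0, "speed_mps": 10.0}, "brake",
           "obstacle inside braking envelope")
log.replay()
```

The key property is that the rationale is logged alongside the decision, so a replay shows not just what the car did but why it did it, which is exactly the information an engineer needs to write a targeted fix.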

Needless to say, humans don’t work this way, and we just keep having the same sorts of accidents over and over again.

The other way an autonomous car accident will change things, particularly if it’s an accident that results in an injury, is that it’s going to be a public relations nightmare, and possibly a legal nightmare as well. Nobody is going to care how safe autonomous cars have been, or will be, because as soon as that first major accident happens, the headline is going to be about “the dangers of robotic cars,” or something like that.

Some decades from now, fifty years perhaps, it'll probably be illegal for humans to drive on public roads. Until then, it's important to understand that autonomous cars are a developing technology that will be an enormous benefit to all of us, in terms of both safety and convenience, but one that will take a lot of patience, effort, understanding, and acceptance before we're finally ready to give up the wheel completely.

Source article by Evan Ackerman in IEEE Spectrum

http://spectrum.ieee.org/cars-that-think/transportation/self-driving/why-you-shouldnt-worry-about-googles-selfdriving-car-accidents