Autonomous Cars: Safe at Any Speed, or Not?

In November, a police officer pulled over one of Google’s prototype self-driving cars that the company was testing near its headquarters in Mountain View, California. The reason? Reckless driving? Running a red light? Texting? Road rage? Or how about plain old speeding? The answers are no, no, no, no and not even close.

In fact, the rule the car ran afoul of was Section 22400(a) of the California Vehicle Code, which states in part that “no person shall drive at such a slow speed as to impede or block the normal movement of traffic.” In layman’s terms, it’s the Sunday Driver Law.

The Google car, which was motoring along at a Yugo-like 24 mph in a 35-mph zone, was causing a lengthy backup on a busy thoroughfare where traffic typically moves 10 mph above the posted limit, not below it.

This is all starting to sound like one of The Onion’s satirical news stories or the premise for a “Saturday Night Live” skit. Google even joked about it on its company blog: “Driving too slowly. Bet humans don’t get pulled over for that too often.” Not exactly something you brag about to friends. But in the end, no ticket was issued to the engineers who were riding along as passengers to evaluate the vehicle. So no harm, no foul. Right?

Autonomous cars tested in California are currently allowed to operate only on streets with a posted speed limit of 35 mph or less. Google takes this a step further by restricting the top speed of its vehicles to 25 mph.

The thinking is to play it safe until the technology is fully vetted and any potential safety issues are identified and fixed. After all, improved safety is one of the primary drivers—along with better traffic flow and letting drivers relax, work or do other things while their cars take them from A to B—of self-driving vehicles in the first place.

But can autonomous cars really do everything a human can do, including reacting quickly and logically to a virtually infinite range of possible situations that drivers can encounter and routinely handle? And who is liable if things do go wrong?

Driving too slowly is far down the list of things that can go wrong with self-driving cars. But it underscores just how important it is to fully test emerging technologies under all possible scenarios.

Longtime consumer advocate Ralph Nader, whose seminal 1965 book “Unsafe at Any Speed” helped launch the automotive safety movement 50 years ago, warns that automated technologies could have the opposite effect on safety from what is intended. While recognizing the benefits of adaptive cruise control, blind-spot detection and collision-avoidance systems, Nader and other skeptics point out the dangers of temporarily removing people from the driving process and turning cars into entertainment pods and mobile offices.

Until vehicles become fully autonomous, which isn’t expected anytime soon, control will pass back and forth between drivers and on-board computers depending on where the vehicle is being driven and other factors. Not only does this increase the potential for driver distraction, Nader notes, it could also diminish a driver’s skill level—especially when it comes to handling emergency situations.

Tesla ran into some problems shortly after it pushed its Autopilot software—which enables automated steering, braking, throttle control and lane changes during highway driving—to 40,000 of its Model S electric cars in October. After several motorists posted videos of the system misbehaving, or of themselves driving with their hands off the wheel or even riding in the back seat, Tesla CEO Elon Musk vowed to implement additional safeguards to prevent drivers from doing “crazy things.”

Musk says there have been no reports of Autopilot causing an accident, and he suggests there is already evidence the system has helped prevent some crashes. But he also cautions drivers to be careful when using automated features and emphasizes that users are ultimately responsible for their own safety.

Sounds like good advice. As the technology continues to progress, perhaps everyone should take a page from Google’s playbook and go slow when it comes to rolling out autonomous features until everything is properly engineered, tested, validated and confirmed under all conditions, so we can all reap the safety benefits.