How Safe?

“Our goal is to create a car that will never be responsible for causing a crash, whether it’s driven by a human or a computer.”

Last year at CES, the Toyota Research Institute (TRI) was given its official coming-out party.  And now, one year later, Dr. Gill Pratt, CEO of the research organization, gives a brief reminder of what TRI is all about, explaining that the mission is focused on artificial intelligence (AI), and that within that mission there are four goals:

  1. Enhancing vehicle safety with the objective of “someday” creating a car “incapable of causing a crash.”
  2. Increasing mobility access for people who can’t drive.
  3. Working in the field of robotic devices that will transport people within their homes, thereby allowing the infirm to live at home.
  4. Developing new materials, using AI and machine learning techniques.

While Pratt is making these remarks, he’s standing on a stage next to the Toyota Concept-I, a concept vehicle that Toyota senior vice president, Automotive Operations, Bob Carter revealed a few minutes earlier, describing the vehicle as featuring “advanced automated driving technologies.”

You might think that Pratt would be gung-ho about charging forward into an AI-enabled, self-driving future.

And if you did, you would be wrong.

Pratt proffers a simple question: “How safe is safe enough?”

He points out that last year in the U.S. there were some 35,000 traffic fatalities.  “Every single one of those deaths is a tragedy.”

But what if those deaths were the consequence of autonomous vehicles?  We accept “human error.”  Machine error is something else entirely.

Pratt posits: what if the machine was twice as safe as a human?  Would that be acceptable?
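The arithmetic behind that question can be made concrete.  As a back-of-the-envelope sketch (using the roughly 35,000 annual U.S. fatalities Pratt cites; the safety multipliers are hypothetical illustrations, not TRI projections):

```python
# Back-of-the-envelope sketch: expected annual U.S. traffic fatalities
# if autonomous vehicles were N times safer than human drivers.
# The 35,000 baseline is the figure Pratt cites; the multipliers
# below are hypothetical, for illustration only.

HUMAN_FATALITIES_PER_YEAR = 35_000

def projected_fatalities(safety_multiplier: float) -> float:
    """Expected fatalities if machines are `safety_multiplier` times safer."""
    return HUMAN_FATALITIES_PER_YEAR / safety_multiplier

for multiplier in (1, 2, 10, 100):
    print(f"{multiplier:>4}x safer: ~{projected_fatalities(multiplier):,.0f} deaths/year")
```

Even a machine twice as safe as a human would still, by this crude math, be implicated in some 17,500 deaths a year.  That is the crux of Pratt’s question: would the public accept that?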

So how safe is safe enough?

“In the very near future,” he says, this question will need an answer.  “We don’t yet know for sure.”

Know that TRI is staffing up—they’ve hired more than 100 people, have added 50 more from Toyota and are going to hire another 100 this year.  They’re working closely with researchers at Stanford, The University of Michigan and MIT.

They are pursuing the technology that will deliver on autonomous capabilities.

Yet for all that, Pratt is still cautious.

He cites SAE International’s J3016 standard, which defines six levels of driving automation (0 through 5), with Level 5 being the gold ring: a vehicle that can drive itself under any conditions, anywhere, anytime.

“I need to make this perfectly clear,” Pratt says.  “This is a wonderful goal.”

But then he explains, “However, none of us in the automobile…or IT industries are close to achieving true level 5 autonomy. Collectively, our current prototype autonomous cars can handle many situations.  But there are still many others that are beyond current machine competence.  It will take many years of machine learning and many more miles than anyone has logged of both simulated …and real-world testing to achieve the perfection required for Level 5 autonomy.”

Years.  Not months.  Not weeks.  Years.

(What of those companies talking about having “autonomous vehicles on the road by 2020”?  Pratt says that those are Level 4 vehicles, which have limited areas of operation, limited speeds, limited time of day and maybe only when the weather is good.  Not anywhere, anytime, autonomy.)

Make no mistake: Pratt and his TRI colleagues are working hard on creating technologies that will aid and assist drivers.  They are taking two tracks.  One is “Guardian,” which can be thought of as supplemental assistance to the driver.  The other is “Chauffeur,” which is what it sounds like: the tech that will do the driving.

“Our goal is to someday create a car that will never be responsible for causing a crash, whether it is driven by a human being or by a computer,” he says.

But they’re not going to rush to get there.