Self-Driving Car Laws Take the Road

Michigan on Friday became the first state in the US to pass extensive statewide regulations related to self-driving vehicles, and one of only eight in total to ratify any laws governing the technology.

The laws are significant in that, for the first time, a state has attempted to define the legal infrastructure for the self-driving vehicle. They set requirements for how such cars can be tested on public roads. They revise a prior law that mandated a “backup” driver be available to take over the controls at any time, paving the way for a car without a steering wheel or pedals.

When the self-driving feature is engaged, the law establishes the vehicle itself as the “driver” for the purposes of obeying all traffic rules. (What’s not clear is whether such a “driver” can be suspended from the road.) And for those wondering, my read of the law is that it still requires the self-driving car to contain at least one designated licensed “driver.”

Importantly for automakers, the law indemnifies autonomous vehicle OEMs against liability stemming from changes made to the system without manufacturer consent. But less appealing is the new state ban on non-automakers using their autonomous technology on Michigan roads. In short, Apple, Google and Lyft, for example, would either have to partner with the Fords and GMs of the world or create their own car companies. Let’s hope that doesn’t slow innovation in this key emerging area the way Ma Bell’s monopoly on telecom did throughout much of the 1900s.


Fully Autonomous Autos: Decades Away and in Need of Unprecedented Reliability

Folks,

Since writing my last blog post, the litany of articles heralding the imminent arrival of the self-driving car has continued unabated. I stand by my position that a fully functional self-driving car is decades away. Let me discuss why.

I was recently asked about Google’s efforts amid claims of tens of thousands of hours of self-driving. Wikipedia has the best answer:

As of August 28, 2014, according to Computer World Google’s self-driving cars were in fact unable to use about 99% of US roads.[51] As of the same date, the latest prototype had not been tested in heavy rain or snow due to safety concerns.[52] Because the cars rely primarily on pre-programmed route data, they do not obey temporary traffic lights and, in some situations, revert to a slower “extra cautious” mode in complex unmapped intersections. The vehicle has difficulty identifying when objects, such as trash and light debris, are harmless, causing the vehicle to veer unnecessarily. Additionally, the LIDAR technology cannot spot some potholes or discern when humans, such as a police officer, are signaling the car to stop.[53] Google projects having these issues fixed by 2020.[54]

Ford claims it will have self-driving cars deployed by 2020. However, a quote from Jim McBride, Ford’s technical lead, sheds some light:

“Q: What are the big technical challenges you are facing?

“A: When you do a program like this, which is specifically aimed at what people like to call ‘level four’ or fully autonomous, there are a large number of scenarios that you have to be able to test for. Part of the challenge is to understand what we don’t know. Think through your entire lifetime of driving experiences and I’m sure there are a few bizarre things that have happened. They don’t happen very frequently but they do.”

Level 4 is indeed impressive, but it is not fully autonomous as described by SAE:

SAE automated vehicle classifications:

Level 0: Automated system has no vehicle control, but may issue warnings.

Level 1: Driver must be ready to take control at any time. Automated system may include features such as Adaptive Cruise Control (ACC), Parking Assistance with automated steering, and Lane Keeping Assistance (LKA) Type II in any combination.

Level 2: The driver is obliged to detect objects and events and respond if the automated system fails to respond properly. The automated system executes accelerating, braking, and steering. The automated system can deactivate immediately upon takeover by the driver.

Level 3: Within known, limited environments (such as freeways), the driver can safely turn their attention away from driving tasks.

Level 4: The automated system can control the vehicle in all but a few environments such as severe weather. The driver must enable the automated system only when it is safe to do so. When enabled, driver attention is not required.

Level 5: Other than setting the destination and starting the system, no human intervention is required. The automatic system can drive to any location where it is legal to drive.

The difference between Level 4 and Level 5 is enormous. Just a few days ago I drove a Level 2 Volvo XC90. It was a lot of fun. It had autonomous steering and acceleration/braking. It worked very well, but it needed the lane markers, a not insignificant requirement.
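To make that gap concrete, here is a minimal sketch in Python, paraphrasing the levels quoted above. The names (SAELevel, driver_attention_required, in_supported_environment) are my own shorthand for illustration, not SAE terminology:

```python
from enum import IntEnum


class SAELevel(IntEnum):
    """The SAE levels quoted above, paraphrased (not the standard's own wording)."""
    NO_AUTOMATION = 0       # warnings only
    DRIVER_ASSIST = 1       # ACC, parking assistance, lane keeping
    PARTIAL_AUTOMATION = 2  # system steers/accelerates/brakes; driver supervises
    CONDITIONAL = 3         # driver may look away within known, limited environments
    HIGH_AUTOMATION = 4     # no attention needed, but only in supported environments
    FULL_AUTOMATION = 5     # no human intervention beyond setting the destination


def driver_attention_required(level: SAELevel, in_supported_environment: bool) -> bool:
    """Rough rule of thumb: must a human be watching the road right now?"""
    if level <= SAELevel.PARTIAL_AUTOMATION:
        return True                          # Levels 0-2: the driver always supervises
    if level in (SAELevel.CONDITIONAL, SAELevel.HIGH_AUTOMATION):
        return not in_supported_environment  # Levels 3-4: fine only inside the supported domain
    return False                             # Level 5: never


# The trip described below includes unmapped, unmarked roads:
print(driver_attention_required(SAELevel.HIGH_AUTOMATION, in_supported_environment=False))  # True
print(driver_attention_required(SAELevel.FULL_AUTOMATION, in_supported_environment=False))  # False
```

The whole difference sits in that one flag: Level 4 is bounded by the environments it supports, while Level 5 has no such boundary.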

A Level 4 vehicle could not take you on a trip from my house in Woodstock, VT, to a meeting in downtown Boston. To start, some of the trip is on roads without lane markers. Now assume there is construction, with handwritten signs directing cars to a detour; a traffic cop who signals you to stop and roll down the window to listen to instructions; and a huge pothole in downtown Boston marked only by a hand-made warning sign. None of these challenges would be unusual for a human driver, but each would be a challenge for Level 4 autonomy.

Ford’s self-driving car carries the computing equivalent of five laptop computers.


Singapore has deployed what appear to be Level 3 vehicles, but a human backup driver is present and the routes are specially selected.

All of this is exciting news. But getting a vehicle that can handle 99% of human driving tasks with 99.99% reliability (let’s call it Phase I) will be easier than getting the last 1% with 99.99999% reliability (Phase II). I agree that Phase I may be only years away, but Phase II is decades away. Without Phase II, the driverless car that has no steering wheel or gas pedal is not achievable.
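A bit of back-of-the-envelope arithmetic shows why that last handful of decimal places matters so much. The numbers below (100 safety-relevant “events” per driving hour, 1,000 driving hours per vehicle per year) are assumptions of mine purely for illustration, not figures from any study:

```python
def expected_failures(events: float, per_event_reliability: float) -> float:
    """Expected number of mishandled events, assuming independent events."""
    return events * (1.0 - per_event_reliability)


# Assumed, illustrative numbers: 100 safety-relevant events per driving hour,
# 1,000 driving hours per vehicle per year.
events_per_vehicle_year = 100 * 1_000

for label, reliability in [("Phase I  (99.99%)", 0.9999),
                           ("Phase II (99.99999%)", 0.9999999)]:
    failures = expected_failures(events_per_vehicle_year, reliability)
    print(f"{label}: about {failures:g} mishandled events per vehicle-year")

# Phase I  (99.99%):    about 10 mishandled events per vehicle-year
# Phase II (99.99999%): about 0.01 mishandled events per vehicle-year
```

Under these assumed numbers, Phase II is a thousandfold improvement in the expected mishandled-event rate, on top of covering the rarest 1% of situations.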

How does all of this affect us in electronics assembly? It will be an interesting adventure to work with the auto industry on the extreme reliability required. My guess is that this reliability need will be a dominant theme in the future.

Note: Probably the best article on this topic was in the June 2016 issue of Scientific American.

Cheers,

Dr. Ron


Robots and the Law

In the April issue of PCD&F/CIRCUITS ASSEMBLY, I wrote about the need for a balance between autonomous machinery and human-operated equipment. I wrote the piece in the aftermath of the Malaysia Airlines Flight 370 disappearance, and referenced, among other things, Toyota’s sudden unintended acceleration problems and the self-driving cars that are beginning to appear on US streets.

It seems I’m not the only one working through this. On May 5, a pair of researchers at the Brookings Institution began a series of papers (The Robots Are Coming: The Project On Civilian Robotics) that considers the legal ramifications of driverless cars.

That led me to Google, where a search uncovered a few more references to potential tort roadblocks.

While my work considered the technical and emotional issues that always factor into any major technology shift, the legal aspects are equally in play here. For those interested in the subject, the Brookings Institution project is especially worth a read.