Government's Self-Driving Car Rules Don't Say When Your Car Should Kill You


Today is an important day in the development of autonomous cars, because today the National Highway Traffic Safety Administration published the Federal Automated Vehicles Policy, a 116-page document (which you can read here) designed to establish actual federal-level policy and safety standards for self-driving vehicles.


You know they mean business because there’s a futuristic car on the cover with seats that face backwards. They’re not playing around here.

I haven’t read every last exhaustive word of the document just yet, but I wanted to take a moment to tell you about some of the parts that seem most important from a first reading. There’s a lot here, and we’ll have more analysis soon, but right now let’s see what jumps out at us, much like an autonomous car gone rogue.

I should also mention, for clarity, that NHTSA likes to use the term ‘highly automated vehicles’ (HAVs) for autonomous cars. Since that’s so much easier to type than ‘autonomous car,’ I’ll give it a go, too.

• The NHTSA has adopted the SAE’s scale of autonomy.

This one we actually knew about yesterday, but it’s important. Finally, there’s one accepted standard for autonomy that everyone will (hopefully) agree on. American drivers need to learn these levels, so we can all talk about autonomous cars and have some reasonable confidence that we know what everyone is referring to.

In case you forgot it, here’s a chart:

[Chart: SAE levels of driving automation]

• A 15-point Safety Assessment for HAVs will be established

Right now, there is no standard rating for testing the behavioral safety of HAVs. We know how a Tesla Model S does in a crash test, physically, but we don’t really know how its Autopilot system performs, safety-wise.


By establishing some standards (in this case, 15 separate categories to test), the NHTSA can then use the results from these tests to establish an overall rating, much like the five-star rating system used for crash safety today.

Here are the categories and the description from the report:

D. Safety Assessment Letter to NHTSA

To aid NHTSA in monitoring HAVs, the Agency will request that manufacturers and other entities voluntarily provide reports regarding how the Guidance has been followed. This reporting process may be refined and made mandatory through a future rulemaking. It is expected that this would require entities to submit a Safety Assessment to NHTSA’s Office of the Chief Counsel for each HAV system, outlining how they are meeting this Guidance at the time they intend their product to be ready for use (testing or deployment) on public roads. This Safety Assessment would assist NHTSA, and the public, in evaluating how safety is being addressed by manufacturers and other entities developing and testing HAV systems.

The Safety Assessment would cover the following areas:

• Data Recording and Sharing

• Privacy

• System Safety

• Vehicle Cybersecurity

• Human Machine Interface

• Crashworthiness

• Consumer Education and Training

• Registration and Certification

• Post-Crash Behavior

• Federal, State and Local Laws

• Ethical Considerations

• Operational Design Domain

• Object and Event Detection and Response

• Fall Back (Minimal Risk Condition)

• Validation Methods

I’m glad to see Privacy and Cybersecurity in there. Crashworthiness includes the current sort of physical crash safety ratings, as well as the idea of ‘compatibility,’ which specifically covers unmanned cargo vehicles (like my Apple Car idea) and their behavior in a crash situation.


The Ethical Considerations section is especially interesting. It addresses conundrums like the Trolley Problem, but at this point the NHTSA doesn’t seem to be setting any standards; rather, it suggests that these conflicts need to be studied and acceptable behaviors developed:

Similarly, a conflict within the safety objective can be created when addressing the safety of one car’s occupants versus the safety of another car’s occupants. In such situations, it may be that the safety of one person may be protected only at the cost of the safety of another person. In such a dilemma situation, the programming of the HAV will have a significant influence over the outcome for each individual involved.

Since these decisions potentially impact not only the automated vehicle and its occupants but also surrounding road users, the resolution to these conflicts should be broadly acceptable. Thus, it is important to consider whether HAVs are required to apply particular decision rules in instances of conflicts between safety, mobility, and legality objectives. Algorithms for resolving these conflict situations should be developed transparently using input from Federal and State regulators, drivers, passengers and vulnerable road users, and taking into account the consequences of an HAV’s actions on others.
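To make the kind of ‘decision rule’ the report is talking about concrete, here’s a toy sketch. This is entirely my own illustration, not anything from the NHTSA document: a crude minimize-total-harm rule of the sort that comes up in Trolley Problem discussions, where every maneuver name and casualty estimate is hypothetical.

```python
# Toy illustration of a "minimize total harm" decision rule for a dilemma
# situation. NOT from the NHTSA policy; a hypothetical sketch of one
# possible rule regulators might want developed transparently.

def choose_maneuver(options):
    """Pick the maneuver with the lowest expected casualties.

    `options` maps a maneuver name to the expected number of people
    harmed if the HAV takes it, counting occupants and other road users
    equally, with no regard to age, gender, or anything else.
    """
    return min(options, key=options.get)

# A crude dilemma: swerving right endangers one pedestrian, braking in
# lane endangers two occupants, swerving left endangers three bystanders.
dilemma = {"swerve_right": 1, "brake_in_lane": 2, "swerve_left": 3}
print(choose_maneuver(dilemma))  # prints "swerve_right"
```

Even this trivial rule embeds a big value judgment (that all lives count equally and only the count matters), which is exactly why the report wants these algorithms developed in the open, with input from regulators, drivers, passengers and vulnerable road users.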


• Federal Motor Vehicle Safety Standards (FMVSS) will change

Not surprisingly, the legally required equipment that a passenger car must have will be different for a fully autonomous vehicle. You’re not going to need a steering wheel or mirrors if you’re never going to be able to actually drive a car. With that in mind, the report suggests:

Consider Updates to FMVSS:

Additional standards could be provided by, among other possibilities, a new FMVSS to which manufacturers could certify HAVs that do not have controls to permit operation by a human driver (i.e., no steering wheel, brake pedals, turn signals, etc.). Such a standard would not apply to vehicles with lower levels of automation. A new standard could prescribe performance requirements for multiple types of equipment to ensure the safety of these vehicles on roadways in the United States.


My personal opinion is that we’ll probably still want things like turn signals and brake lights that communicate what a robotic car is doing, even to those of us who don’t think in 1s and 0s and yet may still be driving a car.


• There’s now a defined set of behavioral competencies

This one is a big deal. For the first time, the NHTSA will categorize the actual behaviors that define what an HAV is (or should be) capable of. Here’s what an autonomous car should be able to do, according to research by the California PATH project at Berkeley:

• Detect and Respond to Speed Limit Changes and Speed Advisories

• Perform High-Speed Merge (e.g., Freeway)

• Perform Low-Speed Merge

• Move Out of the Travel Lane and Park (e.g., to the Shoulder for Minimal Risk)

• Detect and Respond to Encroaching Oncoming Vehicles

• Detect Passing and No Passing Zones and Perform Passing Maneuvers

• Perform Car Following (Including Stop and Go)

• Detect and Respond to Stopped Vehicles

• Detect and Respond to Lane Changes

• Detect and Respond to Static Obstacles in the Path of the Vehicle

• Detect Traffic Signals and Stop/Yield Signs

• Respond to Traffic Signals and Stop/Yield Signs

• Navigate Intersections and Perform Turns

• Navigate Roundabouts

• Navigate a Parking Lot and Locate Spaces

• Detect and Respond to Access Restrictions (One-Way, No Turn, Ramps, etc.)

• Detect and Respond to Work Zones and People Directing Traffic in Unplanned or Planned Events

• Make Appropriate Right-of-Way Decisions

• Follow Local and State Driving Laws

• Follow Police/First Responder Controlling Traffic (Overriding or Acting as Traffic Control Device)

• Follow Construction Zone Workers Controlling Traffic Patterns (Slow/Stop Sign Holders)

• Respond to Citizens Directing Traffic After a Crash

• Detect and Respond to Temporary Traffic Control Devices

• Detect and Respond to Emergency Vehicles

• Yield for Law Enforcement, EMT, Fire, and Other Emergency Vehicles at Intersections, Junctions, and Other Traffic Controlled Situations

• Yield to Pedestrians and Bicyclists at Intersections and Crosswalks

• Provide Safe Distance From Vehicles, Pedestrians, Bicyclists on Side of the Road

• Detect/Respond to Detours and/or Other Temporary Changes in Traffic Patterns

The full list of behavioral competencies a particular HAV system would be expected to demonstrate and routinely perform will depend on the HAV system, its ODD, and the fall back method. Manufacturers and other entities should consider all known behavioral competencies and document detailed reasoning for those which they consider to be inapplicable. Further, they should fully document methods by which they implement, validate, test and demonstrate applicable behavioral competencies.


This seems like a pretty comprehensive list, though there’s some hedging at the end just in case these behaviors aren’t enough. Still, it seems like a very reasonable place to start.

There’s lots more in here, of course, and we’ll be digging into it further. Overall, this is something that needed to happen. Autonomous vehicles, HAVs, or whatever you want to call these robot cars will need a framework in which to operate, and standards will need to be developed to prevent chaos and to give consumers some means of knowing what the hell is going on and the ability to make informed, rational decisions.


One thing I haven’t seen just yet involves interactions between human-driven and robotic cars; I’ll look for that specifically next.

We have a long way to go with all of this, but the technology is moving very fast, and things are already happening. We need this now, so I’m glad it’s here.

Senior Editor, Jalopnik • Running: 1973 VW Beetle, 2006 Scion xB, 1990 Nissan Pao, 1991 Yugo GV Plus, 2020 Changli EV • Not-so-running: 1977 Dodge Tioga RV (also, buy my book!)


This is one argument I have been having for quite a while and can’t seem to find an applicable solution to. Simply put, no one would want to buy a car programmed to kill its owner, but they would have to. The car should always choose to kill the least number of people, without consideration for age, gender, or social or economic standing: just the least possible damage. This might only be instituted once autonomous cars are the norm rather than a minority, so that an accident is very unlikely and, hopefully, sidewalks have barriers as an added safety measure.