I’ve driven a number of cars that attempt to rate your driving on some criteria: safety, efficiency, performance, whatever. And you know what I’ve learned? These practices do not make you a better driver. What they do is train you to drive to fit the idiosyncratic criteria of whatever algorithm is evaluating you, and that almost never translates to better driving in the real world. Tesla’s new judgey system, Safety Score Beta, is looking like it won’t change how I feel about these things.
The Safety Score is part of Tesla’s deployment of a Request FSD Beta button that would allow any driver to participate in Tesla’s Full Self-Driving (FSD) Beta test, which, it’s worth mentioning, is a test of unfinished software designed to pilot a 4,000+ pound car on public roads, without the opt-in of any of the people or other drivers sharing those roads.
Any Tesla owner with properly updated software can push that little button and request access to be a Beta tester of FSD (though I believe it’s limited to those who have paid $10,000 for the promise of Full Self-Driving, which I have yet to absolutely confirm). But pushing that button to opt in doesn’t mean you just get to test out that Beta.
Tesla is being understandably careful here, which is why when you opt in, your past 30 days of driving behavior are sent to Tesla, and for at least the next seven days your driving will be evaluated by the Safety Score system.
Here’s what the dialog box that pops up when you opt in says:
“Thank you for your interest in limited early access Full Self-Driving Beta! The Tesla team is analyzing your vehicle driving data and Safety Score to determine eligibility. You can view your Safety Score from the Tesla app at any time (version 4.1.0 or newer). If eligible, you will receive a software update as part of the limited early access.
☑ I consent to the collection and review of ongoing VIN-associated vehicle driving data while enrolled.
☑ I understand that when using FSD Beta I am responsible for remaining alert with my hands on the wheel, and must be prepared to take action at any time. FSD Beta does not make my car autonomous.
☑ I understand that FSD Beta can be revoked at any time.”
So, what exactly is the Safety Score using to evaluate how safe a driver you are or aren’t? According to Tesla’s site, it’s five factors:
There are five Safety Factors that impact your Safety Score. These are measured directly by your Tesla vehicle using various sensors on the vehicle and Autopilot software.
Forward Collision Warnings per 1,000 Miles
Forward Collision Warnings are audible and visual alerts provided to you, the driver, in events where a possible collision due to an object in front of the vehicle is considered likely without your intervention. Events are captured based on the ‘medium’ Forward Collision Warning sensitivity setting regardless of your chosen setting in the vehicle. Forward Collision Warnings are incorporated into the Safety Score formula at a rate per 1,000 miles.
Hard Braking

Hard braking is defined as backward acceleration, measured by your Tesla vehicle, in excess of 0.3g. This is the same as a decrease in the vehicle’s speed larger than 6.7 mph, in one second. Hard braking is introduced into the Safety Score formula as the proportion of time (expressed as a percentage) where the vehicle experiences backward acceleration greater than 0.3g relative to the proportion of time where the vehicle experiences backward acceleration greater than 0.1g (2.2 mph in one second).
Aggressive Turning

Aggressive turning is defined as left/right acceleration, measured by your Tesla vehicle, in excess of 0.4g. This is the same as an increase in the vehicle’s speed to the left/right larger than 8.9 mph, in one second. Aggressive turning is introduced into the Safety Score formula as the proportion of time (expressed as a percentage) where the vehicle experiences lateral acceleration greater than 0.4g, in either the left or right direction, relative to the proportion of time where the vehicle experiences acceleration greater than 0.2g (4.5 mph in one second), in either the left or right direction.
Unsafe Following

Your Tesla vehicle measures its own speed, the speed of the vehicle in front and the distance between the two vehicles. Based on these measurements, your vehicle calculates the number of seconds you would have to react and stop if the vehicle in front of you came to a sudden stop. This measurement is called headway. Unsafe following is the proportion of time where your vehicle’s headway is less than 1.0 seconds relative to the time that your vehicle’s headway is less than 3.0 seconds. Unsafe following is only measured when your vehicle is traveling at least 50 mph and is incorporated into the Safety Score formula as a percentage.
Forced Autopilot Disengagement
The Autopilot system disengages for the remainder of a trip after you, the driver, have received three audio and visual warnings. These warnings occur when your Tesla vehicle has determined that you have removed your hands from the steering wheel and have become inattentive. Forced Autopilot Disengagement is introduced into the Safety Score formula as a 1 or 0 indicator. The value is 1 if the Autopilot system is forcibly disengaged during a trip, and 0 otherwise.
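To make those quoted definitions a bit more concrete, here’s a rough Python sketch of the proportion- and rate-style metrics Tesla describes. The thresholds (0.3g vs. 0.1g braking, 0.4g vs. 0.2g lateral, 1.0 s vs. 3.0 s headway at 50+ mph, warnings per 1,000 miles) come straight from the quoted text; everything else, including the per-sample data model and the function names, is my assumption, and Tesla’s actual formula for combining these factors into a single score isn’t reproduced here.

```python
# Illustrative sketch only: these helpers mirror the metric definitions
# quoted from Tesla's site, not Tesla's actual implementation.

def hard_braking_pct(long_accel_g):
    """Share of >0.1g braking samples that also exceed 0.3g, as a percentage."""
    moderate = [a for a in long_accel_g if a > 0.1]   # backward accel > 0.1g
    if not moderate:
        return 0.0
    hard = [a for a in moderate if a > 0.3]           # backward accel > 0.3g
    return 100.0 * len(hard) / len(moderate)

def aggressive_turning_pct(lat_accel_g):
    """Share of >0.2g lateral samples that also exceed 0.4g, either direction."""
    moderate = [a for a in lat_accel_g if abs(a) > 0.2]
    if not moderate:
        return 0.0
    hard = [a for a in moderate if abs(a) > 0.4]
    return 100.0 * len(hard) / len(moderate)

def unsafe_following_pct(headways_s, speeds_mph):
    """Share of sub-3.0s-headway time that is under 1.0s, at 50+ mph only."""
    eligible = [h for h, v in zip(headways_s, speeds_mph)
                if v >= 50 and h < 3.0]
    if not eligible:
        return 0.0
    unsafe = [h for h in eligible if h < 1.0]
    return 100.0 * len(unsafe) / len(eligible)

def fcw_rate(warning_count, miles_driven):
    """Forward Collision Warnings expressed per 1,000 miles."""
    return 1000.0 * warning_count / miles_driven
```

Note what falls out of this structure: one genuinely hard (say, 0.35g) emergency stop among only a handful of ordinary braking events produces a large percentage, because the metric is a ratio of hard events to moderate ones, with no sense of *why* you braked.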
Now, on the surface, there doesn’t really appear to be anything wrong with these factors, but like so many other things related to driving in, you know, reality, it’s more complex than it seems.
The Unsafe Following one I think is the least problematic. Don’t tailgate. That’s fine. And the Forced Autopilot Disengagement one is okay, in that it’s tattling on you if you’re being stupid and using Autopilot without your hands on the wheel, which is exactly where your hands need to be, because FSD/Autopilot is not in any way a fully automated driving system and, like all Level 2 systems, requires you to be engaged and ready to take over at a moment’s notice.
This is, of course, a huge problem for all Level 2 systems, and why I think they’re inherently flawed, since humans don’t work well in the peculiar pay-attention-but-you-don’t-have-to-pay-attention gray area L2 systems operate in.
Sure, 20 seconds is a long time to keep driving with a potentially inattentive or asleep driver, but at least it’s something, and they’re penalizing you for abusing the system.
Okay, so those two aren’t bad, but the Forward Collision Warning, Hard Braking, and Aggressive Turning ones give me some pause, because if a driver is incentivized to avoid these behaviors, they may make some bad decisions as a result.
The problem with these behaviors is that while none of them are the byproduct of a nice, easy, totally safe drive, they are sometimes the result of actions taken by a driver to avoid very unsafe situations.
If a kid or a dog or a monitor lizard bolts out in front of your car, then absolutely hit those brakes, hard! That’s what they’re for! There are all kinds of reasons why you may want or need your car to stop in a hurry, and hard braking can be the exact right thing to do, so nobody should be trying to deliberately avoid a potentially life-saving, if dramatic, action.
The same thing goes for Aggressive Turning; if a motorcyclist next to you hits an oily patch of pavement and swerves into your lane, then, yeah, turn aggressively to avoid them! Like hard braking, there can absolutely be times where an aggressive turn is the proper reaction, and drivers shouldn’t be reluctant to take those actions because they’re thinking about some dumb score.
Also, Forward Collision Warnings can be triggered by many things outside of the driver’s control, and using that as a penalty seems pretty iffy to me.
I can’t say for certain that every driver who opts in to have their driving evaluated by this algorithm will start driving to fit the rules instead of what reality demands, but we’re already seeing some hints that that’s exactly what’s happening:
...just in case that tweet gets deleted, here’s a screenshot:
I suppose there’s a chance the tweeters there are just joking, but it’s not like the behaviors they’re describing wouldn’t improve the Safety Score. Rolling through stop signs, blasting through yellow lights, not braking for cyclists and so on would all help your Safety Score, because the criteria defining this score are less about “safety” and more about smooth, consistent, and undramatic driving.
Yes, that often means safe driving, but not always, and sure as hell not when actions taken to achieve that sort of driving include shit like yellow light/stop sign running and not giving a shit about cyclists.
The fundamental problem here is that this system incentivizes drivers to drive according to a set of criteria and not the situation around them, and I can’t think of a context where that makes sense.
Defenders of the system remind us that this test, like everything else in Tesla’s FSD arsenal, isn’t done yet:
I’m not saying I disagree with this—it’s definitely a work in progress—but the fundamental concept is flawed. If you want to grade a driver for safety, you can’t do that just based on driving behaviors completely removed from their context. It doesn’t work.
We’ve seen some of this before with other driver-rating systems with potential rewards, like Progressive Insurance’s OBD-port monitoring device known as Snapshot.
That system also evaluates drivers on a similar hard-braking-is-bad, zero-context type of algorithm, and the flaws there have been noted for some time, along with people devising methods to game the system.
Unless Tesla wants to undertake a far more comprehensive evaluation process, where perhaps video clips of driving situations are evaluated along with driving behavior to get an actual sense of the context of those behaviors, the Safety Score will always be flawed, and always will be open to being gamed by drivers, very likely resulting in worse driving than if they, you know, just fucking drove.
When you’re driving, not driving like an idiot is its own reward, and knowing that will always be better than trying to please some stupid algorithm that’s never driven a car.