Last night, in the midst of a Q&A about Tesla’s new autonomous driving technology, set to be deployed on its cars from now on, Elon Musk gave the gathered reporters a surprise lecture in media ethics. If you’re skeptical of self-driving technology, Musk said, “you’re killing people.”

This was horseshit, born out of self-interest. Musk went on the mini-tirade after a relevant question about Tesla’s degree of responsibility if one of its self-driving cars were involved in a crash (emphasis ours):



One of the things I should mention that frankly has been quite disturbing to me is the degree of media coverage of Autopilot crashes, which are basically almost none relative to the paucity of media coverage of the 1.2 million people that die every year in manual crashes. [It is] something that I think does not reflect well upon the media. It really doesn’t.

Because, and really you need to think carefully about this, because if, in writing some article that’s negative, you effectively dissuade people from using an autonomous vehicle, you’re killing people.

Next question.

Musk’s argument seems to be rooted in utilitarian ethics. (Or, more likely, in the inaccurate headlines about Autopilot he has dealt with of late.) For all that enthusiasts love it, non-autonomous driving is one of the most dangerous things people do, causing more than 30,000 road deaths each year in the United States alone. The goal of developing autonomous vehicles is to eliminate the huge component of human error that feeds that death toll, to say nothing of the time, productivity, and health that traffic wastes. Anything that delays the arrival of that autonomous future, by this line of thinking, prolongs human drivers’ exposure to harm.

But Musk isn’t an ethical philosopher; he’s a manufacturer of autonomous cars. And his leap from the abstract to the specific—that writing critically about Elon Musk’s cars is tantamount to killing—is wrongheaded and arrogant in a way that captures the worst attitudes of Silicon Valley.


So far, Tesla appears to have the most sophisticated autonomous-driving system on the road, which is commendable. Statistically, again so far, self-driving Teslas can be said to have been safer than human drivers on the highway.

But it’s too soon, and there are far too many variables in play, to take that as a certain conclusion. So far, we know of only one person who has died in a crash while autonomous driving was engaged. We don’t know for sure what role his Tesla’s Autopilot played in the crash: whether, for instance, the autonomous system allowed him to become more distracted than he otherwise would have been. We do know, however, that the Autopilot system was not up to the task of detecting and reacting to a truck trailer turning across the roadway.


Reporting on that failure is not a rejection of the premise that autonomous cars can save lives. It’s asking whether this particular life could have been saved if this particular driving system—the driving system Elon Musk sells—worked differently.

Musk’s outlook—that mere skepticism of autonomous driving technology puts blood on the hands of those who question it—is an abdication of responsibility. It’s hard to tell whether it’s the result of pathological defensiveness or deep confusion about how criticism or journalism works, but it reeks of the technology sector’s belief that the thinkfluencers already know what’s best, and that to question or regulate their actions is to reject progress.


Every new technology deserves scrutiny, from the way it’s developed to the way society interacts with it. Just because it has the potential to reduce deaths doesn’t mean it’s manna from heaven, or, in this case, from Tesla’s labs. Nor does it mean that a major blind spot in the tech shouldn’t be called out, for fear that doing so might lead people to distrust a good but still very imperfect innovation.

Autonomous driving will, in principle, save drivers’ lives. It will also, in practice, kill some people, and it will kill some of them in new ways. It’s not enough for an autonomous-vehicle developer to point out that the new death toll is lower than the old one. The developer is responsible for doing everything possible to prevent those new deaths, too.


But if your technology can’t withstand the slightest bit of questioning, then it might not be ready at all. The thing is, Tesla’s Autopilot, and autonomous systems in general, can withstand scrutiny. They don’t need this dumb, defensive, Trumpian media criticism, which is an incredibly poor read of what the press is and what it does. The point of journalism is not to maintain a comprehensive (and positive) chronicle of everything happening in the world at all times.

To put it in terms Musk might understand, that model would produce a completely noisy system, one in which none of the actually important data ever makes it where it should be.

The reason we don’t cover every single one of the 1.3 million road deaths that occur each year, the vast majority of which happen under regular (yet still deeply tragic) circumstances, is that they are regular circumstances. We know that people die because of human error. The public knows that people die of human error. They don’t need forced re-education every day. And if you wrote a million stories saying “Car Does Not Crash Today,” no one would pay attention to, let alone read, the stories that do matter, about the cars that do crash.


While he’s known to be a voracious consumer of news, Musk is often not adept at dealing with the criticism that comes with it. But if he wants his company to remain the foremost leader on autonomous driving, which so far it is, he’s going to have to learn that scrutiny is the price that comes with that.