MIT Develops Autonomous Vehicles That Can Figure Out How Big Of An Asshole You Are


We keep hearing that autonomous vehicles are coming soon, so very soon, but those of us who actually live and drive in the real world often smirk to ourselves knowingly, thinking about all the assholes that drive cars on a regular basis and what they’re like to share a road with. Good luck, robot, dealing with all those jackasses. It seems that MIT researchers have been thinking the same thing, because they’re developing systems to help autonomous vehicles determine, essentially, how big of an asshole you are behind the wheel.

Of course, the MIT researchers at the Computer Science and Artificial Intelligence Laboratory (CSAIL) don’t actually use the term “asshole,” preferring instead to classify driving behavior as cooperative, altruistic, or egoistic, grouping the first two as “prosocial”; “egoistic” is just their nicer name for “asshole.”

Essentially, what these researchers are attempting to do is to give the cars some crude analog to something we’re generally all born with: social awareness.

Graduate student Wilko Schwarting describes, for those of you unfamiliar, what it’s like driving around humans:

“Working with and around humans means figuring out their intentions to better understand their behavior. People’s tendencies to be collaborative or competitive often spills over into how they behave as drivers. In this paper, we sought to understand if this was something we could actually quantify.”

So, collaborative or competitive. That pretty much checks out—people either let you merge in front of them in traffic or they pretend they don’t fucking see you and block you out even though you know they see you.

MIT News describes the scenarios and the learning process:

To try to expand the car’s social awareness, the CSAIL team combined methods from social psychology with game theory, a theoretical framework for conceiving social situations among competing players.

The team modeled road scenarios where each driver tried to maximize their own utility and analyzed their “best responses” given the decisions of all other agents. Based on that small snippet of motion from other cars, the team’s algorithm could then predict the surrounding cars’ behavior as cooperative, altruistic, or egoistic — grouping the first two as “prosocial.” People’s scores for these qualities rest on a continuum with respect to how much a person demonstrates care for themselves versus care for others.

In the merging and left-turn scenarios, the two outcome options were to either let somebody merge into your lane (“prosocial”) or not (“egoistic”). The team’s results showed that, not surprisingly, merging cars are deemed more competitive than non-merging cars.
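If you want a feel for how you’d actually score assholery on a continuum like that, here’s a minimal toy sketch. The CSAIL work builds on Social Value Orientation (SVO), which boils the care-for-self-versus-care-for-others tradeoff down to a single angle; everything else below (the yield/block actions, the reward numbers, the label buckets) is my own made-up illustration, not the team’s actual code.

```python
import math

# Toy sketch only: the CSAIL paper builds on Social Value Orientation
# (SVO), which reduces "care for self vs. care for others" to an angle.
# Weight your own reward by cos(svo) and everyone else's by sin(svo):
# 0 = pure egoist (asshole), pi/4 = cooperative, pi/2 = pure altruist.
# The actions, rewards, and labels below are made up for illustration.

def utility(reward_self: float, reward_others: float, svo: float) -> float:
    return math.cos(svo) * reward_self + math.sin(svo) * reward_others

# The article's merge scenario: yielding costs you a little time but
# saves the merging car a lot; blocking does the reverse.
ACTIONS = {
    "yield": {"self": -1.0, "others": +3.0},
    "block": {"self": +1.0, "others": -3.0},
}

def best_action(svo: float) -> str:
    """What a rational driver with this SVO angle would choose."""
    return max(ACTIONS, key=lambda a: utility(ACTIONS[a]["self"],
                                              ACTIONS[a]["others"], svo))

def infer_labels(observed: str) -> str:
    """Which points on the continuum are consistent with what we saw?"""
    labels = {0.0: "egoistic", math.pi / 4: "cooperative",
              math.pi / 2: "altruistic"}
    return ", ".join(label for svo, label in labels.items()
                     if best_action(svo) == observed)

print(infer_labels("block"))  # -> egoistic
print(infer_labels("yield"))  # -> cooperative, altruistic
```

In the real system, an estimate along these lines, inferred from that “small snippet of motion,” is what feeds the game-theoretic predictions of what each surrounding car will do next.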

The team also took pains to determine when it made sense for a driver to act more “egoistic,” since there are situations where decisive action is essential to keeping the flow of traffic moving.

While the system is not yet ready for actual deployment, this is exactly the type of research that needs to happen, because it acknowledges that there’s so much more to driving than being able to see road lines and identify cars: there are innumerable “soft” problems that humans, by our nature as social beings, can manage intuitively, but that require a lot of work for machines to handle.

Ignoring things like this leads to issues like the one Uber had, for example, when one of its test vehicles wasn’t aware that humans don’t always choose to use crosswalks.

I know the goal of autonomous vehicles is to make cars that drive better than people do, but, paradoxically, to get there, we need to make them, at least a little bit, think like people.

(I happen to know a book, which you could buy, where this gets discussed a lot more!)