Intel seeks patent on resolving moral conflicts in autonomous cars

US patent application 20170285585 was published today. The title of the application sounds promising: “TECHNOLOGIES FOR RESOLVING MORAL CONFLICTS DURING AUTONOMOUS OPERATION OF A MACHINE”. And the introduction is even more promising. It explains that compute systems (such as those used in autonomous cars) can experience a moral conflict for which the hard-coded rules do not define a clear action to be taken. “A typical compute system of a controlled machine will shut down or return control to a human user. For example, in a situation in which an autonomous vehicle is faced with the two decisions to either impact a jaywalking person who has jumped into the roadway or swerve onto a nearby sidewalk to miss the jaywalking person while striking a bystander, the autonomous vehicle may be unable to make such a decision based on the standard operation rules. As a result, the autonomous vehicle may simply return control to the driver to deal with the complicated moral decision of choosing which person to possibly injure.”

I am excited: Intel’s moral agent can decide whether to run over a jaywalker or a bystander. How do they do that?

Unfortunately, the answer is technically pretty boring: Intel describes a system that makes decisions based on weighting factors, which they say can be updated over the air. It seems like the default answer that would first come to any engineer’s mind.

What makes Intel’s application interesting is that they assign clear values to individual players. For example, a “person” is worth slightly less than a “child”. And Intel prefers sports cars: A Corvette is worth twice as much as a Ford Focus – but only one fifth of what a dog is worth.

In other words: An autonomous vehicle with “Intel Inside” will opt to run into a Ford Focus rather than a Corvette. And it will opt to run into a person breaking the law rather than one who is not.
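
To make that concrete, here is a minimal sketch of what such weighted decision logic might look like. To be clear, this is my reading of the mechanism, not code from the application: the entity names, function names, and absolute numbers are invented; only the relative ordering follows the values described above.

```python
# A minimal sketch of the weighted-cost decision logic the application
# appears to describe. Entity names and absolute numbers are invented;
# only the relative ordering (a child above a person, a lawbreaker below
# a law-abiding person, a Corvette at twice a Ford Focus and a fifth of
# a dog) follows the filing as discussed above.

# The weight table: in Intel's scheme, nominally updatable over the air.
WEIGHTS = {
    "child": 100.0,
    "person": 90.0,
    "jaywalker": 85.0,   # assumed discount for breaking the law
    "dog": 20.0,
    "corvette": 4.0,
    "ford_focus": 2.0,
}

def action_cost(affected):
    """Total weighted cost of one candidate maneuver."""
    return sum(WEIGHTS.get(entity, 0.0) for entity in affected)

def choose_action(candidates):
    """Pick the maneuver whose impacted entities carry the lowest
    total weight. `candidates` maps an action name to the list of
    entities that action would impact."""
    return min(candidates, key=lambda action: action_cost(candidates[action]))

# The scenario from the application's introduction:
options = {
    "brake_and_hit_jaywalker": ["jaywalker"],
    "swerve_and_hit_bystander": ["person"],
}
print(choose_action(options))  # -> "brake_and_hit_jaywalker"
```

With weights like these, the “moral” decision reduces to exactly the arithmetic the application implies: whichever maneuver impacts the entities with the lowest summed weight wins.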

As interesting as the problem is, the Intel application ignores that today’s computing systems are utterly incapable of assessing complex situations well enough to apply even simple rules such as those Intel describes. Maybe they should ask their new friends at Mobileye what it would take to distinguish a K-9 officer from a regular dog. And the idea of letting a car distinguish a person breaking the law from one not breaking the law is just an illusion: a victim running onto a street to escape an assailant would inevitably be classified as “the person breaking the law” by a computer.

I suggest we keep moral decision-making with humans for now.