Autonomous Vehicles Might Develop Superior Moral Judgment

Autonomous car concept

Martellaro’s Four Laws for Autonomous Vehicles

First, some definitions for this thought experiment using Method #1 above.

  • System. The autonomous driving software agent controlling the vehicle.
  • Minor Injury. An injury to a human that does not require professional medical care.
  • External Human. A human being who is outside the autonomous vehicle.
  • Passenger. A human being inside the autonomous vehicle.

Just for the sake of exploration, here is how one might construct, in the spirit of Asimov’s Laws, a first-order set of morality rules for an autonomous vehicle.

Image credit: “I, Robot,” 20th Century Fox

Law #1 (highest priority). The System must use all available means to avoid death or serious injury to the human Passengers.

Law #2. The System must use all available means to avoid death or serious injury to External Humans unless that conflicts with the first law.

Law #3. The System must use all available means to avoid death or serious injury to a Passenger’s animal pets unless that conflicts with the first or second law.

Law #4. The System must use all available means to protect itself from damage unless that conflicts with the first three laws.
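
Purely to make the priority ordering concrete, here is a minimal sketch, in Python, of how a System might rank candidate maneuvers lexicographically under these four laws. Everything in it — the harm scale, the class names, the example maneuvers — is a hypothetical assumption for this thought experiment, not anyone’s actual implementation.

```python
# Hypothetical sketch only: rank candidate maneuvers under the four laws.
# Every name, harm scale, and example below is an assumption made for this
# thought experiment, not a real vendor's decision logic.

from dataclasses import dataclass


@dataclass
class Outcome:
    """Predicted harm if a given candidate maneuver is executed."""
    passenger_harm: int = 0   # 0 = none, 1 = minor injury, 2 = death/serious injury
    external_harm: int = 0    # same scale, for External Humans
    pet_harm: int = 0         # same scale, for a Passenger's pets
    vehicle_damage: int = 0   # 0 = none, 1 = repairable, 2 = destroyed


def law_priority(outcome: Outcome) -> tuple:
    """Score an outcome as a tuple compared lexicographically:
    Law #1 first, then Laws #2-#4. Lower is better."""
    return (
        outcome.passenger_harm,   # Law #1: Passengers
        outcome.external_harm,    # Law #2: External Humans
        outcome.pet_harm,         # Law #3: pets
        outcome.vehicle_damage,   # Law #4: the vehicle itself
    )


def choose_maneuver(candidates: dict) -> str:
    """Return the name of the candidate maneuver with the best (lowest) score."""
    return min(candidates, key=lambda name: law_priority(candidates[name]))


if __name__ == "__main__":
    options = {
        "brake_in_lane": Outcome(passenger_harm=1, external_harm=2),
        "swerve_into_ditch": Outcome(passenger_harm=1, vehicle_damage=2),
    }
    # Swerving destroys the vehicle (Law #4) but spares the External Human
    # from serious injury (Law #2), so it wins under this ordering.
    print(choose_maneuver(options))  # -> swerve_into_ditch
```

The tuple comparison makes the ordering strict: any reduction in harm under a higher law outranks every consideration below it, which is exactly the kind of hard-coded value judgment the rest of this article questions.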

Discussion

Implicit in the second law is that minor injury to a Passenger is acceptable to avoid death or serious injury to an External Human.

Implicit in the third law is that minor injury to an External Human is acceptable to avoid death or serious injury to a Passenger’s pet.

Implicit in the fourth law is that a car can be replaced by insurance while a human being or beloved pet cannot. The vehicle must be capable of sacrificing itself. On the other hand, there’s no point in driving into a ditch and destroying the vehicle because a squirrel ran across the road. (An elk is a different matter when there are passengers on board.)

Clearly, this simple set of laws has problems. They are incomplete. For example, some may consider that a careless and belligerent pedestrian deserves a minor injury so that a beloved pet on board isn’t killed. I alluded to that varying set of values on page one. Moreover, there may be logical inconsistencies in the laws I listed that could be revealed by analysis or found wanting by a sophisticated Morality Engine. However….

Who Has The Final Say?

I presented these trial laws as a straw man for discussion about how humans might dictate their own terms. However, just as in the case of Asimov’s Laws, tech developers show little inclination to place this morality in the hands of individuals, science fiction authors, readers, professors, or the motoring public. Is that the way we want to proceed?

In other words, is there a definitive set of prescribed values that society can clearly agree on, given the time to hash out the details? That would certainly be something legislators could get their heads around. Or are the myriad possible situations so convoluted that only a machine can consistently get it right?

Do humans flatter themselves that they’re always the superior judge? We might, in the long run, be proven wrong.

Even so, are we going to bake the morality of autonomous vehicles so deeply into software that no engineering audit can ever discover flaws and rectify them? That’s a conscious decision for our future, but no one will ever get to vote on it.

Someday, there could be a headline: “Autonomous car kills child on bicycle in crosswalk. No one knows why.”

Soon, I surmise, autonomous cars will be delivered to customers. Most customers won’t have the foggiest idea how they work or the basis for their moral decisions.

That could be a dangerous road ahead, or it could be the only workable path we can all agree on. We’ll soon find out.

9 thoughts on “Autonomous Vehicles Might Develop Superior Moral Judgment”

  • @gnasher729: I agree with your line of thinking. Much is said of the “morality” of the machines and the worry that it will be less than that of a human driver. We all see (nearly every day) human drivers operate their current machines in arguably immoral ways. Speeding, sudden lane changes, ad nauseam. The vast majority of us really try to do the best we can when driving but as humans, when faced with the time-constrained choice of “who do I hit?”, are unlikely to make a substantially better choice than a machine following yours or some other set of rules.

    I quibble with the order of your set of rules, however. (This is where we get into societal norms.) I would move rule 1 to position 2 or even 3. The passengers are responsible for choosing to go by way of car (autonomous or otherwise). I think they bear the primary responsibility for initiating the journey and should bear the greater risk.

  • I keep seeing articles positing moral choices for automated vehicles, but they miss the point. We would no more program a vehicle to make such choices than we would teach drivers to. We don’t tell teenagers in Driver’s Ed to hit one pedestrian in order to avoid a group of three. We teach teenagers to drive responsibly in order to avoid such dilemmas.

    Automated systems need to be (and in practice are) designed to operate within safe boundaries or not to operate at all. The whole challenge is in determining the conditions under which the system can function safely. Tesla’s mistake was in designing a system that requires human supervision without designing into the system a mechanism to ensure that the driver was always paying attention. This is why we have regulations that set minimum performance requirements for safety.

  • Morally speaking, the only moral death is the one you choose for yourself that saves the lives of those with you.

    Machines aren’t alive and thus cannot make a moral choice of death. All machine choices that end in death are by definition immoral.

  • You’re looking at the problem from the completely wrong angle. There is no matter of morality here. There are some people _talking_ very loudly about moral problems, and for some reason they get a lot of attention, but looking at the problem this way is wrong.

    The first law and only law of a self driving car is: Avoid hitting things. That’s it.

    To implement the first law of driving, the self driving car isn’t allowed to drink and drive, and it’s not allowed to drive blindly, and it isn’t allowed to go into tricky situations at high speed. You may say that hitting a lamp post and hitting a child are different things, but as long as you avoid hitting them you are fine and no distinction is needed.

    Now collisions will not be totally avoidable, because the self driving car is surrounded by idiots. Every driver knows the feeling 🙂 By driving carefully, the self driving car will avoid situations where it causes damage. It may react better in situations where a human driver just thinks “oh shit oh shit oh shit what am I doing now” – I’ve had situations like that, and I suppose a self driving car would know at all times what’s left, right and behind and would often be able to take evasive action that I couldn’t. Now accepting that collisions are not totally avoidable, we can add a rule: “When hitting things, minimise the damage. Damage is calculated from speed, whether the thing looks human, and whether the thing looks big and hard”.

    If done right, a self driving car will never get into a situation where “morality” would come into play. If it does: The highway code (or whatever it’s called in the USA) doesn’t mention morality. The rules that I learned: Don’t hit people. Don’t put people into danger. Watch out for elderly people and children. That’s all humans need to know, no morals needed. That’s all a self driving car needs to know, too.

  • I don’t know how many times this needs to be said, but I guess it bears repeating because engineers seem to either keep forgetting or fail to grasp the concept altogether: morality is derivative of human compassion and it is not a faculty of logic. AI will never be in possession of ‘morality’, just its facsimile. *Anything* that it does will be the result of programming by us, at least at the root level; autonomy is a misnomer (something the silicon valley hype train excels at). So the answer to this question is a resounding no. By its very nature (which is math-based and therefore logic-based) software will never be capable of ‘morality’, only what we tell it morality is (and yes, human beings are capable of spontaneous compassion in spite of their ‘programming’; software never will be capable of spontaneity, period). Look no further than collateral damage from drone strikes for evidence, and those are by and large still mostly human controlled. It’s science fiction, folks, and it would be the epitome of stupid to think otherwise.

  • Consumer Watchdog is petitioning the NHTSA to slow down on its fast-track stance of letting Autonomous Vehicle development continue without much in the way of regulatory oversight.

    http://www.consumerwatchdog.org/resources/ltrrosekind072816.pdf

    At the end of the day, consumer rights organizations and the insurance industry will slow Autonomous vehicle development down to a 10- to 15-year introductory schedule instead of the 5- to 10-year time plan that you have stated in an earlier podcast. Can’t see the trucking industry getting on board with this due to union issues as well as insurance and safety issues for another 10 to 15 years at least. Not sure how you came up with a 5-year time plan. Way too ambitious a time plan.
