Science fiction writer Isaac Asimov devised the Three Laws of Robotics in an attempt to create an ethical framework for humans and robots. In San Francisco, however, one group of robots is now entirely exempt from those laws. The city has approved the use of robots to deploy deadly force in certain circumstances.
A Possibly Too-Rapid Approval of Deadly Force Using Robots
It was just a week ago, on Nov. 23, that Engadget reported the San Francisco Police Department had raised the issue. The department petitioned the California city’s Board of Supervisors for permission to deploy robots capable of using deadly force against suspects.
The draft policy sought to give SFPD permission to deploy robots to kill suspects considered a sufficient threat to the life of people nearby. One member of the Board of Supervisors, unwilling to give the police that option, inserted a line in the policy stating, “Robots shall not be used as a Use of Force against any person.”
The SFPD crossed out that sentence with a red line and returned the draft. The altered proposal outlined in vague terms when robots could be used to deploy deadly force.
Robots will only be used as a deadly force option when risk of loss of life to members of the public or officers are imminent and outweigh any other force option available to the SFPD.
The Fictional Three Laws of Robotics
In science fiction, Asimov defined laws that robots had to adhere to. These laws first appeared in his 1942 short story “Runaround” and eventually permeated much of the science fiction genre. The laws are as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Later, Asimov added a fourth rule, known as the Zeroth Law, which supersedes the other three.
A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Apparently the San Francisco Police Department doesn’t see the wisdom in these laws.
A Last-Resort Line of Defense
While the Board of Supervisors ultimately approved the measure, it did make further amendments. The approved language says officials can only use robots equipped with explosive charges after exhausting all alternative de-escalation or force tactics.
Three of San Francisco’s supervisors opposed the policy: Shamann Walton, Dean Preston, and Hillary Ronen. Preston called the policy “deeply disturbing” and a “sad moment” for the city.
@Lee:
As Ars Technica points out, the explosives are intended to be lethal: https://arstechnica.com/gadgets/2022/11/san-francisco-allows-police-to-remotely-kill-suspects-with-robots/
Collateral damage, optional.
Jeff:
I’m surprised that you associate this SFPD decision with a lack of wisdom.
After all, it’s not like these robots will be kitted with firearms. Heaven forbid. These robots will not contribute one iota to gun violence, in fact, they may even prevent it. They’ll only be armed with explosives.
Brilliant, eh? The public no doubt feel safer already!
Are the explosives the flash-bangs already in use by law enforcement?
I doubt they are just flash-bangs, since those wouldn’t constitute lethal force. My understanding is they would be powerful enough to kill the perp.