A trio of computer scientists from the Rensselaer Polytechnic Institute in New York recently published research detailing a potential AI intervention for murder: an ethical lockout.
The big idea here is to stop mass shootings and other ethically incorrect uses for firearms through the development of an AI that can recognize intent, judge whether it’s ethical use, and ultimately render a firearm inert if a user tries to ready it for improper fire.
That sounds like a lofty goal; in fact, the researchers themselves refer to it as a "blue sky" idea. But the technology to make it possible already exists.
According to the team’s research:
Predictably, some will object as follows: “The concept you introduce is attractive. But unfortunately it’s nothing more than a dream; actually, nothing more than a pipe dream. Is this AI really feasible, science- and engineering-wise?” We answer in the affirmative, confidently.
The research goes on to explain how recent breakthroughs from long-term studies have led to the development of various AI-powered reasoning systems that could make a fairly simple ethical judgment system for firearms practical to implement.
This paper doesn’t describe the creation of a smart gun itself, but the potential efficacy of an AI system that can make the same kinds of decisions for firearms users as, for example, cars that can lock out drivers if they can’t pass a breathalyzer.
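The breathalyzer comparison reduces the concept to a single pass/fail gate: the device only operates if the check succeeds. A minimal Python sketch of that interlock pattern; all names here are hypothetical illustrations, not from the paper:

```python
from enum import Enum, auto

class Action(Enum):
    ENABLE = auto()   # firearm (or ignition) operates normally
    DISABLE = auto()  # mechanism locked out

def interlock(screening_passed: bool) -> Action:
    """Single pass/fail gate, analogous to an ignition interlock:
    the car only starts, or the firearm only readies, if the
    screening check passes."""
    return Action.ENABLE if screening_passed else Action.DISABLE
```

The AI the researchers describe would replace the boolean input with a far harder judgment about human intent, but the lockout mechanics downstream of that judgment are this simple.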
In this way, the AI would be trained to recognize the human intent behind an action. The researchers describe the recent mass shooting at a Walmart in El Paso and offer a different view of what could have happened:
The shooter is driving to Walmart with an assault rifle and a massive amount of ammunition in his vehicle. The AI we envisage knows that this weapon is there, and that it can be used only for very specific purposes, in very specific environments (and of course it knows what those purposes and environments are).
At Walmart itself, in the parking lot, any attempt on the part of the would-be assailant to use his weapon, or even position it for use in any way, will result in it being locked out by the AI. In the particular case at hand, the AI knows that killing anyone with the gun, except perhaps e.g. for self-defense purposes, is unethical. Since the AI rules out self-defense, the gun is rendered useless, and locked out.
This paints a wonderful picture. It's hard to imagine any objections to a system that worked perfectly. Nobody needs to load, rack, or fire a firearm in a Walmart parking lot unless they're in danger. If the AI could be developed to allow firing only in ethical situations, such as self-defense, target practice at a firing range, or hunting in designated legal areas, thousands of lives could be saved every year.
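Under the (large) assumption that the AI can correctly label the current situation, the permission logic described above amounts to a whitelist check. A hedged sketch; the context labels and the `judge_fire` helper are illustrative, not from the paper:

```python
# Hypothetical whitelist of contexts in which firing would be permitted.
ETHICAL_CONTEXTS = {"self_defense", "firing_range", "legal_hunting_area"}

def judge_fire(inferred_context: str) -> bool:
    """Permit firing only when the AI's inferred context is on the
    whitelist; any other context (e.g. a store parking lot) locks
    the weapon out."""
    return inferred_context in ETHICAL_CONTEXTS
```

The hard research problem is not this check but producing `inferred_context` reliably, which is exactly the intent-recognition work the paper argues is now feasible.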
Of course, the researchers anticipate myriad objections. After all, they're focused on navigating the US political landscape; in most civilized nations, gun control is common sense.
The team anticipates people pointing out that criminals will just use firearms that don’t have an AI watchdog embedded:
In reply, we note that our blue-sky conception is in no way restricted to the idea that the guarding AI is only in the weapons in question.
Clearly the contribution here isn’t the development of a smart gun, but the creation of an ethically correct AI. If criminals won’t put the AI on their guns, or they continue to use dumb weapons, the AI can still be effective when installed in other sensors. It could, hypothetically, be used to perform any number of functions once it determines violent human intent.
It could lock doors, stop elevators, alert authorities, change traffic light patterns, text location-based alerts, and any number of other reactionary measures including unlocking law enforcement and security personnel’s weapons for defense.
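Because detection is decoupled from any particular weapon, the responses listed above suggest a simple dispatcher: once violent intent is flagged, every registered countermeasure fires. A speculative sketch with made-up countermeasure names, not anything proposed verbatim in the paper:

```python
from typing import Callable

# Hypothetical countermeasures; each takes a location and reports
# what it did. Real systems would trigger actuators and alerts instead.
def lock_doors(location: str) -> str:
    return f"doors locked near {location}"

def alert_authorities(location: str) -> str:
    return f"authorities alerted to {location}"

def send_area_alerts(location: str) -> str:
    return f"location-based alerts sent for {location}"

COUNTERMEASURES: list[Callable[[str], str]] = [
    lock_doors,
    alert_authorities,
    send_area_alerts,
]

def respond_to_threat(location: str) -> list[str]:
    """Fan out every registered countermeasure once violent
    intent has been detected at the given location."""
    return [measure(location) for measure in COUNTERMEASURES]
```

A registry like this also makes the system extensible: new responses (elevator stops, traffic-light changes, unlocking law enforcement weapons) are added without touching the detection logic.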
The researchers also figure there will be objections based on the idea that people could hack the weapons. This one’s pretty easily dismissed: firearms will be easier to secure than robots, and we’re already putting AI in those.
While there's no such thing as total security, the US military fills its ships, planes, and missiles with AI, and we've managed to keep adversaries from hacking them. We should be able to keep police officers' service weapons just as safe.
Realistically, it takes a leap of faith to assume an ethical AI can be made to understand the difference between situations such as, for example, home invasion and domestic violence, but the groundwork is already there.
Driverless cars offer a parallel: people have already died because they relied on an AI to protect them. But the potential to save tens of thousands of lives is too great to ignore in the face of what has so far been a relatively small number of accidental fatalities.
It's likely that, just like Tesla's AI, a gun control AI could result in accidental and unnecessary deaths. But approximately 24,000 people in the US die annually from suicide by firearm, 1,500 children are killed by gun violence, and almost 14,000 adults are murdered with guns. It stands to reason that an AI intervention could significantly decrease those numbers.
You can read the whole paper here.
Published February 19, 2021 — 19:35 UTC