The United States government is on the verge of deploying new artificial intelligence (AI) weapons that can make decisions on whether to kill human targets.
The frightening lethal autonomous weapons, which are being developed in the United States, China, and Israel, will automatically select humans deemed a “threat” to the system and eliminate them.
Some critics have voiced fears that the deployment of AI weapons would entrust machines with the decision of whether to kill human targets, with no human oversight, The New York Times reported.
This is starting to sound like Skynet.
The Mirror reported that numerous countries, including Austria, want the United Nations to pass a legally binding resolution that would restrict or outlaw the use of AI killer drones. However, a number of countries — including the US, Russia, Australia, and Israel — oppose such restrictions and would rather see a non-binding resolution.
“This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, said in an interview.
“What’s the role of human beings in the use of force — it’s an absolutely fundamental security issue, a legal issue, and an ethical issue.”
According to a notice published earlier this year, the US government is working on deploying swarms of thousands of AI-enabled drones.
US Deputy Secretary of Defense Kathleen Hicks said technologies such as AI-controlled drone swarms will allow the US to offset the numerical superiority of China’s People’s Liberation Army (PLA).
“We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat,” she said, according to Reuters.
Interesting Engineering reported: The Pentagon is reportedly developing a network of hundreds or even thousands of AI-enhanced, autonomous drones that could be rapidly deployed near China in the event of conflict.
These drones would carry surveillance equipment or weapons and would be used to take out or weaken China’s extensive network of anti-ship and anti-aircraft missile systems along its coasts and artificial islands in the South China Sea. This development could potentially be a major shift in military strategy.
Frank Kendall, the US Air Force secretary, said AI drones would need to have the capability to make lethal decisions under human supervision.
“Individual decisions versus not doing individual decisions is the difference between winning and losing — and you’re not going to lose,” he said.
“I don’t think people we would be up against would do that, and it would give them a huge advantage if we put that limitation on ourselves.”
The New Scientist noted that Ukraine used AI-controlled drones in its conflict with Russia in October.
However, it’s not known if the drones caused human casualties.
Stuart Russell, a senior AI scientist at the University of California, Berkeley, will screen a video on Monday during an event held by the Stop Killer Robots campaign at the United Nations Convention on Conventional Weapons.
The campaign issued a warning:
“Machines don’t see us as people, just another piece of code to be processed and sorted. From smart homes to the use of robot dogs by police, AI technologies and automated decision-making are now playing a significant role in our lives. At the extreme end of the spectrum of automation lie killer robots.”
“Killer robots don’t just appear – we create them,” the campaign added.
“If we allow this dehumanisation we will struggle to protect ourselves from machine decision-making in other areas of our lives. We need to prohibit autonomous weapons systems that would be used against people, to prevent this slide to digital dehumanisation.”
According to Russell, creating and deploying autonomous weapons would be disastrous for human security.
“The technology illustrated in the film is simply an integration of existing capabilities. It is not science fiction. In fact, it is easier to achieve than self-driving cars, which require far higher standards of performance,” Russell said.
The campaign also highlights that due to AI-powered machines being “relatively cheap to manufacture, critics fear that autonomous weapons could be mass-produced and fall into the hands of rogue nations or terrorists who could use them to suppress populations and wreak havoc, as the movie portrays.”
“A treaty banning autonomous weapons would prevent large-scale manufacturing of the technology,” BKR notes.
“It would also provide a framework to police nations working on the technology, and the spread of dual-use devices and software such as quadcopters and target recognition algorithms.”
“Professional codes of ethics should also disallow the development of machines that can decide to kill a human,” Russell added.
Much like Skynet, a “Black Mirror” episode from 2017 depicts AI robot dogs roaming the earth in pursuit of humans labeled a “threat.”
The creator of the episode, Charlie Brooker, explained his thinking behind the “killer robot dogs,” which now seems somewhat prophetic.
“That’s actually scarily correct,” Brooker said.
“It was from watching Boston Dynamics videos, but crossed with — have you seen the film All Is Lost? I wanted to do a story where there was almost no dialogue. And with those videos, there’s something very creepy watching them where they get knocked over, and they look sort of pathetic laying there, but then they slowly manage to get back up.”
AI is also already used in law enforcement.
Just last week, the Los Angeles Police Department used a robot dog to end an armed standoff.
A SWAT team member remotely controlled the robot as it approached the bus and entered through the door.
Artificial intelligence is also being utilized in aerial drones.
For example, DroneSense – a public safety drone software platform – transforms raw data gathered by drones into actionable intelligence for police.
The DroneSense OpsCenter allows several drone operators to work together, examine what each drone observes, and track flight paths in real time.