Have autonomous robots started killing in war? The reality is messier than it appears

It’s the kind of story that can almost slip by as background noise these days: last week, several publications tentatively announced that a killer robot may have hunted down humans autonomously for the first time, based on a UN report on the civil war in Libya. As one headline put it: “The era of autonomous killer robots may already be here.”

But is it? As you might have guessed, this is a difficult question to answer.

The report sparked a debate among experts that goes to the heart of our problems confronting the rise of autonomous robots in war. Some said the stories were wrongheaded and sensationalist, while others suggested there was a nugget of truth in the discussion. Diving into the topic doesn’t reveal that the world quietly experienced the opening shots of the Terminator timeline in 2020. But it does point to a more prosaic and perhaps more depressing truth: that no one can agree on what a killer robot is, and that if we wait for one to appear, its presence in war will have long been normalized.

Cheery, isn’t it? At least it will take your mind off the global pandemic. Let’s dive in:

The source of all these stories is a 548-page report from the United Nations Security Council detailing the tail end of the Second Libyan Civil War, covering the period from October 2019 to January 2021. The report was published in March, and you can read it in full here. To save you time: it is an extremely thorough account of an extremely complex conflict, detailing troop movements, arms transfers, raids, and skirmishes between the war’s various factions.

The paragraph we are interested in, though, describes an offensive near Tripoli in March 2020, in which forces backing the UN-recognized Government of National Accord (GNA) routed troops loyal to General Khalifa Haftar (referred to in the report as the Haftar Affiliated Forces, or HAF). The full relevant passage reads:

Logistics convoys and retreating HAF were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (see annex 30) and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true “fire, forget and find” capability.

The Kargu-2 system mentioned here is a quadcopter built in Turkey: essentially a consumer-grade drone that is used to dive-bomb targets. It can be operated manually or steer itself using machine vision. A second paragraph in the report notes that retreating forces were “subject to continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems” and that the HAF “suffered significant casualties” as a result.

The Kargu-2 drone is essentially a quadcopter that dive-bombs enemies.
Image: STM

But that’s it. That’s all we have. What the report doesn’t say, at least not outright, is that human beings were killed by autonomous robots acting without human supervision. It says that humans and vehicles were attacked by a mix of drones, quadcopters, and “loitering munitions” (more on those later), and that the quadcopters had been programmed to work offline. But whether the attacks took place without connectivity is unclear.

These two paragraphs made their way into the mainstream press via a story in New Scientist, which ran a piece with the headline “Drones may have attacked humans fully autonomously for the first time.” New Scientist was careful to note that the military drones might have acted autonomously and that people might have been killed, but later reports lost this nuance. “Autonomous drone attacked soldiers in Libya all on its own,” read one headline. “For the first time, drones autonomously attacked humans,” said another.

Let’s be clear: the UN report does not say whether drones autonomously attacked humans in Libya last year, though it certainly suggests this could have happened. The problem is that even if it did happen, for many experts it simply isn’t news.

The reason some experts took issue with these stories is that they followed the UN’s wording, which doesn’t clearly distinguish between loitering munitions and lethal autonomous weapons systems, or LAWS (the policy jargon for killer robots).

Loitering munitions, for the uninitiated, are the weapon equivalent of seagulls at the beach. They hang around a specific area, hover above the fray, and wait to strike their target, usually military hardware of one sort or another (though it’s not impossible that they could be used to target individuals).

A classic example is the Israeli-made IAI Harpy, developed in the 1980s to target anti-aircraft defenses. The Harpy looks like a cross between a missile and a fixed-wing drone, and is launched from the ground into a target area where it can loiter for up to nine hours. It scans for telltale radar emissions from anti-aircraft systems and dives onto any it finds. The loitering aspect is crucial, because troops will often switch these radars off, given that they act like homing beacons.
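
To see why the waiting matters, here is a minimal back-of-the-envelope simulation. Everything in it is hypothetical (the 10 percent radar duty cycle, the nine-hour window, the function names); it models no real system and only illustrates the arithmetic of loitering versus arriving at a single fixed moment.

```python
import random

# Toy illustration of why loitering matters: if a radar acts like a
# homing beacon, crews switch it on only intermittently. A weapon that
# arrives at one fixed moment usually finds it dark; one that waits
# overhead just needs the radar to light up once.

random.seed(42)
LOITER_HOURS = 9         # hypothetical loiter window
EMIT_PROBABILITY = 0.1   # hypothetical chance the radar is on in any given hour

def radar_is_on() -> bool:
    return random.random() < EMIT_PROBABILITY

def one_shot_attack() -> bool:
    """Arrive at a single moment; succeed only if the radar is on then."""
    return radar_is_on()

def loitering_attack(hours: int = LOITER_HOURS) -> bool:
    """Wait overhead; succeed if the radar turns on at any point."""
    return any(radar_is_on() for _ in range(hours))

trials = 10_000
print("one-shot success:", sum(one_shot_attack() for _ in range(trials)) / trials)
print("loitering success:", sum(loitering_attack() for _ in range(trials)) / trials)
# Expected: roughly 0.10 versus 1 - 0.9**9, i.e. about 0.61.
```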

The IAI Harpy is launched from the ground and can stay in the target area for hours.
Image: IAI

“The question is, then, how is this the first time?” tweeted Ulrike Franke, a senior policy fellow at the European Council on Foreign Relations. “Loitering munitions have been on the battlefield for a while, most notably in Nagorno-Karabakh. What seems new here is not the incident, but that the UN report calls them lethal autonomous weapons systems.”

Jack McDonald, a lecturer in the Department of War Studies at King’s College London, says the distinction between the two terms is controversial and constitutes an unsolved problem in the world of arms regulation. “There are people who call ‘loitering munitions’ ‘lethal autonomous weapons systems’ and people who just call them ‘loitering munitions,’” he tells The Verge. “This is a huge, long-running thing, because the line between something being autonomous and something being automated has shifted over the decades.”

So is the Harpy a lethal autonomous weapons system? A killer robot? It depends on whom you ask. IAI’s own website calls it “an autonomous weapon for all weather,” and the Harpy certainly fits an improvised definition of “a machine that targets combatants without human oversight.” But if this is your definition, you have created a very broad church for killer robots. Indeed, under this definition a land mine is a killer robot, as it, too, autonomously targets combatants in war without human supervision.

If killer robots have been around for decades, why have there been so many discussions about them in recent years, with groups like the Campaign to Stop Killer Robots pushing for regulation of the technology at the United Nations? And what makes this incident in Libya special?

The rise of artificial intelligence plays a big role, says Zak Kallenborn, a policy fellow at the Schar School of Policy and Government. Advances in AI over the past decade have given weapons-makers access to inexpensive vision systems that can pick out targets as readily as your phone identifies pets, plants, and familiar faces in your camera roll. While these systems promise nuanced and accurate identification of targets, they are also far more prone to mistakes.

“Loitering munitions typically respond to radar emissions, [and] a kid walking down the street isn’t going to have a high-powered radar in their backpack,” Kallenborn tells The Verge. “But current AI systems are very brittle, so an AI targeting system could misclassify the child as a soldier. One study found that a change to a single pixel is sufficient to cause a machine vision system to draw radically different conclusions about what it sees. An open question is how often these errors occur during real-world use.”
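
That single-pixel brittleness is easiest to see in a stripped-down toy. The sketch below is not any real targeting model, and the study Kallenborn cites concerns deep networks; this is just a made-up linear classifier, deliberately placed near its decision boundary, where changing one pixel flips the label.

```python
import numpy as np

# A minimal sketch (a hypothetical toy, not any real vision system):
# a linear classifier over a 4x4 grayscale "image", constructed so its
# score sits just above the decision boundary. Flipping a single pixel
# is then enough to change the predicted label.

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))   # fixed, made-up model weights
bias = 0.0

def predict(image: np.ndarray) -> str:
    """Classify an image by the sign of a linear score."""
    score = float(np.sum(weights * image) + bias)
    return "class A" if score > 0 else "class B"

image = rng.random((4, 4))

# Nudge the bias so this image lands barely on one side of the boundary.
bias = -float(np.sum(weights * image)) + 1e-3
print(predict(image))               # -> class A

# Perturb exactly one pixel, pushing against the classifier's score.
perturbed = image.copy()
i, j = np.unravel_index(np.argmax(np.abs(weights)), weights.shape)
perturbed[i, j] += -np.sign(weights[i, j]) * 0.5
print(predict(perturbed))           # -> class B
```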

This is why the Libya incident is interesting, says Kallenborn, as the Kargu-2 system mentioned in the UN report does seem to use AI to identify targets. According to STM, the quadcopter’s manufacturer, it uses “machine learning algorithms embedded on the platform” to “effectively respond against stationary or mobile targets (i.e. vehicle, person etc.),” and demo videos appear to show exactly this. In the clip below, the quadcopter homes in on a mannequin in a stationary group.

But should we trust a manufacturer’s demo reels or brochures? And does the UN report make it clear whether machine learning systems were used in the attack?

Kallenborn’s reading of the report is that it “heavily implies” this was the case, but McDonald is more skeptical. “I think it’s plausible to say that the Kargu-2 as a platform is open to being used in an autonomous way,” he says. “But we don’t know if it was.” On Twitter, he also pointed out that this particular skirmish involved long-range missiles and howitzers, making it even harder to attribute casualties to any one system.

What we’re left with is, perhaps unsurprisingly, the fog of war. Or, more precisely, the fog of LAWS. We can’t say with certainty what happened in Libya, and our definitions of what is and isn’t a killer robot are so fluid that even if we knew, there would be disagreement.

For Kallenborn, this is sort of the point: it highlights the challenges we face in creating meaningful oversight of the AI-assisted battles of the future. Of course, the first use of autonomous weapons on the battlefield won’t be announced in a press release, he says, because if the weapons work as intended, they won’t look out of the ordinary at all. “The problem is that autonomy is, at its core, a matter of programming,” he says. “A Kargu-2 used autonomously will look exactly the same as a Kargu-2 used manually.”
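
Kallenborn’s point can be made concrete with a deliberately abstract sketch. None of the names below belong to STM’s software; they are hypothetical, and the only thing the sketch shows is that “autonomous” can reduce to a single configuration flag, so the fielded hardware and code look identical in either mode.

```python
from dataclasses import dataclass

# Hypothetical illustration: the same code ships in both modes, and
# only a configuration value differs. Examining a downed airframe
# would tell you nothing about which mode was active.

@dataclass
class MissionConfig:
    autonomous: bool  # the single difference between the two modes

def would_proceed(detection_matched: bool,
                  operator_approved: bool,
                  cfg: MissionConfig) -> bool:
    """Return whether the system would act on a machine-vision detection."""
    if cfg.autonomous:
        # Autonomous mode: the vision match alone suffices.
        return detection_matched
    # Manual mode: a human must approve the very same detection.
    return detection_matched and operator_approved

# Identical hardware, identical code, one boolean apart:
print(would_proceed(True, False, MissionConfig(autonomous=True)))   # True
print(would_proceed(True, False, MissionConfig(autonomous=False)))  # False
```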

Elke Schwarz, a senior lecturer in political theory at Queen Mary University of London who is affiliated with the International Committee for Robot Arms Control, tells The Verge that discussions like this show we need to move beyond “slippery and political” debates about definitions and focus instead on the specific functionality of these systems: what do they do, and how do they do it?

“I think we really need to think about the bigger picture […] which is why I focus on the practice as well as the functionality,” says Schwarz. “In my work I try to show that the use of these types of systems is very likely to exacerbate violent action as an ‘easier’ option. And, as you rightly point out, errors will very likely prevail […] which will likely be addressed only post hoc.”

Despite myriad difficulties, both in drafting regulation and in pushing back against the enthusiasm of militaries around the world to incorporate AI into their weapons, Schwarz says “there is significant momentum building among states and international organizations to push for a ban on systems that have the capacity to autonomously identify, select, and attack targets.”

Indeed, the UN is still reviewing possible regulations, with results due to be reported later this year. As Schwarz puts it: “With this news story having made the rounds, now is a great time to mobilize the international community toward awareness and action.”
