Wednesday, April 24, 2024

 Israel Is Letting Terminators Loose. Why Israel? What Is Wrong with This Small Country?

"FOR OVER a decade military experts, lawyers and ethicists have grappled with the question of how to control lethal autonomous weapon systems, sometimes pejoratively called killer robots. 

One answer was to keep a “man in the loop”—to ensure that a human always approved each decision to use lethal force. But in 2016 Heather Roff and Richard Moyes, then writing for Article 36, a non-profit focused on the issue, cautioned that a person “simply pressing a ‘fire’ button in response to indications from a computer, without cognitive clarity or awareness”, does not meaningfully qualify as “human control”.

That nightmarish vision of war with humans ostensibly in control but shorn of real understanding of their actions, killing in rote fashion, seems to have come to pass in Gaza. This is the message of two reports published by +972 Magazine, a left-wing Israeli news outlet, the most recent one on April 3rd. The Israel Defence Forces (IDF) have reportedly developed artificial-intelligence (AI) tools known as “The Gospel” and “Lavender” to “mark” suspected operatives of Hamas and Palestinian Islamic Jihad, two militant groups, as targets for bombing, according to Israeli officers familiar with the systems.

The sources claim that the algorithms have been used to create “assassination factories” in which the homes of thousands of Hamas members, including junior ones, are marked down for air strikes, with human officers providing merely cursory oversight. It is also claimed that the IDF would be willing to risk killing 15-20 civilians in order to strike a Hamas fighter. For Hamas battalion or brigade commanders, that number rose to more than 100 civilians. 

By contrast, in 2003 America’s comparable figure for Saddam Hussein, a head of state, was 30 civilians.

Israel denies these allegations. IDF officials say that AI tools like “The Gospel” and “Lavender” are not used to automatically generate targets. Instead, they were developed for the “target directorate”, a unit of the military intelligence branch tasked with locating and confirming potential targets, to manage huge quantities of data in various formats, collected by different intelligence-gathering agencies. The systems are supposed to fuse this data into a manageable format and present the relevant details to the intelligence analysts whose job it is to “incriminate” targets (or “re-incriminate” existing ones) for air strikes.

In their telling, the AI tools are “neutral”, used only for solving problems in managing big data, and do not replace intelligence officers, who view the relevant material and reach a decision. This would leave human beings in charge of both the analysis and the decision-making leading up to a strike.

All this has raised questions about how precisely AI is used in warfare. In recent years the public debate has focused on weapons that can choose their own targets in some fashion, such as the cheap drones used by Ukraine which can, in a growing number of cases, identify and strike targets without human approval. The IDF’s use of AI suggests that its larger role is more mundane, though it may include identifying potential targets.

Even before the war in Gaza, experts reckoned that military commanders—who bear ultimate legal responsibility for strikes—have a poor grasp of the intelligence processes that produce their target lists. AI would blur that further. “What AI changes is the speed with which targets can be identified and attacked,” says Kenneth Payne of King’s College London. “That means more targets hit, and all else being equal, more risk to civilians.”

The IDF claims that AI tools not only make target identification quicker, but also make it more accurate. Some Israeli intelligence officials acknowledge this is true only if they are used correctly. “An alert and conscientious officer will use these tools to ensure that the targets being hit are valid,” says an intelligence analyst. “But in war...tired and apathetic officers can too easily just rubber-stamp the targets suggested by the algorithms.”

The biggest problem remains a stark gap between high-level policy and its implementation on the ground, whether by the analysts marking down the targets or the officers in operational headquarters deciding on the actual strikes. “Nothing I have seen of them, up close and personal, leads me to believe that, at the high command level, they are anything other than professional,” says one Western official familiar with Israeli military operations. 

However, Israeli soldiers have said that in many cases commanders in different sectors in Gaza have exercised policies of their own, such as designating any adult men remaining in their sector as terrorists and giving orders for them to be fired on (clearly illegal under the laws of war).

Gaza would not be the first war where computers and code have been used to generate targets to kill. In Afghanistan and Iraq, American intelligence officers built highly complex “network diagrams” showing real or purported connections between people and places, with the aim of identifying insurgents. The process was often primitive, notes Jon Lindsay, an academic. “Reliance on telephone communication patterns alone without reference to other social context”, he writes, “might turn mere delivery boys into nefarious suspects.”

Most controversial of all was America’s drone assassination programme in Afghanistan, Pakistan and other countries. Documents leaked in 2015 and published by the Intercept, a news website, suggested that America had collected vast amounts of data on Pakistan’s mobile-phone network to perform what they called “automated bulk cloud analytics”. The leaked slides referred to “courier machine learning models”, suggesting that it was using what would today be called AI to find patterns in this data. Yet the analogy with Lavender is imperfect: drone strikes, though controversial, and shaped by AI-derived intelligence, were far rarer than Israeli strikes in Gaza and involved a slower and more careful targeting process than that reportedly being used by Israel.

During the Vietnam war, American forces built a sophisticated network of sensors along the Ho Chi Minh Trail. They used computers in Thailand to process the information, allowing their planes to drop bombs within minutes of suspected insurgents being detected. The system killed huge numbers of enemy soldiers, encouraging a focus on body counts that doomed the war. Israeli generals might take note." [1]

1. “Who is in control?” The Economist (London), Vol. 451, Iss. 9392, Apr 13, 2024: 48-49.
