+972 Magazine has published a sensationalist article by Yuval Abraham about how Israel uses artificial intelligence in this war, claiming that Israel is killing thousands of civilians for no reason because an unfeeling machine makes the decisions and humans merely rubber-stamp airstrikes against the people it identifies or misidentifies - and their families.
It conflates several distinct issues in order to build an anti-IDF narrative.
First, it describes a database called "Lavender" that uses AI to generate large target lists much faster than humans could, drawing information from many disparate sources to identify potential human targets.
That is not a secret - the IDF published an article about this targeting database in early November, saying it had generated 12,000 potential targets at that time.
Then Abraham describes a different tracking system, reportedly called "Where's Daddy?", that identifies where specific terrorists are based on cell phone data and other sources. Intelligence officers would input names generated by Lavender into systems like Where's Daddy to track the human targets. Then, when a target was located, approval would be given to strike. The article describes how Hamas members were often hit in their homes.
According to IDF procedures, the intelligence officers are supposed to vet the information generated by Lavender before inputting the names into Where's Daddy. One of the intelligence officers interviewed claimed that this wasn't done: "'You put hundreds [of targets] into the system and wait to see who you can kill,' said one source with knowledge of the system. 'It’s called broad hunting: you copy-paste from the lists that the target system produces.'"
Another source reveals that he himself made decisions to kill people with no vetting: “One day, totally of my own accord, I added something like 1,200 new targets to the [tracking] system, because the number of attacks [we were conducting] decreased. That made sense to me. In retrospect, it seems like a serious decision I made. And such decisions were not made at high levels.”
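To make the dispute concrete, here is a minimal sketch of the workflow as the article describes it - machine-generated candidates that are supposed to pass through a human vetting gate before entering a tracking queue. Everything in it is invented for illustration; this is the generic human-in-the-loop pattern, not any real system:

```python
# A minimal, entirely hypothetical sketch of the workflow the article
# describes: machine-generated candidates are supposed to pass a human
# vetting gate before entering a tracking queue. Every name and function
# here is invented for illustration.
from typing import Callable

def generate_candidates(intel_sources: list[dict]) -> list[str]:
    """Stand-in for the AI step: fuse many sources into candidate names."""
    return [s["name"] for s in intel_sources if s.get("flagged")]

def enqueue_for_tracking(candidates: list[str],
                         analyst_review: Callable[[str], bool]) -> list[str]:
    # This filter is the vetting step IDF procedure requires; the
    # article's allegation is that it was sometimes skipped
    # ("you copy-paste from the lists that the target system produces").
    return [c for c in candidates if analyst_review(c)]

# Example: only candidates an analyst independently confirms pass through.
sources = [{"name": "candidate-a", "flagged": True},
           {"name": "candidate-b", "flagged": True}]
vetted = enqueue_for_tracking(generate_candidates(sources),
                              analyst_review=lambda c: c != "candidate-b")
print(vetted)  # ['candidate-a']
```

The entire dispute in the article is over whether that filter was applied honestly, applied cursorily, or skipped.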
The article paints a picture of an army that has thrown its previous standards for target selection and proportionality out the window - a vengeful IDF simply trying to cause as much pain as possible.
Finally, the people interviewed claim that the IDF would then use "dumb bombs" to kill the targets in their homes with little regard for the number of civilians who would be there. But the intelligence officers interviewed wouldn't make the decision of what weapon to use - that is a completely different department. As we've seen, "dumb bombs" are deployed in a smart way, so this part of the article is mostly sensationalism and supposition.
So what is the truth?
This is a different war from any previous war in Gaza. While the goal of previous wars was to deter Hamas from attacking Israel, in this war the goal is to destroy Hamas and Islamic Jihad. With different goals come different policies: killing a low-ranking Hamas member would not be a priority when the goal is deterrence, but it becomes far more important when the goal is to ensure Hamas cannot stage another October 7.
This would prompt the IDF to loosen its proportionality calculations after October 7. Today a low-level fighter's family would be considered acceptable "collateral damage" while in the past it would not have been.
Moreover, people who participated in October 7 itself - and there were thousands - would be a higher priority even if they are low-level members of Hamas, or "civilians" who enthusiastically took part in the pogrom.
And all of this is legal and still proportional under the laws of armed conflict.
As I noted previously, a German court decision gave, as an example of what is clearly disproportionate, the destruction of an entire village of hundreds of people to kill a single fighter:
[An] infringement [of the law] is only to be assumed in cases of obvious excess where the commander ignored any considerations of proportionality and refrained from acting “honestly”, “reasonably” and “competently” … This would apply to the destruction of an entire village with hundreds of civilian inhabitants in order to hit a single enemy fighter, but not if the objective was to destroy artillery positions in the village
That is the equivalent of destroying a high-rise apartment building to kill a single low-level fighter. The IDF doesn't do that - even this article says the IDF would level a private residence to hit a low-level fighter but reserved strikes on larger buildings for much more senior Hamas targets.
Killing a family that a fighter is hiding with is a tragedy but it is legal. The rules changed because the circumstances changed and military goals are different, but international law is still being followed.
The article says:
The sources said that the approval to automatically adopt Lavender’s kill lists, which had previously been used only as an auxiliary tool, was granted about two weeks into the war, after intelligence personnel “manually” checked the accuracy of a random sample of several hundred targets selected by the AI system. When that sample found that Lavender’s results had reached 90 percent accuracy in identifying an individual’s affiliation with Hamas, the army authorized the sweeping use of the system. From that moment, sources said that if Lavender decided an individual was a militant in Hamas, they were essentially asked to treat that as an order, with no requirement to independently check why the machine made that choice or to examine the raw intelligence data on which it is based.
The article also says that there was a requirement to vet the targets from Lavender, even though it claims the vetting was cursory - for example, officers would check that the person is an adult male.* This contradicts the claim that there was "approval to automatically adopt" these "kill lists." But the important question, which of course Abraham never asks, is: what is the accuracy of human-only intelligence? 90% may be far better than humans can do on their own with less data to work from. The 90% number in a vacuum doesn't mean anything.
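To see why, here is a toy calculation with invented numbers - only the 90% figure and the audit of "several hundred" targets come from the article; the human-only baseline is pure assumption:

```python
# Hypothetical numbers, invented for the sake of argument: the point is
# that an accuracy figure means nothing without a baseline to compare
# against.
sample = 300                    # roughly the "several hundred" audited

ai_accuracy = 0.90              # the figure +972 reports
ai_errors = round(sample * (1 - ai_accuracy))

human_accuracy = 0.75           # purely hypothetical human-only baseline
human_errors = round(sample * (1 - human_accuracy))

print(f"AI-assisted audit: {ai_errors} misidentified out of {sample}")
print(f"Human-only (if {human_accuracy:.0%} accurate): "
      f"{human_errors} misidentified out of {sample}")
# If human-only analysis were less accurate, the same 90% figure would
# represent an improvement; if more accurate, a regression. The raw
# number alone cannot tell you which.
```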
An army makes decisions based on the best information it has at the moment. These systems help commanders make these decisions with far more information than was available before. I agree that blindly following the recommendations of AI should never happen: a human must check how the target was identified, and as I've reported before, Israel's AI systems can be queried to understand the process behind their decisions.
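As a purely hypothetical sketch of what "queryable" could mean in practice - every name and field below is invented; this is just the general shape of an auditable decision record, not the actual system:

```python
# Hypothetical sketch of an auditable decision record: the general
# pattern behind a "queryable" recommendation. All names and fields
# here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    subject_id: str
    score: float                                        # model confidence, 0..1
    evidence: list[str] = field(default_factory=list)   # source references

    def explain(self) -> str:
        """Return the evidence trail a human reviewer can inspect."""
        lines = [f"subject={self.subject_id} score={self.score:.2f}"]
        lines += [f"  source: {src}" for src in self.evidence]
        return "\n".join(lines)

# A reviewer queries the record instead of trusting the score blindly.
rec = DecisionRecord("candidate-017", 0.91,
                     ["signal intercept #A12", "network-graph link #B7"])
print(rec.explain())
```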
I would not be surprised if mistakes were made in the weeks after October 7. No one should be surprised. It was a new kind of war, with new rules, and new procedures being made on the fly. There is also no doubt that the shock and anger from October 7 could and would affect human decision making.
Abraham writes:
“It has proven itself,” said B., the senior source. “There’s something about the statistical approach that sets you to a certain norm and standard. ... I have much more trust in a statistical mechanism than a soldier who lost a friend two days ago. Everyone there, including me, lost people on October 7. The machine did it coldly. And that made it easier.”
Even this article shows that as the IDF learned more, it adjusted its procedures to minimize unnecessary damage. The statistics bear this out - the number of casualties and the amount of damage have gone down significantly since the first chaotic weeks of the war.
The IDF adjusts its procedures in real time, in a war it did not plan, perhaps faster than any army in history.
Here is how the IDF itself responded to +972's questions:

The process of identifying military targets in the IDF consists of various types of tools and methods, including information management tools, which are used in order to help the intelligence analysts to gather and optimally analyze the intelligence, obtained from a variety of sources. Contrary to claims, the IDF does not use an artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a terrorist. Information systems are merely tools for analysts in the target identification process. According to IDF directives, analysts must conduct independent examinations, in which they verify that the identified targets meet the relevant definitions in accordance with international law and additional restrictions stipulated in the IDF directives.
The “system” your questions refer to is not a system, but simply a database whose purpose is to cross-reference intelligence sources, in order to produce up-to-date layers of information on the military operatives of terrorist organizations. This is not a list of confirmed military operatives eligible to attack.
According to international humanitarian law, a person who is identified as a member of an organized armed group (like the Hamas’ military wing), or a person who directly participates in hostilities, is considered a lawful target. This legal rule is reflected in the policy of all law-abiding countries, including the IDF’s legal practice and policy, which did not change during the course of the war.
For each target, IDF procedures require conducting an individual assessment of the anticipated military advantage and collateral damage expected. Such assessments are not made categorically in relation to the approval of individual strikes. The assessment of the collateral damage expected from a strike is based on a variety of assessment methods and intelligence-gathering measures, in order to achieve the most accurate assessment possible, considering the relevant operational circumstances. The IDF does not carry out strikes when the expected collateral damage from the strike is excessive in relation to the military advantage. In accordance with the rules of international law, the assessment of the proportionality of a strike is conducted by the commanders on the basis of all the information available to them before the strike, and naturally not on the basis of its results in hindsight.
As for the manner of carrying out the strikes – the IDF makes various efforts to reduce harm to civilians to the extent feasible in the operational circumstances ruling at the time of the strike.
In this regard, the IDF reviews targets before strikes and chooses the proper munition in accordance with operational and humanitarian considerations, taking into account an assessment of the relevant structural and geographical features of the target, the target’s environment, possible effects on nearby civilians, critical infrastructure in the vicinity, and more. Aerial munitions without an integrated precision-guide kit are standard weaponry in developed militaries worldwide. The IDF uses such munitions while employing onboard aircraft systems to calculate a specific release point to ensure a high level of precision, used by trained pilots. In any event, the clear majority of munitions used in strikes are precision-guided munitions.
The IDF outright rejects the claim regarding any policy to kill tens of thousands of people in their homes.
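For what it's worth, the "onboard aircraft systems to calculate a specific release point" the IDF mentions is presumably the kind of continuously computed impact point (CCIP) calculation that is standard in modern avionics. Here is a drag-free toy version of the underlying physics - a sketch, not real avionics:

```python
# A hedged sketch of the physics behind an unguided-bomb release point:
# the onboard computer continuously solves where a bomb released *now*
# would land. This toy version ignores drag, wind, and dive angle, so
# it is an illustration only.
import math

G = 9.81  # gravitational acceleration, m/s^2

def impact_range(altitude_m: float, airspeed_ms: float) -> float:
    """Horizontal distance a bomb travels after a level release."""
    fall_time = math.sqrt(2 * altitude_m / G)
    return airspeed_ms * fall_time

# Example: level release from 3,000 m at 200 m/s.
rng = impact_range(3000, 200)
print(f"fall time ~{math.sqrt(2 * 3000 / G):.1f} s, "
      f"impact ~{rng:.0f} m ahead of the release point")
# Real avionics solve this continuously (with drag, wind, and dive
# angle) and cue the pilot to release at the precise computed point.
```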
There is a tendency to sensationalize anything about AI and treat it like science fiction, with autonomous robots running amok. Reality is quite different, and facts matter.
____________________________________
* I wrote this post with the assumption that the +972 article is slanted but honest. But this part of the article makes no sense:
However, sources said that the only human supervision protocol in place before bombing the houses of suspected “junior” militants marked by Lavender was to conduct a single check: ensuring that the AI-selected target is male rather than female. The assumption in the army was that if the target was a woman, the machine had likely made a mistake, because there are no women among the ranks of the military wings of Hamas and PIJ.
Are they saying that the AI system looks at hundreds of disparate pieces of information but cannot tell whether someone is a male or female?
This makes me wonder whether the author didn't understand what the interviewees were telling him in the first place, or whether his grasp of technology is so poor that he wrote about it without knowing a thing about his subject.