Monday, December 30, 2024
Elder of Ziyon

The Washington Post headline: "Israel built an ‘AI factory’ for war. It unleashed it in Gaza."

The phrase "AI factory" implies that the IDF is blindly relying on AI for decisions that then recklessly kill innocents in Gaza that would have been safe if Israel relied only on humans.

When you separate the facts from the bias in the article, you see the opposite: AI is a tool that helps save lives and more accurately target terrorists.

Here's an example of how AI helps identify the terrorists who murdered 1,200 Israelis on October 7, and how the Post spins this as a bad thing:
People familiar with the IDF’s practices, including soldiers who have served in the war, say Israel’s military has significantly expanded the number of acceptable civilian casualties from historic norms. Some argue this shift is enabled by automation, which has made it easier to speedily generate large quantities of targets, including of low-level militants who participated in the Oct. 7 attacks.
The implication is that murderers, kidnappers and rapists shouldn't be targeted because they are merely "low-level." 

The WaPo is defending rapists as not being worthy of being targeted in war.

The article echoes what the New York Times said a few days ago: that Israel loosened its proportionality calculations in the wake of October 7. As I and others have written, this is entirely appropriate - and legal - since the nature of this war is different from the previous limited wars in Gaza, which were meant only to deter Hamas, not destroy it.

And a careful reading of the article shows that Israel uses AI as a tool, not as a replacement for human decision-making. There are checks and balances in the system. Even the people they interviewed anonymously agree:

“The more ability you have to compile pieces of information effectively, the more accurate the process is,” the IDF said in a statement to The Post. “If anything, these tools have minimized collateral damage and raised the accuracy of the human-led process.”

The IDF requires an officer to sign off on any recommendations from its “big data processing” systems, according to an intelligence official who spoke on the condition of anonymity because Israel does not release division leaders’ names. The Gospel and other AI tools do not make decisions autonomously, the person added.
Reviewing reams of data from intercepted communications, satellite footage, and social networks, the algorithms spit out the coordinates of tunnels, rockets, and other military targets. Recommendations that survive vetting by an intelligence analyst are placed in the target bank by a senior officer.

Using the software’s image recognition, soldiers could unearth subtle patterns, including minuscule changes in years of satellite footage of Gaza suggesting that Hamas had buried a rocket launcher or dug a new tunnel on agricultural land, compressing a week’s worth of work into 30 minutes, a former military leader who worked on the systems said.
Contrary to the impression that the WaPo tries to give, its own details show that the AI systems are far more effective and accurate than humans alone. There is no indication that the AI is doing anything that human analysts wouldn't do if they had infinite processing speed and could hold billions of pieces of information in their heads at once. AI is doing exactly what humans would do given enough resources, and humans check on its work.

Even the specific criticisms of AI that may be valid are minor in comparison with the benefits, and there is no indication that people were mistakenly targeted because of them.
An internal audit found some AI systems for processing the Arabic language had inaccuracies, failing to understand key slang words and phrases, according to the two former senior military leaders.

...For example, Hamas operatives often used the word “batikh,” or watermelon, as code for a bomb, one of the people familiar with the efforts said. But the system wasn’t smart enough to understand the difference between a conversation about an actual watermelon and a coded conversation among terrorists.

“If you pick up a thousand conversations a day, do I really want to hear about every watermelon in Gaza?” the person said.
And wouldn't you want a system that picks up and does an initial analysis of every mention of "watermelon" to check which of those may be a plan to murder Israelis?
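To be concrete about what that initial analysis might look like, here is a minimal sketch, in Python, of a first-pass keyword triage. The code word comes from the article, but the scoring heuristic, the function, and the sample intercepts are my own illustrative assumptions, not a description of any actual IDF system:

# Minimal keyword-triage sketch (illustrative assumptions only):
# flag every intercept that mentions a code word, then rank the hits
# by crude context cues so analysts review the likeliest ones first.

CODE_WORDS = {"batikh", "watermelon"}
CONTEXT_CUES = {"tonight", "bury", "move", "wire", "the house"}

def triage(intercepts: list[str]) -> list[tuple[int, str]]:
    """Return (score, text) pairs for flagged intercepts, highest score first."""
    flagged = []
    for text in intercepts:
        lowered = text.lower()
        if any(word in lowered for word in CODE_WORDS):
            score = 1 + sum(cue in lowered for cue in CONTEXT_CUES)
            flagged.append((score, text))
    return sorted(flagged, reverse=True)

sample = [
    "Buy two watermelons at the market for dinner",
    "Move the batikh to the house on the corner tonight",
]
for score, text in triage(sample):
    print(score, text)

A human analyst still has to read whatever gets flagged; the point of a pass like this is only to narrow a thousand daily mentions down to the handful worth a person's time.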

I had looked at a similar article by +972 in April. It added a detail that the Washington Post chose not to mention: before the IDF increased its reliance on AI to determine targets, it manually checked the AI's results against human analysts and only signed off on the system once it achieved 90% reliability. What it didn't mention is whether human analysts themselves reach that 90% reliability threshold.

In that same vein, the WaPo parrots criticism of AI without asking whether humans would do the job any better. An anonymous soldier - not an intelligence analyst - offers his own criticism; he was almost certainly found by the WaPo reaching out to +972 to locate disgruntled soldiers for the article:

At one point, the soldier’s unit was ordered to use a software program to estimate civilian casualties for a bombing campaign targeting about 50 buildings in northern Gaza. The unit’s analysts were given a simple formula: divide the number of people in a district by the number of people estimated to live there — deriving the former figure by counting the cellphones connecting to a nearby cell tower.

Using a red-yellow-green traffic light, the system would flash green if a building had an occupancy rate of 25 percent or less — a threshold considered sufficient to pass to a commander to make the call about whether to bomb.

The soldier said he was stunned by what he considered an overly simplified analysis. It took no account of whether a cellphone might be turned off or had run out of power or of children who wouldn’t have a cellphone. Without AI, the military may have called people to see if they were home, the soldier said, a manual effort that would have been more accurate but taken far longer.
And if they called people whose cellphones were off, how exactly would that help the analysis? And does he really think the AI system assumes all children have cellphones?
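For what it's worth, the calculation the soldier describes reduces to a ratio and a threshold. Here is a minimal sketch of it in Python; only the 25 percent "green" cutoff comes from the article, while the "yellow" cutoff and every name below are illustrative assumptions of mine, not the actual software:

# Minimal sketch of the occupancy estimate the article describes.
# Only the 25% "green" cutoff is stated in the piece; the "yellow"
# cutoff and every name below are illustrative assumptions.

def occupancy_rate(phones_connected: int, estimated_residents: int) -> float:
    """Current population estimate divided by the normal resident count."""
    if estimated_residents <= 0:
        return 0.0
    return phones_connected / estimated_residents

def traffic_light(rate: float, green_max: float = 0.25, yellow_max: float = 0.50) -> str:
    """Map an occupancy rate to the red/yellow/green flag shown to a commander."""
    if rate <= green_max:
        return "green"   # low enough to pass to a commander for a decision
    if rate <= yellow_max:
        return "yellow"
    return "red"

# Example: 40 phones connecting near a building estimated to house 200 people.
rate = occupancy_rate(phones_connected=40, estimated_residents=200)
print(rate, traffic_light(rate))   # 0.2 green

Everything here is a stand-in for whatever the actual system does; the point is only that the quoted description amounts to a ratio and a threshold that a commander then reviews.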

The most substantive criticism of the IDF's reliance on AI is not that the AI was too aggressive but that it was not aggressive enough, missing what humans could have found.

The phrase "AI factory" in the headline comes from a quote of someone who says that a culture that relies too much on technological factors is what ended up with dead Jews, not dead Arabs:
Two former senior commanders said they believe the intense focus on AI was a significant reason Israel was caught off-guard that day. The department overemphasized technological findings and made it difficult for analysts to raise warnings to senior commanders.

“This was an AI factory,” said one former military leader, speaking on the condition of anonymity to describe national security topics. “The man was replaced by the machine.”
If that is true, it is clearly a huge mistake by the intelligence community. Obviously, October 7 was an unprecedented intelligence blunder, and it will be investigated. But it wasn't a mistake made by AI; it was a failure of imagination combined with ignoring internal warnings. If AI had been trained properly to look for evidence of an invasion by Hamas, October 7 could have been avoided. It was humans who didn't think such an attack was in the realm of possibility, and humans who relied on non-AI technology, like cameras and fences, to defend Israel.

Again, AI is a tool. It is a tool that lets humans make faster, better, more accurate decisions. From the very start, before the world even considered AI as anything but science fiction, Israel has ensured that people are the ones who make the ultimate life-and-death decisions. In cases where humans treat AI as authoritative and don't bother to do their own checking, that is the fault of the person, not the AI.

Despite the tone of the article, it says nothing that indicates that Israel is using this tool inappropriately. As with any other tool, over time people learn its limitations and then adjust for them, and the IDF updates its procedures in real time as it gets more information. 

Finally, the article shows that the charge of "genocide" is a lie. If Israel is using AI and humans to target terrorists as accurately as possible and to minimize collateral damage - which is the entire point of using it - then that proves that Israel is not targeting civilians, is not firing indiscriminately, and is adhering to international law.



