Wednesday, February 28, 2024

SpyTalk writes:

What has made the war in Gaza so much deadlier and destructive than Israel’s previous operations against Hamas is the combination of its use of artificial intelligence, which generates more targets than ever before, and the IDF’s relaxation of rules limiting strikes against non-military targets and civilians,  according to little noticed statements by current and former IDF officials, as well as an investigation by the Israeli-Palestinian +972 online magazine and the Hebrew-language news site, Sikha Mekomit, or Local Call. 

According to the IDF’s official website (which was down on Saturday), the military’s intelligence branch created its AI-assisted targeting directorate in 2019. The website disclosed it employs an AI-assisted target creation platform called Habsora in Hebrew (the Gospel, in English) in the IDF’s war against Hamas “to produce targets at a fast pace.”

In an interview published a few months before the Gaza war, retired Lt. Gen. Aviv Kochavi, who stepped down as the IDF’s chief of staff last year, described the AI-assisted targeting platform as “a machine that processes vast amounts of data faster and more effectively than any human, translating them into actionable targets.”

To illustrate the impact that the system has had on targeting, Kochavi said that before the platform was created, the IDF’s intelligence branch would produce 50 targets in Gaza in a year. “Once this machine was activated, it generated 100 new targets every day,” Kochavi said.

There have been a number of articles like this claiming that AI is helping the IDF kill more civilians. They base those claims on readers' ignorance about AI and about the IDF, and on a reflexive hatred of Israel that overwhelms any attempt at fairness.

All of these articles mention the much larger number of potential targets being identified. Very few mention that the ultimate decision on what will be hit is made by humans.

As Bloomberg reported in its own biased piece before October 7:

Though the military won’t comment on specific operations, officials say it now uses an AI recommendation system that can crunch huge amounts of data to select targets for airstrikes. Ensuing raids can then be rapidly assembled with another artificial intelligence model called Fire Factory, which uses data about military-approved targets to calculate munition loads, prioritize and assign thousands of targets to aircraft and drones, and propose a schedule.

While both systems are overseen by human operators who vet and approve individual targets and air raid plans, according to an IDF official, the technology is still not subject to any international or state-level regulation. Proponents argue that the advanced algorithms may surpass human capabilities and could help the military minimize casualties, while critics warn of the potentially deadly consequences of relying on increasingly autonomous systems.
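
What Bloomberg describes here is, at its core, a classic resource-assignment problem of the kind operations researchers have handled for decades. The following is a deliberately toy Python sketch of that general shape of problem; every name (Target, Platform, assign) and every rule is invented for illustration based only on the quote's description, and claims nothing about how Fire Factory itself actually works:

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    priority: int           # higher = strike sooner
    munition_required: int  # abstract "cost" of servicing this target

@dataclass
class Platform:
    name: str
    munition_capacity: int

def assign(targets: list[Target], platforms: list[Platform]) -> dict:
    """Greedy toy scheduler: hand the highest-priority targets to
    whichever platform still has capacity. Real planners juggle far
    more constraints (timing, routes, deconfliction); this only shows
    the shape of the problem."""
    plan: dict[str, list[str]] = {p.name: [] for p in platforms}
    remaining = {p.name: p.munition_capacity for p in platforms}
    for t in sorted(targets, key=lambda t: -t.priority):
        for p in platforms:
            if remaining[p.name] >= t.munition_required:
                plan[p.name].append(t.name)
                remaining[p.name] -= t.munition_required
                break
    return plan
```

The point of the sketch is that "calculate munition loads, prioritize and assign targets, propose a schedule" is well-understood optimization work, not autonomous decision-making.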

Generating targets based on AI is exactly the same as generating targets based on other kinds of intelligence, just much faster. There is no change in procedures. There is no change in policy. If anything, these AI tools can minimize mistakes.
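
That point is easy to show in miniature. The sketch below is purely hypothetical (all names are invented, and nothing here is drawn from any real IDF system); it illustrates that a human approval gate looks identical whether a model or an analyst filled the queue:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class CandidateTarget:
    """A nomination, whether produced by a model or a human analyst."""
    description: str
    source: str                  # e.g. "AI", "SIGINT", "HUMINT"
    confidence: float
    approved: Optional[bool] = None

def vet(candidates: list[CandidateTarget],
        human_review: Callable[[CandidateTarget], bool]) -> list[CandidateTarget]:
    """Every candidate passes the same human approval gate regardless
    of source. The recommender changes how fast this queue fills,
    not who decides what leaves it."""
    cleared = []
    for c in candidates:
        c.approved = human_review(c)  # a person makes the call
        if c.approved:
            cleared.append(c)
    return cleared
```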

These articles all have the same pattern. They call upon "experts" who know literally nothing about the IDF or its policies, and then spin nightmare scenarios where "with the push of a button" the IDF could choose to have the machines take over and make all the decisions. 

The SpyTalk article relies on two retired, anonymous CIA officials who compare Israel's AI with Vietnam War-era technology used by the US.  Really. 

They say things like:

“In war, AI systems require the ingestion of quality inputs of intelligence at a massive level,” he said. “This is particularly true for intelligence, surveillance and reconnaissance systems input.  But that’s not effective when you have little current intel on the enemy, who is hiding underground and who doesn’t maneuver as armies do, but rather as insurgents amidst a civilian population.”

If the old method took a week to identify a target, how timely would that information be by the time the target is attacked? Isn't a system that identifies targets faster going to be inherently more accurate?

And, yes, targets are now underground. AI doesn't make that problem worse than it was, and arguably it can identify the entrance and exit points of tunnels far better than humans can: by spotting people who enter one building and exit from another, for example, or by seeing when cell phone signatures jump from one place to another. Of course computers would do a better job at that kind of correlation than humans would.
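
To make the "signature jump" idea concrete, here is a minimal, entirely hypothetical sketch of that kind of correlation. The thresholds and names are invented, and any real system would be vastly more sophisticated; the sketch only shows why a machine can do this tirelessly at a scale no analyst could:

```python
from collections import Counter

def suspected_tunnel_links(observations, max_gap_s=300, min_count=5):
    """Flag building pairs where the same device ID repeatedly drops
    off the network at one building and reappears at another within a
    short window (the 'signature jump' pattern described above).

    observations: (device_id, timestamp_s, building_id) tuples,
    sorted by timestamp.
    """
    last_seen = {}     # device_id -> (timestamp, building)
    jumps = Counter()  # (building_a, building_b) -> jump count
    for dev, t, bldg in observations:
        if dev in last_seen:
            prev_t, prev_bldg = last_seen[dev]
            if bldg != prev_bldg and (t - prev_t) <= max_gap_s:
                jumps[(prev_bldg, bldg)] += 1
        last_seen[dev] = (t, bldg)
    # repeated jumps between the same pair suggest a physical passage
    return [pair for pair, n in jumps.items() if n >= min_count]
```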

And, again, humans review the AI output for accuracy, just as they would have reviewed recommendations from human analysts before. There is literally no difference in the process, except that this is much faster and better.

Another criticism often leveled at AI is that it is sometimes impossible to understand how it reaches a decision. But this is something Israel recognized and appears to have addressed long ago. Bloomberg says:

Many algorithms are developed by private companies and militaries that do not disclose proprietary information, and critics have underlined the built-in lack of transparency in how algorithms reach their conclusions. The IDF acknowledged the problem, but said output is carefully reviewed by soldiers and that its military AI systems leave behind technical breadcrumbs, giving human operators the ability to recreate their steps.
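
"Technical breadcrumbs" is a standard auditability pattern in machine-learning deployments, not something unique to any one military. Here is a generic sketch of what such an audit trail could look like; every name and detail is assumed for illustration, not taken from any described system:

```python
import hashlib
import json
import time

def log_breadcrumb(model_version: str, inputs: dict, output: dict,
                   path: str = "audit.jsonl") -> str:
    """Append one record per recommendation so a reviewer can later
    reconstruct exactly what the model saw and what it proposed."""
    record = {
        "ts": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    # hash the record so tampering with the trail is detectable
    record["id"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()[:16]
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]
```

With a trail like this, "recreating the steps" of a recommendation is a matter of replaying logged inputs through the logged model version.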

In other words, the "experts" are basing their knowledge of AI on playing with ChatGPT rather than understanding the state of the art. 

These same "experts" predicted thousands of IDF deaths and Gaza cities becoming deathtraps. AI isn't the only reason the IDF's performance has surpassed expectations, but it is a major one.

The effectiveness cannot be denied. Hamas deaths outnumber IDF deaths by roughly 50 to 1, even though Hamas deploys teams of only two or three fighters at a time while the IDF fields at least ten soldiers on any given mission.

West Point's Articles of War blog summarized things nicely last year:

One encouraging aspect is that it seems the IDF seeks to use tools that complement human decision making, rather than as substitutes for the human factor. It is important to maintain a human in the loop in order to promote accountability, because we are not fully aware of the capabilities and the risks of AI tools. In this regard, Israel is setting a positive example that should be followed.

Too many articles on Israel's use of AI are based on science fiction scenarios of robots running amok. The IDF has been researching AI for over a decade, and it applies its own moral code to these tools as it does to every other weapon. It thought through these problems, and solved them, long before the "experts" started charging consulting fees for their uninformed opinions.

(h/t Martin)


