Monday, July 17, 2023
Elder of Ziyon


Bloomberg published an article yesterday, "Israel Quietly Embeds AI Systems in Deadly Military Operations."

While the reporting isn't too bad, it still leans too heavily on sensationalist science-fiction fears of autonomous killer robots rather than on what Israel is actually doing.

The Israel Defense Forces have started using artificial intelligence to select targets for airstrikes and organize wartime logistics as tensions escalate in the occupied territories and with arch-rival Iran.

Though the military won’t comment on specific operations, officials say it now uses an AI recommendation system that can crunch huge amounts of data to select targets for airstrikes. Ensuing raids can then be rapidly assembled with another artificial intelligence model called Fire Factory, which uses data about military-approved targets to calculate munition loads, prioritize and assign thousands of targets to aircraft and drones, and propose a schedule.

While both systems are overseen by human operators who vet and approve individual targets and air raid plans, according to an IDF official, the technology is still not subject to any international or state-level regulation. Proponents argue that the advanced algorithms may surpass human capabilities and could help the military minimize casualties, while critics warn of the potentially deadly consequences of relying on increasingly autonomous systems.

“If there is a mistake in the calculation of the AI, and if the AI is not explainable, then who do we blame for the mistake?” said Tal Mimran, a lecturer of international law at the Hebrew University of Jerusalem and former legal counsel for the army. “You can wipe out an entire family based on a mistake.”
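To make concrete what a tool like Fire Factory is described as doing, here is a toy sketch of that kind of assignment-and-scheduling problem. To be clear: every name, field and number below is my own invention for illustration - the real system's design is classified, and this greedy baseline is far simpler than anything an actual planner would use.

```python
# Toy sketch of the *kind* of optimization an assignment/scheduling model
# performs. All names, fields, and numbers are invented for illustration;
# nothing here reflects the classified design of any real system.
from dataclasses import dataclass, field

@dataclass
class Target:
    name: str
    priority: int          # higher = strike sooner
    munition_kg: int       # smallest munition judged sufficient

@dataclass
class Aircraft:
    name: str
    payload_kg: int        # remaining payload capacity
    tasking: list = field(default_factory=list)

def assign(targets, aircraft):
    """Greedy baseline: walk targets in priority order and give each one
    to the first aircraft that still has payload for it. A real planner
    would optimize globally; this only shows the shape of the problem."""
    schedule = []
    for t in sorted(targets, key=lambda t: -t.priority):
        for a in aircraft:
            if a.payload_kg >= t.munition_kg:
                a.payload_kg -= t.munition_kg
                a.tasking.append(t.name)
                schedule.append((t.name, a.name))
                break
        else:
            schedule.append((t.name, None))  # flag for a human planner
    return schedule

targets = [Target("T1", 9, 250), Target("T2", 7, 500), Target("T3", 8, 250)]
fleet = [Aircraft("A1", 500), Aircraft("A2", 500)]
for target, platform in assign(targets, fleet):
    print(target, "->", platform or "UNASSIGNED - needs human review")
```

Even in this toy form, notice that anything the planner cannot confidently assign is flagged for a human rather than silently dropped - which is the pattern the IDF describes.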

Yet the entire point of using AI here is to reduce the chances of exactly the kind of mistake Mimran describes, because humans are far more error-prone than these systems are. This point is made, but downplayed, in articles like this one.

"Experts" have been warning about the dangers of automated war systems for a long time. And they get media exposure for their warnings, so they have no incentive to say how such technological advances can actually help save lives. Nowadays, "human rights" experts advance in their organizations based on how much media exposure they get so they sensationalize the negative and ignore the positive.

Choosing targets during wartime depends on intelligence, and intelligence is rarely perfect. Under international law, a military commander may decide to fire on a target based on the best information he or she has. As long as that decision did not recklessly ignore evidence that the target was not military, or that the strike would cause disproportionate harm to civilians, it is legal under the laws of armed conflict.

From everything I have read about Israel's use of AI in choosing targets, it increases the amount of information available to the commander by orders of magnitude. It can consider thousands of factors that humans would not even be aware of, and it can find patterns that people could not. As with US military AI research, it helps the commander make better decisions. As long as a human makes the ultimate decision to actually fire a weapon, AI will save lives, not endanger them.
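At its core, that kind of decision support is fusing many weak signals into a ranked recommendation that keeps its reasons attached, so the commander can see why something was flagged. A minimal sketch - the signal names and weights here are entirely made up, since no real system's features are public:

```python
# Illustrative only: how a recommender can fuse many weak signals into a
# ranked list *with reasons*, for a human to vet. Signals and weights are
# invented; no real system's features are publicly known.
WEIGHTS = {
    "signals_intercepted": 0.4,
    "observed_activity":   0.3,
    "pattern_match":       0.2,
    "source_reliability":  0.1,
}

def score(candidate):
    # Each signal is a 0..1 confidence; the breakdown is kept so a human
    # reviewer can see which factors drove the recommendation.
    contributions = {k: WEIGHTS[k] * candidate.get(k, 0.0) for k in WEIGHTS}
    return sum(contributions.values()), contributions

candidates = {
    "site_a": {"signals_intercepted": 0.9, "observed_activity": 0.7, "pattern_match": 0.8},
    "site_b": {"observed_activity": 0.4, "source_reliability": 1.0},
}
for name, feats in candidates.items():
    total, why = score(feats)
    print(f"{name}: {total:.2f}  {why}")
```

The point of keeping the per-factor breakdown is that the human reviewer is evaluating an argument, not just a number.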

Yes, there could be mistakes - just as there have been rare mistakes with autonomous vehicles. But autonomous vehicles make far fewer mistakes than human drivers do, and far more lives are saved than lost when they are properly deployed.

All public statements by Israeli military officials about their use of AI emphasize that they do not rely on AI to act autonomously - to both decide and act on that decision. There is always a human making the life-or-death decisions; AI makes those decisions easier and less error-prone. This should be celebrated.
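The doctrine being described is a human-in-the-loop gate: the model may only recommend, and nothing proceeds without an explicit human decision. A minimal sketch of that control structure, with invented names:

```python
# A minimal human-in-the-loop gate, sketching the stated doctrine: the
# model may only *recommend*; nothing proceeds without an explicit human
# decision. Function names are invented for illustration.
def human_in_the_loop(recommendations, approve):
    """`approve` is a human decision function returning True/False.
    The system never acts on anything the human has not approved."""
    approved, rejected = [], []
    for rec in recommendations:
        (approved if approve(rec) else rejected).append(rec)
    return approved, rejected

recs = ["plan-1", "plan-2", "plan-3"]
# A stand-in policy; in reality this is a trained officer's judgment.
ok, vetoed = human_in_the_loop(recs, approve=lambda r: r != "plan-2")
print("approved:", ok, "| held back for review:", vetoed)
```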

The legal issues around who is responsible when a self-driving vehicle kills a pedestrian do not map onto the military scenarios described in these articles, because the car is truly autonomous and makes its own decisions, while Israel is careful to ensure that life-or-death decisions are made by a person. That commander is just as responsible for the decision whether it is made with or without AI - and the decisions are likely to be better with AI.

Similarly, Israel's use of AI to help calculate munition loads is meant to save lives. The AI would choose the smallest possible munition that can accomplish the mission - which means fewer innocent civilians at risk. What could possibly be bad about that?
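The selection logic being described is simple to state: meet the mission requirement with the least destructive option, and kick anything marginal back to a human. A toy version, with an invented catalog and thresholds:

```python
# A toy version of "choose the smallest munition that can do the job."
# The catalog, radii, and thresholds are invented; the point is only the
# selection logic: meet the requirement with the least effect.
MUNITIONS = [
    # (name, effective_radius_m, estimated_hazard_radius_m)
    ("small", 10, 25),
    ("medium", 25, 60),
    ("large", 50, 120),
]

def select_munition(required_radius_m, max_hazard_radius_m):
    """Return the smallest munition whose effective radius covers the
    requirement, but refuse (return None) if its hazard radius exceeds
    the limit - that case goes back to a human planner."""
    for name, effect, hazard in MUNITIONS:  # ordered smallest first
        if effect >= required_radius_m:
            return name if hazard <= max_hazard_radius_m else None
    return None

print(select_munition(20, 100))   # -> 'medium'
print(select_munition(20, 50))    # -> None: too risky, human decides
```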

One legitimate issue is that many AI systems are opaque, and it is often important to know how the AI made a decision. But Israeli military AI is designed to make the process more transparent:

Another worry is that the fast adoption of AI is outpacing research into its inner workings. Many algorithms are developed by private companies and militaries that do not disclose proprietary information, and critics have underlined the built-in lack of transparency in how algorithms reach their conclusions. The IDF acknowledged the problem, but said output is carefully reviewed by soldiers and that its military AI systems leave behind technical breadcrumbs, giving human operators the ability to recreate their steps.
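What "technical breadcrumbs" could mean, in the simplest case, is an append-only audit log that records the inputs, model version and output of every recommendation, so a human can replay and verify any decision afterwards. This is a generic sketch of that idea, not a description of the IDF's actual logging:

```python
# Generic sketch of an append-only audit log: record the inputs, model
# version, and output of every recommendation so a human can replay and
# check any decision. Not a description of any real system's logging.
import json, hashlib, time

AUDIT_LOG = []

def recommend(inputs, model_version, model_fn):
    output = model_fn(inputs)
    entry = {
        "ts": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        # The hash ties the record to its exact contents, so tampering shows.
        "digest": hashlib.sha256(
            json.dumps([model_version, inputs, output], sort_keys=True).encode()
        ).hexdigest(),
    }
    AUDIT_LOG.append(entry)
    return output

def replay(entry, model_fn):
    """Re-run the same model on the logged inputs and compare."""
    return model_fn(entry["inputs"]) == entry["output"]

toy_model = lambda x: sorted(x)          # stand-in for a real model
recommend([3, 1, 2], "v1.0", toy_model)
print("reproducible:", replay(AUDIT_LOG[-1], toy_model))
```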

In that sense, the IDF's systems are ahead of most commercial AI systems used by businesses today.

Curiocial adds an interesting detail:

The IDF's operational use of AI remains shrouded in secrecy, with many details classified. However, hints from military officials indicate that AI has been employed effectively in conflict zones such as Gaza, Syria, and Lebanon.

In these regions, the IDF frequently faces rocket attacks, and AI has enabled rapid and precise responses to these threats. Additionally, AI is utilized to target weapons shipments to Iran-backed militias in Syria and Lebanon, showcasing the IDF's growing prowess in AI warfare.

Again, there is an enormous amount of information to process, and AI is simply a tool - like the automated algorithms before it - for quickly sifting through huge volumes of video, imagery and electronic intelligence data and finding things that humans would miss.

There are indeed moral and legal implications to the use of AI. As far as I can tell, Israel isn't ignoring those implications - it is ahead of the rest of the world in making the decisions that ultimately save innocent lives while going after the targets that threaten innocent people on its own side.



