U.S. tech giants have quietly empowered Israel to track and kill many more alleged militants more quickly in Gaza and Lebanon through a sharp spike in artificial intelligence and computing services. But the number of civilians killed has also soared, fueling fears that these tools are contributing to the deaths of innocent people.
Militaries have for years hired private companies to build custom autonomous weapons. However, Israel’s recent wars mark a leading instance in which commercial AI models made in the United States have been used in active warfare, despite concerns that they were not originally developed to help decide who lives and who dies.
“This is the first confirmation we have gotten that commercial AI models are directly being used in warfare,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute and former senior safety engineer at OpenAI. “The implications are enormous for the role of tech in enabling this type of unethical and unlawful warfare going forward.”
What does she mean by "directly"? Israel would not use commercial software for actual warfare, for the simple reason that commercial software is not designed for that - the requirements in warfare are far stricter.
As the article says later, Israel's use of Microsoft Azure is mostly for translation, transcription and searching huge amounts of data. That is not "direct" use in war. The quote is either a lie on her part or a result of the AP not telling her what it knew itself.
But Khlaaf is not exactly an impartial scientist. She called the war in Gaza a genocide - on October 13, 2023.
As U.S. tech titans ascend to prominent roles under President Donald Trump, the AP’s findings raise questions about Silicon Valley’s role in the future of automated warfare.
The Israeli military says its analysts use AI-enabled systems to help identify targets but independently examine them together with high-ranking officers to meet international law, weighing the military advantage against the collateral damage. A senior Israeli intelligence official authorized to speak to the AP said lawful military targets may include combatants fighting against Israel, wherever they are, and buildings used by militants. Officials insist that even when AI plays a role, there are always several layers of humans in the loop.

“These AI tools make the intelligence process more accurate and more effective,” said an Israeli military statement to the AP. “They make more targets faster, but not at the expense of accuracy, and many times in this war they’ve been able to minimize civilian casualties.”
It’s extremely hard to identify when AI systems enable errors because they are used with so many other forms of intelligence, including human intelligence, sources said. But together they can lead to wrongful deaths.
“Should we be basing these decisions on things that the model could be making up?” said Joshua Kroll, an assistant professor of computer science.
The Israeli military said any phone conversation translated from Arabic or intelligence used in identifying a target has to be reviewed by an Arabic-speaking officer.
(I often use automated translation tools for my writing, but if anything seems off, I will verify with other tools or with a human expert. My writings are not life or death. Anyone assuming that an army would kill someone based on a single tenuous piece of information is not an honest person.)
The article lists potential mistakes that anonymous IDF officers admit to having seen in AI, like mistranslations or mislabeling a spreadsheet. But these are errors that humans can make too - and worse - and in these cases, human oversight caught the errors. If IDF workers don't follow procedure and ensure that data is verified, that is a problem that must be solved, but it is no different from any other army grunt not following proper procedures.
AP also seems shocked that Israeli employees at Microsoft might be - shudder - patriotic:
Microsoft also operates a 46,000-square-meter corporate campus in Herzliya, north of Tel Aviv, and another office in Gav-Yam in southern Israel, which has displayed a large Israeli flag.
Horrors! Israeli employees are Zionist!
Finally, the article quotes anti-Israel tech people to inject more fear in the reader:
Former Google software engineer Emaan Haseem was among those fired. Haseem said she worked on a team that helped test the reliability of a “sovereign cloud” — a secure system of servers kept so separate from the rest of Google’s global cloud infrastructure that even the company itself couldn’t access or track the data it stores. She later learned through media reports that Google was building a sovereign cloud for Israel.
“It seemed to be more and more obvious that we are literally just trying to design something where we won’t have to care about how our clients are using it, and if they’re using it unfairly or unethically,” Haseem said.
The entire point of sovereign clouds is regulatory and security compliance. Some data is not allowed to cross national borders for various reasons, including higher security requirements. It is not something Google created for Israel: all major cloud companies offer this option, and it is meant to safeguard the data. Haseem is saying that cloud companies should be responsible for how their services are used and act as Big Brother to ensure that the usage fits their own standards.
If that was true, they would have no customers.
Altogether, this is a hugely biased article of the sort we've seen often in major media. There are no direct lies, but it was written deliberately to imply that something really evil is happening when nothing of the sort is.
"He's an Anti-Zionist Too!" cartoon book (December 2024) PROTOCOLS: Exposing Modern Antisemitism (February 2022) |
![]() |
