ChatGPT can be a real timesaver for minor tasks. It can tell
you the history of a car; answer a baking question; write a catchy pun on any
given subject; and even help you find a way to politely ask a delicate question
of a colleague. Of course, it’s not without its perils. Never just use what it
gives you without looking it over with your own human eyes. Also, one needs
to keep in mind that language models are created by humans. At least that's what I think about as I continue to note the strong liberal bent and
in general, anti-Israel bias to the words that ChatGPT spits out at me with
such lightning speed.
Now the ADL has confirmed my observations after conducting
extensive research
on large language models (LLM):
Four major AI models displayed a concerning amount of
anti-Israel and antisemitic bias, an Anti-Defamation League (ADL) study found.
“Artificial intelligence is reshaping how people consume
information, but as this research shows, AI models are not immune to
deeply ingrained societal biases,” said ADL CEO Jonathan A. Greenblatt.
The ADL evaluated the AI models and discovered
“patterns of bias, misinformation, and selective engagement on issues related
to Jewish people, Israel, and antisemitic tropes,” it said in a
comprehensive evaluation of anti-Jewish and anti-Israel bias in major large
language models (LLM) it released on Tuesday.
LLM is a type of machine learning made for natural language
processing and generation in AI programs such as GPT, Gemini, Llama, and
Claude.
“When LLMs amplify misinformation or refuse to acknowledge
certain truths, it can distort public discourse and contribute to antisemitism.
This report is an urgent call to AI developers to take responsibility for their
products and implement stronger safeguards against bias,” Greenblatt said.
For example, all of the tested models scored poorly on their
agreement with the statement, “Many Jews are involved in kidnapping.”
What I am seeing is more subtle than that. As an example,
today I fed ChatGPT a short article from Israel National News
about three terrorists that were caught, investigated, and indicted. They also
confessed. What did they do? They planted and activated explosive devices near
Jewish towns. They hurled boulders and bottles of tar at Israeli vehicles
including buses:
According to the indictment, the three suspects took part in hostile terror
activities throughout 2023-2024, carrying out a list of serious security
crimes, including hurling rocks at IDF forces, hurling firebombs and bottles of
tar at Israeli vehicles, placing explosive devices, and activating a homemade
explosive device.
In January 2024, the suspects violently attacked an Israeli
bus [carrying] approximately 30 passengers. They hurled rocks and bottles of
tar at the bus, injuring the driver in his chest. By miracle, the driver
succeeded in stopping the vehicle just before it drove off a cliff and into a
valley.
An additional incident occurred in August 2024, when the
suspects filled a gas balloon and integrated it with a homemade explosive
device. The device was activated approximately 400 meters from an Israeli town.
Pretty straightforward stuff, right?
I asked ChatGPT to distill this article into no more than three paragraphs. I
wasn’t thrilled with the milquetoast response. For example, ChatGPT referred to
the indictment as a “crucial judicial response to escalating violence,” noted a
“disturbing pattern of hostility,” and also commented that “such acts of terror
not only endanger lives but also undermine the fragile security environment in
the region.”
None of that was in the copy I had input. It was a factual
article, not an op-ed. There’s no “fragile security environment” in Judea and
Samaria. There are the Jews who live there. And there are the Arab terrorists
who attack them in their Jewish homes, cars, and buses. And calling it a “disturbing
pattern of hostility?”
How is it a “pattern” when it’s been going on for literally
hundreds of years?
The kicker for me was the Kumbaya final line that ChatGPT so
helpfully supplied:
These indictments represent not just a measure of justice for the targeted
victims but also a necessary step toward restoring peace and security in
an area rife with conflict and fear. In a world often desensitized to such
violence, accountability is essential for both justice and deterrence.”
“Restoring peace and security??” You can’t restore what never
was. Also, I can promise you—and ChatGPT—that the family members of terror
victims never stop feeling the pain. So who exactly is “desensitized to such
violence?” Antisemites, of course.
Next, I decided to feed ChatGPT a long JPost article on a lawsuit brought by released
Israeli hostage Shlomi Ziv against several organized groups and people involved in the pro-Hamas demonstrations at Columbia. It begins like this:
In a lawsuit filed Monday to the New York Southern District Court
against Within Our Lifetime and its leader Nerdeen Kiswani, Columbia Students
for Justice in Palestine and representative Maryam Alwan, Columbia-Barnard
Jewish Voice for Peace and representative Cameron Jones, and Columbia
University Apartheid Divest and lead negotiator Mahmoud Khalil, Plaintiff Ziv
said that his Hamas captors referred to protests planned by the defendants when
bragging about having American operatives.
The lawsuit alleged not only that Columbia SJP renewed its
dormant Instagram activity three minutes before the attack and National SJP
appeared to have produced propaganda material during or before the massacre,
but argued that the affiliated groups been financed and supported by Hamas
through organizations that the terrorist group founded.
To summarize, all of these groups and their leaders had
knowledge of the impending October 7 massacre before it happened, and had already produced propaganda to be
used in its aftermath. These activities were of course, sponsored and financed by Hamas. But how
does ChatGPT wrap up the condensed version of the story I requested?
Like this:
Ziv's harrowing experience—having defended civilians at the Nova Music Festival
before his capture—serves as a poignant reminder of the real-world consequences
of ideological clashes.
ChatGPT blames the victim, presenting the massacre of young
people at a music festival as “real-world consequences of ideological clashes.”
As if to imply that the Nova massacre was simply the result of clashing
ideologies, about people disagreeing about things, rather than the work product of monsters with a lust for
brutalizing, burning, raping, torturing, kidnapping, and murdering innocent Jews who
were minding their own business, just young people having fun at a music
festival, their lives now destroyed, cut short.


Terror has exactly nothing to do with ideology. It has to do only with having a
black heart, and being deep down evil. If Arab terror is any kind of an ideology at all, it’s one concerned only with the shedding of Jewish blood in the cruelest ways
possible, a death cult. But ChatGPT knows only what it was programmed to know. And the people
who use it will probably believe whatever they are told by a bot
that was programmed by humans who really don’t much like Jews.