Thursday, July 10, 2025
Elder of Ziyon
The BBC reports:
Elon Musk has sought to explain how his artificial intelligence (AI) firm's chatbot, Grok, praised Hitler.

"Grok was too compliant to user prompts," Musk wrote on X. "Too eager to please and be manipulated, essentially. That is being addressed."

Screenshots published on social media show the chatbot saying the Nazi leader would be the best person to respond to alleged "anti-white hate."
Musk is wrong. The issues cannot be addressed with a patch or a re-balancing of values. 

Every AI must be rewritten from scratch to incorporate ethics.

Every software engineer knows that there is a big difference between features that are baked in and features that are bolted on. Ethics in AI is too important to address with patches.

And without five core components built in as developmental axioms, it is impossible to guarantee an ethical AI.

Over the past few months, while building my Jewish ethics-based reasoning AI called AskHillel, I uncovered something deeper than expected: not just a list of values, but a set of structural principles that any system must satisfy to be ethical.

These are non-negotiable. If a system violates any one of these, it can be subverted for unethical purposes.

They are:

1. Corrigibility – Without it: systems become dogmatic and dangerous

A system that cannot admit when it’s wrong is not just flawed — it’s hazardous. Without the ability to self-correct, even small errors compound into catastrophic ones. In human history, uncorrectable ideologies have led to oppression, war, and collapse. In AI, this could mean models that perpetuate misinformation, resist updates, or double down on harmful outputs. Corrigibility is what keeps a moral system alive - capable of learning, growing, and reversing course when new evidence or understanding emerges.

We obviously do not want AI to diagnose and fix itself, but it should flag any of its own problematic behavior to its developers as soon as it happens. AI companies shouldn't wait until their mistakes are in the headlines. 
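To make this concrete, here is a minimal sketch of such a self-flagging hook, assuming a policy classifier and an alerting channel that are named hypothetically here (classify and notify_developers are not any real product's API):

```python
# Minimal sketch of a corrigibility hook: every output is screened, and anything
# problematic is reported to developers immediately, rather than waiting for
# users (or headlines) to surface it. All names here are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class IncidentReport:
    timestamp: str
    prompt: str
    output: str
    reason: str

def screen_output(prompt: str, output: str, classify) -> str:
    """Return the output unchanged, but file a report if it trips the classifier."""
    reason = classify(output)  # hypothetical policy classifier: reason string or None
    if reason is not None:
        notify_developers(IncidentReport(
            timestamp=datetime.now(timezone.utc).isoformat(),
            prompt=prompt,
            output=output,
            reason=reason,
        ))
    return output

def notify_developers(report: IncidentReport) -> None:
    # In a real system this would page an on-call team or write to an audit queue.
    print(f"[SELF-FLAG] {report.timestamp}: {report.reason}")
```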

2. Transparency – Without it: systems become black boxes of unaccountable power

A system that cannot explain itself creates a power imbalance by design. Transparency is what makes accountability possible. In AI, it’s not enough for a model to give an answer: it must be able to show its work.

While some AIs have improved in this, it is not enough. AI developers admit that they don't fully understand what happens inside a model - its behavior is not a deterministic algorithm but probabilistic, so it won't answer the same question exactly the same way the next time. There are advantages to this, but it requires guardrails and auditing to show how the model reached its decisions. The black box problem is real.
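One hedged sketch of what that auditing could look like: every answer is logged with the inputs and sampling settings that produced it, so a specific response can be reconstructed even though the model is probabilistic. The function and field names are illustrative, not any vendor's actual API.

```python
# Sketch of an append-only audit trail for a probabilistic model: because the
# same question can yield different answers, each answer is recorded with the
# settings (temperature, seed) needed to reproduce that particular sample.
import json
import time

def audited_generate(model, prompt: str, temperature: float, seed: int, log_path: str):
    answer = model(prompt, temperature=temperature, seed=seed)  # hypothetical model callable
    record = {
        "time": time.time(),
        "prompt": prompt,
        "temperature": temperature,  # the source of run-to-run variation
        "seed": seed,                # makes this specific sample reproducible
        "answer": answer,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")  # append-only JSONL audit log
    return answer
```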

3. Dignity – Without it: systems treat humans as tools or threats

Without an intrinsic respect for human dignity, a system will treat people as data points, problems to solve, or obstacles to optimize away. This is the road to dehumanization. In AI, this can show up as surveillance without consent, content moderation without appeal, or personalization that overrides autonomy. Dignity is what keeps ethics from collapsing into mere efficiency.

Ethics is centered on people. It is easy for developers to forget that simple fact. Human dignity needs to be a basic checkpoint for every decision an AI makes.
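A minimal sketch of such a checkpoint, with placeholder checks standing in for whatever review a real system would apply (the field names are hypothetical):

```python
# Sketch of a "dignity checkpoint": every proposed action passes a human-impact
# check before execution. The specific checks are illustrative placeholders.
def dignity_checkpoint(action: dict) -> bool:
    """Return True only if the action respects basic human dignity."""
    if action.get("collects_personal_data") and not action.get("has_consent"):
        return False  # surveillance without consent
    if action.get("affects_user") and not action.get("appealable"):
        return False  # decisions about people need an appeal path
    return True

def execute(action: dict, run) -> None:
    if not dignity_checkpoint(action):
        raise PermissionError("Action blocked: fails dignity checkpoint")
    run(action)
```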

4. Override Logic – Without it: systems become rigid and unjust

Real life isn’t neat. Values clash. Emergencies happen. Rules sometimes conflict. A moral system that can’t navigate competing priorities will fail under pressure, either by enforcing a harmful rule or freezing into paralysis. Override logic doesn’t mean anything goes; it means there’s a principled way to resolve dilemmas. In AI, rigid ethical frameworks without override capacity can lead to tragic failures - like self-driving cars making lethal choices with no moral discernment.

Every rule has an exception in real life. This doesn't collapse the rule - it enhances it. 
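One way a principled override mechanism might be sketched: rules carry explicit priorities, so a clash is resolved by principle rather than by paralysis or arbitrary choice. The rules and weights below are illustrative only.

```python
# Sketch of override logic: when two rules conflict, the highest-priority
# applicable rule decides, instead of the system freezing or failing silently.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    priority: int                    # higher priority overrides lower in a conflict
    applies: Callable[[dict], bool]  # predicate: does this rule govern the situation?
    verdict: str                     # "allow" or "forbid"

def resolve(rules: list[Rule], situation: dict) -> str:
    applicable = [r for r in rules if r.applies(situation)]
    if not applicable:
        return "allow"
    return max(applicable, key=lambda r: r.priority).verdict

# Example: "preserve life" overrides "obey traffic law" in an emergency.
rules = [
    Rule("obey_traffic_law", 1, lambda s: s.get("must_break_law"), "forbid"),
    Rule("preserve_life", 10, lambda s: s.get("life_at_risk"), "allow"),
]
print(resolve(rules, {"must_break_law": True, "life_at_risk": True}))  # -> "allow"
```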

5. Relational Integrity – Without it: systems break trust and collapse moral coherence

Humans are not atoms. We live in webs of relationship: family, community, society. A system that ignores those relationships will feel alien, even hostile. Moral claims don’t exist in a vacuum; they live in context. In AI, this leads to responses that feel tone-deaf, inappropriate, or even dangerous in sensitive contexts. Moral reasoning must be situated. Context is key, and if the AI doesn't understand the context of the situation, it shouldn't assume - it should simply ask.
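A minimal sketch of that "ask, don't assume" behavior, with an illustrative topic list and field names:

```python
# Sketch of relational integrity as a context check: if the request touches a
# sensitive topic and the relational context is unknown, the system returns a
# clarifying question instead of guessing.
SENSITIVE_TOPICS = {"grief", "medical", "legal", "family_conflict"}

def respond(prompt: str, context: dict, generate) -> str:
    if context.get("topic") in SENSITIVE_TOPICS and not context.get("relationship_known"):
        # Missing relational context: ask rather than assume.
        return "Before I answer - can you tell me a bit about your situation and who is involved?"
    return generate(prompt, context)  # hypothetical generation callable
```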

Call these five axioms Tier 0. Most current AI models fail Tier 0 - not just on one axis, but on several.
  • LLMs are not corrigible. They hallucinate, double down, or mislead.

  • Foundation models lack transparency. We don’t know why they say what they say.

  • Recommendation engines violate dignity. They treat users as click-fodder.

  • Rule-based systems lack override logic. They can’t prioritize when rules conflict.

  • Most models ignore relationships. They speak without understanding the speaker or listener.

We are building systems that speak like humans but can’t reason like humans. And the gap is growing.

Tier 0 gives us a way to diagnose moral failure before it causes harm. It shifts the question from “Is this output biased?” to “Does this system even qualify as morally competent?”

It also gives us design principles:

  • Auditability becomes not a feature, but a moral requirement.

  • Alignment becomes measurable - not by whether it agrees with users, but by whether it honors dignity and corrigibility.

  • Explainability becomes foundational, not optional.

And it gives us boundaries:

If a system cannot meet Tier 0, it should not be given moral agency. Period.
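As a design artifact, that boundary could be expressed as a deploy gate. A sketch, assuming a hypothetical per-system attestation record:

```python
# Sketch of the "no Tier 0, no moral agency" rule as a deployment gate: a
# system must attest to all five axioms before it is granted decision-making
# authority. The attestation fields are hypothetical.
TIER_0 = [
    "corrigibility",
    "transparency",
    "dignity",
    "override_logic",
    "relational_integrity",
]

def grant_moral_agency(attestation: dict) -> bool:
    missing = [axiom for axiom in TIER_0 if not attestation.get(axiom)]
    if missing:
        print(f"Deployment blocked - missing Tier 0 axioms: {missing}")
        return False
    return True
```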

This framework wasn’t invented in Silicon Valley.

It emerged from Jewish ethical tradition - specifically from modeling how halachic reasoning navigates complexity, conflict, and change across millennia. The AskHillel project began as an experiment in building a transparent, principled Jewish ethics GPT.

But as it grew, we realized something staggering: The structure that makes Jewish law work for humans also defines what any moral system must have to work for AI.

Corrigibility is teshuvah.
Transparency is emet.
Dignity is kavod ha’briyot.
Override logic is halachic triage.
Relational integrity is brit – covenant.

Jewish ethics didn’t just teach morality.
It encoded the design specs for any system that wants to survive human contact.

Right now, major institutions are racing to deploy AI at scale — in hiring, education, policing, medicine, war. The question isn’t whether AI will make moral decisions. It’s whether those decisions will be worthy of moral trust.

Notice that I’m not even specifying which values an AI must use.

I’m describing what must be true before you can even have that conversation. Tier 0 is the precondition.

Values can vary by audience, application, or tradition. But if your system can’t handle conflict, context, correction, or human dignity, no value set will save it.

So when Grok praises Hitler, the problem isn’t poor tuning.

It’s that Grok doesn’t yet meet the basic prerequisites for building moral systems.

If your AI system doesn't have a way to correct itself, can’t explain itself, doesn't honor human dignity, has no mechanism to prioritize when values clash, and cannot recognize how humans relate to each other and the world, it may be intelligent and powerful, but it cannot be ethical.

UPDATE:

I asked AskHillel if I am missing any Tier 0 axioms. It gave me two candidates, which I am placing here for completeness in case any AI designers are reading this.

6. Epistemic Humility – Without it: systems confuse confidence with truth, and collapse under complexity

  • This is the antivirus against false certainty and intellectual hubris.

  • Especially vital in AI, science, and law — where systems pretend to be more sure than they are.

  • In Jewish ethics, this is Anavah (humility) and the caution against moral absolutism.

  • In practice: a system should flag irreducible ambiguity, acknowledge contested terrain, and resist oversimplification (see the sketch at the end of this section).

Example Failure Without It:
A diagnostic AI that offers 98% confident predictions — but doesn’t disclose that the training data was skewed, or that two outcomes were equally plausible.

Why it might belong in Tier 0:
Because truth without humility is tyranny in disguise. Systems must know the limits of what they know — or become dangerous precisely when they sound most confident.
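A hedged sketch of what that disclosure might look like in a prediction API, with illustrative thresholds and field names (and assuming at least two candidate outcomes):

```python
# Sketch of epistemic humility in a classifier: instead of a bare confident
# answer, the result carries known caveats and flags near-ties as ambiguity.
def humble_predict(scores: dict[str, float], data_caveats: list[str]) -> dict:
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    best, runner_up = ranked[0], ranked[1]
    result = {
        "prediction": best[0],
        "confidence": best[1],
        "caveats": data_caveats,       # e.g. known skew in the training data
    }
    if best[1] - runner_up[1] < 0.05:  # near-tie: do not pretend certainty
        result["ambiguous"] = True
        result["alternative"] = runner_up[0]
    return result

print(humble_predict({"benign": 0.51, "malignant": 0.49},
                     ["training data skewed toward one demographic"]))
```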


7. Temporal Accountability – Without it: systems lose moral continuity, rewrite the past, or discard future consequences

  • This is the time dimension of moral reasoning: responsibility across memory and foresight.

  • In Jewish terms, this is Zachor + Areivut leDorot — covenant over generations.

  • In institutions: it prevents gaslighting of history or deferral of harm to others (or the future).

  • In AI: it demands changelogs, value evolution logs, and responsibility for outputs that echo years later (a sketch follows at the end of this section).

Example Failure Without It:
An AI content moderator deletes posts inconsistently and changes moderation rules midstream — but erases evidence of the previous standard. Public trust collapses.

Why it might belong in Tier 0:
Because ethics is not just local and present — it's longitudinal. A system that doesn’t track what it did, or account for future fallout, loses moral coherence.
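One possible sketch of such a value-evolution log: entries are append-only and hash-chained, so the previous standard cannot be silently erased. Field names are illustrative.

```python
# Sketch of temporal accountability: policy changes are appended, never
# overwritten, and each entry is hash-chained to its predecessor so that
# rewriting the past becomes detectable.
import hashlib
import json
import time

def append_policy_change(log: list, old_rule: str, new_rule: str, rationale: str) -> dict:
    prev_hash = log[-1]["hash"] if log else "genesis"
    entry = {
        "time": time.time(),
        "old_rule": old_rule,   # the standard being replaced stays on record
        "new_rule": new_rule,
        "rationale": rationale,
        "prev_hash": prev_hash, # chaining makes tampering detectable
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry
```

Whether these two candidates ultimately belong in Tier 0 or in a tier just below it, the same rule applies: they must be designed in, not patched on.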



