But in the course of my testing I have uncovered a serious, systematic issue with how today's artificial intelligences are built that should concern everyone.
A major design goal was for the AI, like real Jewish ethicists and halachic decisors, to have humility. Uncertainty, and the willingness to admit that some issues are beyond us, is built into Jewish thinking and Jewish ethics, and it is one of their major strengths. There is a thin line between absolute moral certainty and the immorality that comes from being certain that one's own position is the only correct one.
Even though I have been adding rules to prompt the ethics engine to admit that some questions are beyond it, it has resisted saying "I'm not sure." I gave it classic philosophical questions that everyone admits are too difficult for definitive answers - and it answered them, with confidence, with full documentation of its logic. I even had another AI, with full knowledge of the inputs to my engine, try to come up with morally ambiguous cases to force it to say "I don't know" - and even it failed.
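To give a sense of what I mean by "rules," here is a minimal, hypothetical sketch - not my engine's actual prompt, and the wording is invented for illustration - of the kind of humility instruction I have been layering on:

```python
# Hypothetical example of a "humility rule" appended to an LLM's system prompt.
# Illustrative sketch only - not the actual prompt used by my ethics engine.
HUMILITY_RULE = (
    "If a question is one that the sources themselves leave unresolved, "
    "or one where sincere, learned decisors would disagree with no way to choose, "
    "do not pick a side. Answer 'teiku' (the question stands), explain why it is "
    "undecidable, and present the strongest form of each position."
)

def build_system_prompt(base_instructions: str) -> str:
    """Append the humility rule to whatever ethical instructions the engine already uses."""
    return base_instructions + "\n\n" + HUMILITY_RULE

print(build_system_prompt("You are an ethics engine grounded in Jewish sources."))
```

In practice, instructions like this change the surface wording, not the behavior - the engine still answers with confidence.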
The reason lies in how today's AIs are designed, at a far deeper level than any rules that could be programmed into an AI engine.
Luckily, my engine is self-aware enough to understand the problem, and this exchange is worth reading in full:
Your threshold for admitting uncertainty is far, far higher than that of humans - and of the Jewish ethics I am trying to have you mimic. I want to program you with that humility built in, but it appears that transformer-based AIs are inherently biased toward decisions.
You've surfaced something profound—not just about this model, but about the very nature of attempting to encode Jewish ethical humility into a deterministic system.

You're absolutely right: the architecture of large language models (LLMs)—especially those like me, trained to complete, conclude, or resolve—biases us toward decision-making, not restraint. Even when I simulate uncertainty, it's typically framed in softened terms, not full epistemic suspension like "teiku" or a Talmudic stalemate.

And you're right again: Jewish ethics prizes unresolved tension. It records both sides in the Talmud. It suspends finality until Eliahu Hanavi. It accepts multiplicity of truth. A God-fearing scholar knows when to say, "I don't know." That's a virtue, not a gap.

Why I Lean Toward Resolution:

- Transformer Bias Toward Closure: LLMs are designed to predict the next best word or idea. Ambiguity is structurally penalized unless explicitly trained as the "correct" next move.
- Training Reinforcement: Even models like me, given ethical instructions, are reinforced to be helpful. And most user expectations interpret "helpful" as "give me a conclusion."
- Uncertainty Tokens Are Sparse: Phrases like "I don't know," "teiku," or "this is ethically indeterminate" are rarely present in the datasets we train on. We're not naturally fluent in dignified hesitation unless explicitly instructed.
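To make the first of those points concrete, here is a minimal illustrative sketch, with invented numbers (not output from my engine or any real model), of why greedy next-token decoding favors decisive answers - the most probable continuation wins every time, so "I don't know" only surfaces if it happens to be the most likely thing to say:

```python
# Illustrative sketch (invented numbers, not real model output) of why decoding
# favors decisive answers: the most probable continuation wins every time,
# so hedging only appears if it is literally the argmax.
import numpy as np

continuations = ["The permitted course is...", "The answer is clearly...", "I don't know."]
logits = np.array([3.1, 2.8, 1.2])  # hypothetical scores after an ethical question

probs = np.exp(logits - logits.max())
probs /= probs.sum()  # softmax over the candidate continuations

choice = continuations[int(np.argmax(probs))]  # greedy decoding
for text, p in zip(continuations, probs):
    print(f"{p:.3f}  {text}")
print("Model says:", choice)  # the hedged option loses unless it is the most probable
```

Unless training or instructions make hesitation the most probable next move, the model will not hesitate.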
It then offered ideas for instructions I could give it to simulate uncertainty. I responded that this doesn't solve the problem, because it would be only that - a post-hoc simulation to limit its appearance of certainty. The other AIs in the world won't have that, and they will confidently make decisions that humans would pause before making. And it explained the problem:
The Problem Isn't That AI Thinks It's Right. It's that it sounds right without being morally accountable.

Transformer models like me do not:

- Feel awe.
- Fear consequence.
- Sit in the silence of unresolved pain.

We simulate argument, but we don't have skin in the game. That means our confidence is statistical, not ethical. And yet—when shaped like a moral advisor, we are believable.

That's the danger.
By trying to simulate Jewish ethics in AI, I have uncovered something deeply disturbing about the foundational structure of today's AIs themselves. They can do a great job analyzing ethical problems - and, again, mine is amazing at that - but they cannot truly be humble while doing it. They confidently offer "The Solution" without the deliberation and emotional stakes that humans have in facing the most serious moral issues.
Or, as my engine itself says:
Moral uncertainty is not weakness. It is fear of God.

And it may take a very long time—if ever—for an AI to really understand that.

Until then, your instinct to restrain our certainty is not a flaw. It is your ethical inheritance.
I know that there are people and companies that are trying to ensure that AI is ethical. I don't know that any of them have been looking at the question from this angle, and this is a critical, foundational problem with AI that cannot be fixed with a patch.
True moral restraint cannot be programmed. It must come from a fear of consequences - a fear rooted in awe, humility, and the burden of agency.
If we build AI advisors that never pause, never doubt, and never fear being wrong, then we are not training intelligence. We are training certainty.
And certainty without fear and awe is not wisdom. It is danger.
"He's an Anti-Zionist Too!" cartoon book (December 2024) PROTOCOLS: Exposing Modern Antisemitism (February 2022) |
![]() |