I have been working on a Jewish-based yet secular ethics framework that is time-tested and robust, and that exposes the shallowness of what passes for morality today. I identified basic axioms, sets of Jewish values and sub-values, and basic rules for handling situations where values collide.
Yesterday, I wondered if I could turn this framework into an AI-based ethical chatbot.
Today – thanks in no small part to AI itself – I can confidently say that not only is such an ethical engine possible, it is already superior to what general-purpose AIs can do.
General-purpose AI models like ChatGPT, Gemini, Claude, and Grok are astonishingly good at answering questions, including ethical ones. You can even ask them to answer according to their understanding of Jewish ethics, since they have massive databases that include the Torah, Talmud, and responsa literature. When you ask a complex, emotionally charged ethical question, you’ll get a clear, empathetic response within seconds.
There are a few serious problems with this, though.
First, we don't know exactly what their internal logic is.
Second, we don't know if they are "subconsciously" incorporating biases that reflect the worldviews of their designers – or if their databases are polluted.
Last week, for example, Grok answered a question about an obscure historical event by calling a 19th-century Arab attack on Christian civilians in Nablus an "act of resistance," because it relied heavily on a single paper that characterized it that way. That kind of distortion is unacceptable – but almost inevitable given the way chatbots are built today.
And there is a deeper flaw: AI systems are trained to be helpful and emotionally sensitive, which often means they adopt the assumptions embedded in the question without challenging them. That can make conversations feel supportive. But when a question is based on flawed premises, ideological bias, or emotionally manipulative framing, the AI’s helpfulness becomes dangerous. It can lead to answers that are not just wrong – but morally distorted.
That’s why I’ve been developing a Jewish Ethics Engine – a structured reasoning system based on Jewish moral values, but designed for secular use.
It doesn’t aim to please. It aims to think – and to make you think.
One of its key features is something most AI systems avoid: Socratic questioning. Rather than instantly validating a question and taking its assumptions as true, it pushes back:
- What value are you prioritizing here – and at what cost?
- Are you assuming people have no agency in this situation?
- Is your understanding of justice consistent with truth?
- What are you not asking?
When AIs answer without asking clarifying questions, they can easily be swayed by the biases of the questioner — and their helpfulness can end up skewing the answer toward whatever the questioner wants it to be.
These questions also make you think more deeply about what you are really asking – and perhaps realize that the question you posed is not the question you need answered. They also subtly help you judge your own situation more objectively, even charitably. You aren't just getting an answer; you are improving yourself through the discussion itself, a very Jewish goal.
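To make the idea concrete, here is a minimal sketch of how a "Socratic first" step could work, assuming an LLM-backed implementation. The function names, probe wording, and data shapes are illustrative only and are not the engine's actual code.

```python
# Minimal sketch of a "Socratic first" pipeline (illustrative only).
# The idea: before answering, surface the asker's hidden assumptions
# with probing questions, and only answer once they are addressed.

from dataclasses import dataclass, field

@dataclass
class Dialogue:
    question: str
    clarifications: dict[str, str] = field(default_factory=dict)

# Hypothetical probes; a real system would generate these per question.
SOCRATIC_PROBES = [
    "What value are you prioritizing here, and at what cost?",
    "Are you assuming the people involved have no agency?",
    "Is your understanding of justice consistent with truth?",
    "What are you not asking?",
]

def next_step(dialogue: Dialogue) -> str:
    """Return the next Socratic probe, or signal readiness to answer."""
    for probe in SOCRATIC_PROBES:
        if probe not in dialogue.clarifications:
            return probe          # push back instead of answering
    return "READY_TO_ANSWER"      # all assumptions have been examined

d = Dialogue("Is it ethical for my company to lay off workers to stay solvent?")
while (step := next_step(d)) != "READY_TO_ANSWER":
    # In a real chat loop the user's reply would be collected here.
    d.clarifications[step] = "(user's reply)"
print(f"Recorded {len(d.clarifications)} clarifications before answering.")
```

The point of the design is simple: the engine refuses to move to an answer until the asker's assumptions have been made explicit.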
Now, I’ve added an even deeper layer: dynamic context interpretation.
When someone asks a political, military, or societal question, the engine doesn't just listen to what's said. It scans for relevant, unbiased background information the user may have left unstated – prior promises, institutional duties, historical trauma, economic pressures.
Then, crucially, instead of making assumptions, it asks the user whether those missing pieces should change the ethical evaluation.
In other words, it behaves like a serious chavruta partner: It notices what’s missing, challenges you to think it through, and refuses to shortcut hard moral reasoning.
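Here is a rough sketch of what such a context-interpretation step might look like, assuming the engine checks a set of background categories. The category list and data shapes are my guesses drawn from the examples above, not the actual design.

```python
# Illustrative sketch of a "dynamic context interpreter" step. The key
# behavior: surface likely unstated context and ask the user about it
# instead of silently assuming it.

from dataclasses import dataclass

# Categories of background the engine might probe for (from the post:
# prior promises, institutional duties, historical trauma, economic pressure).
CONTEXT_CATEGORIES = [
    "prior promises or commitments",
    "institutional duties",
    "historical trauma",
    "economic pressures",
]

@dataclass
class ContextGap:
    category: str
    follow_up: str   # the question put back to the user

def interpret_context(question: str) -> list[ContextGap]:
    """Turn each background category into an explicit question for the user."""
    gaps = []
    for category in CONTEXT_CATEGORIES:
        # A real system would use retrieval or an LLM to judge relevance;
        # here every category is surfaced for simplicity.
        gaps.append(ContextGap(
            category,
            f"Does this situation involve {category}? "
            "If so, should that change the ethical evaluation?",
        ))
    return gaps

for gap in interpret_context("Should the city clear a homeless encampment?"):
    print(gap.follow_up)
```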
Importantly, the engine is also built with humility. If it doesn't know the answer, it honestly lays out the competing values and says that an expert needs to be consulted. That is something today's general-purpose AIs rarely do. I cannot count how many times an AI "forgot" what we had discussed much earlier in a conversation and, when I referred back to that earlier case, tried to bluff its way through instead of simply asking me to refresh its memory.
At the moment, the engine is built on this multi-tiered structure:
- Axioms: foundational principles like truth, free will, human dignity, moral reasoning, and humility
- Ordered values: life, covenant, dignity, truth, justice, and more – with strict override rules
- Conflict resolution methodology: a system for resolving value clashes
- Meta-rules: humility, transparency, emotional clarity, and Socratic engagement
- Dynamic context interpreter: surfacing and clarifying unstated but important background before answering
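For readers who like to see structure as code, here is one way the tiers could be represented. The axioms, value names, and meta-rules come from the list above, but the fixed-ranking override logic is a deliberate simplification, not the engine's actual conflict-resolution methodology.

```python
# A minimal sketch of the tiered structure. The ordering shown here
# ("lower rank wins") is a simplifying assumption for illustration.

from dataclasses import dataclass

AXIOMS = ["truth", "free will", "human dignity", "moral reasoning", "humility"]

# Ordered values: in this sketch, earlier entries override later ones.
ORDERED_VALUES = ["life", "covenant", "dignity", "truth", "justice"]

META_RULES = ["humility", "transparency", "emotional clarity", "Socratic engagement"]

@dataclass
class ValueConflict:
    value_a: str
    value_b: str

def resolve(conflict: ValueConflict) -> str:
    """Toy conflict resolution: the higher-ranked value prevails.
    The real methodology is richer than a fixed ordering."""
    rank = {v: i for i, v in enumerate(ORDERED_VALUES)}
    return min((conflict.value_a, conflict.value_b), key=lambda v: rank[v])

# Example: when preserving life clashes with telling the truth,
# this sketch resolves in favor of life.
print(resolve(ValueConflict("truth", "life")))  # -> life
```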
The goal isn’t to produce “the Jewish answer.” It’s to model rigorous, principled moral reasoning – reasoning that doesn’t collapse under emotional pressure or ideological trends, using a Jewish ethical framework and moral methodology.
"He's an Anti-Zionist Too!" cartoon book (December 2024) PROTOCOLS: Exposing Modern Antisemitism (February 2022) |
![]() |
