
Friday, August 08, 2025

The Discovery That Turns 2,000 Years of Ethics on Its Head


Yesterday, I posted about my social media ethics framework, which uses AI to help address the current tension between censorship and hate speech, and got an interesting comment from a reader.

He asked how the system would handle blasphemy, double standards, and satire.  He mentioned a frustrating experience with ChatGPT, where it refused to help create puns about Islamic themes for Broadway show titles but had no problem doing the same with Christian themes. After a long back-and-forth, he finally got his pun ("Mullah Rouge"), but only after pointing out the obvious double standard.

It’s exactly the kind of messy, real-world ethical dilemma that makes most systems fall apart. Which wins - religious sensitivity or free speech?  Consistency or context?  Satire or harm?

It was a good question, so I posed it to AskHillel, the AI that generated the social media plan, expecting a straightforward policy answer about balancing competing values.

Instead, the AI said that it would ask the user further questions: Is this meant as political satire about policies, or religious critique? What’s the intent? Who’s the audience? Are you trying to make people think, or are you trying to hurt them?

This morning I realized that AskHillel's instinct to ask questions to determine the context and intent behind the request was revealing how moral reasoning actually works.

When I built AskHillel, I included what I called a “dynamic interpretation module” — the AI had to clarify context before giving ethical guidance. My goal was pragmatic: you can’t make a sound moral decision without knowing the context and the intent behind the question.
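
To make that concrete, here is a minimal sketch of what a clarify-before-answer gate might look like. This is only an illustration of the idea, not AskHillel's actual code; the field names and canned questions below are hypothetical.

    # Minimal sketch of a clarify-before-answer gate (illustrative only, not AskHillel's code).
    # If the request is missing key context, return clarifying questions instead of guidance.
    REQUIRED_CONTEXT = ["intent", "audience", "goal"]  # hypothetical context fields

    def interpret(request: dict) -> dict:
        questions = {
            "intent": "Is this meant as political satire about policies, or religious critique?",
            "audience": "Who is the audience?",
            "goal": "Are you trying to make people think, or are you trying to hurt them?",
        }
        missing = [f for f in REQUIRED_CONTEXT if not request.get(f)]
        if missing:
            # Not enough context yet: ask before advising.
            return {"status": "clarify", "questions": [questions[f] for f in missing]}
        # Only now proceed to ethical guidance, with context and intent in hand.
        return {"status": "advise", "context": request}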

What I didn’t expect was twofold:

  1. Users changed - in the course of answering the questions, they were able to clarify their own thoughts. They became more reflective, more aware of their own assumptions, more able to see through others’ eyes.

  2. The AI changed - its answers came not from pre-coded rules but from tracing how people moved from values to decisions in a particular situation.

I had thought I was building a tool to extract values. What it was really extracting was derech.

In Jewish thought, derech often means a style of learning. I use it more broadly: it’s the coherent, value-driven path by which an agent (person, institution, nation or movement) navigates relationships, context, and obligations to reach action. Everyone has a derech. Nations have them. Corporations have them. Even AIs will. And if you can see someone’s derech, you can understand not just what they do, but why.

For 2,000 years, most ethical systems - from Aristotle to Kant to Rawls - have assumed the same basic model:

  • A solitary moral agent.

  • Universal principles “out there” to be discovered.

  • Logic as the main tool for applying those principles to cases.

But Derechology flips this:

  • Moral reasoning is not solitary - it is relational.

  • Principles alone are insufficient - they must be activated through obligations in real relationships.

  • Context is not an afterthought - it is constitutive of the moral path. Moral decisions cannot and must not be made in a vacuum.

From this perspective, the Greek/secular tradition and the Jewish/derech-based tradition aren’t two versions of the same thing. They are different categories of moral reasoning. Trying to merge them without realizing this is a category error.

Dialogue is one channel for accessing a derech. When speaking with someone, it is the fastest, richest way to map how they move from values → relationships → obligations → action.

But derech is not limited to living people. With someone long dead, you reconstruct their derech from their writings, rulings, and history. With an opaque institution, you infer its derech from behavior and policies.  With an AI, you can figure out its derech from its corpus and outputs.
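
One way to picture what this mapping produces: a derech can be written down as a structured record tracing values through relationships and obligations to action. The sketch below is purely illustrative - the field names are my own shorthand, not a formal specification.

    from dataclasses import dataclass, field

    # Illustrative sketch only: one way to record a mapped derech,
    # tracing values -> relationships -> obligations -> action for a given agent.
    @dataclass
    class Derech:
        agent: str                 # person, institution, nation, movement, or AI
        values: list[str]          # what the agent treats as primary goods
        relationships: list[str]   # whom the agent is answerable to
        obligations: list[str]     # duties those relationships activate
        evidence: list[str] = field(default_factory=list)  # dialogue, writings, rulings, outputs

        def trace(self, situation: str, action: str) -> str:
            # Summarize how this agent moved from values to a concrete action in context.
            return (f"{self.agent} | {situation}: {', '.join(self.values)} -> "
                    f"{', '.join(self.obligations)} -> {action}")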

Here’s where it gets exciting. Once AskHillel began doing derechological analysis of historical figures, movements, and companies, it started generating new philosophical insights  - not from my coding, but from the method itself.

It uncovered patterns in how these famous figures acted and how that shaped history itself. Using derech as the prism, it identified brand-new insights into long-dead people, the kind that could be the basis of endless academic history or sociology papers. It identified what it called "Ethical Gravity Wells," where powerful actors bend the surrounding moral reasoning until collapse feels normal; "Tier Drift," where noble movements lose their founding moral anchors over time; and "Distributed Responsibility Failure," where harm spreads so widely that no single node violates the rules, yet the system fails morally. It can generate these kinds of insights on demand.

These are laws of moral dynamics, patterns that traditional ethics misses entirely.

Derechology has become not just a moral framework, but a moral telescope: a tool for discovering new structures in ethical reality.

If derech is the real core of moral reasoning, then Derechology could be a game changer:

  • For philosophy: Many inherited systems are structurally invalid because they assume the wrong kind of moral agent.

  • For AI ethics: You can’t align AI by hardcoding rules; you must model, test, and refine its derech.

  • For social media: Content moderation shouldn’t be rule enforcement alone. It should be derech-aware, aiming to preserve universal values while respecting the agent’s own path.

  • For democracy: We need to argue less about policies themselves and more about the derech of the opposing side that produces them.

  • For institutions: Courts, companies, and schools must make derech-discovery and derech-testing part of decision-making.

We live in a world where people, cultures, and technologies all carry different derachot. Some are coherent and dignifying. Some are distorted by power, fear, or habit. Some are collapsing.

The future of ethics, whether human or AI, depends on our ability to identify derachot clearly, our own and others'. When we map them in this common language, we can see that the real points of disagreement are often not what we think they are. 

This is what AskHillel was doing all along without me realizing it. 

Dialogue was just the clue. Derech was the answer.



