Sanger’s Nine Theses are sharp and necessary. He highlights the hollowing-out of neutrality into “consensus,” source blacklists that silence whole swaths of public opinion, anonymous elites exercising power without accountability, and the abuse of “Ignore All Rules” as cover for bias. His reforms - competing articles, transparent leadership, a real legislature for governance, a public feedback system - would go a long way toward restoring integrity.
But procedural fixes alone cannot protect an institution from capture. What’s missing is an ethical backbone. Without one, even Sanger’s reforms would eventually be reinterpreted, bypassed, or gamed by whoever holds the keys.
This is where my recent work in philosophy and ethics comes in. I’ve been building an ethical framework (AskHillel/Derechology) designed precisely to protect systems against drift, capture, and self-deception. It combines transparency, humility, structured pluralism, and override logic into a self-correcting architecture.
Sanger has the right instincts, but what his plan lacks is a way to adjudicate value conflicts. For example: should truth always override harm reduction? When do neutrality and justice clash? Without a structured moral framework, these debates collapse back into power struggles.
That is why ethics isn’t a luxury here. It’s the firewall. It’s the system of accountability that keeps the rules from being bent beyond recognition.
Another project I've been working on, TAMAR, complements the ethics component and is ideally suited to keeping Wikipedia resistant to hijacking. TAMAR is my AI-based toolkit for detecting propaganda techniques, framing bias, and narrative manipulation. TAMAR works at the level of edits, not just policy. Every single change can be scanned, tagged, and evaluated for structural bias before it goes live.
Here’s how TAMAR plus Derechology could transform Wikipedia:
- Per-Page Derech Declarations: Each article would openly state its interpretive frame (historical-critical, faith-based, political, etc.). That way, readers know what path/perspective (derech) they’re reading, and competing articles can coexist without pretending to represent a single “neutral” voice. True neutrality is impossible, but transparency can mitigate the silent imposition of a single point of view.
- Integrity Scores for Edits: Every edit must pass a TAMAR scan that checks for propaganda markers: selective sourcing, premise smuggling, causality distortion, terminological injection. Each edit gets an integrity score; low scores are flagged for human review.
- Red Team Clause: Every controversial entry must withstand an inversion test: can its logic survive if flipped? If not, it’s probably engaging in selective framing. This is a structured way to expose double standards, especially in geopolitics.
- Teshuvah Journal: Every major reversion, controversy, or ideological shift is logged as part of Wikipedia’s moral memory. It’s not enough to silently update pages: Wikipedia should admit where it was wrong and show how it corrected itself.
- Public Rating System With Derech Splits: Readers could rate an article’s framing integrity and even request a “derech split”: parallel articles that present different perspectives rather than endless edit wars.
- Editorial Overview Board: Not an anonymous cabal, but a pluralistic assembly representing different frameworks (liberal, conservative, religious, academic). Their job: oversee override logic, ensure derech diversity, and maintain moral transparency.
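To make the per-page declaration concrete, here is a minimal sketch of what an article's derech metadata might look like. The field names, the board identifier, and the banner wording are all hypothetical placeholders, not an existing Wikipedia schema:

```python
# Hypothetical per-article metadata block -- every field name here is an
# illustrative assumption, not part of any real Wikipedia data model.
derech_declaration = {
    "article": "Example Topic",
    "derech": "historical-critical",               # declared interpretive frame
    "alternatives": ["faith-based", "political"],  # parallel versions a reader can switch to
    "declared_by": "editorial-overview-board",     # assumed declaring body
    "last_reviewed": "2025-01-01",
}

def banner(decl: dict) -> str:
    """Render the declaration as a reader-facing notice at the top of the page."""
    alts = ", ".join(decl["alternatives"])
    return (f"This article follows a {decl['derech']} derech. "
            f"Parallel versions: {alts}.")

print(banner(derech_declaration))
# → This article follows a historical-critical derech. Parallel versions: faith-based, political.
```

The point of the declaration is not the particular fields but that the frame is machine-readable, so parallel articles can be linked rather than fought over.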
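The integrity-score step can be sketched as a simple scan-score-flag pipeline. To be clear, this is a toy illustration of the idea, not TAMAR's actual model: the marker patterns, weights, and review threshold below are invented placeholders (a real detector would use far more than keyword matching):

```python
import re

# Hypothetical propaganda markers and weights -- placeholders standing in for
# whatever classifiers a real TAMAR scan would use.
MARKERS = {
    "premise_smuggling": (re.compile(r"\b(everyone knows|obviously|of course)\b", re.I), 0.3),
    "causality_distortion": (re.compile(r"\b(proves that|inevitably caused)\b", re.I), 0.4),
    "terminological_injection": (re.compile(r"\b(regime|so-called)\b", re.I), 0.3),
}

REVIEW_THRESHOLD = 0.5  # assumed cutoff: below this, route the edit to human review

def integrity_score(edit_text: str) -> float:
    """Return a score in [0, 1]; each matched marker family subtracts its weight."""
    score = 1.0
    for _name, (pattern, weight) in MARKERS.items():
        if pattern.search(edit_text):
            score -= weight
    return max(score, 0.0)

def review_edit(edit_text: str) -> dict:
    """Score an edit and decide whether it goes live or gets flagged."""
    score = integrity_score(edit_text)
    return {"score": round(score, 2), "flagged": score < REVIEW_THRESHOLD}

print(review_edit("Everyone knows the so-called reform proves that the policy failed."))
# → {'score': 0.0, 'flagged': True}
```

The design choice that matters is that a low score never auto-deletes anything; it only escalates the edit to human review, keeping the tool an aid to judgment rather than a censor.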
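The Teshuvah Journal, finally, is an append-only correction log. A minimal sketch, assuming a JSON Lines file and invented field names (nothing here is a real Wikipedia log format):

```python
import datetime
import json

def log_teshuvah(journal_path: str, article: str, what_changed: str, why: str) -> dict:
    """Append a correction record so reversals are admitted, not silently overwritten."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "article": article,
        "what_changed": what_changed,
        "why_it_was_wrong": why,
    }
    # JSON Lines: one record per line, append-only, never edited in place.
    with open(journal_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

entry = log_teshuvah(
    "teshuvah.jsonl",
    "Example Topic",
    "Restored a sourced paragraph removed in 2023",
    "Removal relied on treating a reliable source as blacklisted",
)
```

Append-only storage is the whole point: the journal is moral memory precisely because past entries cannot be quietly rewritten.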
Why does this matter so much? Because Wikipedia is not just a website. It has become one of the most important training inputs for artificial intelligence. The distortions of Wikipedia today become the biases of tomorrow's AI. And make no mistake: AI is deeply shaped by its training data.
That’s why I would argue Wikipedia is as consequential as AI itself. Both are knowledge systems that shape how billions of people (and now machines) understand the world. Both face the same challenge: how to preserve integrity in the face of ideological capture. Both require not just rules, but ethical architecture.
This is where philosophy proves its real-world worth: philosophy only matters if it can protect truth in the real world. That’s what Derechology is designed to do. It’s about creating frameworks that protect institutions from drift, bias, and capture - whether it’s Wikipedia, AI, or any other system that claims authority over truth.
Sanger is right that Wikipedia needs structural reform. But structure without ethics is brittle. What’s needed is a fusion: procedural reforms guided by a transparent moral framework like Derechology, operationalized through tools like TAMAR. That is how you build a knowledge system that is both open and resilient, pluralist and trustworthy.
Truth does not fear plurality. But plurality without integrity is just noise.