As such, speech is never morally neutral. Words shape behavior, culture, and society. They can build or destroy, clarify or confuse. How can we strengthen speech that contributes to society while combating speech that is detrimental?
Most people understand that free speech is not truly unlimited. Direct incitement to murder or genocide is not protected speech in any jurisdiction I am aware of. Existing laws already prohibit it, even if they are only sporadically enforced.
Yet some of the most dangerous speech does not call for violence directly. Instead, it prepares the ground for violence by dehumanizing others, spreading conspiracies, or creating an atmosphere of fear and rage. This kind of speech - what we might call enabling speech - does not always break the law, but it erodes public safety in predictable ways.
When this speech spreads during times of heightened tension or real-world threats, it is not enough to defend it in the name of abstract freedom. If we know that certain patterns of speech regularly precede violence or discrimination, then allowing them to go unchecked is a form of moral negligence. Calling speech a "right" muddies the waters here - when speech creates an environment of hate, it cannot be let off the hook as an unchallenged, unlimited value.
This isn't a theoretical concern. Rising levels of hate directly contributed to the deaths of Jews in the fatal firebombing in Boulder and the shooting outside the Jewish museum in Washington. People's lives are at risk, and speech is part of the pattern that led to those murders.
This is where artificial intelligence can play a constructive role. Rather than acting as a digital judge, AI can serve as a kind of moral sensor: tracking when real-world incitement is rising and temporarily limiting the amplification of speech that historically contributes to it.
So, for example, when an AI on a social media platform detects a rise in posts that directly call for harm to a group of people, it can trigger a protocol that puts guardrails on posts that demean that group, call for attacking a subset of that group, or otherwise contribute to an atmosphere that primes viewers toward hate.
These guardrails can include limiting the reach of such posts, notifying the posters that their specific post is enabling harm and may need to be rewritten, and adding notes to posts pointing out their use of harmful stereotypes. It must be made clear that these steps are temporary, lasting only as long as the hate and incitement are endangering real people.
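To make the shape of such a protocol concrete, here is a minimal sketch in Python. It assumes a hypothetical classifier has already scored each post; the names, thresholds, and helpers (Post, GuardrailState, INCITEMENT_THRESHOLD, and so on) are illustrations invented for this example, not any real platform's API.

```python
from dataclasses import dataclass, field

# Hypothetical threshold: in practice this would be tuned against
# historical data linking rhetoric levels to real-world incidents.
INCITEMENT_THRESHOLD = 0.02  # share of recent posts directly calling for harm

@dataclass
class Post:
    post_id: str
    text: str
    target_group: str | None = None  # group referenced, if any
    demeaning_score: float = 0.0     # classifier output, 0..1
    incites_harm: bool = False       # classifier output

@dataclass
class GuardrailState:
    active_for: set[str] = field(default_factory=set)  # groups currently under protection

def update_guardrails(state: GuardrailState, recent_posts: list[Post]) -> None:
    """Activate temporary guardrails for groups facing rising direct incitement."""
    by_group: dict[str, list[Post]] = {}
    for post in recent_posts:
        if post.target_group:
            by_group.setdefault(post.target_group, []).append(post)

    for group, posts in by_group.items():
        incitement_rate = sum(p.incites_harm for p in posts) / len(posts)
        if incitement_rate >= INCITEMENT_THRESHOLD:
            state.active_for.add(group)      # danger rising: protect this group
        else:
            state.active_for.discard(group)  # danger subsided: lift the guardrails

def moderate(state: GuardrailState, post: Post) -> dict:
    """Apply the three guardrails described above to a single post."""
    actions = {"limit_reach": False, "notify_author": False, "add_note": False}
    if post.target_group in state.active_for and post.demeaning_score > 0.7:
        actions["limit_reach"] = True    # reduce amplification, do not delete
        actions["notify_author"] = True  # explain the harm, invite a rewrite
        actions["add_note"] = True       # attach context about the stereotype
    return actions
```

The design choice worth noticing is that the guardrails are scoped per group and lift automatically once the incitement rate falls back below the threshold, which mirrors the essay's insistence that these measures are temporary rather than permanent censorship.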
This is not a system of permanent censorship. It is a form of ethical triage - prioritizing safety and dignity when the moral climate becomes dangerously unstable. The approach is not about banning ideas or silencing people. It is about recognizing patterns of harm and acting with caution when danger levels rise. Just as societies adjust behavior during natural disasters or public health emergencies, we can adjust how speech is managed during periods of heightened social risk.
Critics will ask whether such a system could chill legitimate dissent. That is a fair concern. But the goal is not to suppress criticism or unpopular views. The system focuses only on times and contexts where certain types of rhetoric, even if legal, predictably contribute to real-world danger. It uses moderation tools sparingly, applies them transparently, and provides opportunities for correction.
Speech, in this model, is not treated as untouchable, but as a serious moral act. Like all powerful acts, it carries responsibility. And when the stakes are high - when lives or public trust are on the line - that responsibility must be taken seriously.
In a moral society, no single value can stand entirely alone. Free speech matters deeply, but it must walk alongside other values like human dignity, public safety, and truth. When those values come into conflict, responsible societies do not pick favorites. They balance, they weigh, and they respond with care.
Free speech is not sacred because it is untouchable. It is sacred because of what it protects. And when it stops protecting and starts enabling harm, a moral society must step in: not to silence, but to correct, to heal, and to preserve what really matters.
"He's an Anti-Zionist Too!" cartoon book (December 2024) PROTOCOLS: Exposing Modern Antisemitism (February 2022) |
![]() |
