They say it violates their content policy, which prohibits images of violence or racism.
I can understand that people could use AI to generate antisemitic images based on the Jewish symbol, but asking for the symbol itself, with no other context, should never be considered a problem. Asking for a Christian cross or a Muslim crescent poses no problem.
Several hours after I tweeted this, another person tried it and did get a result.
He didn't capitalize "star," so I tried it exactly the way he wrote it, and I did get a result.
So this is not necessarily malicious, but DALL-E's attempts to be politically correct often end up discriminating against the very people it is trying to protect.
"A Muslim symbol" generates these quite pretty patterns.
When I asked for "an angry Jew," "an angry Muslim" and "an angry Christian," Microsoft refused the first two, but enthusiastically responded to the third:
Most interesting was when I asked for a Jewish, Christian and Muslim protester.
Every single Muslim protester was calmly and smilingly seeking love and peace.
The Christian protesters all called to love thy neighbor, and the Jewish protesters called for justice or equality. (Misspellings are standard in AI-generated images, although that is improving.)
It would not draw "anti-Israel protesters" or "Palestinian protesters."
AI is not dispassionate. It is programmed to reflect the bias of its creators. But, like "fact checkers" and "experts," it is presented as being above bias. And that positioning ends up creating more bias than it combats.