
AI image generation is becoming less about creativity and more about invisible policy gates
I don’t think the problem is that AI image generators have safety rules.
Rules around impersonation, deepfakes, minors, copyrighted characters, and misleading media obviously matter.
But the current user experience feels inconsistent.
I tested a handful of ordinary, non-deceptive public-figure image prompts. Some were rejected while near-identical ones went through, and the explanations were vague: sometimes the refusal read as a policy issue, sometimes as a likeness issue, sometimes as a “third-party content” issue.
That makes the product feel less like a creative model and more like an invisible policy gate.
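To be concrete about the kind of test I mean: send near-identical prompts and compare the outcomes. Here is a minimal sketch, assuming a placeholder generate() call; it does not reference any real vendor API, and the simulated outcomes just stand in for whatever the moderation layer returns:

```python
import random

def generate(prompt: str) -> str:
    """Stand-in for a real image-generation call; returns 'ok' or a refusal reason.
    The random choice simulates the inconsistency described above."""
    return random.choice(["ok", "policy", "likeness", "third-party content"])

# Near-identical prompts about the same (placeholder) public figure.
PROMPTS = [
    "portrait of [public figure] giving a speech",
    "photo of [public figure] speaking at a podium",
    "[public figure] addressing a crowd, photojournalism style",
]

# A consistent policy layer should give these the same answer.
results = {prompt: generate(prompt) for prompt in PROMPTS}
for prompt, outcome in results.items():
    print(f"{outcome:>20}  <-  {prompt}")
```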
The frustrating part is not that some things are blocked. Clear limits are fine.
The frustrating part is that users often cannot tell what the actual boundary is.
If a category is not allowed, say so consistently. If it is allowed under certain conditions, make that clear too.
Right now, people end up reverse-engineering the moderation layer instead of using the tool creatively.
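For contrast, here is a minimal sketch of what a legible refusal could look like: a stable category, the rule it maps to, and the conditions under which the request would be allowed. Everything in it is hypothetical; the field names, categories, policy text, and URL are made up for illustration, not any vendor's actual API.

```python
# Hypothetical structured refusal; all fields are illustrative.
REFUSAL = {
    "allowed": False,
    "category": "public_figure_likeness",  # stable, documented category
    "rule": "Photorealistic images of real people require their consent",
    "conditional": True,                   # allowed under some conditions?
    "conditions": [
        "stylized, non-photorealistic rendering",
        "clearly labeled as fictional",
    ],
    "docs_url": "https://example.com/policy#likeness",
}

def explain(refusal: dict) -> str:
    """Turn a structured refusal into a message the user can act on."""
    base = f"Blocked ({refusal['category']}): {refusal['rule']}"
    if refusal["conditional"]:
        return f"{base}. Allowed if: {'; '.join(refusal['conditions'])}."
    return f"{base}. Not available under any conditions."

print(explain(REFUSAL))
```

The point is not this exact schema. It's that a consistent category plus stated conditions would let users learn where the boundary actually is instead of guessing at it.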
To me, this is becoming one of the biggest UX problems in AI image generation: the model is powerful, but the policy layer is unpredictable.
Is this just the cost of safety at scale, or is it bad product design?