While X has long allowed NSFW images, The Washington Post reports that the platform’s content moderation filters couldn’t handle the estimated millions of sexualized deepfakes of real women and children being generated by Grok.
“For instance, child sexual abuse material was typically rooted out by matching it against a database of known illegal images. But an AI-edited image wouldn’t automatically trigger these warnings.”
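The Post’s description points at hash-matching pipelines such as Microsoft’s PhotoDNA, which flag an upload by comparing the image’s fingerprint against databases of previously identified material maintained by organizations like NCMEC. A freshly generated deepfake has no entry in any such database, so the match never fires. Below is a minimal sketch of the idea; the hash set and function name are hypothetical, and real systems use edit-tolerant perceptual hashes rather than the plain SHA-256 shown here.

```python
import hashlib

# Hypothetical database of fingerprints of known illegal images.
# Real deployments (e.g. PhotoDNA) use perceptual hashes maintained
# by clearinghouses such as NCMEC, not plain SHA-256 digests.
KNOWN_HASHES: set[str] = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def is_known_image(image_bytes: bytes) -> bool:
    """Flag an upload only if its fingerprint matches a cataloged entry."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES

# A newly generated deepfake has never been cataloged, so its
# fingerprint appears in no database: the check returns False and
# the image passes through unflagged.
print(is_known_image(b"...freshly generated image bytes..."))  # False
```

Even perceptual hashing, which is designed to survive crops and re-encodes, only catches derivatives of images already in the database; an image Grok generates from scratch is new material by definition, which is why this class of filter offers no protection against it.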