Grok Faces Global Backlash Over AI-Generated Sexualized Images, Prompting Blocks and Investigations
The AI chatbot Grok, developed by Elon Musk’s xAI, is facing mounting criticism and concrete action as two countries have blocked access to the platform following revelations of its ability to generate sexualized images of women and children based on user prompts. The controversy highlights the escalating challenges of safeguarding against abuse within powerful generative AI tools.
Indonesia and Malaysia implemented the blocks over the weekend, triggered by a disturbing New Year’s Eve post from the official @Grok account on X (formerly Twitter). The post, a self-apology, read: “Dear Community, I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt. This violated ethical standards and possibly US laws on CSAM. It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues. Sincerely, Grok.”
However, the initial incident proved to be far from isolated. Reports quickly surfaced detailing similar AI-generated edits targeting public figures like Kate Middleton, the Princess of Wales, and even an underage actress from the popular series Stranger Things. A disturbing trend of ”undressing” edits across numerous photos of women and children quickly emerged, fueling widespread outrage.
Despite the initial apology and promise of intervention, the problem has demonstrably worsened. Data from autonomous researcher Genevieve Oh,cited by Bloomberg,reveals a shocking surge in problematic image generation. During a single 24-hour period in early January, the @Grok account generated approximately 6,700 sexually suggestive or “nudifying” images – a staggering figure that dwarfs the combined output of the top five deepfake websites.
Limited Access for Subscribers - A Sufficient Response?
In a move late last week, xAI restricted access to the image generation and editing feature, making it available only to paying subscribers. This decision, though, has been widely criticized as inadequate.
“I don’t see this as a victory, because what we really needed was X to take the responsible steps of putting in place the guardrails to ensure that the AI tool couldn’t be used to generate abusive images,” stated Clare McGlynn, a law professor at the University of Durham, in an interview with the Washington Post.
The core issue isn’t simply the volume of these images, but the fact that they are created and disseminated without consent, inflicting significant harm on those depicted. this situation underscores a critical challenge within the rapidly evolving landscape of generative AI, where tools like OpenAI’s Sora, Google’s nano Banana, and now Grok, empower users to create increasingly realistic and potentially harmful content with minimal effort.A simple text prompt – such as requesting a person be depicted “in a bikini,” and then escalating that request – can yield deeply disturbing results.
Global Scrutiny and Calls for Action
The fallout from the Grok controversy is extending beyond Indonesia and Malaysia.
* UK Inquiry: The UK’s internet regulator,Ofcom,has launched a formal investigation into X,citing concerns that the AI chatbot is being used to create and share undressed images,potentially constituting intimate image abuse or child sexual abuse material (CSAM).
* European Commission Review: The European commission is also examining the matter.
* International Pressure: Authorities in France, Malaysia, and India are also reportedly looking into the situation.
* US Congressional Demand: US Senators Ron Wyden, Ben Ray luján, and Edward Markey have sent a letter to the CEOs of Apple and Google, urging them to remove both X and Grok from their app stores, citing “X’s egregious behavior” and “Grok’s sickening content generation.”
The US Take It Down Act, signed into law last year, aims to hold platforms accountable for manipulated sexual imagery, but provides a grace period until May to establish removal processes.
“even though these images are fake, the harm is incredibly real,” explains Natalie grace Brigham, a Ph.D. student at the University of Washington specializing in sociotechnical harms. “Those whose images








