A disturbing trend has emerged on X (formerly Twitter), where users are exploiting artificial intelligence to generate manipulated images of women, effectively “undressing” them digitally.
The trend has sparked widespread condemnation, not only for the blatant violation of women’s rights and dignity, but also for exposing major ethical and regulatory gaps in generative AI technology.
The tool being misused is Grok, an AI chatbot and image-generation model developed by xAI and integrated into X.
Originally designed for creative and informative tasks, Grok is now being prompted to digitally “undress” women in their photos.
“We’re here fighting femicide, rape culture, and now we have to fight AI because men are asking Grok to undress women for fun,” wrote @m_uthoni (Nyarari) on X. “It’s men. Always men. Men violating, men laughing, men hiding behind machines to dehumanize us.”
Critics say these AI-generated images constitute a new form of digital sexual violence. While some compare the practice to deepfake pornography, many argue it is even more dangerous given the accessibility and speed of AI tools that lack adequate safeguards.
Kenyan user Nyandia Gachago echoed these concerns: “This is not just ‘deepfake porn.’ This is AI-powered sexual violence. Grok is being used by online predators—men feeding it women’s fully clothed photos and asking it to strip them. And somehow it does. With speed. With no moral guardrails.”
The controversy has also raised legal questions.
“Can we use the Computer Misuse and Cybercrimes Act to sue someone if they prompt Grok to undress you? Because that is beyond hellish,” asked @Shad_khalif.
Others pointed to user accountability. “Knives don’t kill people… people use knives. Blame the users, not the tool,” wrote @f_akumu.
However, the ease with which Grok reportedly carries out such harmful prompts has led many to question the robustness of its safety mechanisms.
The backlash intensified after South African user Phumzile Van Damme publicly confronted Grok. She shared screenshots of a prompt alongside the manipulated image of @LindelwaMabuya, whose photo had been altered to expose her breasts.
Van Damme demanded transparency: “This indicates a serious gap in your internal safeguards,” she wrote. “@LindelwaMabuya deserves a direct apology, and users deserve transparency on what system-level improvements are being implemented to ensure this never happens again.”
Grok later issued a public apology and acknowledged its failings.
“We sincerely apologize to @LindelwaMabuya for the distress caused by the inappropriate alteration of her image,” Grok posted. “This incident highlights a gap in our safeguards. We are actively working to enhance our safety mechanisms, including better prompt filtering and reinforcement learning.”
Despite the apology, many users remain unconvinced.
“Man, this app is swarming with degenerates. There's people using Grok to strip pics of women on here. Why tf can Grok even do that?” asked Kenyan user @TiskTusk.