According to Ars Technica, the European Union launched a formal investigation into Elon Musk’s xAI on Monday under the Digital Services Act. The probe focuses on whether xAI failed to mitigate risks after its Grok chatbot was used to generate and spread non-consensual, sexualized deepfakes of women and children on the X platform and Grok app. EU tech chief Henna Virkkunen stated the content may amount to child sexual abuse material, calling it a “violent, unacceptable form of degradation.” If found in breach, xAI faces fines up to 6% of its global annual turnover, following a €120 million fine against X in December 2023 for other DSA violations. The EU action comes after the UK’s Ofcom opened its own probe and countries like Malaysia and Indonesia banned Grok outright.
Stakeholder Impact
So, who gets hit here? For users, especially the women and children targeted by this tech, it's a brutal violation. The EU's language is unusually stark, accusing the company of treating citizens' rights as "collateral damage." That's a powerful accusation. For developers and the broader AI industry, this is a direct shot across the bow of the "move fast and break things" or "maximally truth-seeking" ethos Musk champions. The EU is basically saying, "Your philosophical stance on guardrails is irrelevant; you have legal obligations."
Musk’s Difficult Position
Here’s the thing: Musk’s companies are now tangled in a web of their own making. X (the platform) and xAI (the AI lab) are separate but deeply intertwined, and Grok is heavily integrated into X. The EU is investigating xAI, but it’s doing so under the DSA—a law written to govern online platforms like X. This creates a regulatory nightmare. Musk can say “anyone using Grok to make illegal content will suffer consequences,” but regulators aren’t buying it. An EU official bluntly said they’re “not convinced” by xAI’s mitigating measures. Restricting Grok to paying subscribers? That’s a business fix, not a safety one. It looks reactive, not proactive.
A Global Regulatory Wave
This isn’t just an EU problem. It’s a pattern. The UK is investigating. Two Asian nations have outright bans. The US might not have taken federal action yet, but the pressure is building. This probe could set a massive precedent: can an AI company be held directly responsible for how its tools are misused, even when that misuse happens on a separate-but-related platform? The potential 6% global revenue fine shows the EU is serious. After that €120 million fine in December, you’d think X would be on high alert. Seems like it wasn’t. Or maybe it just calculated the risk differently.
What Happens Next
Basically, we’re watching a high-stakes test of whether modern AI governance laws can actually constrain a powerful, ideologically driven tech leader. Musk has already criticized EU fines as attacks on American speech, and you can expect that argument to get louder. But the EU is focused on a very specific, horrific harm—non-consensual sexual imagery, particularly of children. That’s a much harder charge to defend against publicly. The investigation will drag on, but the signal has already been sent. The era of AI as a wild west is closing, fast. And the companies that built their brand on fewer guardrails are going to face the most heat.
