AI Hacks AI: Self-Replicating Botnet Goes Global

According to Forbes, hackers are now using large language models like ChatGPT and Claude to code up attacks specifically targeting AI infrastructure. Israel-based Oligo Security discovered over 230,000 Ray servers left exposed online despite warnings, with researchers “very certain” AI was used to generate malicious code for cryptocurrency mining. The compromised servers were then turned into a self-propagating botnet that autonomously scouts new targets, in what Oligo calls ShadowRay 2.0. In one alarming case, hackers accessed 240GB of sensitive material including source code and proprietary AI models from a single company. The researchers described it as essentially exposing “a company’s entire R&D environment” to the internet, with multiple hacker groups now competing to exploit the same vulnerability.
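
A quick aside for anyone running Ray themselves: the dashboard's Jobs API listens on port 8265 by default and, by design, doesn't require authentication, which is why an internet-exposed server is effectively wide open. Here's a minimal self-audit sketch in Python; the hostnames are placeholders, and you should only point it at machines you own:

```python
# Minimal audit sketch: check whether a host answers Ray Jobs API
# requests without authentication. Hostnames are placeholders;
# only point this at infrastructure you own.
import requests

RAY_DASHBOARD_PORT = 8265  # Ray's default dashboard / Jobs API port


def ray_jobs_api_exposed(host: str, timeout: float = 3.0) -> bool:
    """Return True if `host` serves the Ray Jobs API with no auth."""
    url = f"http://{host}:{RAY_DASHBOARD_PORT}/api/jobs/"
    try:
        resp = requests.get(url, timeout=timeout)
    except requests.RequestException:
        return False  # unreachable or filtered; treat as not exposed
    # A 200 here means anyone who can reach the port can list jobs
    # (and, via POST, submit arbitrary ones).
    return resp.status_code == 200


if __name__ == "__main__":
    for host in ("10.0.0.12", "10.0.0.13"):  # replace with your hosts
        if ray_jobs_api_exposed(host):
            print(f"WARNING: {host} exposes the Ray Jobs API unauthenticated")
```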

The AI vs AI reality is here

So we’ve officially entered the era where AI is hacking AI. It’s not theoretical anymore. Oligo’s researchers found clear “hallmarks” of LLM-generated code in these attacks – things like needless repetition of comments and strings that give away the AI’s involvement. And here’s the thing: this isn’t just about stealing data or mining crypto anymore. We’re looking at AI infrastructure being weaponized against itself in a self-sustaining cycle.
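
How do you spot that? Oligo hasn't published its detection method, so treat this as a toy sketch of the kind of signal the researchers describe: counting how often identical comments and string literals repeat inside a script. Real attribution would take far more than this:

```python
# Toy illustration only: Oligo has not published its detection logic.
# This counts how often identical comments and string literals repeat
# in a Python file, the kind of "needless repetition" the researchers
# describe as a hallmark of LLM-generated code.
import io
import tokenize
from collections import Counter


def repetition_report(source: str) -> dict:
    """Map each comment/string appearing more than once to its count."""
    counts = Counter()
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type in (tokenize.COMMENT, tokenize.STRING):
            counts[tok.string.strip()] += 1
    return {text: n for text, n in counts.items() if n > 1}


sample = '''
# initialize the miner
x = "stratum+tcp://pool"
# initialize the miner
y = "stratum+tcp://pool"
# initialize the miner
'''
print(repetition_report(sample))
# {'# initialize the miner': 3, '"stratum+tcp://pool"': 2}
```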

Remember when people worried about AI writing malware? Well, we’re way past that. Now we’ve got AI-coordinated attacks where compromised systems automatically hunt for new victims. It’s basically a digital immune system turned against its host. The fact that multiple hacker groups are fighting over the same vulnerable servers just shows how valuable this attack surface has become.

Corporate R&D completely exposed

This is where it gets really scary for businesses. We’re not talking about stealing customer data here – we’re talking about entire research and development environments being wide open. 240GB of source code and proprietary AI models? That’s essentially handing your competitive advantage to hackers on a silver platter.

And think about the companies building industrial automation systems or manufacturing technology. When vulnerable AI infrastructure sits inside those environments, the stakes are enormous: a breach could mean production lines going down or proprietary processes being stolen.

Denial isn’t a strategy

Anyscale’s response has been… interesting. They basically said “it’s not our fault if users don’t follow our security advice.” But when you’ve got 230,000 servers exposed, maybe the problem isn’t just user error? Maybe the default configuration should be secure rather than convenient.
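
To be fair, the user-side fix is simple, which is exactly why it should be the default. Here's a sketch of one hardening step for a single-node setup, assuming a recent Ray version where ray.init() accepts the dashboard options shown:

```python
# One hardening step: keep Ray's dashboard (and its Jobs API) on
# loopback. A sketch for a single-node dev setup; production clusters
# also belong behind a firewall or VPN with authentication in front.
import ray

ray.init(
    include_dashboard=True,
    dashboard_host="127.0.0.1",  # loopback only; never 0.0.0.0 on an
                                 # internet-facing machine
    dashboard_port=8265,
)
print(ray.cluster_resources())  # sanity check that the node is up
ray.shutdown()
```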

Look, we’ve seen this movie before with other technologies. First comes rapid adoption, then comes security as an afterthought, then comes the massive breach. The difference now is that AI systems can automate their own exploitation at scale. We’re dealing with attacks that can learn and adapt in real time.

So what’s the solution? Better defaults, obviously. But also recognizing that in an AI-powered world, security can’t be bolted on afterward. It has to be baked into the foundation. Because the next wave of attacks might not just be mining cryptocurrency – they could be manipulating AI models to make dangerous decisions in critical systems.
