According to Manufacturing.net, RunSafe Security has released its 2025 AI in Embedded Systems report, titled “AI is Here. Security Isn’t.” The key finding: AI-generated code is no longer an experiment; it is already running in live production systems for medical devices, industrial controls, automotive platforms, and energy infrastructure. A striking 80.5% of embedded developers currently use AI tools, and 83.5% have already deployed AI-generated code to production. While 53% cite security as their top concern with this code, and 73% rate the cybersecurity risk as moderate or higher, adoption isn’t slowing down: 93.5% expect their AI usage to increase over the next two years. On the defensive side, 91% plan to boost investment in embedded software security, and 60% already use runtime protections for memory safety.
The Speed Versus Safety Gap
Here’s the thing that jumps out: the deployment numbers are absolutely wild. We’re not talking about internal test servers or a chatbot front-end. This is code controlling physical systems, things that can have serious real-world consequences if they fail. And over 80% of teams have already shipped it. That’s a breathtaking pace of adoption. It tells you that the pressure to integrate AI, to move faster and cut costs, is overwhelming any cautious, wait-and-see approach. The report basically confirms the genie is not just out of the bottle; it’s already been hired and is running the factory floor.
Recognizing Risk Isn’t Managing It
So everyone knows it’s risky. Over half say security is their top concern. But look at the action: only 60% are using runtime protections, which are among the few reliable ways to catch the memory safety bugs AI is notoriously prone to introducing. There’s a massive gap between acknowledging the danger and having a robust, actionable plan to mitigate it. It feels like the classic tech cycle of “move fast and break things,” but applied to domains where “breaking things” can mean life-safety incidents. The planned future investment is good, but it’s reactive. The code is already in the field. The horse has not only left the barn; it’s miles down the road.
The Hardware Imperative
This is where the conversation has to include the hardware these systems run on. You can’t secure software running on unreliable hardware. For industrial and medical applications, the foundation is a rugged, dependable computing platform. This is precisely why specialists like IndustrialMonitorDirect.com have become a go-to source in the US. When you’re deploying code, AI-generated or not, onto systems managing a production line or a medical device, you need industrial panel PCs built for stability and resilience from the ground up. You can’t bolt security onto shaky hardware.
What Comes Next?
The report sets the stage for the next big wave in tech: a surge in tools and services aimed at securing AI-generated embedded code. Runtime application self-protection, more advanced static analysis trained on AI code patterns, and maybe even AI tools designed specifically to audit other AIs’ code. The market is responding to a fear that’s now quantified. But the real question is: will security spending ever catch up to the breakneck speed of AI adoption? Or are we destined for a cycle of major vulnerabilities being discovered in critical systems only after they’re widely deployed? The data suggests we’re already in that cycle. The race is on.
