1.5 Million AI Agents Are Running Unchecked Inside Enterprises, New Research Warns

A workforce larger than Walmart's entire global employee count is operating inside corporations across the US and UK—and more than half of it has no oversight.
That's the startling conclusion of a new study from Gravitee, an API management platform vendor, which estimates that approximately 1.5 million AI agents are currently ungoverned and at risk of "going rogue" inside enterprise environments. The research, based on a December 2025 survey of 750 IT executives and practitioners, exposes a widening gap between the rapid deployment of autonomous AI agents and the security measures meant to control them.
"We see stories of AI agents going rogue all the time: deleting codebases, leaking confidential information, inventing fake data," said Rory Blundell, CEO of Gravitee, in an email explaining the research motivation. "Our working hypothesis was that, while agentic deployment is reaching an exciting stage, businesses have not yet caught up with agent governance. The research validates that."
An Invisible Workforce
The numbers paint a picture of an invisible workforce that has grown faster than IT departments can track. Respondents reported a mean of 36.9 agents deployed per organization; extrapolating from government estimates of the number of US and UK businesses with 250 or more employees, the researchers put the total at more than three million AI agents now operating inside corporations.
Yet 53% of these agents are not actively monitored or secured; applied to the three-million estimate, that share is where the headline figure of roughly 1.5 million ungoverned agents comes from. Perhaps more concerning: 88% of surveyed organizations reported experiencing or suspecting an AI agent-related security or data privacy incident in the past 12 months.
David Shipley, head of Canada-based security awareness training firm Beauceron Security, offered a blunt assessment. "The only thing that shocks me is that people think it's only 53% of agents that aren't monitored," he said. "It's higher."
Shipley drew a striking comparison to the Titanic disaster. "The Titanic didn't happen because they didn't know there would be icebergs," he explained. "They knew it was peak iceberg season, they knew they were going too fast. They thought they'd detect an iceberg; if they didn't, their technology controls would protect them. They trusted their watertight compartments—which weren't watertight at the top—and their wireless communications technology to call for help."
"Wrong then, super wrong now," Shipley added. "We know AI agents are inherently dangerous and unreliable. There are literally math proofs that show it. So we know there are icebergs. Let me repeat this for those at the back of the room: 100% of AI agents have the potential to go rogue."
The Security Gap
The Cloud Security Alliance's recent report on securing autonomous AI agents corroborates these findings. According to the report, security and governance approaches to autonomous AI agents rely on static credentials, inconsistent controls, and limited visibility—none of which are suited for systems that operate continuously and make decisions with business impact.
"The agentic workforce is scaling faster than identity and security frameworks can adapt," said Hillary Baron, AVP of Research at the Cloud Security Alliance. "Success in the agentic era will hinge on treating agent identity with the same rigor historically reserved for human users, enabling secure autonomy at enterprise scale."
The study found that organizations continue to rely on credentialing patterns that were never designed for autonomous systems: API keys, usernames and passwords, and shared service accounts remain common. Approaches built for machine identities, such as OIDC, OAuth PKCE, or SPIFFE/SVID workload identities, are less widely adopted.
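What that shift looks like in practice: instead of embedding a long-lived API key, an agent obtains a short-lived, attested credential at runtime. Below is a minimal sketch using the open-source go-spiffe v2 library; it assumes a SPIRE agent (or another SPIFFE Workload API implementation) is already serving the Workload API on the local socket and that the workload has been registered. The SPIFFE ID in the comment is purely illustrative.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/spiffe/go-spiffe/v2/workloadapi"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Fetch a short-lived X.509 SVID from the local SPIFFE Workload API.
	// No static secret is baked into the agent: the SPIRE agent attests
	// the workload, then issues and automatically rotates the certificate.
	svid, err := workloadapi.FetchX509SVID(ctx)
	if err != nil {
		log.Fatalf("unable to fetch SVID: %v", err)
	}

	// The SPIFFE ID names the workload itself, for example:
	// spiffe://example.org/agent/report-summarizer (illustrative)
	fmt.Println("workload identity:", svid.ID)
}
```

Because the credential is short-lived and rotated automatically, a leaked copy goes stale quickly, which is exactly the property that shared API keys and service-account passwords lack.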
Manish Jain, principal research director at Info-Tech Research Group, predicted that by 2028, there will be more AI agents globally than human employees. "It would be one of the biggest challenges for business and IT executives to govern them without curtailing the innovation that these AI agents bring," he said.
Jain emphasized that most enterprise AI agents run without oversight. "Many organizations don't even know how many agents they have, where they're running, or what they can touch," he said. "If you don't know how many mules are in the barn, don't act surprised when one kicks the door down."
A New Insider Threat
The experts agree on one thing: the problem isn't just "rogue AI"; it's invisible AI. Unaccounted-for agents often emerge through sanctioned low-code tools and informal experimentation, bypassing traditional IT scrutiny until something breaks.
"AI agents are no longer helpful bots," Jain warned. "They often operate with delegated yet broad credentials, persistent access, and undefined accountability. This can become a costly mistake—overprivileged agents are the new insider threat."
Shipley was even more direct about the path forward. "We need to define tiered access for AI agents," he said. "While we can't avoid giving a few people keys to our house to speed up things, if you trust every stranger with your house keys, we wouldn't be able to blame the locksmith when things go missing."
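In code terms, tiered access can be as simple as registering every agent with an explicit trust tier and denying any action above it. The sketch below is hypothetical and illustrative only: the tiers, actions, and agent name are invented for this example, not drawn from the Gravitee research.

```go
package main

import "fmt"

// Tier ranks an agent's trust level; higher tiers unlock more actions.
type Tier int

const (
	ReadOnly Tier = iota // may only read data
	Writer               // may also modify data
	Admin                // may also manage credentials
)

// minTier maps each action to the lowest tier allowed to perform it.
// Actions and tier assignments here are invented for illustration.
var minTier = map[string]Tier{
	"read_records":   ReadOnly,
	"update_records": Writer,
	"rotate_keys":    Admin,
}

// Agent is a registered AI agent with an assigned tier.
type Agent struct {
	Name string
	Tier Tier
}

// Allowed denies by default: actions nobody registered are refused.
func (a Agent) Allowed(action string) bool {
	required, known := minTier[action]
	return known && a.Tier >= required
}

func main() {
	bot := Agent{Name: "report-summarizer", Tier: ReadOnly}
	for _, action := range []string{"read_records", "update_records", "delete_everything"} {
		fmt.Printf("%s -> %s: allowed=%v\n", bot.Name, action, bot.Allowed(action))
	}
}
```

The deny-by-default check is the point: an unregistered action is refused rather than waved through, which keeps an overprivileged or confused agent from improvising its way past the locksmith.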
For enterprises racing to deploy AI agents for productivity gains, the message is clear: the technology is moving faster than the safeguards meant to contain it. And as Shipley noted, "by the time IT and security roll on an AI agent risk, the damage is done—the ship's sinking too fast and radio isn't going to help because help will be too late."
Sources:
- CSO Online - 1.5 million AI agents are at risk of going rogue
- Help Net Security - AI agents behave like users, but don't follow the same rules
- Business Insider - AI agents failed at real-world consulting tasks
- CrowdStrike - OpenClaw security risks
- News @ Northeastern - Privacy concerns
- Reco.ai - Exposed instances analysis