WASHINGTON — A recent cyber wargame with senior tech industry executives has the US Army considering more autonomy for AI “agents,” especially in wartime, including development of a “risk continuum” policy for when it might have to let agentic AI watchdogs off the leash.
“Should the degree of human involvement vary based on the situation we’re in?” Brandon Pugh, principal cyber advisor to Army Secretary Daniel Driscoll, told reporters Wednesday. “If we’re facing a slew of cybersecurity attacks against us in a time of conflict, perhaps there’s a different risk appetite than in normal peacetime.”
Officially called “AI Table Top Exercise 2.0” (AI TTX), Monday’s wargame deliberately presented its participants with a dire scenario: It’s 2027, and a crisis in the Indo-Pacific has escalated into a cyber war against US military networks. Participants included executives from 14 tech companies as well as the inter-service US Cyber Command.
“The premise was that an adversary was leveraging AI [to] launch salvo after salvo of attacks that continuously adapted to the Army’s defensive posture and did so arguably faster than a human defender could keep up with,” Pugh said.
New AI tools, operating largely autonomously with minimal human oversight, can find cyber vulnerabilities and attack them faster than human defenders can plug the holes, said Lt. Gen. Christopher Eubank, head of Army Cyber Command.
In “this brave new world of agentic AI … to tell somebody to ‘patch faster’ is just unrealistic,” he told reporters.
“So how do you use AI to get to a place where we’re not limited by human speed anymore?” Eubank asked. “Where does AI have autonomy to do things in the cyberspace defense environment?”
AI is already helping detect intrusions on Defense Department networks, Pugh explained, but what’s needed now is agentic AI that can not just warn human operators of an attack in progress but autonomously take action to stop the breach.
“We’re fantastic at leveraging AI for detection,” he said. “How do we now continue to drive that forward with, you know, agentic capabilities to not only detect, but to do a response action?”
The wargame didn’t give definitive answers to these complex questions. Designed and orchestrated for the Army by the independent Special Competitive Studies Project, the exercise didn’t attempt a detailed blow-by-blow simulation. Instead, it followed a more freeform seminar format in which each of the 14 executives offered suggestions on how to handle the hypothetical conflict, followed by questions from military officers and officials.
Those private sector perspectives gave the military a lot to think about. Many of the industry executives have “struggled with the same things we struggle with,” Eubank said.
The exercise also suggested some concrete steps the service could take in short order.
For instance, the Army plans to rapidly acquire some of the AI tools suggested by the exercise and field them to two cyber defense units for testing, using rapid-procurement funds set aside in advance. These capabilities will be picked from what industry has already built rather than developed laboriously to military specifications, Pugh emphasized.
“We don’t have the luxury of … long acquisition pipelines,” he told reporters. “We don’t need to start from scratch.”
Arguably more significant than any specific software, however, are potential changes to policy, procedures, and organization, Eubank said.
“This is much more than technology. This is about the human workforce. It’s about organizational structure,” the general told reporters. “I wrote down 19 things … and none of them are a product.”
In particular, he said, “I think the biggest thing ARCYBER walked away with was we’ve got to determine risk acceptance in the artificial intelligence environment with the use of agents.” In other words, there may be circumstances where the cyber threat is so dire and fast-moving that the military needs to let AI agents make high-stakes decisions and take autonomous action in domains that today are the sole purview of human beings.
Eubank and his staff are now figuring out this new cyber warfighting doctrine. “I walked away telling my team, okay, we probably need to come up with what that risk acceptance continuum looks like,” he said. “That was probably one of the biggest things I learned.”
