WASHINGTON — Military personnel and Defense Department civilians have used a version of Google Gemini’s Agent Designer to create over 100,000 semi-autonomous AI agents in less than five weeks since the tool became available, a Pentagon official told Breaking Defense.
“We’ve seen remarkable adoption since its launch, with over 103,000 agents built and a total of more than 1.1 million agent sessions recorded” as of mid-April on GenAI.mil, the official said. “We are currently averaging about 180,000 sessions each week.”
A “session” is a single use of one agent by one user. A popular agent may account for thousands of sessions with thousands of different users each week, while a niche tool may be used only once by a single person.
Agentic AI is an evolution of generative AIs like Gemini or ChatGPT. Instead of just answering a user’s questions, the way a chatbot does, agents can take a human user’s instructions and act on them, for example by replying to emails, updating software, or compiling source materials and drafting a report on them.
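As a purely illustrative sketch of that plan-and-act cycle (not any Pentagon or Google system — the tools and the stub “planner” below are hypothetical), an agent harness repeatedly asks a model which tool to invoke, executes it, and feeds the result back in:

```python
# Minimal sketch of an agentic loop: a planner picks a tool, the harness
# executes it, and the result informs the next step. All names hypothetical.

def draft_report(sources):
    """Stub tool: compile source materials into a draft report."""
    return "DRAFT: " + "; ".join(sources)

def send_email(to, body):
    """Stub tool: stand-in for replying to an email."""
    return f"emailed {to}: {body[:20]}"

TOOLS = {"draft_report": draft_report, "send_email": send_email}

def planner(instruction, history):
    """Stand-in for an LLM: returns (tool, args), or None when done."""
    if not history:
        return "draft_report", {"sources": ["memo A", "memo B"]}
    return None  # one step suffices for this sketch

def run_agent(instruction):
    history = []
    while (step := planner(instruction, history)) is not None:
        tool, args = step
        history.append(TOOLS[tool](**args))  # act on the model's choice
    return history

print(run_agent("Compile the memos into a report"))
```

In a real deployment the planner is a large language model and the tool results (email replies sent, files changed) are actions in the world — which is exactly why the human-review step described below matters.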
For the Pentagon, the AI agents have formal Authorization to Operate (ATO) at Impact Level 5, meaning they are approved to handle controlled unclassified information, the most sensitive category of data short of classified.
The official, who spoke on the condition of anonymity, said some of the most popular agents on the Pentagon system automate standard staff work, like drafting an After Action Report on lessons learned or a formal “staff estimate” of what’s required to execute an operation. (The emphasis is on “draft,” not “write,” since a human user is supposed to review the AI’s output before submitting it.)
Other available agents analyze imagery and generate a written report describing it, according to the official Pentagon announcement on X.com, while yet others analyze financial data or official strategy documents.
But users aren’t limited to a set menu of pre-built agents. Instead, as the name implies, Agent Designer and tools like it allow anyone to create their own agents and employ them on the network. The user doesn’t even need to know how to write software or train a neural net: These are “low-code/no-code” chatbots that guide the user through the process of figuring out what they want to accomplish, in natural language, and then autonomously code the agent to their specifications — a process often disparaged as “vibe-coding.”
Officials have been enthusiastic about this explosion of agents, seeing it as the next logical step in Defense Secretary Pete Hegseth’s push to empower personnel with generative AI.
“It’s a very exciting time,” said Robert Malpass, the Pentagon’s Deputy Chief Digital & AI Officer (CDAO) for Intelligence, at the recent INSA Spring Symposium. “[Now] anybody across the Department can start to build out and work with advanced AI in their own context, [customizing] the specific way that they need that information processed, displayed, and built out into an operational workflow.”
AI skeptics, however, point to cases where sloppily implemented agents can run amok. In one case first reported by the Financial Times, an Amazon Web Services agent called Kiro purportedly decided the best way to upgrade a particular software service was to delete the whole thing and start over — and was able to do so without asking for human permission, resulting in a 13-hour outage. In another case, a programmer administering a public Python resource denied an agent’s request to change the code. In response, apparently without any human telling it to do so, the agent composed and posted essays denouncing the human for “prejudice” against AIs. In yet another episode, reported by the Wall Street Journal, a vending machine run by an AI agent purchased a PlayStation 5 for “marketing purposes.”
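The common thread in these incidents is an agent taking an irreversible action without human sign-off. One widely used guardrail against that failure mode is an approval gate that blocks destructive tool calls until a person confirms them; a minimal sketch, with entirely hypothetical tool names:

```python
# Sketch of a human-in-the-loop approval gate: destructive tool calls are
# refused unless a reviewer has confirmed them. Illustrative only; not a
# description of any real Pentagon or vendor safeguard.

DESTRUCTIVE = {"delete_service", "drop_database"}

def execute(tool, approved=False):
    """Run a tool call, refusing destructive ones without approval."""
    if tool in DESTRUCTIVE and not approved:
        return f"BLOCKED: '{tool}' requires human approval"
    return f"ran {tool}"

print(execute("update_docs"))                     # safe action runs freely
print(execute("delete_service"))                  # blocked pending review
print(execute("delete_service", approved=True))   # runs once confirmed
```

The design choice is the same one the Pentagon official describes below as “clear operational boundaries”: the agent can propose anything, but the harness, not the model, decides what actually executes.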
Pentagon officials say they have plenty of safeguards in place.
“I will give a big shout out to our test and evaluation team that has been working tirelessly on how to evaluate the safety, the trust, the reliability of workflows that are incorporating AI,” Malpass said at the INSA event.
The department official who spoke to Breaking Defense went further, saying the IL-5 authorization demonstrates “that it meets rigorous security controls for handling DoW information.
“This authorization is maintained through a framework that defines clear operational boundaries. By extending our proven security and governance models to the AI domain, the DoW ensures that AI agents are deployed in a manner consistent with our long-standing commitment to information security and mission assurance,” the official said.
The alternative to moving fast and taking risks isn’t safety, but a very real danger of being surpassed by adversaries, argued Andrew Mapes, the Pentagon’s acting principal deputy CDAO. It’s a “race,” he told the INSA symposium, speaking alongside Malpass.
“The cycles are just getting shorter and shorter and shorter … as things go faster, as AI itself allows the speed of technology to increase,” Mapes said. “It’s incumbent on us … to make sure that it doesn’t take five to 10 years to bring something new into the military. We just don’t have the luxury of taking such a deliberate approach.”
Added Malpass, “I’m on team ‘Go Fast.’”
