As the U.S. military races to adapt to ever-larger amounts of increasingly advanced and autonomous AI, how do humans stay in control? The standard answer is that no machine can exercise lethal force without human approval — but this solution is as obvious as it is wrong.
By the time an AI asks its human overseer to approve or veto a specific strike, it’s already far too late in the flow and tempo of human-machine teaming. Having a human make only the final decision could allow algorithms to make consequential decisions well before that point, from positioning forces to prioritizing targets, in ways that unacceptably constrain human choices.
Yet requiring human approval for every intermediary step plainly sacrifices the speed and scope of capabilities that make AI so attractive in the first place. How, then, can we reconcile human control with machine speed?
The answer, we believe, requires embedding human preferences in the software itself. Instead of requiring an automated process to halt at some crucial point to request human input (slowing the AI while providing the human with only a narrow set of options), ideas such as a commander’s intent for an operation need to be systematically, deliberately and preemptively integrated within the algorithm.
Building this guidance in early ensures that every automated decision is bounded and guided by human choices, rather than human decisions being constrained and channeled by automated ones.
A Call For Clarity
It’s important to note at the outset that there is no widespread desire to create the sorts of scenarios popularized by fictional accounts such as The Terminator and its Skynet. Indeed, AI enthusiasts and skeptics alike agree that human beings need to be in control of unmanned weapons and military command systems. But there is much less consensus, or even clarity, on how to implement such control.
Despite significant investments, including an almost $55 billion funding request for the Defense Autonomous Warfare Group, and high-profile attention from Defense Secretary Pete Hegseth on down, the current landscape of U.S. military AI remains inchoate.
The extant menu of AI for military use includes automatic, semi-autonomous, autonomous and increasingly agentic autonomous systems, but there continues to be ambiguity in what these terms mean and how they are used. Human decision-makers can be “in the loop” and asked to approve or veto every significant action by the AI; “on the loop,” observing the AI but letting it make its own choices unless they choose or are called to intervene; or “near the loop,” whereby AI functions within a distinct operating “niche” of machine systems, in which humans can be variably engaged (e.g. to be “in” or “on” the loop as circumstances dictate).
But there is no clear standard for what counts as involvement significant enough to adequately and responsibly inform the human overseer, let alone to ask their permission before engaging in some action. This gap both introduces ambiguity into what constitutes sufficient human involvement and reinforces the urgency of precise doctrine and clearly defined standards as the U.S. advances its military AI capabilities.
This point is brought into stark relief by Department of Defense Directive 3000.09, which formally establishes the need for human involvement in any AI engagements entailing the use of lethal force, yet does not provide a doctrinal paradigm for how such involvement can and should be realized. We believe that a unified AI framework would address these challenges by establishing common principles and enforceable standards across the defense enterprise. Such a framework would ensure that AI systems are developed and deployed effectively and responsibly.
Simply put, the more autonomous these systems are, the greater the need for coherence. Whether in unmanned aerial systems, maritime platforms, or ground-based robotics, autonomy relies on robust data pipelines, validated algorithms, and clear rules of engagement. Above all, the extent and type of human involvement should be defined by doctrine. Ambiguity undermines trust in these systems. A unified framework provides the governance necessary to ensure safety, reliability, and responsibility in mission effectiveness.
Synthesized Command And Control
As autonomy scales and agentic architectures expand, traditional command and control models built on direct human interaction tend to lose the capacity to ensure coherence, accountability, and operational alignment. These models rest on the premise that authority is exercised through discrete human decisions at identifiable moments. That premise can fracture under conditions defined by speed, scale, and machine-driven adaptation.
To mitigate this failing, we propose a model of Synthesized Command and Control (SYNTHComm) that defines authority as a continuously engineered property of the system. SYNTHComm functions to shift control from episodic intervention to persistent governance embedded within system logic. Authority becomes encoded, distributed, and executed across the architecture, shaping behavior from within rather than being applied externally through individual decisions.
SYNTHComm operationalizes intent by translating command authority into terms the machine can understand: structured constraints, weighting functions, and context-responsive control regimes that persist across execution. These elements shape system behavior in real time, ensuring alignment with mission objectives and rules of engagement across changing conditions. Human influence remains present through design, configuration, and accountability structures that define how decisions unfold.
SYNTHComm employs a spectrum-based model of governance. Agentic behavior can be represented as a frequency spectrum, A(ƒ), where the distribution of frequencies reflects how a system balances stability and adaptability. Lower-frequency components — those which change less often and more slowly — capture enduring mission elements such as commander’s intent, policy constraints, and long-horizon objectives. Higher-frequency components — those which need to change rapidly and often — reflect responsive, time-sensitive adaptations to local conditions, uncertainty, and emerging opportunities.
SYNTHComm operates by introducing a weighting function, W(ƒ), which encodes structured authority, constraint, and contextual modulation across the agent’s behavioral spectrum. This function determines how strongly different classes of behavior are expressed as a function of mission priorities, acceptable risk, and operational conditions.
The application of W(ƒ) to the agent activity A(ƒ) produces the governed output:

S(ƒ) = W(ƒ) · A(ƒ)
Here, S(ƒ) represents the system’s realized behavior in practice (i.e. decisions and actions that have been shaped, filtered, and aligned through the imposed governance structure). As shown in Figure 1, low‑frequency components associated with stable objectives are preserved to maintain mission continuity, while higher‑frequency components corresponding to adaptive or reactive behavior are selectively attenuated or amplified in response to context.
This formulation enables control to emerge through spectral shaping of the decision space rather than through discrete commands. Thus, governance is applied continuously and proportionally, allowing adaptability to be constrained without suppressing system autonomy.
Figure 1. Spectrum model of control in SYNTHComm. Frequency (ƒ) represents variation in operational conditions, while amplitude denotes relative influence. A(ƒ) (autonomy), S(ƒ) (supervisory constraints), and W(ƒ) (contextual weighting) define a continuous control regime bounded by acceptable autonomy. As conditions become more dynamic, influence shifts from supervisory control to autonomous response, with W(ƒ) modulating alignment to mission objectives and rules of engagement. Control persists as an architectural property of the system rather than discrete human intervention.
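The spectral-shaping relationship S(ƒ) = W(ƒ) · A(ƒ) can be sketched numerically. The following is a minimal illustration of the idea, not an implementation of any fielded system: the frequency grid, the uniform behavioral spectrum, and the risk-tolerance threshold in the weighting function are all our own assumptions for the sake of the example.

```python
import numpy as np

# Illustrative sketch of SYNTHComm-style spectral shaping, S(f) = W(f) * A(f).
# Low frequencies stand in for enduring mission elements (commander's intent,
# policy constraints); high frequencies stand in for rapid local adaptation.
# All parameters here are hypothetical.

freqs = np.linspace(0.0, 1.0, 11)   # 0 = enduring intent, 1 = rapid adaptation

# A(f): the agent's unshaped behavioral spectrum (uniform influence here).
A = np.ones_like(freqs)

def weighting(f, risk_tolerance=0.5):
    """W(f): preserve low-frequency components at full strength; attenuate
    high-frequency, reactive behavior in proportion to how far it exceeds
    a commander-set risk tolerance (an assumed parameter)."""
    return np.where(
        f <= risk_tolerance,
        1.0,
        np.clip(1.0 - (f - risk_tolerance) / (1.0 - risk_tolerance), 0.0, 1.0),
    )

W = weighting(freqs)
S = W * A   # governed output S(f): shaped, filtered, aligned behavior

for f, s in zip(freqs, S):
    print(f"f={f:.1f}  S(f)={s:.2f}")
```

In this toy weighting, components at or below the risk tolerance pass through unchanged (mission continuity), while faster-varying components are linearly attenuated to zero, the "selective attenuation" the spectrum model describes. A real weighting function would, as the text notes, also amplify some adaptive components in response to context.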
To understand and extend this framework, our group is advancing the application of fuzzy logic to SYNTHComm. Fuzzy logic provides the mathematical structure needed to represent graded control states across a continuum. Authority operates as a distributed influence rather than a binary allocation between human and machine, with control dynamically adjusted based on context, confidence, and risk. This approach enables adaptive behavior while preserving alignment and constraint integrity under varying operational conditions.
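To make the notion of graded, non-binary authority concrete, here is a minimal fuzzy-logic sketch. It assumes a simple Mamdani-style scheme with two hypothetical rules over confidence and risk; the membership functions and rule set are our own illustrative choices, not the authors' implementation.

```python
# Illustrative fuzzy allocation of control between human and machine.
# Membership functions, rules, and variable names are assumptions
# made for this sketch only.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def autonomy_degree(confidence, risk):
    """Map system confidence and situational risk (both in [0, 1]) to a
    graded degree of machine autonomy in [0, 1], rather than a binary
    human-or-machine allocation."""
    # Rule 1: high confidence AND low risk -> argue for full autonomy (1.0)
    high_conf = tri(confidence, 0.5, 1.0, 1.5)
    low_risk = tri(risk, -0.5, 0.0, 0.5)
    r1 = min(high_conf, low_risk)
    # Rule 2: low confidence OR high risk -> argue for human control (0.0)
    low_conf = tri(confidence, -0.5, 0.0, 0.5)
    high_risk = tri(risk, 0.5, 1.0, 1.5)
    r2 = max(low_conf, high_risk)
    # Weighted-average defuzzification over the two rule outputs.
    if r1 + r2 == 0.0:
        return 0.5  # no rule fires decisively: split authority evenly
    return (r1 * 1.0 + r2 * 0.0) / (r1 + r2)

print(autonomy_degree(0.9, 0.1))  # high confidence, low risk: mostly autonomous
print(autonomy_degree(0.2, 0.9))  # low confidence, high risk: mostly human-controlled
```

The point of the sketch is the continuum: as confidence and risk vary, authority slides smoothly between human and machine rather than toggling, which is the "distributed influence" the framework describes.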
SYNTHComm also defines a measurable control surface within human–AI teaming architectures. As autonomy increases and direct human interaction decreases, the density and precision of SYNTHComm increase accordingly. Control persists through encoded governance structures that operate continuously across the system. Authority shifts from real-time decision-making to the design and calibration of decision environments.
This reframing carries direct doctrinal implications. Responsibility expands beyond individual decisions to include the architecture that produces them. Command authority encompasses the configuration of constraints, validation mechanisms, and transition conditions that govern system behavior across contexts. In this way, SYNTHComm weds technical capability with doctrinal authority, enabling scalable human–AI teaming through engineered governance that preserves interoperability, accountability, and alignment across complex operational environments.
Near-peer competitors are aggressively investing in AI to reshape the character of warfare and challenge U.S. advantages. While the U.S. retains deep talent, substantial resources, and a strong capacity for innovation, these strengths must be incorporated by design into the functional architectures of U.S. military AI systems.
The path forward is urgent and executable. Establishing and enforcing a unified defense AI framework will align strategic intent with operational output, and yoke both to responsible conduct of military operations and accountability.
Leadership in military AI will be determined by the ability to synchronize governance, acquisition, data, and operations into a single, driving architecture. AI then functions not as a mere tool but as a true operational partner, conferring advantage through human-machine teaming that directly shapes decision superiority, tempo, and mission success in twenty-first-century conflict.
Disclaimer: The views and opinions expressed in this essay are those of the authors and do not necessarily reflect those of the United States government, Department of War or the National Defense University.
Dr. Elise Annett is a Research Fellow in the Program for Disruptive Technology and Future Warfare of the Institute for National Strategic Studies at the National Defense University.
Dr. James Giordano is Head of the Center for Strategic Deterrence and Study of Weapons of Mass Destruction, and Program Lead for Disruptive Technology and Future Warfare of the Institute for National Strategic Studies at the National Defense University.
