DENVER ― The growing use of artificial intelligence to enhance monitoring of adversary activities poses huge interoperability challenges for NATO that require near-term agreements on policies and data standards, NATO’s top intelligence policy officer warned on Monday.
Among the biggest concerns for Maj. Gen. Paul Lynch, a British Royal Marine serving as NATO deputy assistant secretary general for intelligence, is the potential for allied commanders to be faced with conflicting national intelligence reports.
“We have decades of experience of common standards for air defense, maritime awareness, data formats. The question is whether we apply that same rigor to AI before the technology outpaces the frameworks, or after,” Lynch said at the US Geospatial Intelligence Foundation’s annual GEOINT Symposium here. “And the answer will be decided in the next three years.”
Lynch said that it has become clear that “AI-enabled exploitation for imagery analysis, change detection and multisource fusion is genuinely changing what is possible, reducing the time from collection to actionable product and enabling analysts to focus on tasks that require human judgment, rather than pattern recognition at scale.”
However, it raises a whole host of “governance” challenges, given that each of NATO’s 32 members is responsible both for developing its own policies, rules and regulations on AI usage, and for sharing how that is done and the products that are created.
Lynch outlined a hypothetical scenario where two different NATO members each have a “national AI model” trained on a national “imagery data set with that country’s labeling conventions and analytical priorities.” Each country then provides an intelligence report to a NATO commander, and the two reports contradict each other.
“Which one does the commander use, on what basis, with what confidence? And I think that’s the AI interoperability challenge for allied GEOINT [geospatial intelligence], and no single nation is able to solve that alone. It requires agreed standards for how models are trained and documented, how AI-enabled products are attributed, and what confidence thresholds are operationally usable in what context?” he said.
NATO already is struggling with how to incorporate the vast amounts of GEOINT data now available from commercial satellite constellations into military and intelligence community systems in a way that promotes member state interoperability, he said.
GEOINT is primarily about providing location and change detection data about human activities and natural phenomena such as wildfires, using satellite imagery, maps, and other types of data.
“The problem is that our frameworks for incorporating commercial intelligence into allied decision cycles were built for a different world. What the operational environment demands now is a framework in which commercial GEOINT data collected, processed and analyzed by industry can be fused with national imagery, open-source and partner-provided intelligence, and then delivered to a commander at the speed of operational need, across 32 national classification systems and a set of legal and contractual frameworks that were written before most of those capabilities existed,” Lynch explained.
“I’m sure that all sounds perfectly straightforward, and I’m using the English phrase of ‘perfectly straightforward,’” he added sardonically. “It means it’s not.”
At the moment, Lynch said, “commercial data enters NATO through intelligence systems, mostly through exceptions and workarounds, not designed pathways.”
NATO last June signed its first commercial space strategy, he said, and now is getting down to the “unglamorous work” needed to develop “data use policies, security classification guides, contract frameworks, [and] releasability rules.”
The advent of AI will complicate those efforts, Lynch warned, especially because while some member states, such as the US, are already integrating AI processing to produce GEOINT, others are only just contemplating foundational questions about its use.
“This governance challenge becomes significantly more complex when the data being shared is being processed by AI systems, because then we’re no longer simply asking who can share what,” he said. “We’re asking whose model produced it, on what training data, with what documented assumptions, with what confidence threshold is operationally usable in what context?”
“The path to AI enabled, allied intelligence advantage runs primarily through governance, not necessarily through additional capability,” Lynch said. “NATO needs data standards that are designed for the world we’re in now where commercial data, national data, partner data, increasingly processed by AI, all contribute to the same operational picture, common meta-data schemes, common AI model documentation, [and] common interfaces that don’t require bespoke integration every time a new partner or new source joins the enterprise.”
