Washington has opened a new phase in military artificial intelligence by signing classified deployment agreements with OpenAI, Google, Microsoft, Amazon Web Services, NVIDIA, Oracle, SpaceX and Reflection, giving some of the world’s most powerful technology companies a role inside the Pentagon’s most sensitive digital environments.
The agreements allow advanced AI systems and supporting infrastructure to be deployed across Impact Level 6 and Impact Level 7 networks, the high-security environments used for secret and top-secret defence data. The Pentagon says the objective is to speed up data synthesis, improve battlefield awareness and support faster decision-making by commanders, intelligence officers and military planners.
The move marks a significant expansion of Washington’s effort to turn commercial AI advances into military capability. Rather than depending on one vendor, the Pentagon is building a wider technology stack that combines cloud platforms, frontier models, chip infrastructure, space connectivity and specialised AI systems. Microsoft, AWS and Oracle bring large-scale cloud and enterprise computing capacity; NVIDIA supplies accelerated computing hardware and software; OpenAI, Google and Reflection add model capabilities; and SpaceX brings space-based communications and network resilience through its wider defence technology footprint.
The agreements are framed around “lawful operational use”, a phrase that has become central to the Pentagon’s AI procurement language. Defence officials have presented the programme as a way to keep humans in the loop while using AI to process vast quantities of intelligence, surveillance data, logistics inputs and operational signals. The most immediate military applications are expected to include summarising classified material, identifying patterns across data streams, supporting cyber defence, assisting mission planning and reducing administrative delays across defence agencies.
IL6 environments are designed for classified secret-level workloads, while IL7 is associated with the most sensitive national security systems. Deploying commercial AI tools in these environments raises the technical bar for security, auditability, access control and model governance. Systems must be isolated from public internet exposure, protected from data leakage and monitored for misuse, hallucinations, adversarial manipulation and unauthorised retention of classified information.
The Pentagon’s announcement also reflects a broader shift from experimental AI pilots to operational deployment. Its secure generative AI platform, GenAI.mil, has already been used by more than 1.3 million personnel, generating tens of millions of prompts and a large number of AI agents over five months. That platform operates in lower-security environments for tasks such as drafting, research and data analysis. The new agreements move the effort closer to classified intelligence and military operations.
The company list also signals the Pentagon’s determination to avoid vendor lock-in. Defence leaders have grown wary of relying too heavily on a single model provider or cloud platform, particularly as AI becomes embedded in command, intelligence and logistics systems. A multi-company approach gives military users access to different models, infrastructure layers and deployment pathways, while reducing the risk that one supplier’s policy dispute, outage or technical limitation could affect operational readiness.
Anthropic’s absence is notable. The Claude developer had been involved in defence-related AI work but became embroiled in a dispute over usage restrictions, including concerns over surveillance and weapons-related applications. The disagreement highlighted a widening divide between AI companies that are willing to accept broad government-use clauses and those seeking tighter limits on how their models can be used in military and intelligence settings.
Google’s participation is particularly sensitive because of its history with defence AI contracts. Internal opposition at the company over Project Maven in 2018 led Google to step back from some military AI work and adopt AI principles that restricted weapons-related applications. Its return to classified defence deployment shows how far the technology and security landscape has shifted as competition with China, cyber threats and autonomous systems reshape national security priorities.
OpenAI has also moved closer to government and defence customers after earlier prohibitions on military use were revised. Microsoft, one of OpenAI’s key infrastructure partners, already has extensive defence cloud contracts, while AWS has long been a major provider of secure cloud services for US intelligence and military agencies. Oracle’s addition strengthens the Pentagon’s cloud diversification strategy.
NVIDIA’s role underscores the hardware dimension of military AI. Advanced graphics processing units and AI accelerators are now strategic assets, enabling large models to run at speed across secure environments. Export controls, supply chain concentration and growing demand from defence, cloud and private-sector customers have turned AI chips into a core element of national security planning.
Reflection’s inclusion gives the agreements a start-up dimension. The company is less established than the technology giants on the list, but its presence suggests the Pentagon wants access to emerging model developers rather than relying only on dominant platforms. Such diversification may help defence users test specialised models for coding, cyber operations, intelligence analysis and agentic workflows.