Demis Hassabis of the United Kingdom, 2024 Nobel Prize Laureate. Photograph John Sears, Wikimedia Commons
The CaMeL Framework is Defense Research with Offensive Implications
by Richard Steinhardt
The bombing of an Iranian primary school, which resulted in the deaths of 175 innocent civilians—mainly young girls, along with some of their parents and teachers—has focused international attention on the expanding role of artificial intelligence in modern warfare. Scott Ritter, a former missile and intelligence expert, has articulated profound concerns regarding this incident, suggesting that the United States military has increasingly shifted its reliance from human decision-making to AI-generated targeting recommendations. This strategy raises serious questions. The complicity of the U.S. government and military in authorising strikes on civilian Iranian locations (whether or not those strikes were initially suggested by AI systems) cannot be hidden by the complexity of the technology; US imperialism cannot abdicate moral and legal responsibility for the colateral murders it commits.
There a discernible strategy which now seems increasingly clear; that the United States and Israel may be pursuing a strategy designed to break the will of the Iranian people by attacking civilian targets. Perhaps the double tap of Tomahawk missiles on the school in Minab was not a mistake after all, but part of a cold blooded AI stratagem and a consequence Pete Hegseth’s announcement of the open abandonment of the rules of war. The Chinese now refer to Google as a war machine. Google’s Gemini is portrayed not as administrative software but as the nervous system of American military expansion
The close and dangerous relationship between the toxic psychopathic nerds leading firms such as DeepMind, Anthropic, Palantir and xAI and the heavy A.I. focus of Pentagon strategic planners is now resulting in obscene military experimentation. By running vast numbers of simulated scenarios, wealthy US based sociopaths of the Epstein class are providing the tools which the US military hopes to use to computationally “game” its way out of the horrible bind it faces in Iran and Ukraine. The use of AI models that can simulate millions of potential conflict trajectories suggests that the decision to escalate, and indeed the broader strategy for the war itself, may be increasingly derived from AI-generated strategic options.
The ambition behind this approach appears to extend beyond the immediate objectives of defeating Iran for its oil resources or securing Israel’s regional position. A more expansive geo-strategic logic seems to be in play.
By destabilising Iran, the United States may also be seeking to indirectly undermine the security of the Russian Federation, disrupt critical energy supply lines to China, challenge the economic resilience of Europe and increase its dependency on the USA, and drive up global oil prices in the medium term . In such a scenario, the U.S., as a major oil producer with potential access to Venezuelan supplies, would be uniquely positioned to weather and even benefit from global calamity and the resulting market volatility. It is a game of multidimensional strategy—five-dimensional chess—that AI systems (and especially DeepMind) are uniquely equipped to play. Representatives of the mainstream media on a leash all mouths the words in a chorus: Short term pain for long term gain.
Yet, for all the computational power at its disposal, U.S. military and corporate AI does not operate in a vacuum. This AI game is global, and other players are are moving not just stratagems and a few thousand marines to Kharg island, but vast firepower, hypersonic missiles and military industrial complexes that produce for purpose not profit.
In the world of military contingency planning, there is an saying: A plan is just a basis for change. But what happens when the basis for change can be generated in six weeks instead of six months? What happens when what is writing the plan is not only a human general military staff, but an artificial intelligence system from one of the world’s largest technology companies?
While public announcements emphasise ‘administrative efficiency and routine task automation’ between Google DeepMind and the U.S. Department of Defense, a closer examination of the technology being deployed, the strategic context of the ongoing Iran war, and the trajectory of DeepMind’s research suggests a strongly plausible thesis: Demis Hassabis’s team and DeepMind are almost certainly gaming out Iran war scenarios for the Pentagon, providing computational war-gaming and operational planning capabilities that reshape American military options. Let them deny it!
The Pentagon is not going to feed information about troop movements, logistics vulnerabilities, or targeting options into a system that can be tricked by prompt injection. CaMeL represents the necessary precondition for AI to be trusted with actual war planning .
DeepMind isn’t just building a better chatbot—it is building the secure foundation upon which military AI applications must rest.
In April 2025, Google DeepMind researchers introduced CaMeL (Context-Mixed Language Model), a framework designed to protect large language models from prompt injection attacks, data manipulation, and adversarial techniques . At first glance, this appears to be purely defensive, securing AI systems against malicious inputs. But the architecture tells a much more interesting story.
CaMeL operates by creating strict boundaries between user requests, untrusted content, and the actions an AI assistant can take. It uses two separate language models: a “privileged” planner that processes only direct user commands, and a “quarantined” reader that interprets unstructured data in isolation without the ability to invoke functions . This separation, as Simon Willison, the developer who coined the term “prompt injection,” noted, represents the first credible mitigation that borrows lessons from traditional security engineering rather than relying on more AI to catch malicious instructions .
Why would DeepMind invest in such robust security architecture? The answer becomes clear when considering the sensitivity of the data such systems would handle in a military planning context. The Pentagon is not going to feed information about troop movements, logistics vulnerabilities, or targeting options into a system that can be tricked by prompt injection. CaMeL represents the necessary precondition for AI to be trusted with actual war planning . DeepMind isn’t just building a better chatbot, it is building the secure foundation upon which military AI applications must rest.
The most direct evidence of DeepMind’s involvement in military scenario planning comes from Fort Bragg. Kenneth Harvey, the director for the Mission Training Complex at Fort Bragg and the Army’s 18th Airborne Corps, revealed that using Google’s AI platform, his nine-person staff completed in six weeks a complex military exercise for U.S. Southern Command that previously would have taken six months .
This is not theoretical. This is not administrative support. This is the U.S. military using Google’s AI to design large-scale military simulations involving as many as 50,000 simulated soldiers. The scenario involved defending a Baltic country, a direct NATO-Russia contingency .
The implications are astounding. If AI can compress planning timelines by a factor of four for unclassified exercises, what is it doing for classified contingency planning related to active conflicts? The Iran war, in which the U.S. has already used AI to help identify targets and accelerate bombing campaigns, would be the obvious candidate .
Google is now rolling out its Gemini AI agents to over 3 million civilian and military personnel across the Department of Defense through a dedicated portal called GenAI . mil. Since December 2025, 1.2 million Defense Department employees have used Google’s AI chatbot for unclassified work, running 40 million unique prompts and uploading more than four million documents .
This is unprecedented. According to Emil Michael, the Under Secretary of Defense for Research and Engineering, the initiative starts with unclassified networks because that’s where most of the users are, but talks with Google over using the agents on classified and top-secret systems are already underway . Michael expressed high confidence they’re going to be a great partner on all networks.
Pentagon officials have been explicit that some AI agents on the unclassified network could have operational impact, helping with planning and resourcing estimates for military tasks and operations . This means the AI is already being used to generate and evaluate military options, even if final strategic decisions nominally remain with human commanders.
In January 2026, discombobulating Defense Secretary Pete Hegseth released strategies explicitly calling for the Pentagon to become an AI-First warfighting force. The directive aims to pour AI models and tools into Pentagon processes, planning, and operations. And, presumably, to pour human expertise out
The language he used is striking: Becoming an ‘AI-First’ warfighting force requires more than integrating AI into existing workflows. It requires reimagining how existing workflows, processes, [tactics, techniques, and procedures], and operational concepts would be designed if current AI technology existed when they were created—and then reinventing them accordingly .
This is exactly what we see happening with DeepMind. The technology is not being added as an afterthought, it is completely reshaping how the Pentagon approaches planning and decision making. A Pentagon CTO post on X said: An ‘AI-first’ Department of War empowers the warfighter to address bigger problems, rather than being burdened by tedious tasks . I.e. the tedious task of thinking.
The Iran war provides the operational need and sandpit. According to reporting on the Pentagon’s AI expansion, In the Iran war, the US has used AI to help identify targets and speed processes, allowing for the unprecedented intensity of the bombing campaign. Schools, hospitals, apartment blocks, you name it.
Hassabis and DeepMind are the missing pieces. The U.S. is almost certainly using it in combat operations involving Iran. The Pentagon has struck deals with OpenAI and xAI to operate on restricted networks alongside Google . The technological infrastructure for AI-assisted warfare is being built in real-time, with DeepMind’s security research (CaMeL) providing the foundation for systems that can handle classified data and operational planning.
The Air Force’s DASH 2 exercises in 2025 demonstrated that human-machine teaming is no longer theoretical, emphasizing the fusion of operator judgment with AI speed for command and control . The Marine Corps launched Project Dynamis specifically to address command and control in contested environments, calling the ability to aggregate, orchestrate, analyse and share fused data at machine speeds a warfighting imperative . We should remember Hegseth’s famous last words as the US marines are slaughtered on Karg island.
Google has thrown aside it’s veneer of doing good. Google has since altered its AI principles regarding military and surveillance uses . Emil Michael now describes Google as a “trusted” and “supportive” partner . The company that once retreated from military work is now embedding its AI across the entire Defense Department workforce.
Why DeepMind Specifically?
DeepMind is not just providing any AI, it is providing the AI most suited for complex strategic reasoning. The company’s breakthrough with AlphaGo, AlphaFold, and other complex reasoning systems demonstrates a unique capability for navigating enormous problem spaces and identifying optimal solutions.
War-gaming is like a Go game played with divisions and diplomacy instead of stones and territory. The number of possible moves, the importance of long-term strategy, the need to anticipate an adversary’s responses; these are exactly the problems DeepMind’s reinforcement learning approaches are good at solving.
The CaMeL framework, with its quarantine architecture and data flow tracking, provides the security necessary for these systems to handle real operational data . As the NeuralTrust analysis notes, CaMeL offers provable security rather than probabilistic defenses, a guarantee that specific classes of prompt injection attacks simply cannot succeed . For Pentagon planners, this is the difference between their ‘fun’ video game experiments and a system they trust with actual war plans.
If DeepMind is gaming out the Iran war for the Pentagon—and the evidence strongly suggests it is, the implications are upsetting.
First, the U.S. military can explore vastly more scenarios than human planners could manage. Instead of a handful of course-of-action options, commanders could have hundreds of variations, each optimized for different assumptions about Iranian responses, regional dynamics, and escalation risks.
Second, the AI can identifies effects that human planners might miss. In a complex regional conflict involving Iran, Hezbollah, the Yemenis, Israel, the Gulf states, and global powers like China, India and Russia, the cascade of consequences from any single action is nearly impossible for humans to fully anticipate. This is precisely the kind of problem where DeepMind’s approaches will be used. And targetting primary schools is part of that ruthless strategy, too, apparently.
The speed of planning transforms the nature of command in war. When a six-month planning process compresses to six weeks, the operational speed of warfare and enemies cannot keep up if the U.S. can regenerate options faster than they can adapt. Though looking at the current FUBAR in Iran it is hard to believe any intelligence is behind it.
Nevertheless, DeepMind is almost certainly gaming out the Iran war for the Pentagon. Not in some distant future, but right now. The six-week exercise planning at Fort Bragg is not an anomaly, it is the pilot program for a new way of war. The CaMeL framework is not abstract research, it makes war using AI possible.
The generals may still make the final decisions, who knows. But the options they choose from, the scenarios they’ve considered, the logistics they’ve optimised, we surmise, come from DeepMind. The general staff has silicon in its brain.
The question is not whether this is happening. The question is whether the American public, and the world, fully understand what it means for the US military to game out the Iran war ruthlessly using experimental AI.
Richard Steinhardt is a committed socialist and a radical humanist and has published in the Morning Star and a variety of other communist and socialist publications. He believes that human conscience and understanding should always precede dogma and deterministic formulas.
Discover more from Ars Notoria
Subscribe to get the latest posts sent to your email.


Comment
Comments are closed.