Intelligence Squared

Is AI About to Automate War? With Anthony King

September 30, 2025

Key Takeaways

  • The current debate around AI in warfare often rests on a teleological belief that superintelligence is imminent, which Professor Anthony King refutes by focusing on AI's actual, more limited applications over the last decade. 
  • AI's most potent and current military applications are in data processing for intelligence, situational awareness, logistics, targeting (like the IDF's Lavender system), and cyber/information operations, rather than automating overall strategy. 
  • The most significant organizational impact of AI is a deep, novel fusion between the military and the tech sector: private companies now shape the data architectures and operational understanding of armed forces, a dependency that is consolidating in the West rather than being reversed. 

Segments

Defining AI and Setting Expectations
(00:00:00)
  • Key Takeaway: Artificial intelligence is functionally defined as diverse computer programming capable of processing vast data to yield useful, yet not entirely predicted, results beyond initial human input.
  • Summary: The episode opens with sponsor messages before introducing Professor Anthony King, who aims to debunk science-fiction narratives about AI’s military potential. King defines AI as a system that processes large data sets to produce results that exceed the initial human programming. This sets the stage for a grounded exploration of AI’s actual military applications over the last decade.
AI Hype vs. Military Reality
(00:05:52)
  • Key Takeaway: The prevailing conjecture that AI will lead to superintelligence and displace human commanders in warfare is a teleological belief that requires recalibration against current evidence.
  • Summary: The discussion addresses the common narrative that AI is rapidly approaching general intelligence and will soon automate strategy and command decisions. King argues for a skeptical recalibration, asserting that while AI is already significant in warfare, it is far from automating strategy itself. Strategy and war decisions require defining purpose, which current inductive AI systems cannot provide.
Obstacles to AI in Strategy
(00:11:46)
  • Key Takeaway: Second-generation AI, including generative models, is constitutionally brittle because its inductive, probabilistic nature struggles with decisions that require defining a purpose, exercising moral judgment, or working with qualitative data that has no clear digital representation.
  • Summary: Generative AI operates on statistical correlations derived from massive data, achieving high accuracy in predictable domains like medical research. However, strategy and war decisions are not purely statistical; they require defining the purpose of the operation, which AI cannot determine. Furthermore, warfare involves moral judgments and decisions where data is often qualitative rather than binary digital information.
Actual AI Functions in Warfare
(00:15:36)
  • Key Takeaway: Current potent AI applications in the armed forces are distilled into three functions: planning/logistics, targeting, and cyber/information operations.
  • Summary: AI is currently applied effectively in automating data processing for intelligence and situational awareness. This manifests in aiding logistics and planning, most controversially in targeting, and in cyber and information operations generating propaganda. These sub-functions are where AI is making a difference, and their effectiveness is expected to grow as models improve.
Case Studies: Planning and Targeting
(00:18:21)
  • Key Takeaway: The British Army’s route-planning software demonstrated AI’s utility in bounded problems, while the IDF’s Lavender system exemplifies AI’s role in generating targets based on massive data aggregation, not in defining the overall strategy.
  • Summary: The conflicts in Gaza and Ukraine mark a transformation where AI processing of live data became indispensable. The British Army successfully automated route planning, showing that identifying a clear, bounded problem with sufficient data is key to successful application. In Gaza, the IDF’s Lavender system generated 37,000 Hamas targets, but the decision to prosecute those targets based on an acceptable level of collateral damage was a strategic human decision, not an AI one.
Organizational Transformation and Tech Dependency
(00:26:41)
  • Key Takeaway: The implementation of complex systems like JADC2 (Joint All-Domain Command and Control) necessitates a deep, integrated partnership between the military and the tech sector, fundamentally altering 20th-century military-industrial relations.
  • Summary: The organizational requirements for utilizing AI, such as processing massive live data, force militaries to rely heavily on the tech sector for expertise, computing power, and software development. This creates a novel, deep partnership where tech companies are integrated into operational environments, as seen with Palantir in Ukraine or Elon Musk’s influence via Starlink.
Tech Monopoly and Political Alignment
(00:33:39)
  • Key Takeaway: Silicon Valley is consolidating its monopoly over AI capital and talent, aligning itself with US national security interests against China, rather than rebalancing talent back into public institutions.
  • Summary: The concentration of economic, human, and computing capital in a few tech primes is unlikely to be reversed, as evidenced by the massive R&D spending gap between the private sector and the Pentagon. The 2018 Google Project Maven protest marked the end of pure libertarianism, leading to a deepening alliance between Silicon Valley and the DOD to counter geopolitical rivals like China.
Data Architecture and Automation Bias
(00:48:14)
  • Key Takeaway: Tech companies that build data architectures exercise a kind of 'optical power', pre-structuring decision-making in ways that risk flattening complexity and inducing 'automation bias', in which decision-makers defer to plausible AI outputs during a crisis.
  • Summary: The influence of companies like Palantir is profound because they structure the data maps used by militaries, pre-influencing understanding. While this influence is not a complete colonization of command, there is a risk of automation bias, where human experts implicitly agree to defer to AI results during crises. This curated data functions like an indispensable map that structures thought but does not determine the final strategic outcome.
Technology, Culture, and Adversarial Change
(00:58:19)
  • Key Takeaway: While technology alters organizational structures and practices, military culture remains fundamentally driven by the adversarial environment, meaning new capabilities lead to more complex operations rather than simple automation.
  • Summary: Technology is solidified human labor that allows organizations to do new things, thereby changing their internal structures and expectations. However, because warfare is adversarial, militaries immediately seek to use new capabilities to conduct more complicated operations rather than repeating past actions with better tools. This continuous competition ensures that the fundamental nature of command decisions—which are interpretive and imaginative—remains human-driven.