Decoder with Nilay Patel

How AI safety took a backseat to military money

September 25, 2025

Key Takeaways

  • AI companies are shifting focus from ethical AI development to lucrative military applications, driven by the profitability of defense contracts and the narrative of an AI arms race with China. 
  • The traditional, rigorous military procurement process, designed for safety and security, is being bypassed by AI companies that redefine ‘safety’ to accelerate the deployment of potentially inaccurate and compromised generative AI in high-risk scenarios. 
  • Tech companies are redefining AI safety, shifting its focus from preventing direct harm to abstract concepts like alignment and hypothetical existential risks. This redefinition undermines established safety standards and diverts attention from the tangible harms AI poses today. 

Segments

AI Companies’ Military Pivot
(00:02:40)
  • Key Takeaway: AI firms are actively selling and developing technology for military applications, reversing previous stances against such uses.
  • Summary: Companies like OpenAI and Anthropic have removed bans on military use and are actively partnering with defense contractors and securing government contracts. This shift is motivated by the financial opportunities within the military-industrial complex and the narrative of an AI arms race with China.
Risks of Commercial Models
(00:08:20)
  • Key Takeaway: Commercial foundation models used in military applications pose significant security risks due to their unvetted nature and compromised training data.
  • Summary: Commercial AI models are trained on publicly available data, making them susceptible to ‘sleeper agent’ backdoors and web-based data poisoning attacks. These models lack a traceable supply chain and can be compromised by adversaries, introducing vulnerabilities into defense infrastructure.
Military Procurement Standards
(00:11:13)
  • Key Takeaway: Generative AI models often fail to meet the stringent testing and evaluation standards required for traditional military procurement.
  • Summary: Military procurement demands extremely high accuracy and security thresholds, including air-gapped systems and traceable supply chains, which current generative AI models, with their inherent inaccuracies and reliance on public data, do not satisfy. This creates a conflict between the desire for government contracts and the reality of AI system limitations.
Redefining AI Safety
(00:31:27)
  • Key Takeaway: AI companies are redefining ‘safety’ from preventing harm to focusing on alignment and hypothetical existential risks, undermining established safety standards.
  • Summary: The traditional definition of safety in critical systems focuses on preventing human harm and environmental catastrophe. AI labs are reinterpreting it to mean alignment with human preferences and the mitigation of abstract, hypothetical risks such as CBRN (chemical, biological, radiological, and nuclear) weapons misuse, which allows less accurate AI to be deployed in high-risk areas.
Codex Safety Evaluation
(00:40:09)
  • Key Takeaway: Early AI safety efforts, like the evaluation of Codex, aimed to introduce risk assessment inspired by safety-critical fields but were not intended to replace existing rigorous standards.
  • Summary: The work on Codex introduced risk assessment for AI, a novel approach at the time, drawing on techniques from safety-critical domains. It was meant to complement, not replace, established safety evaluations for critical systems; the intent was to understand AI risks, not to create a universal safety solution that bypasses existing regulations.