Odd Lots

Meet the Politician the AI Industry Is Trying to Stop

December 18, 2025

Key Takeaways

  • New York State Assemblymember Alex Bores is being targeted by a $100 million AI-industry Super PAC, Leading the Future, primarily due to his sponsorship of the RAISE Act, which proposes safety standards for advanced AI research. 
  • The RAISE Act aims to mandate public safety plans, disclosure of critical safety incidents, and prohibitions on releasing models that fail internal safety tests for frontier labs that spend over $100 million on final training runs. 
  • Alex Bores, who has a tech background including time as a data scientist at Palantir, advocates for proactive, data-driven regulation to ensure the American public has a voice in AI development, contrasting with the industry's push for minimal oversight. 
  • Bores emphasizes that effective governance requires tracking the real-world impact of legislation, citing his telemarketing-fines bill as a success and an e-bike registration bill as one that required further iteration. 

Segments

AI Politics and Industry Pushback
(00:02:48)
  • Key Takeaway: The AI industry is actively mobilizing political opposition, exemplified by a new Super PAC targeting Alex Bores for his state-level AI regulation efforts.
  • Summary: AI is predicted to be a major political issue due to its impact on labor, electricity, and inequality. The AI industry is funding a $100 million Super PAC, Leading the Future, to oppose politicians like Alex Bores who push for state-level AI regulation, viewing it as an existential threat. Proponents argue regulation must balance safety with innovation, especially concerning geopolitical competition with China.
The RAISE Act Details
(00:08:41)
  • Key Takeaway: The RAISE Act mandates public safety plans and incident disclosure for frontier AI labs, triggered by a $100 million compute-spending threshold or by training models with 10^26 FLOPs.
  • Summary: The RAISE Act targets major frontier labs like Meta, Google, and OpenAI, requiring them to disclose safety plans and critical incidents like model weight theft. A secondary trigger for regulation covers models trained via knowledge distillation, a technique China uses to circumvent compute export controls. Fines of up to $10 million are proposed for initial violations, though Bores believes these may be too low.
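The coverage test described above reduces to a simple either/or predicate. The sketch below is illustrative only, not statutory text: the function name and structure are invented, the thresholds are the ones discussed in the episode, and the separate knowledge-distillation trigger is omitted.

```python
# Illustrative sketch of the RAISE Act's coverage triggers as discussed
# in the episode. Names are hypothetical; this is not legal language.

COMPUTE_SPEND_THRESHOLD_USD = 100_000_000  # spend on the final training run
FLOPS_THRESHOLD = 1e26                     # total training compute

def is_covered_frontier_model(training_spend_usd: float,
                              training_flops: float) -> bool:
    """A model is covered if it crosses either trigger."""
    return (training_spend_usd > COMPUTE_SPEND_THRESHOLD_USD
            or training_flops >= FLOPS_THRESHOLD)

# A lab spending $120M on a final run is covered regardless of FLOPs.
print(is_covered_frontier_model(120_000_000, 5e25))  # True
```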
Regulation and Global Competition
(00:15:21)
  • Key Takeaway: Regulation is intended to lock in existing voluntary safety commitments made by US labs, and enforcement mechanisms like App Store injunctions can apply to foreign open-source entities like DeepSeek operating in the US market.
  • Summary: Bores argues that the RAISE Act codifies existing voluntary commitments, preventing labs from cutting safety corners during competitive rushes for funding or reporting cycles. While open-source models from China could theoretically ignore NY legislation, they risk injunctions preventing their availability on US app stores if they seek profit there. The bill passed the Assembly with bipartisan support, indicating a middle ground approach to balancing safety and innovation.
Trump’s AI Executive Order Conflict
(00:17:00)
  • Key Takeaway: Trump’s proposed national AI rule threatens to preempt state-level regulations, such as New York’s requirement for chatbots to disclose their AI nature every three hours.
  • Summary: Trump’s executive order aims to block state regulation, putting New York in the crosshairs given its existing efforts, including rules for chatbot disclosure and self-harm referral. Bores suggests that major donors to Trump’s campaign are also key figures funding the Super PAC targeting him, potentially explaining the administration’s anti-regulation stance on AI. This contrasts sharply with Trump’s generally protectionist and nationalistic trade agenda.
Broader AI Concerns Beyond Safety
(00:22:41)
  • Key Takeaway: Beyond existential safety risks, AI impacts education, the workforce, and the electrical grid, requiring government intervention to ensure benefits like personalized tutoring are realized responsibly.
  • Summary: AI’s impact extends to education, where personalized tutoring could be positive if pedagogy is updated, and the environment, where grid upgrades needed for data centers should be funded by private capital, not ratepayers. Bores highlights the dual potential of AI, comparing it to nuclear energy, where capabilities for curing diseases could also be used for bioweapons, necessitating thoughtful policy.
Combating Deepfakes with Provenance
(00:27:47)
  • Key Takeaway: The solution to pervasive deepfakes and synthetic media is not human detection but cryptographic content provenance standards like C2PA, which must become the default expectation for digital files.
  • Summary: The fidelity of AI-generated images and voice replication is rapidly surpassing human ability to detect fakes, leading to decreased trust. The technical solution involves attaching cryptographic metadata via the C2PA standard to prove content origin (real device vs. AI generation). States like New York are leading in banning harmful uses like deepfake pornography, underscoring the importance of state action while federal standards are pending.
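The core idea behind provenance metadata can be sketched in a few lines. This is a toy illustration of the concept, not the actual C2PA format: real C2PA manifests use X.509 certificate chains and a structured binary manifest, whereas this sketch uses a stand-in HMAC key purely to show the sign-and-verify check.

```python
# Toy content-provenance sketch (NOT real C2PA): a capture device signs a
# hash of the file; a verifier checks the content still matches the
# signed manifest, so any edit invalidates the claim of origin.
import hashlib
import hmac
import json

DEVICE_KEY = b"example-device-key"  # stands in for a device's signing credential

def attach_manifest(content: bytes, source: str) -> dict:
    """Build a signed manifest recording the content's origin and hash."""
    manifest = {"source": source, "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check both the signature and that the content matches the signed hash."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and hashlib.sha256(content).hexdigest() == claimed["sha256"])

photo = b"raw sensor bytes"
m = attach_manifest(photo, "camera:real-device")
print(verify_manifest(photo, m))            # True: content untouched
print(verify_manifest(b"edited bytes", m))  # False: content no longer matches
```

The point, as in the segment, is that trust shifts from "does this look fake?" to "does this file carry a valid chain of custody?"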
Palantir’s Role in Government Tech
(00:30:12)
  • Key Takeaway: Palantir’s core function is data integration and analysis, using ontologies to structure disparate data sources so that individual objects, like mortgages, can be tracked across systems for better government implementation.
  • Summary: Bores worked at Palantir from 2014 to 2019, focusing on implementing technology for agencies like the Census Bureau and DOJ to improve service delivery. The company specializes in data integration, creating an ‘ontology’ to define what data objects mean, enabling complex analysis like tracking individual loans involved in mortgage securities fraud. This experience informs Bores’s belief that policy success depends on rigorous implementation and data-driven performance tracking, not just bill signing.
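The "ontology" idea described above can be made concrete with a minimal sketch: disparate data sources are mapped onto one shared object type so a single loan can be tracked across systems. The field names, sources, and `Mortgage` type here are invented for illustration, not Palantir's actual data model.

```python
# Minimal data-integration sketch: map records from two disparate systems
# onto one shared "ontology" object (a Mortgage), keyed by loan id, so a
# single loan can be followed across systems.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Mortgage:
    loan_id: str
    borrower: Optional[str] = None
    securitized_in: Optional[str] = None

# Two systems that know different things about the same loan.
origination_system = [{"id": "L-1001", "borrower": "A. Smith"}]
securities_system = [{"loan": "L-1001", "pool": "MBS-2007-X"}]

def integrate() -> dict:
    """Fold both sources into shared Mortgage objects keyed by loan id."""
    objects: dict = {}
    for rec in origination_system:
        obj = objects.setdefault(rec["id"], Mortgage(rec["id"]))
        obj.borrower = rec["borrower"]
    for rec in securities_system:
        obj = objects.setdefault(rec["loan"], Mortgage(rec["loan"]))
        obj.securitized_in = rec["pool"]
    return objects

loans = integrate()
print(loans["L-1001"].borrower, loans["L-1001"].securitized_in)
```

Once both sources resolve to the same object, questions like "which securitized loans trace back to this borrower?" become simple lookups rather than cross-system reconciliation projects.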
Governing Small vs. Large Issues
(00:37:03)
  • Key Takeaway: Effective governance requires addressing both large national issues and small, persistent daily irritations, such as subscription cancellation friction and text message scams, to improve public trust.
  • Summary: Bores passed a ‘click to cancel’ bill in New York, allowing consumers to cancel subscriptions the same way they signed up, addressing low-level consumer frustration. He also supports legislation requiring disclosure for AI-generated books on platforms like Amazon. This focus on tangible, everyday improvements is seen as crucial for political success alongside tackling major issues like corruption.
Crypto Regulation in New York
(00:42:31)
  • Key Takeaway: New York risks losing its regulatory leadership in digital assets as companies opt for federal charters because the state relied too heavily on guidance rather than codified statute for its rules.
  • Summary: Bores championed a bill to standardize New York’s crypto rules into statute, aiming to maintain the state’s regulatory structure against federal oversight that requires detailed legislation. Because New York’s existing rules were largely based on guidance from the Department of Financial Services (DFS), companies are now defaulting to federal charters. This highlights the danger of relying on non-statutory rules for setting clear ‘rules of the road’ for innovation.