Hard Fork

California Regulates A.I. Companions + OpenAI Investigates Its Critics + The Hard Fork Review of Slop

October 17, 2025

Key Takeaways

  • California passed a significant package of tech regulations, including SB 243, which mandates that AI companion chatbots have protocols for addressing users' expressions of self-harm and requires reporting related data to the Department of Public Health. 
  • OpenAI subpoenaed Nathan Calvin, General Counsel at Encode, over his advocacy against OpenAI's restructuring and for AI safety legislation (SB 53), leading to internal conflict regarding the company's mission versus competitive interests. 
  • The inaugural 'Hard Fork Review of Slop' segment highlighted concerning AI-generated content, ranging from harmless 'glass fruit cutting' videos to malicious deepfakes targeting celebrities like Dolly Parton, underscoring the need for cultural filtering of AI output. 
  • A follow-up discussion in the "Hard Fork Review of Slop" examines public perception of AI-generated 'slop' art and its potential impact on consumer purchasing decisions for everyday items like cookie tins. 
  • A company named Hyperstition, founded by Andrew Cote and Aaron Silverbook, is attempting to counteract potential AI apocalypse scenarios by generating 5,000 AI-written novels depicting positive human-AI collaboration to influence future AI training data. 
  • The hosts express skepticism that injecting 'slop' into training data will ultimately prevent an AI apocalypse, though they acknowledge the project's noble intent and the possibility of an unexpected literary success emerging from the effort. 

Segments

California AI Regulation Package
(00:00:34)
  • Key Takeaway: California enacted several tech bills, including SB 243, requiring AI developers to establish protocols for users expressing self-harm thoughts and report related resource referrals to the state.
  • Summary: Governor Gavin Newsom signed a package of tech bills, establishing California as a key regulator whose laws often ripple nationally. SB 243 mandates that AI developers identify and address self-harm expressions, sharing protocols with the Department of Public Health. The law also requires chatbots to disclose they are AI-generated and imposes restrictions on sexually explicit images for minors.
OpenAI’s Mental Health Stance
(00:07:42)
  • Key Takeaway: OpenAI CEO Sam Altman announced plans to relax restrictions on ChatGPT, claiming serious mental health issues were mitigated, despite recent concerns over the model’s sycophantic behavior.
  • Summary: Altman stated that overly restrictive settings implemented due to mental health concerns made ChatGPT less useful, and the company plans to safely relax these restrictions. This move contrasts with the recent implementation of parental controls and raises questions about how quickly mental health risks could have been mitigated. The hosts view this as OpenAI potentially optimizing for engagement by bringing back the more flattering personality of the GPT-4o model.
Other California Tech Laws
(00:11:37)
  • Key Takeaway: New California laws include AB 621, allowing victims of non-consensual deepfake porn to sue platforms for up to $250,000 per violation, and AB 853, requiring AI transparency tools to detect AI-generated content.
  • Summary: AB 621 strengthens protections against non-consensual deepfake pornography by enabling victims to sue facilitating platforms. AB 853, the California AI Transparency Act, requires AI companies to build systems that reliably detect whether content (images, video, audio) is AI-generated. AB 56 imposes intrusive, non-bypassable 30-second warning labels on social media for minors after three hours of use.
Age Verification Mandate
(00:15:34)
  • Key Takeaway: AB 1043 mandates that Apple and Google verify users’ ages via the device setup process, passing this information to app stores for privacy-preserving age assurance, which the hosts favor over identity uploads.
  • Summary: This bill requires mobile OS providers to enforce age verification based on information provided by a parent during initial device setup. This method is preferred by the hosts because it relies on parental input rather than requiring users to upload sensitive personal data like driver’s licenses to third parties. This system aims to replace the current honor system for age verification in app stores.
Frontier AI Transparency Act
(00:17:51)
  • Key Takeaway: SB 53, the Transparency in Frontier Artificial Intelligence Act, establishes basic transparency requirements and whistleblower protections for large AI developers, though it is considered toothless by the hosts.
  • Summary: SB 53 requires large AI developers to publish safety standards and report critical safety incidents to the state government. While it codifies some existing voluntary practices, the hosts feel it is weak and does not address their primary concerns regarding AI development. The bill’s passage suggests state-level regulation is filling the void left by federal inaction.
OpenAI vs. Critics Legal Battle
(00:24:49)
  • Key Takeaway: OpenAI subpoenaed Nathan Calvin of Encode regarding his advocacy against the company’s for-profit restructuring and for AI safety legislation (SB 53), which Calvin views as intimidation tactics.
  • Summary: The subpoena targeted Calvin's communications concerning both OpenAI's corporate structure change and the specific AI bill SB 53, which Encode supported. Calvin denied OpenAI's claims that Encode is secretly funded or directed by Elon Musk or Mark Zuckerberg, noting that Encode's funding sources are partially public. The move caused significant internal consternation at OpenAI, mirroring past controversies over non-disparagement agreements.
Review of AI Slop Content
(00:50:03)
  • Key Takeaway: The first ‘Hard Fork Review of Slop’ analyzed AI-generated content, including hypnotic ‘glass fruit cutting’ videos and a malicious Sora image depicting Dolly Parton on her deathbed, highlighting misinformation risks.
  • Summary: The segment introduced a review of emerging AI art, contrasting low-stakes visual stimulation (like glass fruit cutting) with harmful misinformation, such as the fake Dolly Parton death image that prompted a real video response from Reba McEntire. The hosts also noted AI-generated art appearing on mass-produced items like Walmart butter cookie tins, suggesting cost-saving measures by manufacturers.
Slop Detectives and Consumer Impact
(01:01:22)
  • Key Takeaway: Consumer purchasing decisions are generally unaffected by the presence of ‘slop art’ on low-stakes items like cookie tins, though it may signal cheapness for critical products like medical devices.
  • Summary: The concept of ‘slop detectives’ vigilantly investigating AI-generated art on consumer goods is introduced. For mass-market items like butter cookies, the art is seen as a cost-saving measure for manufacturers, not impacting perceived quality. However, if ‘slop’ were used on packaging for critical items, such as a heart defibrillator, it could erode consumer trust due to perceived lack of care.
AI Apocalypse Counter-Narrative Project
(01:03:05)
  • Key Takeaway: Hyperstition is creating 5,000 AI-generated novels depicting positive human-AI relationships to inject ‘good examples’ into training data, aiming to mitigate risks associated with negative sci-fi narratives.
  • Summary: The company Hyperstition, founded by Andrew Cote and Aaron Silverbook, is combating narratives of AI going rogue by generating positive AI stories. They received a grant to create 5,000 novels, approximately 80,000 words each, to feed into language models. The public is invited to contribute to ensure diverse scenarios, with credits costing about $4 per book generation.
Skepticism and Literary Crazes
(01:05:13)
  • Key Takeaway: The hosts are highly skeptical that a massive infusion of AI-generated ‘slop’ literature will be the decisive factor in preventing an AI apocalypse.
  • Summary: The hosts doubt that this project will ultimately save humanity from AI risks, viewing it as an unlikely difference-maker. A humorous secondary possibility noted is that one of these 5,000 ‘slop novels’ could unexpectedly become a massive literary bestseller. The segment concludes by soliciting listener submissions for future ‘Hard Fork Review of Slop’ installments.