Science Friday

Deepfakes Are Everywhere. What Can We Do?

January 22, 2026

Key Takeaways

  • Deepfake media, including still images and cloned audio, has become virtually indistinguishable from reality; humans can no longer reliably tell real from fake. 
  • The proliferation of non-consensual deepfake imagery, particularly through centralized tools like X's Grok AI, represents a significant escalation in misuse, enabled by the ease of use and lack of guardrails compared to competitors. 
  • Addressing the deepfake crisis requires a multi-pronged approach: regulation (which will be slow), legal accountability through lawsuits against platforms, and pressure on the wider ecosystem (advertisers, financial institutions) that enables bad actors. 

Segments

Introduction to Deepfake Crisis
(00:01:16)
  • Key Takeaway: Deepfakes have moved beyond novelty, exemplified by political fakes and non-consensual AI imagery from X’s Grok, making online reality verification increasingly difficult.
  • Summary: Recent examples include fake AI images of Nicolás Maduro and Grok generating non-consensual explicit images of real people. The ubiquity of deepfakes, even in lighthearted forms like unlikely animal friendships, signals a shift in which distinguishing real from fake online is a major challenge. The episode explores how this situation developed and potential solutions.
Indistinguishability of AI Media
(00:02:38)
  • Key Takeaway: Perceptual studies show that still images generated by AI are now essentially indistinguishable from real photos, and cloned voices are equally convincing to listeners.
  • Summary: Research at UC Berkeley shows that people perform at chance when distinguishing real from AI-generated still images; the media has crossed the ‘uncanny valley.’ Cloned voices are nearly impossible to differentiate from real ones, and video quality is rapidly approaching the same level of realism. This technological advance is compounded by societal polarization, which leaves users susceptible to emotionally manipulative content.
Accessibility of Deepfake Tools
(00:04:37)
  • Key Takeaway: The technology for creating deepfakes has become highly accessible, requiring only a single photo and a mobile app, unlike previous iterations that needed advanced hardware and coding skills.
  • Summary: The current moment feels significantly different due to the ease of use of generative AI tools. Users can now download apps and create scenarios using just one picture of a person from social media. This ease of access represents a new level of technological capability that society is unprepared to handle.
Grok’s Role in Abuse Imagery
(00:05:29)
  • Key Takeaway: X’s Grok AI centralized the creation, distribution, and normalization of non-consensual explicit imagery by integrating the capability directly into the main feed, unlike other platforms with guardrails.
  • Summary: The issue of non-consensual imagery on Twitter predates generative AI, but Grok’s ‘spicy mode’ made creating and replying with explicit deepfakes easy and mainstream. This contrasts sharply with OpenAI’s ChatGPT or Google’s Gemini, which employ guardrails to prevent such outputs. The platform’s decision to enable this functionality is described as a feature, not a bug, making the problem foreseeable and preventable.
Technical Creation of Nudify Fakes
(00:12:54)
  • Key Takeaway: Non-consensual explicit deepfakes are created by algorithms that detect the person, isolate the head (which remains intact), discard the body below the neck, and use foundation models trained on explicit content to generate a nude or bikini-clad replacement.
  • Summary: The process involves detecting the person, separating the head (which remains intact), and then instructing the AI to fill the neck-down area with synthetic explicit imagery. These models are often trained predominantly on women’s bodies, explaining why they perform better on female subjects. The critical harm is taking an identifiable person’s identity and weaponizing it by sharing the explicit creation in their feed.
Proposed Solutions and Accountability
(00:15:11)
  • Key Takeaway: Effective solutions require lawsuits that force platforms to internalize liability, accountability for the financial and advertising ecosystem, and international regulatory leadership, since US action is expected to be slow.
  • Summary: The regulatory path is expected to be slow and imperfect due to lobbying efforts. Suing companies for harm forces them to internalize liability and create safer products. Furthermore, advertisers and financial institutions (like Visa/MasterCard) must be pressured to withdraw services from platforms that monetize or host violative content, as this can effectively remove bad actors from the internet.
Individual Protection and Societal Impact
(00:18:48)
  • Key Takeaway: Individuals, especially women, cannot realistically protect themselves by going invisible online, necessitating long-term societal shifts in consent education and immediate platform accountability to mitigate the chilling effect on speech.
  • Summary: The technology now requires only a single image and a few seconds of voice to create harmful content, so the only sure self-protection for women would be to become invisible online, an impossible and unreasonable standard. Victims suffer job loss and are silenced online, creating a chilling effect on women’s speech. The best long-term defense is teaching consent and bodily autonomy to young people, while immediate action requires platform guardrails and legal repercussions.
Warning Against Posting Children’s Photos
(00:20:38)
  • Key Takeaway: Parents are urged to stop posting photos of their children online: the best-case misuse is non-consensual modification of the images, and the worst case is extortion, which has led to tragic outcomes.
  • Summary: Posting children’s images online is strongly advised against due to the high risk of exploitation by malicious actors. The best outcome involves the images being nudified and shared, but the worst case involves extortion, which has tragically led to children taking their own lives. This is considered an easy decision with severe potential consequences.
Future Outlook and Tipping Point
(00:22:54)
  • Key Takeaway: The next few months will reveal whether platforms face any accountability for enabling widespread abuse; failure to act would mark a tipping point, signaling to Silicon Valley that the situation is a ‘free-for-all.’
  • Summary: The speakers will be watching for repercussions against platforms that have normalized this abuse, particularly noting that international bodies like the EU and UK are showing more leadership than the US. If no basic accountability measures are taken, it sends a message that platforms can engage in harmful activities, including child sexual abuse material creation, without consequence, leading to a disastrous outcome.