The Prof G Pod with Scott Galloway

Regulating AI, Future-Proof Jobs, and Who’s Accountable When It Fails — ft. Greg Shove

October 6, 2025

Key Takeaways

  • Regulation of AI, particularly concerning safety and the protection of children, is urgently needed, but U.S. progress is slow compared to the EU and China, often because the economic upside of AI companies discourages intervention. 
  • For knowledge workers, AI acts as a "truth serum" that reveals job value, meaning employees must proactively become AI-enabled (e.g., having 100 daily AI conversations) to remain valuable, rather than waiting for training or fearing replacement. 
  • When AI is used to generate strategy or make decisions, ultimate accountability rests with the human who owns and presents the output, as AI should function as an intern, not a management consultant lacking necessary context. 

Segments

AI Regulation Urgency and Skepticism
(00:02:52)
  • Key Takeaway: Vital AI regulations must focus on safety, especially for children, and need to be implemented quickly at the state level, as federal action is currently lacking.
  • Summary: Skepticism exists regarding regulators’ ability to keep pace with AI technology, but safety regulations are deemed necessary. California’s SB 53 is highlighted as a crucial state-level effort on AI safety. Companies like Anthropic are praised for prioritizing safety teams, in contrast with Meta and xAI, which are criticized for neglecting model safety.
Global Regulatory Landscape Overview
(00:04:04)
  • Key Takeaway: The U.S. lacks federal binding AI regulation, relying on voluntary agreements like the NIST AI Safety Institute testing, while the EU and China have already implemented binding rules.
  • Summary: Forty countries have AI strategies, but only the EU and China have binding rules, with China requiring labeling of AI-generated content. The U.S. relies on a 2023 executive order mandating AI officers in federal agencies and voluntary testing agreements with model developers. California’s ambitious state bill was vetoed in 2024, though a narrower version passed later.
AI’s Immediate Externalities vs. Long-Term Hype
(00:08:15)
  • Key Takeaway: Immediate, tangible negative externalities of AI, such as rising energy costs and permitting issues for data centers, should receive more regulatory focus than distant, long-term existential threats.
  • Summary: Greg Shove argues that immediate issues, such as energy costs running 20% higher in some states due to AI data centers, warrant attention now, citing Elon Musk’s Memphis data center built without permits. Consumers are encouraged to vote with their wallets by supporting safety-conscious AI companies like Anthropic and avoiding Meta or xAI products.
Existential Risk and Empathy in AI
(00:09:13)
  • Key Takeaway: The core existential risk of AI stems from the historical precedent that the smarter species ultimately controls the less intelligent one, necessitating the early programming of empathy into models.
  • Summary: Geoffrey Hinton, the ‘Godfather of AI,’ posits that IQ rules the world, suggesting that a smarter AI will inevitably control humans unless safeguards are built in from the start. Programming empathy, or a hard constraint against harming humans, into models from the outset — akin to the directive-bound android in the film Aliens — is proposed as a solution.
AI’s Current Economic Upside and Adoption
(00:11:28)
  • Key Takeaway: Current data suggests AI’s most popular consumer use is companionship/therapy, for which users are unwilling to pay much, while enterprise adoption is stalling at around 10-12%.
  • Summary: The current reality may be less dystopian than feared, with AI potentially becoming a useful tool but lacking massive economic upside or presenting an immediate dystopia. Consumer AI’s top use case is conversation/advice, indicating loneliness is a primary driver, but willingness to pay is low. Enterprise AI adoption is flatlining, suggesting a change management challenge rather than a technology hurdle.
Enterprise AI Use and Behavioral Change
(00:12:54)
  • Key Takeaway: In corporations, AI is primarily used to automate the bottom quarter of roles involving repetitive tasks like cutting and pasting data, but adoption is hindered by employee fear of job replacement.
  • Summary: Section’s focus in enterprise AI is automating roles that act as ‘lubricants for data’ — high-repetition, low-judgment tasks. The flatlining adoption rate is attributed to employees’ fear that AI is coming for their jobs, making this a change management challenge. Employees are advised to focus on being in the top half of their team and engaging in high volumes of AI conversations (e.g., 100 per day) to become ‘super employees.’
Future Job Demand and Skill Endurance
(00:14:04)
  • Key Takeaway: Jobs ripe for immediate disruption are high-repetition, low-judgment roles like human translation, while enduring skills involve critical thinking, storytelling, and narrative crafting.
  • Summary: Human translators have seen jobs disappear overnight due to AI disruption. Over time, technological innovation usually creates new opportunities, but individuals must adapt by optimizing for generative AI if they are in fields like marketing. Enduring skills include storytelling, writing well, and crafting narratives, which AI cannot fully replicate.
AI as a Truth Serum for Performance
(00:18:42)
  • Key Takeaway: AI functions as a ‘truth serum’ forcing individuals and managers to honestly assess their value chain: inputs, the work done on those inputs, and the resulting outputs.
  • Summary: AI reveals the true value of knowledge work by exposing what inputs are used and what work transforms them into outputs. Managers should use this to assess team value and determine if AI will improve or replace existing processes. The ultimate advice is to avoid mediocrity, as AI will expose low-value contributions.
Accountability for AI-Generated Strategy
(00:23:34)
  • Key Takeaway: When AI generates strategy documents, the human presenting the work is fully accountable, as AI is merely an intern whose recommendations must be vetted and owned by the employee.
  • Summary: Greg Shove states there is zero tolerance for blaming AI for poor decisions; the human must make the final decision and own it with conviction. Offloading strategy creation to AI results in poor decisions because the AI lacks real-time context and background information. Using AI to refine work, such as removing ‘Canadian-ness’ from a fundraising deck, is cited as a good use case.