Decoder with Nilay Patel

Why IBM CEO Arvind Krishna is still hiring humans in the AI era

December 1, 2025

Key Takeaways

  • Arvind Krishna believes the shift to LLMs represents a 100x advantage in speed and deployability over the previous bespoke, data-labeling-heavy deep learning models, making AI industrial-scale. 
  • Krishna argues that the current AI spending frenzy is not a bubble, drawing a parallel to the fiber optics boom where infrastructure investment eventually proved correct, though some debt capital may not see a return. 
  • IBM's long-term strategy centers on hybrid cloud leadership for the B2B sector and a significant, validated bet on quantum computing, which he believes will be an 'and' technology, not a replacement for current compute architectures. 
  • The speaker assigns a very low probability (0-1%) that the current set of known Large Language Model (LLM) technologies will lead directly to Artificial General Intelligence (AGI), suggesting AGI will require fusing LLMs with other knowledge-based technologies. 
  • IBM's internal experience shows that adopting their proprietary AI coding tool led to a 45% productivity increase among early adopters, supporting CEO Arvind Krishna's strategy of hiring more programmers rather than cutting staff. 
  • The current market correction involving job cuts (like those at Accenture and Amazon) is partly a natural correction after pandemic-era over-hiring, but the long-term view suggests productivity gains will lead to hiring in new areas, not just displacement. 

Segments

IBM’s Enterprise Focus
(00:02:14)
  • Key Takeaway: IBM is currently a B2B company focused on helping clients deploy technology across multiple clouds and leverage data.
  • Summary: IBM has been fully an enterprise, B2B company for the last 30 years, having exited consumer products like the PC long ago. The company’s role is to help clients improve their business operations, often involving multi-cloud environments and data utilization. The Watson brand is now aimed at CIOs through advertising, reflecting this enterprise focus.
Watson’s AI Evolution and Missteps
(00:06:56)
  • Key Takeaway: The initial Watson approach was too monolithic and inappropriately focused on healthcare, leading to a necessary pivot to the modular WatsonX stack.
  • Summary: The original Watson successfully put AI on the map by understanding human language during its Jeopardy win. IBM’s error was trying to be too monolithic and targeting difficult sectors like healthcare too early. The current WatsonX rebrands the effort, offering building blocks that engineers can open up, tune, and build applications upon.
LLMs vs. Deep Learning Technology
(00:08:41)
  • Key Takeaway: LLMs offer a 100x advantage in speed and deployability over prior deep learning models because they eliminate the costly, bespoke human labeling requirement.
  • Summary: The technology inside the initial Watson involved neural network models akin to early deep learning, but it lacked modularity. Deep learning was bespoke, requiring extensive human labeling for single tasks, making it fragile for enterprise use. LLMs drastically reduce cost by using massive compute instead of labeling, enabling industrial-scale deployment across many tasks.
AI Spending vs. Cost Reduction
(00:13:50)
  • Key Takeaway: While current CAPEX is skyrocketing, Krishna projects a 1000x cost reduction in compute over five years through semiconductor improvements, architectural competition, and software optimization.
  • Summary: The massive current expenditure on GPUs is acknowledged, but Krishna forecasts significant cost decreases over a five-year arc. This reduction is expected from a 10x gain in silicon capability (Moore’s Law), a 10x gain from competing chip designs (like Groq’s inference chips), and a 10x gain from software optimization such as quantization. Multiplied together, these factors suggest compute could become roughly 1,000 times cheaper over that arc.
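The projection above compounds multiplicatively, not additively — a minimal sketch using the three 10x factors quoted in the summary:

```python
# Projected compute cost reduction over ~5 years, per the figures quoted above.
# Each factor is an independent 10x improvement, so the gains multiply.
factors = {
    "silicon capability (Moore's Law)": 10,
    "competing chip architectures": 10,
    "software optimization (e.g. quantization)": 10,
}

total = 1
for name, gain in factors.items():
    total *= gain
    print(f"after {name}: {total}x cheaper")

# Combined: 10 * 10 * 10 = 1,000x, not 10 + 10 + 10 = 30x.
```

This is why the headline figure is 1,000x: independent efficiency gains stack by multiplication across the hardware and software layers.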
AI Bubble Assessment
(00:18:43)
  • Key Takeaway: Krishna denies the current AI moment is a bubble, comparing it to the fiber optics buildout where infrastructure investment was ultimately validated despite initial failures.
  • Summary: Krishna believes the B2C race for subscribers makes economic sense, similar to the social media era, though not all ten competitors will win. He compares the current infrastructure build to the dot-com era’s fiber optics, where assets lost value but were later acquired profitably by others. While some debt capital will fail to pay back, the equity investment in the underlying infrastructure will yield returns.
Smartphone Economy vs. AI Economy
(00:28:32)
  • Key Takeaway: The AI transition will unlock efficiency in the physical economy and potentially create a billion new enterprise applications, unlike the smartphone transition which primarily centralized consumer activity.
  • Summary: The smartphone era moved the consumer economy onto phones, enriching platform owners like Apple and Google through app store fees. Krishna sees AI unlocking efficiency for the 60% of workers in the physical economy (construction, logistics, etc.). He predicts a billion new applications, far exceeding the smartphone ecosystem, driven by enterprise adoption.
IBM’s Strategic Focus Areas
(00:35:04)
  • Key Takeaway: IBM’s strategy hinges on focusing on hybrid cloud, leveraging its unique B2B trust for AI deployment, and making a long-term bet on quantum computing.
  • Summary: Krishna believes most customers will remain hybrid, not singular public cloud users, validating the Red Hat acquisition. IBM focuses on AI deployment where it has brand credibility, such as protecting sensitive health data for insurance clients. The third major bet is quantum computing, transitioning it from a science project to an engineering challenge.
Decision Making Framework
(00:42:44)
  • Key Takeaway: Krishna’s decision-making framework prioritizes client benefit, requires triangulation with internal and external experts, and demands identifying necessary internal structural changes for execution.
  • Summary: Every major decision must first establish client benefit. Conviction is built by triangulating opinions from various sources, including reaching down 10 levels internally and consulting clients and partners. Once conviction is reached, the final step is defining the internal changes—in sales incentives or resource allocation—required to successfully execute the strategy.
Quantum Computing Investment Rationale
(00:46:06)
  • Key Takeaway: IBM maintains its quantum investment because early client engagement and massive open-source software adoption (650,000 users) validate a future market worth hundreds of billions annually.
  • Summary: While quantum utility scale is years away, IBM validated the path by engaging 300 clients in research modes, proving potential value in areas like bond trading and EV battery design. Releasing the software open source attracted 650,000 users, signaling strong underlying traction beyond pure academic interest. IBM believes its in-house expertise across QPUs, control systems, and software provides a competitive advantage.
AGI Path Skepticism
(00:59:24)
  • Key Takeaway: Current known technologies have a near-zero chance of achieving AGI without new technological breakthroughs.
  • Summary: The speaker assigns 0% to 1% odds that the current set of known technologies, specifically LLMs, can reach AGI. AGI is believed to require fusing knowledge with LLMs or incorporating neuro-symbolic AI concepts. The current LLM path is considered useful for enterprise productivity but insufficient for achieving true general intelligence.
AI Technology Evolution
(01:01:10)
  • Key Takeaway: Machine learning is not being replaced, deep learning will be replaced by LLMs, and LLMs are not the final AI technology.
  • Summary: Traditional machine learning remains useful for simple, deterministic tasks like thermostat control or trajectory analysis. Deep learning is expected to be superseded by LLMs, which are considered robust but statistical in nature. The next major AI advancement beyond LLMs is unknown but expected, similar to how transformers emerged unexpectedly.
Academia vs. Industry Bets
(01:03:06)
  • Key Takeaway: Research into technologies that might supplant LLMs should primarily be driven by academia, not heavily invested companies.
  • Summary: When a technology path is highly unknown, it should be chased by academia rather than large corporations making massive capital bets. Despite media focus being entirely on LLMs, research around alternative AI paths is actively occurring in academic institutions like MIT and Illinois. Funding cuts in hard sciences and engineering at top US universities are reported to be minor (under 10%).
Near-Term Job Displacement
(01:05:07)
  • Key Takeaway: Up to 10% job displacement is likely in the next few years, concentrated in areas like consulting where AI can automate report generation.
  • Summary: The immediate impact of AI is seen in companies using it to justify layoffs, particularly in consulting-style roles that produce external validation decks for restructuring. The speaker predicts up to 10% job displacement in the total US employment pool over the next couple of years. Companies should instead use AI strategically to augment entry-level staff, giving them the effective capability of ten-year veterans, in order to foster future talent and product innovation.
IBM’s Productivity Gains
(01:09:12)
  • Key Takeaway: IBM’s internal AI coding tool increased productivity by 45% among early adopters, leading the company to hire more programmers.
  • Summary: IBM developed an internal code-assistance tool; within four months, teams that adopted it were 45% more productive than non-users. This gain does not imply the amount of work is fixed; rather, it allows IBM to build more products and capture more market share. Because building products is cheaper, the business calculus shifts toward selling them at lower prices while maintaining strong margins.
Future Outlook and Quantum
(01:10:20)
  • Key Takeaway: IBM expects surprising results from its quantum computing efforts within the next two to three years.
  • Summary: The CEO advises watching IBM’s progress in quantum computing closely. Surprising results from this area are anticipated within approximately two to three years. The host commits to having the CEO return to Decoder to discuss these developments as the market shakes out.