The narrative that AI and LLMs will replace all white-collar work is intellectually sloppy. It conflates capability with deployment, ignores economic history, and mistakes exponential hype for exponential reality. Here is a multi-front attack on it.

Technology Adoption Is Never Linear

Every transformative technology follows an S-curve: slow adoption, rapid growth, then saturation. Doomers are extrapolating from the steep middle section as if it never flattens. The internet, electricity, and mechanized farming all prompted predictions of mass obsolescence at their inflection points. They restructured work; they did not end it. There is no physical or economic reason to believe AI breaks this pattern.
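
A toy logistic model shows the extrapolation error (the curve parameters here are arbitrary): measure the growth rate at the inflection point, where the curve is steepest, and project it forward as a straight line, and the forecast quickly sails past 100% adoption while the real curve flattens.

    import math

    def adoption(t, k=1.0, t_mid=10.0):
        # Logistic S-curve: slow start, steep middle, saturation at 1.0.
        return 1.0 / (1.0 + math.exp(-k * (t - t_mid)))

    # Growth rate measured at the inflection point (t = 10), where the
    # curve looks most like runaway exponential growth.
    rate = adoption(10.5) - adoption(9.5)

    for t in (12, 15, 20):
        naive = adoption(10) + rate * (t - 10)  # straight-line forecast from the midpoint
        print(f"t={t}: actual {adoption(t):.2f}, extrapolated {naive:.2f}")
    # t=12: actual 0.88, extrapolated 0.99
    # t=15: actual 0.99, extrapolated 1.72   <- forecast passes 100% adoption
    # t=20: actual 1.00, extrapolated 2.95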

The Scaling Ceiling

The assumption that more compute yields proportionally more intelligence fails on multiple fronts.

Amdahl’s Law states that the speedup from parallelization is bounded by the irreducible serial portion of a task. If even 5% of a workload cannot be parallelized, no number of additional processors can ever produce more than a 20× speedup. There are hard serial bottlenecks in both training and inference that additional GPUs cannot eliminate. The doomers who assume 10× the compute yields 10× the intelligence are ignoring a law that has governed computing for sixty years.
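
The bound is simple arithmetic. A minimal sketch of Amdahl's formula, using the 5% serial fraction from the example above:

    def amdahl_speedup(serial_fraction, n_processors):
        # Amdahl's Law: speedup = 1 / (s + (1 - s) / N)
        s = serial_fraction
        return 1.0 / (s + (1.0 - s) / n_processors)

    for n in (10, 100, 10_000, 1_000_000):
        print(f"{n:>9,} processors -> {amdahl_speedup(0.05, n):5.2f}x speedup")
    #        10 ->  6.90x
    #       100 -> 16.81x
    #    10,000 -> 19.96x
    # 1,000,000 -> 20.00x   (the asymptote: 1 / 0.05 = 20x, regardless of hardware)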

The Pareto Principle compounds this: the first 80% of capability comes from 20% of the compute and engineering effort. The last 20% (the slice that would make AI truly autonomous across all knowledge work) demands the remaining 80%, and then the same ratio reapplies at the next frontier. OpenAI’s own 2020 scaling laws paper (Kaplan et al.) formalized the diminishing returns: loss falls as a power law in compute, data, and parameter count, with small exponents, so each fixed improvement demands a multiplicative increase in resources. Doubling capability requires roughly an order-of-magnitude more compute. “AGI this decade” requires extrapolating that curve as if it were linear. The math does not cooperate.
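
To make that concrete, here is a rough inversion of the power-law fit from Kaplan et al., with loss falling roughly as L(C) = (C0 / C)^α; the exponent α ≈ 0.05 is an approximation of the compute exponent reported there, used here only for illustration:

    # Invert the power law to ask how much more compute a given
    # fractional loss reduction costs.
    ALPHA = 0.05  # approximate compute exponent from Kaplan et al. (2020)

    def compute_multiplier(loss_reduction, alpha=ALPHA):
        # Compute factor needed to shrink loss by `loss_reduction` (0.10 = 10%).
        return (1.0 / (1.0 - loss_reduction)) ** (1.0 / alpha)

    for r in (0.05, 0.10, 0.20):
        print(f"{r:.0%} lower loss needs ~{compute_multiplier(r):,.0f}x the compute")
    #  5% lower loss needs ~3x the compute
    # 10% lower loss needs ~8x the compute
    # 20% lower loss needs ~87x the compute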

To be precise: this critique targets the transformer paradigm specifically. Techniques like Mixture of Experts, speculative decoding, distillation, and quantization are real engineering advances; they reduce cost and latency, and they matter. But they are optimizations within the existing regime. They do not repeal logarithmic scaling returns; they shift the curve slightly. A model that is cheaper and faster to run is still subject to the same fundamental ceiling on what that run can accomplish.
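
To see why, plug a constant-factor efficiency gain into the same power law: a k-fold cheaper run is equivalent to k-fold more compute, which the small exponent promptly flattens (same assumed exponent as above):

    # A k-fold efficiency gain is a horizontal shift on the scaling curve:
    # L(k * C) / L(C) = k ** (-alpha)
    ALPHA = 0.05
    for k in (2, 10, 100):
        print(f"{k:>3}x efficiency -> loss falls by {1 - k ** -ALPHA:.1%}")
    #   2x efficiency -> loss falls by 3.4%
    #  10x efficiency -> loss falls by 10.9%
    # 100x efficiency -> loss falls by 20.6%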

The genuine wildcard is a post-transformer architecture: something that does to transformers what transformers did to RNNs, a qualitative shift that restarts the scaling curve rather than riding its tail. This is not impossible. It may happen. But it has not happened yet, and predicting its arrival and capabilities requires knowledge nobody has. The doomers who confidently project total displacement typically are not positing a secret architectural revolution; they are extrapolating the current paradigm. The argument here is against that extrapolation. If a genuinely new paradigm emerges, the calculus changes, and we will evaluate it then.

Deployment at Civilization Scale Is Not Inevitable

Even setting capability aside, the logistics of total displacement are absurd. Running frontier models at scale is extraordinarily expensive: a single H100 GPU costs around $30,000, and the energy demands of running AI across every white-collar workplace on Earth exceed current grid capacity. The bottleneck is not the model; it is the atom.
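
A back-of-envelope sketch shows the shape of that arithmetic. Every number below is an illustrative assumption, not a measurement, except the GPU price cited above:

    # Toy deployment calculator. All inputs are placeholder assumptions.
    WORKER_EQUIVALENTS = 500_000_000   # assumed knowledge-work roles to automate
    GPU_SHARE_PER_ROLE = 0.1           # assumed continuous GPU-fraction per role
    GPU_PRICE_USD = 30_000             # H100-class accelerator (figure from above)
    GPU_POWER_KW = 0.7                 # assumed draw per GPU under load
    DATACENTER_OVERHEAD = 1.3          # assumed PUE (cooling, networking, etc.)

    gpus = WORKER_EQUIVALENTS * GPU_SHARE_PER_ROLE
    capex_usd = gpus * GPU_PRICE_USD
    power_gw = gpus * GPU_POWER_KW * DATACENTER_OVERHEAD / 1_000_000

    print(f"GPUs needed : {gpus:,.0f}")
    print(f"Hardware    : ${capex_usd / 1e12:.1f} trillion")
    print(f"Power draw  : {power_gw:,.0f} GW, continuous")
    # ~50 million GPUs, ~$1.5 trillion of hardware, ~46 GW of continuous draw:
    # dozens of large power plants, under one arbitrary set of assumptions,
    # before replacement cycles or training runs are counted.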

Beyond energy, every enterprise has unique legacy systems, compliance requirements, cultural norms, and data silos. Integration alone takes years and fails routinely. Projecting uniform, instantaneous displacement ignores the entire history of enterprise software adoption.

Work Expands to Fill New Capability

The Lump of Labour Fallacy is the assumption that there is a fixed amount of work to be done, so if machines do some of it, humans must do less. Economists have debunked this repeatedly.

The ATM is the canonical example. When it launched, the prediction was obvious: machines dispense cash, tellers become redundant. What actually happened: ATMs made it cheaper to operate a bank branch, so banks opened far more branches. More branches meant more accounts. More accounts meant more customers needing services an ATM cannot provide: loans, disputes, financial advice. The total number of bank tellers in the US increased for decades after ATM deployment. The machine that was supposed to replace the teller created the conditions for more tellers to exist.

Jevons’ Paradox explains the mechanism: when a resource becomes more efficient to use, total consumption of that resource tends to increase, not decrease. As AI makes knowledge work cheaper, demand for it will expand. Legal services unavailable to most of the world because of cost will become accessible. Financial advice only the wealthy could afford will reach the middle class. Medical second opinions that required flying to a specialist will become a chatbot query. AI does not destroy the market for knowledge work; it grows it.
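
A minimal sketch of the mechanism under constant-elasticity demand (the elasticity values are illustrative): when demand is elastic, a price cut raises the quantity purchased more than proportionally, so total spending on knowledge work grows rather than shrinks.

    # Constant-elasticity demand: quantity Q = A * price ** (-elasticity)
    def quantity(price, elasticity, a=1.0):
        return a * price ** -elasticity

    for eps in (0.5, 1.0, 2.0):
        q_before = quantity(1.0, eps)   # baseline price
        q_after = quantity(0.5, eps)    # AI halves the price
        spend_change = (0.5 * q_after) / (1.0 * q_before)
        print(f"elasticity={eps}: quantity x{q_after / q_before:.1f}, "
              f"total spend x{spend_change:.2f}")
    # elasticity=0.5: quantity x1.4, total spend x0.71  (inelastic: market shrinks)
    # elasticity=1.0: quantity x2.0, total spend x1.00
    # elasticity=2.0: quantity x4.0, total spend x2.00  (elastic: Jevons kicks in)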

Evidence From Practice

I use Claude Code. Before it, I had a fixed number of coding projects I could reasonably pursue, bounded by time. Now I explore projects I would have deprioritized indefinitely, not because the projects became easier, but because the activation energy dropped. The same projects are explored in more ways, with more experiments. I am doing more software work, not less.

A law firm given AI tools does not lay off its lawyers and close. It reaches clients it never could before. It offers services previously too expensive to provide. It expands into adjacent practice areas. The productivity gain becomes a growth engine, not a headcount reduction.

What Actually Happens

White-collar work restructures. The bottom of the skill distribution in any given field feels pressure. But the profession does not vanish; it transforms. The question is not “will AI take jobs?” but “will the transition be managed well?” That is a policy and education question, not a question about whether civilization-ending job destruction is the inevitable technical outcome. It is not.

The Recurring Error

Every generation believes its automation wave is uniquely total. None has been. This one has strong reasons to be less total than claimed, not more.

References

  • Amdahl, G. M. (1967). Validity of the Single Processor Approach to Achieving Large Scale Computing Capabilities. AFIPS Conference Proceedings.
  • Kaplan, J., et al. (2020). Scaling Laws for Neural Language Models. OpenAI.
  • Jevons, W. S. (1865). The Coal Question.
  • Autor, D. (2015). Why Are There Still So Many Jobs? The History and Future of Workplace Automation. Journal of Economic Perspectives.
  • Bessen, J. (2015). Toil and Technology. Finance & Development, IMF.