The Challenge of Artificial Intelligence: Part III—the Artificial Intelligence Revolution

In prior installments in this series I reflected on the Industrial Revolution and the Digital Revolution, which were alike in many ways but differed significantly in others. In this installment I will consider the ongoing Artificial Intelligence Revolution. As we will see, it differs in some basic ways from its predecessors.

The Industrial Revolution mechanized human muscle. The Digital Revolution mechanized human memory and communication. The AI Revolution mechanizes human judgment itself.

A good starting point is this post by Emily Chamlee-Wright at Persuasion:

Given the nature of my work, I’m in coalitions—and a lot of conversations—focused on fortifying American democracy. And like everyone, the promises and threats associated with the rapid advancement of AI technology are top of mind for me.

When it comes to scientific discovery, optimism abounds. The promise that AI tools like AlphaFold will accelerate biomedical discovery—from new antibiotics to deeper insights into the proteins driving Alzheimer’s and cancer—sparks excitement. But there’s also a pessimism that lingers in these conversations, especially when it comes to what the future holds for workers who are not at the frontiers of science. If you write code or oversee routine managerial processes, the thinking goes, your days of productive employment are numbered. And a jobless citizenry does not bode well for democracy.

Dr. Chamlee-Wright’s central argument is that human imagination will always come up with new “needs” and that every newly perceived human need will continue to clear through human labor markets. She assumes, without either theoretical or empirical support, that newly imagined human needs will remain monetizable in a labor sense once cognitive scarcity disappears.

Dr. Chamlee-Wright is ignoring a crucial difference between the AI Revolution and the Industrial and Digital Revolutions before it. Never before has cognition ceased to be scarce. There is no historical analogy, because in every prior revolution the scarce factor remained human. This time it does not. Not only is there no empirical evidence from current AI diffusion pointing in her direction; the early evidence points the other way.

Consider this example. The Indian Ministry of Statistics and Programme Implementation produces a report called the “Periodic Labour Force Survey” (PLFS) annually. Here are the results for low-level IT support workers for the last five years:

[Graph: PLFS headcount for low-level IT support workers over the last five years]

In human terms, those figures reflect a decline in 2019-2020, a post-pandemic spike, a steep subsequent drop, and a continuing slow decline. One could argue that this volatility merely reflects pandemic distortion, but that interpretation fails to explain the lack of rebound once demand normalized. It is precisely the pattern seen in prior waves of clerical automation.

Not only does the 2021-2022 spike not refute the notion of the erosion of jobs by AI, it actually confirms it. The same phenomenon was observed among secretaries, typists, and travel agents before their jobs were automated. Organizations routinely overhire during system transitions to handle data cleaning, exception processing, and customer education. Those roles disappear once automation stabilizes. This is not cyclical recovery; it is terminal transition.

The steep 2022-2023 drop reflects the adoption of self-service portals, AI-driven chat, and internal copilots. As the graph shows, that process is continuing. There is no cyclical recovery. Furthermore, due to factors characteristic of the Indian economy, including labor surplus, wage compression, and strong substitution effects, Indian headcount may remain stable while task content is hollowed out, which is exactly what my analysis has suggested. If even in a labor-surplus, low-wage economy like India we see hollowing out, the effect in high-wage Western economies will be more severe, not less. India is not an outlier here; it is a leading indicator.

India is not an arbitrary example. I chose it precisely because it is a labor-surplus, low-wage economy with relatively credible national statistics and deep integration into global IT services. If cognitive automation were merely a rich-country phenomenon, India should be resilient. If anything, it should absorb displaced work. The fact that we instead observe hollowing out even there makes the pattern more, not less, concerning. This is not cherry-picking: I chose India before I knew the results, confident in what they would demonstrate, and they did.

Dr. Chamlee-Wright’s expressed confidence in the resilience of democracy despite job loss reflects a misunderstanding of the challenge. The problem is not whether democracy will survive despite job loss. The problem is that democracy was architected around a laboring citizenry. Remove labor, and you remove one of its structural pillars.

The Challenge of Artificial Intelligence: Part I—the Industrial Revolution
The Challenge of Artificial Intelligence: Part II—the Digital Revolution

  • CuriousOnlooker

    There are a couple of critiques I have.

    One is fundamental: we don’t have a mathematical, quantitative theory of intelligence. So we don’t know what the “ceiling” for AI / LLMs is, or what the practical limit is because of things like energy consumption. It is plausible that LLM / neural net AIs top out roughly where humans top out. As an example, AlphaGo and AI-based chess engines haven’t significantly progressed since 2019 — they are better, but they aren’t 10 or 100 times better.

    The second is that while LLMs may mean human cognitive scarcity is no longer an issue, the world of atoms still obeys physics, and there aren’t unlimited resources like energy or land. I assume those have to be allocated via some form of market, or the mixed market/command systems that democracies use. And markets mean trade and labor of some sort.

    The last is that while the hot phrase is “agentic” AI, agency is the one thing they aren’t really good at and aren’t really trained on. Humans are agentic, driven by greed, curiosity, passion, and empathy. Even if LLMs are better at solving math problems, which problems should they focus on (since computing is not unlimited)? Not all problems are equally valuable. So which problems AI works on remains a matter of human judgment, driven by human desires.
