In prior installments in this series of posts I have reflected on the Industrial Revolution and the Digital Revolution, which were alike in many ways but differed significantly in others. In this installment I will consider the ongoing Artificial Intelligence Revolution. As we will see, it differs from its predecessors in some basic ways.
The Industrial Revolution mechanized human muscle. The Digital Revolution mechanized human memory and communication. The AI Revolution mechanizes human judgment itself.
A good starting point is this post by Emily Chamlee-Wright at Persuasion:
Given the nature of my work, I’m in coalitions—and a lot of conversations—focused on fortifying American democracy. And like everyone, the promises and threats associated with the rapid advancement of AI technology are top of mind for me.
When it comes to scientific discovery, optimism abounds. The promise that AI tools like AlphaFold will accelerate biomedical discovery—from new antibiotics to deeper insights into the proteins driving Alzheimer’s and cancer—sparks excitement. But there’s also a pessimism that lingers in these conversations, especially when it comes to what the future holds for workers who are not at the frontiers of science. If you write code or oversee routine managerial processes, the thinking goes, your days of productive employment are numbered. And a jobless citizenry does not bode well for democracy.
Dr. Chamlee-Wright’s central argument is that the human imagination will always come up with new “needs” and that every newly perceived human need will continue to clear through human labor markets. She assumes, without either theoretical or empirical support, that newly imagined human needs will remain monetizable as labor once cognitive scarcity disappears.
Dr. Chamlee-Wright is ignoring a crucial difference between the AI Revolution and the Industrial and Digital Revolutions before it: cognition has never before ceased to be scarce. There is no historical analogy, because in every prior revolution the scarce factor remained human. This time, it does not. Not only is there no empirical evidence from current AI diffusion pointing in her direction; the early evidence points the other way.
Consider this example. The Indian Ministry of Statistics and Programme Implementation produces an annual report called the “Periodic Labour Force Survey” (PLFS). Here are the results for low-level IT support workers for the last five years:

In human terms those figures reflect a decline in 2019-2020, a post-pandemic spike, a steep subsequent drop, and a continuing slow decline. One could argue that this volatility merely reflects pandemic distortion, but that interpretation fails to explain the lack of rebound once demand normalized. That is precisely the pattern seen in prior waves of clerical automation.
Not only does the 2021-2022 spike not refute the notion that AI is eroding these jobs, it actually confirms it. The same phenomenon was observed among secretaries, typists, and travel agents before their jobs were automated: organizations routinely overhire during system transitions to handle data cleaning, exception processing, and customer education. Those roles disappear once automation stabilizes. This is not cyclical recovery; it is terminal transition.
The steep 2022-2023 drop reflects the adoption of self-service portals, AI-driven chat, and internal copilots, and as the graph shows, that process is continuing. There is no cyclical recovery. Furthermore, due to factors characteristic of the Indian economy, including labor surplus, wage compression, and strong substitution effects, Indian headcount may remain stable while task content is hollowed out, which is exactly what my analysis has suggested. If we see hollowing out even in a labor-surplus, low-wage economy like India, the effect in high-wage Western economies will be more severe, not less. India is not an outlier here; it is a leading indicator.
India is not an arbitrary example. I chose it precisely because it is a labor-surplus, low-wage economy with relatively credible national statistics and deep integration into global IT services. If cognitive automation were merely a rich-country phenomenon, India should be resilient; if anything, it should absorb displaced work. The fact that we instead observe hollowing out even here makes the pattern more, not less, concerning. This is not cherry-picking: I chose India before I knew the results, confident in what it would demonstrate, and it did.
Dr. Chamlee-Wright’s expressed confidence in the resilience of democracy despite job loss reflects a misunderstanding of the challenge. The problem is not whether democracy will survive despite job loss. The problem is that democracy was architected around a laboring citizenry. Remove labor, and you remove one of its structural pillars.
The Challenge of Artificial Intelligence: Part I—the Industrial Revolution
The Challenge of Artificial Intelligence: Part II—the Digital Revolution

There are a couple of critiques I have.
One is fundamental: we don’t have a mathematical, quantitative theory of intelligence. So we don’t know what the “ceiling” for AI/LLMs is, or what the practical limit is because of things like energy consumption. It is plausible that LLM/neural-net AIs top out roughly where humans top out. As an example, AlphaGo and other AI-based game engines haven’t significantly progressed since 2019; they are better, but they aren’t 10 or 100 times better.
The second is that while LLMs may mean human cognitive scarcity is no longer an issue, the world of atoms still obeys physics, and there aren’t unlimited resources like energy or land. I assume those have to be allocated via some form of market, or the mixed market/command systems that democracies use. And markets mean trade and labor of some sort.
Finally, while the hot phrase is “agentic” AI, agency is the one thing they aren’t really good at and aren’t really trained on. Humans are agentic, driven by greed, curiosity, passion, and empathy. Even if LLMs are better at solving math problems, which problems should they focus on (because computing is not unlimited)? Not all problems are equally valuable. So which problems AI works on is a matter of human judgment, driven by human desires.
AI putting everybody or most people out of work is akin to perpetual motion. People need money to purchase goods and services. Universal Basic Income (UBI) schemes envision skimming profits to provide that money, but the price of goods and services is cost plus profit.
Extracting 100% of the profit will not cover the cost, no matter how small. Furthermore, investors must get a higher return than UBI. Otherwise, they will just take UBI, like everybody else. Basically, “from each according to his ability, to each according to his needs”. Ask the Soviets how well that worked.
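The arithmetic behind that claim can be made concrete. A toy sketch, with numbers invented purely for illustration (the 90/10 cost-profit split is a made-up assumption, not data from the post):

```python
# Toy illustration of the commenter's point: if price = cost + profit,
# then even confiscating 100% of profit yields far less than the cost of
# the goods a UBI would need to buy. All figures are hypothetical.

cost = 90.0    # assumed production cost per unit of goods
profit = 10.0  # assumed profit per unit
price = cost + profit

ubi_pool = profit               # skim the entire profit to fund UBI
units_affordable = ubi_pool / price  # how much of one unit that buys

print(f"price per unit: {price}")
print(f"UBI pool from 100% of profit: {ubi_pool}")
print(f"fraction of one unit purchasable: {units_affordable}")
```

Under these assumptions the skimmed profit buys only a tenth of what is produced; the gap only closes if profit margins approach 100% of price, which is the commenter’s point about unsustainability.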
All things require feedback, and AI is no different. To be most useful to humans, human goods and services require human feedback, and humans are not always logical and rational.
For example, President Trump is the greatest president, or President Trump is the worst president. He could be neither. An objective list of characteristics could be used, but determining the list would be subjective.
AI can never fully replace humans. AI cannot innovate. Today was not imaginable 100 years ago. Humans are never satisfied with what they have. There is no perfect product.
Who would have imagined adding a computer chip to a teddy bear and selling it as a children’s toy? I would be happy with a spam filter that worked.
I am highly doubtful that AI will be the downfall of mankind, but I do not believe it will be the saviour of mankind, either.
TastyBits:
You’re right that it’s self-destructive. When has that stopped corporate management if they thought it would improve their balance sheets?
CuriousOnlooker:
I agree with your skepticism about agentic AI. My opinion is that human oversight of AI should be mandatory. That would require an act of government and insistent enforcement which would be difficult. That’s actually why I am writing this piece and other related pieces. There’s an urgent need for the federal government to intervene in this.
@Dave Schuler
The COVID response is indicative of what will happen with a large percentage of the population unemployed. If the Fed lowers rates, inflation will soar. If the Fed maintains or increases rates, corporations will go bankrupt.
The government tried a limited UBI scheme, and the results have not gone well. Scaling up the COVID UBI scheme will only increase inflation, and this will force a further increase in UBI. At some point, it is unsustainable.
AI operates after-the-fact. It uses existing knowledge to create new knowledge. It is derivative, and it requires humans to generate the data it consumes. It will be no different than the industrial or digital revolutions.
Eventually, everything will stabilize. AI workers will do what they do best, and human workers will do what they do best. The transition will be messy, but government intervention will only prolong it. Overall, has healthcare benefited from government intervention?
I do not support a UBI—I don’t think it’s practical. It’s intrinsically a positive-feedback system, as you point out. IMO the mitigation lies elsewhere.
@Dave Schuler
I was not implying you support it. I was just pointing out that the small scale UBI scheme has failed.
Mitigation for a fully autonomous drone system would include the US Constitution, Bill of Rights, the laws of war, international law, etc. It would decide targets based upon its training, and it would be no different than any Airman. Without human intervention, would it decide to bomb Venezuelan boats or a US citizen without a trial?
“My opinion is that human oversight of AI should be mandatory. That would require an act of government and insistent enforcement which would be difficult.”
I disagree: given our already litigious culture, it’s already required to have human oversight over the actions of AI. We don’t treat the output of an AI as an act of God, so it’s currently the case that either the LLM provider or the person who applied the output of an LLM is liable for any harms resulting from said usage.
That liability risk is also why human labor isn’t going to disappear anytime soon. Organizations will need a human backup to CYA if the LLM behaves unexpectedly.
Let’s take the discussion in an interesting direction: the usage of LLMs in law. I read a blog post by a Supreme Court litigator who laid out pretty compelling evidence that LLMs can argue a case in front of a judge as well as any lawyer today, and their ability to write a brief in support of a party is just as good. The post makes the point that letting pro se litigants use LLMs as virtual lawyers should be a tremendous boon for the majority of this class of litigants. But do we allow it? If the LLM makes a mistake, who is liable? The LLM providers would refuse to let their models be used if they could be sued for mistakes, but that would deprive the vast majority of pro se litigants of the chance to improve their odds in court.
Then what about judges? LLMs are likely to be better judges than people; they aren’t subject to the politics, petty biases, and emotions that human judges have. But I doubt anyone is ever going to want an LLM judge to decide the fate of their case.
I would be satisfied with strict liability. I do not believe that is presently the case. Also, please explain the human oversight in agentic AI.
I don’t believe in replacing judges with LLM AI but I do believe in replacing judges (with ongoing human oversight) by expert systems. The expert system could be created by an LLM AI.
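The distinction being drawn is between an opaque LLM and a transparent rule system whose logic a human can audit before signing off. A minimal sketch of what such an expert system might look like; the rules, case fields, and outcomes below are entirely hypothetical, invented to illustrate the shape of the idea:

```python
# Hypothetical sketch of a rule-based "expert system" for a routine civil
# ruling, with mandatory human sign-off on every proposed outcome.
# The rules and case fields are invented for illustration only.

def decide_small_claim(case: dict) -> dict:
    """Apply ordered, human-readable rules; return a proposed ruling
    that is always flagged for human review before it takes effect."""
    if case["contract_exists"] and case["breach_shown"]:
        ruling = "award_damages"
    elif case["contract_exists"]:
        ruling = "no_breach_found"
    else:
        ruling = "dismiss"
    return {"proposed": ruling, "requires_human_signoff": True}

proposal = decide_small_claim(
    {"contract_exists": True, "breach_shown": True}
)
print(proposal)
```

Because the rules are explicit, the overseeing human can inspect exactly why a ruling was proposed, which is the auditability an LLM’s weights do not offer; an LLM could draft the rule table, but the rules themselves remain reviewable.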
For the appellate courts, I do not see how this would work. Not all rulings are unanimous. So will human oversight be by one human, or will the full panel of appellate judges be required? If more than one, what is the difference?
For common law, I foresee more issues. A civil code might have fewer issues, but there will still be problems.
Unless all human knowledge is entered, human intervention is required to determine what data to “train” the model on. Will sharia law be the same as English law? If AI is the ultimate intelligence, there can only be one answer.
In my opinion, AI will dumb down all employment. Today’s skilled jobs will be tomorrow’s entry-level jobs, and eventually the result will be more jobs.