Getting It


I wanted to commend a post at Outside the Beltway to your attention. The post is about the impact of LLM AI on jobs in the United States. James’s observations are fairly quotidian (technology is improving rapidly, employment is growing slowly, productivity rapidly, etc.). It’s the comments that are interesting. Most are the typical outright rejection and denial, but one, made by an individual who comments here occasionally, hits the nail squarely on the head:

It can generate code at the level of an intern, requiring large amounts of supervision. Typically, interns are useless on net, and the internship is just a months long job interview looking to see if the little rugrat can show signs of growing past that.

and

I worked at a startup decades ago, and quickly realized that what we were ostensibly doing was not the real goal — we weren’t really making a product, we were selling a dream to the investor class, telling them that we would be revolutionizing the internet and that they could get in on the ground floor. The actual users were just a means to an end, and their happiness was almost irrelevant, so long as a non-expert investor could look at it and think “yes, stupid peasants might like this.”

Here the dream is straightforward, and the dream of every C-suite executive: get rid of employees. It doesn’t have to be successful, it just has to be convincing to non-experts that long-term it will be successful. And even if all it does is make employees nervous and “grateful to have a job”… that’s part of the dream.

When I began working sixty years ago, big companies routinely and systematically trained new employees. That hasn’t been true for years. Today entry-level jobs exist primarily as training investments, when they exist at all. Firms do not hire entry-level workers for their productivity; they hire them to create future productive workers. If managers believe AI will replace that future worker, the rational action is to stop creating those positions now.

Other arguments are secondary at most. It doesn’t matter whether AI can do all jobs, or indeed any jobs, better than human beings in the near term. What matters is what management believes, and as long as managers believe that AI can do as good a job as a human employee, LLM AI will reduce the number of entry-level jobs created first, then higher-level jobs.

That will be seen first where the largest number of entry-level jobs have been created, e.g. in South Asia. Entry-level technology jobs are already being disrupted in India, according to the India Times and Storyboard18:

Artificial Intelligence adoption is nudging India’s IT hiring in a new direction, slower at the entry level, steadier at the top, and sharper on skills.

A firm-level study by the Indian Council for Research on International Economic Relations (ICRIER), supported by OpenAI, finds that companies are moderating hiring, particularly for entry-level roles, even as mid- and senior-level employment remains largely stable.

This has happened multiple times in the past. Railroads were massively overbuilt before they became profitable. MBA spreadsheet models drove layoffs in the 1980s before any productivity gains existed.

“Title inflation” is the mechanism by which the technology sector will ensure that a significant number of jobs are eliminated. And it’s not just the technology sector. Any sector that has already experienced significant outsourcing, including the financial sector, will do the same thing. The number of associates hired by large law firms will decline.

India is the canary in the coal mine.

6 comments
  • steve Link

    Seems like this will be a lot more broadly based than railroads or the 1980s and likely to keep growing. I think a lot of people are unhappy with their interactions with AI, or what they think is AI, when dealing with customer/consumer issues. Makes me wonder if those AI systems are doing exactly what they were designed to do, i.e. make it difficult for consumers so that they will give up. That’s certainly what we have seen with health insurance companies.

    Steve

  • I think there are multiple things all going on at the same time. Your point is well-taken—it may well be working as intended.

    But it’s also true that the people writing prompts for AI don’t know what the heck they are doing. That’s implicit in the “title inflation” I mentioned in the post.

    And my observation about management still holds. What ultimately matters is what management thinks.

  • TastyBits Link

    At the lower levels, most (all) customer support is rote-based. There is a decision tree that they follow. The only reason for a human is customer comfort.
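    A scripted support flow like the one described can be sketched as a small decision tree. Everything here — the questions, branches, and actions — is hypothetical, purely to illustrate the structure an agent (human or AI) would walk through:

    ```python
    # Hypothetical scripted support flow modeled as a decision tree.
    # Internal nodes are dicts with a question and yes/no branches;
    # leaves are strings naming the action to take.
    tree = {
        "question": "Is this about billing?",
        "yes": {
            "question": "Do you want to enroll in Autopay?",
            "yes": "Enroll the caller in Autopay.",
            "no": "Escalate to a billing agent.",  # the branch scripts often omit
        },
        "no": "Escalate to technical support.",
    }

    def walk(node, answers):
        """Follow scripted yes/no answers until reaching an action (a leaf string)."""
        for answer in answers:
            if isinstance(node, str):  # already at an action; stop descending
                break
            node = node[answer]
        return node

    print(walk(tree, ["yes", "no"]))  # -> Escalate to a billing agent.
    ```

    The weakness is visible in the structure itself: any situation without a pre-written branch simply has nowhere to go, which is the problem described in the comment below.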

    I do not understand what AI is supposed to do or be, but an AI assistant would make sense.

    For coding, is the result in binary? I suspect AI coding is the same, just without a human. Why? An AI coder should produce machine code. AI lives in a binary world and thinks in machine code. Human-readable code exists for humans.

    An AI run company would have no need for the human based activities. If memos were needed, they would be sent as binary. Microsoft Office is for humans. Why pay for Excel? The data can be stored electronically as an array.

    As to management, stupid decisions are made all the time, and companies go bankrupt. Apparently, Saks Fifth Avenue is going bankrupt because management made a stupid decision to buy Neiman Marcus. (I am assuming that the problem was actually the price and/or the terms of the sale.) Maybe, AI would have made a better decision, but that assumes there was a better decision.

    The human activities and products being replaced by AI are useless to AI.

    (I saw a video with some AI guru. Supposedly, ChatGPT was originally trained with fan fiction sites, and the next version included Reddit. WTF. They could have used literature, textbooks, etc., but why bother with actual knowledge.)

  • steve Link

    “There is a decision tree that they follow. The only reason for a human is customer comfort.”

    Disagree. Verizon disenrolled me from Autopay for some reason. I just spent about 40 minutes going through all of the decision trees offered, and none of them offered any options related to Autopay other than enrolling. I wasn’t even offered the option of speaking to someone. The problem as I see it is that the decision trees don’t cover unusual situations. For straightforward problems they work well, though you may have to go through multiple options to get to the correct choice. So the company saves money, but at the expense of the consumer’s time.

    Anyway, I re-enrolled but still don’t know why it happened or whether it will happen again. If it does, I will have to figure out how to talk with a live person and hope I get a good one.

    Steve

  • There used to be a protocol for speaking with a live agent. I posted on that subject more than 20 years ago.

    Lately I’ve observed that in many cases there may be no such protocol. It may actually be impossible to speak with a live agent. O tempora o mores.

  • TastyBits Link

    @steve
    Actually, we agree. I was trying to shorten my comment, but it was too short.

    I rarely use customer service for the very reason you cite. You must go through the lower echelons before you can get to somebody who actually knows something, but everybody must go through them.

    Since I try to fix my own problems, I have already gone through their recommended actions, and I need somebody who knows something. I am in the minority, and I assume that the lower echelons work for most people. I could be wrong.
