Why Was He Wrong?

This article at Futurism by Joe Wilkins caught my eye. Mr. Wilkins observes:

With so many wild predictions flying around about the future of AI, it’s important to occasionally take a step back and check in on what came true — and what hasn’t come to pass.

Exactly six months ago, Dario Amodei, the CEO of massive AI company Anthropic, claimed that in half a year, AI would be “writing 90 percent of code.” And that was the worst-case scenario; in just three months, he predicted, we could hit a place where “essentially all” code is written by AI.

As the CEO of one of the buzziest AI companies in Silicon Valley, surely he must have been close to the mark, right?

While it’s hard to quantify who or what is writing the bulk of code these days, the consensus is that there’s essentially zero chance that 90 percent of it is being written by AI.

Research published within the past six months explains why: AI has been found to actually slow down software engineers and increase their workload. Though developers in the study did spend less time coding, researching, and testing, they made up for it by spending even more time reviewing the AI’s work, tweaking prompts, and waiting for the system to spit out the code.

Unfortunately, Mr. Wilkins does not actually answer the question that forms the title of this post: why was Dario Amodei wrong? Will none of his predictions come to pass?

I’ve been experimenting with large language model artificial intelligence (LLM AI) models for two years now. Based on my limited experience, there are several inherent problems with their use.

The first is that applications created using AI aren’t designed. They’re just implemented. When features are added or bugs identified and reported, the applications are re-implemented with run-on effects. I’m seeing this in the software updates on my smartphone and tablet. They are being “improved” rapidly to the point of becoming unusable.

While that’s okay for simple tools only used occasionally, it can be disastrous for mission-critical applications or those that have financial aspects.

The second is that human beings just aren’t very good at explaining what they want and/or need. They leave things out. They include extraneous things. They may not recognize the run-on effects of a decision. It takes considerable skill and experience to explain things properly and completely, and there are fewer people who can do those things than can grind out code.

The third is that there has been what might be called “title inflation”. Seniors aren’t seniors any more. Forty years ago a senior developer had eight years or more of experience. Now five years is considered senior. It isn’t, but that’s what businesses are saying these days. Also, as a past colleague of mine once observed, senior here in the U.S. and senior in another country are two different things.

Another is that you can’t rely on AI to test your applications for you. LLM AI models don’t understand anything. They just do what they’ve been trained to do (best case). The implication is that only human beings can determine whether something is suitable to task, which in turn suggests that human beings need to do the testing. Good testing is harder than coding.

Sadly, none of this makes any difference. All that matters is that a senior developer plus three junior developers costs more than a junior prompt writer and a subscription to several AI models. Note: there are no senior AI prompt writers because the job hasn’t been around long enough, and such creatures may never exist because of the rapid pace at which the models evolve. That’s all that will show up in the quarterly report, and whatever President Trump says, short-term thinking is here to stay. Goosing stock value at the expense of long-term risks to the enterprise is too easy a decision for modern managers to make.

  • Zachriel

    It used to be Google-fu. Now, it’s AI-fu. The human-AI symbiotic relationship will mature. People for whom AI is native will develop entirely new ways of creating.

  • In the long run we are all dead.

    You didn’t answer the question: why was Amodei wrong?

    While I agree that AI is here to stay and has benefits, I think it’s being grossly oversold. Maybe in five or ten years the promise will live up to the hype, but it isn’t the case now. And it’s producing adverse effects now.

    As I’ve written before, I think the correct analogy to AI is spreadsheet programs, e.g. Excel, Lotus 1-2-3, VisiCalc. Not the analogies that are being made.

  • steve

    He was wrong because AI isn’t as developed as he seemed to think, so in the real world people kept finding problems. Next, he was wrong because it takes people a while to figure out how best to leverage new technology. Even if AI were perfect, we would still need to figure out how to use it and where it works best. There are other reasons, but I also wonder if the guy was just trying to sell his product, hoping FOMO would kick in.

    Steve

  • Zachriel

    Dave Schuler: why was Amodei wrong?

    His reasoning wasn’t provided, but irrational exuberance, even if honest, is not unusual in a CEO hyping their product. Our counter-prediction is based on previous technologies and how they were integrated over time with humans. AI appears to be following a similar path (so far!). What others see as taking their jobs, the new generation will take as normal, and they will create something new and unexpected, leaving the older generation as befuddled as the previous generation with a new remote control.

    “It’s tough to make predictions, especially about the future.” —Yogi Berra (among others).

  • Drew

    “While I agree that AI is here to stay and has benefits, I think it’s being grossly oversold.”

    Aw, giv’m a break. Why, just this morning I saw a come-on saying ChatGPT would tell you how to turn $10 into $10MM. And I thought, I’ve been working too hard; I gotta get me some of that there Chat stuff………..

  • Andy

    I think there are multiple ways of interpreting the prediction. I don’t have a ton of experience coding, but a lot of it is iterative.

    I think you’re right that being able to ask the right questions is key. People know what they mean, but AI doesn’t unless one is explicit.

    I found three great uses for AI in my current work and life:
    – Identifying random stuff – with a picture and an accurate description, it’s pretty good at identifying stuff. It’s been a big help in going through the stuff from my parents’ and sister’s estates.
    – Parsing technical documents. In my current work, I have to read a lot of technical filings for the FCC. Feed AI one of the documents and it’s very good at summarizing and pulling out the information I need.
    – Data organization. AI has already been a huge time-saver in organizing and cataloging the photo archives I’ve inherited from the past two generations of my family. The difficult part is getting everything scanned with a general date. A lot of companies, including the one I work for, are now using AI to wrangle the vast and often stovepiped reams of internal data and history. These are not general models, but AIs trained only on the organization’s information. This is also being used in government, according to my friends who are still working, especially in the intel community, which generates much more data than analysts can look at.

    I think it will be the specialized models that will really succeed the most. AI, like anything else, is a tool, and specialized tools tend to perform their functions better.

  • CuriousOnlooker

    Want to comment on this before it drops off the page.

    First, let’s concede Mr. Amodei is wrong, but let’s give some context.

    LLMs keep improving; over the summer they have repeatedly placed in the top 10 in the world in programming contests, on tasks that take the best humans about 10 hours to do.

    However, access to this capability is limited. I don’t know of easy ways to make a model run for 8-10 hours autonomously solving problems; usually the LLM providers cap a prompt at 20 minutes or so of “thinking time”. The software engineers at the AI labs don’t face that cap.

    The other part is that software development looks quite different in the AI labs and AI-adjacent startups that are Mr. Amodei’s universe than it does in the software development the rest of us think of.

    The algorithms and software for developing LLMs are relatively simple, much simpler than the software behind Google Search or Linux, so it’s quite possible that this type of software is more amenable to LLM code generation than traditional software is.

    Indeed, Mr. Amodei recently mentioned that in his company about 70-90% of the code is now written via LLMs.

    I am leaning towards Mr. Amodei being wrong on the timeline but directionally right on the impact: LLMs will eventually dominate in terms of the amount of code written (and not that far away, within 5 years).

    Why did Amodei make such an aggressive prediction, besides being a salesman? He’s a veteran of AI, and one has to have unusual conviction to stick to a field that goes through “AI winters”.
