2025 Drags to a Close

As we near the end of the year I’m seeing quite a few predictions for next year, many of them either unremarkable or preposterous. This morning on a lark I asked ChatGPT what its predictions were for 2026. Here are the results:

  1. AI spending hits a credibility wall as boards of directors and CFOs push back, demanding immediate, measurable results from adopting AI.
  2. A visible tiering of AI users emerges among power users, occasional users, and institutional avoiders including government, regulated professions, and unions.
  3. White-collar hiring freezes spread, not layoffs
  4. One major AI firm retreats from “frontier” scaling, moving away from ever-larger models toward efficiency, specialization, or verticalization.
  5. Electricity becomes a binding constraint. It should be noted that this will give an edge to China in the adoption of AI.
  6. Courts quietly restrict AI use in legal proceedings
  7. Medical AI stalls at the liability boundary
  8. A backlash against “AI fluency” hiring language
  9. The first serious AI-driven outsourcing reversal appears. Work previously offshored will be reshored not to human workers but to AI.
  10. Public discourse shifts from “Can AI do X?” to “Who is responsible?”

Some of those are verbatim. Some are paraphrases. If you asked the same question I suspect the answer would vary considerably. After several years of regular use on my part ChatGPT has a pretty fair sampling of how to respond to me. YMMV.

I asked several follow-up questions. I may report on those in the coming year.

2 comments
  • Charlie Musick

    In completely unrelated news, I just found out my cousin’s husband is currently the longest serving active judge in Illinois with 50 years of service.

    https://www.illinoiscourts.gov/News/1646/Justice-James-A-Knecht-from-Hickeys-Billiards-to-the-longest-serving-active-judge-in-Illinois/news-detail/

  • bob sykes

    First, Happy New Year! Best wishes for you and yours.

    I am deeply suspicious of the utility and reliability of AI in general. There have been several reports now (real or fake?) of lawyers submitting AI-generated briefs that contained citations of fabricated, nonexistent court cases. Have there been any false AI-generated medical diagnoses? Would any physician admit to one, given the prevalence of lawsuits?

    I am reminded of an essay published in the ’70s, or so, that predicted one day courts would only accept eyewitness accounts, because documents and audio and visual recordings were so easily fabricated.

    I note, once again, that the Chinese use AI, and computers in general, to automate and optimize their manufacturing and transportation, while we use it to play games.

    Your AI prediction that reshored factories would be operated by AI, and bring no jobs, is certainly true. Chinese factories have very few, highly specialized workers. The robots do everything.
