This post began as a riff on a post I saw on Substack but have since lost track of. I want to make several points about the artificial intelligence craze presently under way.
My first point is that, yes, we are in an artificial intelligence mania, rather similar to the Tulip Mania of the 17th century. Microsoft, Alphabet, Meta, and Amazon spent hundreds of billions of dollars on generative AI (GAI) in 2024. That certainly qualifies as a “mania”.
And it’s just getting started. You can date the beginning of the bubble to the launch of ChatGPT in 2022. If previous bubbles are any guide, it will last around five years. Like previous bubbles, this one is propelled more by “fear of missing out” (FOMO) than by actual benefits.
It’s interesting to consider the changes wrought by the last major technological shift, the shift to mobile devices. When it began with the introduction of the iPhone, the largest companies in the S&P 500 were (in descending order) ExxonMobil, General Electric, Microsoft, Citigroup, and AT&T. By the launch of ChatGPT they were Apple, Microsoft, Alphabet, Amazon, and Berkshire Hathaway, at roughly five times the market capitalization of their predecessors. Don’t be surprised if the leaders have changed just as much by 2030. Hence, FOMO.
In the iPhone revolution, the infrastructure investments were borne by telecommunications companies while other companies realized most of the benefits. Don’t be surprised if the same thing happens with the implementation of GAI.
One way GAI will differ from the automation of the 20th century is that rather than reducing the number of unskilled workers required for a task, it will provide the opportunity to increase the productivity of skilled activities that rely heavily on memory and adherence to guidelines.
The big question will be who captures the economic surplus realized from such efficiencies. Capital? Skilled workers? Consumers? Considering the entrenched interests and their power, I anticipate an enormous political battle over who benefits.
A final intriguing aspect of this shift is that it will take place primarily in the United States, for political, social, and economic reasons. The U.S. is already very different from our European cousins in important ways (there are no trillion-dollar European companies), and I suspect this shift will make the differences even more pronounced.
No idea myself, but I remember this exchange between Tyler Cowen and Peter Thiel:
“COWEN: . . . In this world to come, will the wordcels just lose their influence? People who write, people who play around with ideas, pundits — are they just toast? What’s this going to look like? Are they going to give up power peacefully? Are they going to go down with the ship? Are they going to set off nuclear bombs?”
“THIEL: . . . My intuition would be it’s going to be quite the opposite, where it seems much worse for the math people than the word people. What people have told me is that they think within three to five years, the AI models will be able to solve all the US Math Olympiad problems. That would shift things quite a bit.”
https://conversationswithtyler.com/episodes/peter-thiel-political-theology/
That’s just an excerpt. Discussions like this tend to be expressed in extremes. Cowen has taken to regurgitating LLM responses on his blog, which tend to be poorly written and cloaked in an appeal-to-authority (whose?) fallacy. On the other hand, answering a math quiz overlooks the need to understand which math tools to use and what their limitations are. Calculators exist, but schools usually prohibit their use until the student reaches the point where the math itself is no longer the focus.
I think the leaders have already shifted due to AI.
Nvidia is now the 2nd-largest public company by market cap. Tesla, at 7th, has been marketing itself as a robotics company (self-driving cars and actual robots); it’s hard to see investors valuing it at $1 trillion based on the EV business alone. Broadcom, at 8th, has benefited from designing custom chips for AI.
It’s still an open question whether the changes will occur primarily in the US. China is doing excellent work in this field, and it’s uncertain whether the main bottleneck on the path to better AI and its commercialization is access to advanced chips; that is the only thing the Chinese lack vis-à-vis the US.
I think AI can easily do lots of boilerplate writing: stuff like ad copy, news reports, and instructions. It will be wonderful when people can use AI to rewrite and edit science and medical papers so that they are actually readable. It’s great that AI can solve math problems, but in the real world I don’t think there are many jobs where you are just handed a page of math problems to solve. You still need people to decide which problems to solve and the best way to go about them. AI, for now, will be best at making things go faster. Maybe eventually it takes over more steps.
Steve
Let’s understand why there is so much focus on whether AI can solve math problems.
Make sure you watch Andrej Karpathy’s video on the topic.
As he explained, the ChatGPT most people are familiar with does “system 1” thinking: instinctive responses to questions, without any deep reflective thought. Useful, but it comes with the classic LLM failure mode of failing miserably on anything outside its training material. What is missing is AI that can do “system 2” thinking, the type of intelligence you use to solve novel problems, carefully backtracking and double-checking as it works through them.
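To make the distinction concrete, here is a minimal sketch of the two prompting styles, assuming the OpenAI Python client (openai>=1.0); the model name and prompts are illustrative, and whether a “think step by step” prompt actually elicits better reasoning varies by model:

```python
# A sketch contrasting "system 1" (instant answer) and "system 2"
# (deliberate, self-checking) prompting styles. Assumes the OpenAI
# Python client (pip install openai); the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 "
    "more than the ball. How much does the ball cost?"
)

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# "System 1": demand an instant answer, leaving no room for reflection.
fast = ask(QUESTION + " Reply with only the answer.")

# "System 2": ask the model to reason step by step and check itself.
slow = ask(
    QUESTION + " Think step by step, then double-check your reasoning "
    "before stating a final answer."
)

print("Fast:", fast)
print("Slow:", slow)
```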
They are focused on math problems because of an observation: humans who are good at “system 2 thinking” generally do well on math problems and math contests. Of course this rests on an assumption, namely that training AI to do well on math problems will lead to generalized “system 2 thinking” across many domains. It’s possible it could instead produce AI that’s good at solving math problems and nothing else, like training AI to be good at chess.
This is the toughest part of AI: unlike flight or rockets, we don’t know the “laws of intelligence”; we don’t even have a universally agreed way to measure intelligence. To a large extent we don’t really understand why LLMs work at all.