AI With Chinese Characteristics

There was something missing in Vermont Sen. Bernie Sanders’s Wall Street Journal op-ed about the perils of generative artificial intelligence. It included Americans’ skepticism about AI:

A recent Quinnipiac poll found that 55% of Americans think AI will do more harm than good, 70% think AI will lead to fewer jobs, and only 5% think AI development is being led by people and organizations that represent their interests.

It included an explanation for the skepticism:

The American people understand that AI and robotics will transform our world. They want to make certain that this technological revolution makes life better, not worse, for them and their families. They know that fundamental questions must be answered before we rush forward. They don’t trust the AI oligarchs.

and it included a proposal for slowing down the adoption of AI:

Congress must act. That is why I have introduced legislation, with Rep. Alexandria Ocasio-Cortez, to impose a federal moratorium on the construction of new AI data centers until strong national safeguards are in place.

Sen. Sanders is correct that there are substantial risks associated with the development of AI and even more associated with its adoption, including the loss of jobs and increased income inequality. But recognizing risk does not make every proposed remedy effective.

The development of AI is not solely a U. S. phenomenon. Although the U. S. presently holds the leadership position in the race to develop AI, dozens of countries, including China, France, and Japan, have active AI development projects. What Sen. Sanders's proposal doesn't include is how he plans to prevent China from proceeding with the development of AI or stop American companies from using Chinese AI.

Given that omission, Sen. Sanders's proposal would do less to slow the development or adoption of AI than it would to impede American control, capability, and leverage. It would have little impact on the loss of jobs due to AI. To whatever degree that will happen, it will happen.

A domestic moratorium in a competitive global technology race is not “slowing AI”. It is unilateral disarmament.

The prudent policy action is selective export controls and encouraging the construction of domestic data centers rather than discouraging them.

  • steve Link

    I don't think this is a binary choice. It's true that other countries will forge ahead if we lag back and many of those places have fewer concerns about safety than we do. It's also true that it likely means fewer jobs, at least for a while, and that the people leading the AI charge can't be trusted and don't have the nation's best interests at heart. It will just create another group of self-interested billionaires.* As long as AI doesn't harm them they will care little what it does to others.

    I think the only realistic choice is to go ahead with AI, accepting the bad parts, knowing the other option is likely worse. Maybe we can mitigate the harmful effects afterwards, but now that the billionaire class has more openly and brazenly entered the political fray that won't be easy.

    * There are some prominent leaders in the AI movement talking about safety issues but they are a minority, and those who are not concerned about safety issues are allied with the current admin.

    Steve

  • I think the only realistic choice is to go ahead with AI, accepting the bad parts, knowing the other option is likely worse. Maybe we can mitigate the harmful effects afterwards, but now that the billionaire class has more openly and brazenly entered the political fray that won't be easy.

    That’s pretty much my view. I think the fundamental problem is ethical or moral and the difficulty of solving such problems via legislation is well known.

  • steve Link

    OT-I missed this when it came out, but there's an interesting list compiled by the Council on Foreign Relations from some historians on the best and worst foreign policy decisions. Of note, they view the large majority of our foreign military adventures post WW2 as not being in our national interest. (Some of the choices seem more a judgment of morality than truly affecting national interest.)

    Steve

  • CuriousOnlooker Link

    The discourse over “AI” is both ill-informed (someone pointed out data centers consume about 6% of the water used for golf courses), and doesn’t illuminate because we don’t have an accurate theory of intelligence.

    Here is an analogy of where we are in understanding intelligence compared to physics — it is as though we are just after Galileo discovered the moons of Jupiter, showing that Earth isn't the center of the universe, but before Newton's discovery of gravity and the laws of motion. Similarly, we've discovered or invented intelligence that matches human intelligence in many respects, but there's no theory to make useful predictions of where things go. For example, is there a ceiling to intelligence like there is for speed (the speed of light)? What is the relationship between the amount of compute and "intelligence"? What kinds of problems can a "beyond human intelligence" solve?

    Without a "universal" theory, we have so few observations from 3 years of "AI" that one can extrapolate practically any outcome one can dream of for "AI".

    Also, things move fast and a lot of people's conceptions of AI are out of date. For example, I suspect "AI" isn't going to be controlled by a few people or companies in the long term. The compute cost to train a GPT-2-class model (state of the art 7 years ago) is < $100, and a GPT-3-class model (the one that started the LLM craze) is about $10K. And the fact that not many tasks require a von Neumann level of intelligence to perform means many companies may prefer to create/customize proprietary models.

    Also, what I see is that the adoption of AI in the enterprise is going to generate a lot of work for people — even as it makes projects and tasks that previously were too expensive to attempt feasible. A lot of processes have to be reworked, and that will have to be done by humans.

  • steve Link
  • The most interesting thing about that list is that a decision's position in the list is approximately inversely related to how recent it is.

    The Reagan, Bush I, Bush II, and Obama administrations all took actions that are among the lowest in approval. There are some actions I was surprised were not on the list at all, e.g. the occupation of Haiti 1915-1934.

    Also, note that the U. S. intervention in Russia following the October Revolution that I’ve mentioned before had very low approval.
