Bolting the Barn Door

If this description by Billy Perrigo at Time is an accurate characterization:

The U.S. government must move “quickly and decisively” to avert substantial national security risks stemming from artificial intelligence (AI) which could, in the worst case, cause an “extinction-level threat to the human species,” says a report commissioned by the U.S. government published on Monday.

“Current frontier AI development poses urgent and growing risks to national security,” the report, which TIME obtained ahead of its publication, says. “The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.” AGI is a hypothetical technology that could perform most tasks at or above the level of a human. Such systems do not currently exist, but the leading AI labs are working toward them and many expect AGI to arrive within the next five years or less.

It’s hard for me to imagine a more feckless, superficial set of recommendations. It’s not merely bolting the barn door after the horses have already fled, it’s bolting the barn door after the horses have died of old age.

To continue with the report's analogy to nuclear weapons, imagine that everyone knew how to make an atomic bomb and that one could be built in your basement from stuff you can ordinarily find around your home. Deterrence would be a sad joke. Let's ask the obvious question: even if all of the report's recommendations were adopted, how would that change North Korea's hypothetical use of artificial intelligence? India's? China's?

Not only would it do nothing to change any of those countries’ programs, it would do nothing to change thousands of private Americans’ ability to pursue artificial intelligence development.

Try again.

9 comments
  • TastyBits

    There is no such thing as AI. It does not exist, but I will entertain the excitement, for a moment. Please define AI, and explain how it will exist outside of science fiction.

  • “Artificial intelligence” is, as it has been for the last 60 years, a catch-all or grab-bag term for a collection of loosely related strategies for executing specific tasks using computers. It includes pattern matching, expert systems, natural language recognition, and the most recent iteration, called “large language models” (LLMs) or “generative AI”.

    In general, LLMs consist of neural nets and other heuristic strategies that are “trained” using natural language samples. They exist and they are certainly artificial, although not particularly intelligent. What has made LLMs possible are recent developments in add-in boards (mostly graphics adapters) in computers.
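
    To make “trained using natural language samples” a bit more concrete, here is a toy sketch of my own (an illustration only, not anything from the report): a bigram model that predicts the next word purely from counts over its training text. Real LLMs replace the counting with enormous neural networks running on graphics hardware, but the underlying objective, predicting the next token, is the same in spirit.

        # Toy next-word predictor built from nothing but counts.
        # Real LLMs use deep neural nets and vastly larger corpora,
        # but the objective -- predict the next token -- is similar in spirit.
        from collections import Counter, defaultdict

        corpus = ("bolt the barn door after the barn is empty "
                  "and the horse has left the barn").split()

        # Count how often each word follows each other word.
        following = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            following[prev][nxt] += 1

        def predict(word):
            """Return the word most often seen after `word` in training, or None."""
            counts = following.get(word)
            return counts.most_common(1)[0][0] if counts else None

        print(predict("the"))   # -> 'barn'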

  • TastyBits

    It is statistical regression with a very large database.

    I hear sci-fi themes discussed with no understanding of the underlying concepts. A moon colony has a problem with radiation. Reheating Mars using CO2 has a problem with a lack of a magnetic field. Space travel for hundreds of years will evolve the human body.

    Adding a sexy interface to decision trees does not make it real. A spider’s brain is orders of magnitude more complex than a video card. It is silly nonsense.

    AI may make calling support more human-like, but you will get the same results as pressing 1 for yes and 2 for no. It might make video games more life-like, but they are still just games with defined parameters.

    Circuits are nothing more than really small logic gates, and they add really fast (a toy sketch of addition built from such gates appears at the end of this comment). A computer cannot understand anything other than 1’s and 0’s. (There have been base-10 computers, but they went the way of the dinosaur.)

    Actually, video game AI is a perfect example of the limitations. Because a game cannot alter its programming, exploits are just gamers using that programming against the game. AI is doomed to the same fate.
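
    To illustrate the logic-gate point above, here is a toy sketch (an illustration only, not a description of any real chip): addition built from nothing but AND, OR, and XOR gates operating on 1s and 0s, chained into a ripple-carry adder.

        # Toy sketch: binary addition out of nothing but logic gates.
        def AND(a, b): return a & b
        def OR(a, b):  return a | b
        def XOR(a, b): return a ^ b

        def full_adder(a, b, carry_in):
            """Add three bits; return (sum_bit, carry_out)."""
            s = XOR(XOR(a, b), carry_in)
            carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
            return s, carry_out

        def add_bits(x_bits, y_bits):
            """Ripple-carry add two equal-length, little-endian bit lists."""
            result, carry = [], 0
            for a, b in zip(x_bits, y_bits):
                s, carry = full_adder(a, b, carry)
                result.append(s)
            return result + [carry]

        # 6 (binary 110) + 3 (binary 011), least significant bit first:
        print(add_bits([0, 1, 1], [1, 1, 0]))   # -> [1, 0, 0, 1], i.e. 9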

  • Andy

    I think the “threat” of AI is completely overhyped. Suggesting it represents an “extinction-level” threat is simply ludicrous based on what we know.

  • steve

    There is a camp that wants to exaggerate the risks and a camp that wants to ignore them and believes AI will lead us to nirvana. It should make cyberattacks more effective but maybe also give us better defenses. Defenses usually respond to attacks, so defense will usually be behind. OTOH it has the potential to greatly increase productivity in many areas.

    Steve

  • Larry

    Ask Yuval Noah Harari what he thinks about AI, which is currently only in its infancy.

    No one knows the future, no one!

  • “AI, which is currently only in its infancy”

    Long infancy. AI has been in its infancy for the last 60 years or more.

    Like desktop room-temperature fusion or Brazil, it may always be the future.

  • Drew

    See this? [_] This is a thimble. You now know how much I know about AI.

    But I would offer this: if there is an ability to materially influence or distort with AI, there will be those intent on mischief, particularly our government, and every whackjob activist around, right or left. Beware.

    Talk amongst yourselves.

  • Zachriel

    Gunpowder weapons meant the rise of citizenry and the increased brutality of war. Mechanized vehicles resulted in increased prosperity and tanks. Modern rockets led to space exploration and ICBMs. Social networks have helped draw people with like interests together, from knitting circles to terrorists. Every human invention is a double-edged sword, including double-edged swords. Why should AI be any different?
