If this description by Billy Perrigo at Time is an accurate characterization:
The U.S. government must move “quickly and decisively” to avert substantial national security risks stemming from artificial intelligence (AI) which could, in the worst case, cause an “extinction-level threat to the human species,” says a report commissioned by the U.S. government published on Monday.
“Current frontier AI development poses urgent and growing risks to national security,” the report, which TIME obtained ahead of its publication, says. “The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons.” AGI is a hypothetical technology that could perform most tasks at or above the level of a human. Such systems do not currently exist, but the leading AI labs are working toward them and many expect AGI to arrive within the next five years or less.
It’s hard for me to imagine a more feckless, superficial set of recommendations. It’s not merely bolting the barn door after the horses have already fled; it’s bolting the barn door after the horses have died of old age.
To continue with the nuclear-weapons analogy, imagine that everyone knew how to make an atomic bomb, and that it could be built in your basement with materials ordinarily found around the house. Deterrence would be a sad joke. So let’s ask the obvious question: even if all of the report’s recommendations were adopted, how would that change North Korea’s hypothetical use of artificial intelligence? India’s? China’s?
Not only would it do nothing to change any of those countries’ programs, it would do nothing to change the ability of thousands of private Americans to pursue artificial intelligence development.
Try again.