How Not to Manage Artificial Intelligence

You might want to take a look at Michael O’Hanlon’s post at The National Interest on the importance of artificial intelligence to U.S. defense:

A case in point is what our colleague at Brookings, retired Gen. John Allen, calls “hyperwar.” He develops the idea in a new article in the journal Proceedings, coauthored with Amir Husain. They imagine swarms of self-propelled munitions that, in attacking a given target, deduce patterns of behavior of the target’s defenses and find ways to circumvent them, aware all along of the capabilities and coordinates of their teammates in the attack (the other self-propelled munitions). This is indeed about the place where the word “robotics” seems no longer to do justice to what is happening, since that term implies a largely prescripted process or series of actions. What happens in hyperwar is not only fundamentally adaptive, but also so fast that it far supersedes what could be accomplished by any weapons system with humans in the loop. Other authors, such as former Brookings scholar Peter Singer, have written about related technologies, in a partly fictional sense. Now, Allen and Husain are not just seeing into the future, but laying out a near-term agenda for defense innovation.

The United States needs to move expeditiously down this path. People have reasons to fear fully autonomous weaponry, but if a Terminator-like entity is what they are thinking of, their worries are premature. That software technology is still decades away, at the earliest, along with the required hardware. However, what will be available sooner is technology that will be able to decide what or who is a target—based on the specific rules laid out by the programmer of the software, which could be highly conservative and restrictive—and fire upon that target without any human input.

To see why outright bans on AI activities would not make sense, consider a simple analogy. Despite many states having signed the Non-Proliferation Treaty, which is meant to halt the spread of nuclear weapons, the treaty has not prevented North Korea from building a nuclear arsenal. But at least we have our own nuclear arsenal with which we can attempt to deter other such countries, a tactic that has been generally successful to date. A preemptive ban on AI development would not be in the United States’ best interest because non-state actors and noncompliant states could still develop it, leaving the United States and its allies behind. The ban would not be verifiable and it could therefore amount to unilateral disarmament. If Western countries decided to ban fully autonomous weaponry and a North Korea fielded it in battle, it would create a highly fraught and dangerous situation.
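
To make the ideas in the excerpt a bit more concrete, here is a minimal, purely illustrative Python sketch of the two mechanisms described above: swarm members that broadcast their coordinates to teammates and steer away from approach axes the defense has already covered, and an engagement gate built from the kind of “highly conservative and restrictive” programmer-defined rules the excerpt mentions. Every name, field, and threshold in it (Track, SwarmAgent, engagement_authorized, the 0.95 confidence floor) is hypothetical and is not drawn from Allen and Husain’s article.

```python
# Illustrative sketch only; every class, field, and threshold here is hypothetical.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Track:
    """A sensed object, as a hypothetical seeker might report it."""
    track_id: int
    is_military_emitter: bool      # hypothetical classifier output
    inside_authorized_zone: bool   # geofence check
    civilians_nearby: bool         # hypothetical collateral-risk flag
    classifier_confidence: float   # 0.0 to 1.0


@dataclass
class SwarmAgent:
    """One munition in the swarm, aware of its teammates' coordinates."""
    agent_id: int
    position: tuple
    teammates: List["SwarmAgent"] = field(default_factory=list)
    blocked_axes: set = field(default_factory=set)

    def share_state(self) -> dict:
        """What each munition broadcasts so teammates know where it is."""
        return {"id": self.agent_id, "position": self.position}

    def observe_defense(self, intercepted_axis: str) -> None:
        """Record an approach axis the defense has just shown it can cover."""
        self.blocked_axes.add(intercepted_axis)

    def choose_axis(self, candidate_axes: List[str]) -> Optional[str]:
        """Pick an approach the defense has not yet demonstrated it can cover."""
        open_axes = [a for a in candidate_axes if a not in self.blocked_axes]
        return open_axes[0] if open_axes else None


def engagement_authorized(track: Track, min_confidence: float = 0.95) -> bool:
    """Conservative rule set: every condition must hold or the weapon holds fire."""
    return (
        track.is_military_emitter
        and track.inside_authorized_zone
        and not track.civilians_nearby
        and track.classifier_confidence >= min_confidence
    )


if __name__ == "__main__":
    agent = SwarmAgent(agent_id=1, position=(0.0, 0.0))
    print(agent.share_state())                            # teammates see these coordinates
    agent.observe_defense("north")                        # the defense covered the north axis
    print(agent.choose_axis(["north", "east", "west"]))   # -> "east"

    ambiguous = Track(track_id=1, is_military_emitter=True, inside_authorized_zone=True,
                      civilians_nearby=True, classifier_confidence=0.99)
    print(engagement_authorized(ambiguous))               # -> False: default is to hold fire
```

Run as written, the sketch prints the agent’s broadcast state, then “east” (the agent avoids the axis the defense has already demonstrated it can cover), and finally False (the ambiguous track trips the collateral-risk rule, so the default is to hold fire). Whether rules like these could ever satisfy the laws of war is a separate question, taken up in the comments below.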

IMO if we continue down the path we’ve followed for cyberwar, we’ve already lost this race. The United States does not produce the largest number of solitary geniuses (that would be Russia), it doesn’t do bureaucratic science better than anyone else (that would be Germany or Japan), and it can’t deploy enormous numbers of AI workers as China has in its approach to cyberwar.

For more than a century the distinct genius of the United States has been based on culture and economics. The A-bomb shortened WWII; it didn’t end it. What won WWII were our farms and our car culture: all of those kids grew up using firearms and knowing how machines worked. Similarly, the U.S. has failed to use its main strength, the large numbers of young people who’ve invested so much time in video games, and has instead tried a top-down approach to cyberwar. That is a vision of things to come.

2 comments
  • Andy

    It’s all a fantasy.

    However, what will be available sooner is technology that will be able to decide what or who is a target—based on the specific rules laid out by the programmer of the software, which could be highly conservative and restrictive—and fire upon that target without any human input.

    There are two main problems with this:

    1. Software cannot apply the laws of war, which require judgment. It’s an open question, but weapons that can do what is described are probably illegal.

    2. Software can’t do what is described. AI still cannot beat humans in moderately complex video games without cheating. It will function worse in the much more complex and adaptive environment of warfare. In situations with a high degree of ambiguity, programming will either fail or be gamed by the enemy.

  • steve

    Microsoft pays teams to game. So do other corporations. We have talent out there if we just choose to use it. I think that we are too rule-bound to make it work well. Most of our efforts are focused through the military. How many top-level gamers and hackers do we get who can also pass fitness tests and make it through boot camp? Not many, I bet.

    Steve
