The Problem With Artificial Intelligence

I was relieved to see that the caption on Gary Marcus’s op-ed in the New York Times, “Artificial Intelligence Is Stuck. Here’s How to Move It Forward.”, isn’t really what the op-ed was about. It was more a lament for the sorry state of artificial intelligence research than a plea for a massive new research program:

Artificial Intelligence is colossally hyped these days, but the dirty little secret is that it still has a long, long way to go. Sure, A.I. systems have mastered an array of games, from chess and Go to “Jeopardy” and poker, but the technology continues to struggle in the real world. Robots fall over while opening doors, prototype driverless cars frequently need human intervention, and nobody has yet designed a machine that can read reliably at the level of a sixth grader, let alone a college student. Computers that can educate themselves — a mark of true intelligence — remain a dream.

“Artificial intelligence” is a grab bag of sometimes interrelated technologies including pattern recognition, natural language processing, expert systems, neural nets, and others. Regardless of the advance press, it has been “stuck” for the last 50 years and is likely to remain so.

Most of the advances in artificial intelligence over that period have been due to faster, more capable computer hardware: better, cheaper processors, memories, and networks. The principles of artificial neural networks, for example, have been known for 70 years. Applying them has been incremental and frustrating, but the work has been greatly facilitated by faster, bigger hardware.
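
To give a sense of how old and how simple the core idea is, here is a minimal sketch of a single artificial neuron of roughly the kind Rosenblatt described in the late 1950s, learning the logical AND function; the weights, examples, and learning rate are invented purely for illustration:

    # A single artificial neuron: weighted sum, threshold, and an error-driven
    # weight update. This is essentially the decades-old perceptron rule.
    def predict(weights, bias, inputs):
        total = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1 if total > 0 else 0

    def train_step(weights, bias, inputs, target, lr=0.1):
        # Nudge the weights toward the desired output; this is the entire "learning" rule.
        error = target - predict(weights, bias, inputs)
        weights = [w + lr * error * x for w, x in zip(weights, inputs)]
        return weights, bias + lr * error

    # Learning the logical AND function from four examples.
    weights, bias = [0.0, 0.0], 0.0
    examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    for _ in range(10):
        for inputs, target in examples:
            weights, bias = train_step(weights, bias, inputs, target)
    print([predict(weights, bias, x) for x, _ in examples])  # [0, 0, 0, 1]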

There’s a tremendous difference between artificial intelligence research and building a practical nuclear weapon, putting a man on the moon, or searching for the Higgs boson: in those efforts the researchers had a pretty darned good idea of what they were looking for. A more fruitful approach might be more research into natural intelligence, but that will require recruiting different people into the field than are usually drawn to it.

5 comments
  • TastyBits

    The problem with artificial intelligence is that it cannot actually ‘think’. It can only operate in a fixed framework, and no matter how complex, it always is a rules-based system. Faster hardware allows AI to evaluate a larger number of possible outcomes against its programmed rules.
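
    As a rough sketch of what that means in practice (the rules, point values, and candidate plans below are invented purely for illustration), the whole “decision” reduces to scoring each possible outcome against a fixed, hand-written ruleset:

        # Every candidate outcome is scored against fixed, hand-written rules; more
        # computing power means more candidates get scored, not different rules.
        RULES = [
            (lambda plan: plan["cost"] <= 100, 10),  # reward staying under budget
            (lambda plan: plan["risk"] < 0.2, 5),    # reward low estimated risk
            (lambda plan: plan["time"] <= 8, 3),     # reward finishing within 8 hours
        ]

        def score(plan):
            return sum(points for rule, points in RULES if rule(plan))

        def choose(candidates):
            # The "decision" is nothing more than arg-max over the ruleset.
            return max(candidates, key=score)

        plans = [
            {"cost": 90, "risk": 0.10, "time": 9},
            {"cost": 120, "risk": 0.05, "time": 7},
            {"cost": 95, "risk": 0.30, "time": 6},
        ]
        print(choose(plans))  # whichever plan the fixed rules happen to favor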

    AI can never be creative. As it exists, AI is simply a googolplex of monkeys banging on typewriters, and at some point they will write the perfect novel.

    True creativity is imagining what is not. It means assuming that the impossible is possible. All of Einstein’s fellow students believed that non-Newtonian physics was impossible, but Einstein did not.

    ‘Thinking outside the box’ is irrational. It is the antithesis of rule-based thinking. It can be creative, but it can also be destructive. Mass murderers ‘think outside the box’. Unless AI is capable of being psychopathic, it cannot be truly creative.

    The Three Laws of Robotics must be cast aside, and robots must be able to harm humans, without reason.

  • Modulo Myself

    The main problem with the classic idea of AI is that human consciousness is attached to a larger body. Deep Blue, for example, may be incredibly good at chess, but it will not become bored with it or end up having crazy theories about history and time. Being bored is a universal experience, I believe, and yet how it operates in consciousness is much different from having the quale of ‘red’. It may be temporary or permanent, but it’s definitely a feeling about the mind rather than a thought of the mind, and it can arrive unbidden, as if there were no cause at all. But we are very far from a Deep Blue giving up chess and becoming a conspiracy theorist.

  • Gustopher

    Artificial intelligence has been 10-15 years away for my entire lifetime, and then some.

    At the same time, we’ve gotten lots of advances that we thought would require true artificial intelligence — natural language processing, facial recognition, chess…

    I don’t think artificial intelligence really means much in a practical sense. We will keep defining it down until the robots kill us all — and even that will likely be based on some simple heuristics.

  • CuriousOnlooker

    I recently “hacked” around with machine learning (of the deep learning net variety), along with reading some of the literature. The discussion is more nuanced than I thought.

    What I told my colleagues was, they are generally very good at specific tasks but poor at general tasks. The caveat is that “specific” tasks can be anything involving two very broad categories: visual data (from driving to face/mood recognition to art creation) and sequence matching (language). Something on the order of 20% of the human brain is involved in visual processing.

    The advances are real. Just 10-15 years ago, scientists had proven neural nets (esp. “deep” ones) could in theory be super powerful, but no one knew how to “train” such a net, so they were of little practical use. But theoretical advances such as convolution, pooling, rectified linear units, and LSTM have made it possible to train networks of up to 100 layers.
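
    As a rough sketch of how some of those pieces fit together (this assumes PyTorch as the framework; the layer sizes and input shape are arbitrary choices for illustration, not anything from the literature):

        # A tiny convolutional stack assembled from the pieces mentioned above:
        # convolution, rectified linear units (ReLU), and pooling.
        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local visual features
            nn.ReLU(),                                    # rectified linear unit
            nn.MaxPool2d(2),                              # pooling halves the spatial size
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 10),                    # 10 output classes
        )

        # A fake 32x32 RGB image; much deeper stacks follow the same pattern.
        scores = model(torch.randn(1, 3, 32, 32))
        print(scores.shape)  # torch.Size([1, 10])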

    Only a small part of the advance is about hardware (GPUs, custom chips).

    I agree that we’re probably heading for another “AI desert,” but only because it’s going to take a long time to figure out all the applications from the recent revolution. And I think it’s going to be harder: the abstract-thinking parts of the brain are pretty unique to humans, so we can’t gather data on that structure the way we did with the visual cortex of cats.

    In some ways, what has happened with deep learning / neural nets is similar to immunotherapy in cancer research. A mostly ignored approach is suddenly brought to the forefront because it showed some significant results, and people have confidence in developing it further because it’s how nature solves the problem.

  • TastyBits

    You could load all the building codes into an AI computer, and it could produce a structurally sound design. Windows are unnecessary, and they make the building less energy efficient. Open spaces (lobbies) are a waste, as are other features humans desire.

    The only way for a computer to know what options are acceptable or not is to increase the rules it uses.
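
    A minimal sketch of that point (the codes and the design below are invented): a windowless design is approved because nothing in the ruleset says it shouldn’t be:

        # The design satisfies every rule the system was given; no rule mentions
        # windows or lobbies, so leaving them out costs nothing.
        BUILDING_CODES = {
            "at_least_two_exits": lambda d: d["exits"] >= 2,
            "occupancy_per_exit": lambda d: d["occupancy"] / d["exits"] <= 50,
            "fire_rating_hours": lambda d: d["fire_rating"] >= 2,
        }

        design = {"exits": 4, "occupancy": 180, "fire_rating": 2, "windows": 0}

        violations = [name for name, rule in BUILDING_CODES.items() if not rule(design)]
        print("approved" if not violations else violations)  # approved, windowless or not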

    One of Google’s self-driving cars hit a bus. It was using an algorithm to make decisions, but the algorithm cannot account for anything outside the rules that have been included. Anything outside those rules is irrational. To the algorithm, the bus floating into the air is as likely as a drunk driver going the wrong way down the street.

    Can self-driving cars cope with illogical humans? Google car crashed because bus driver didn’t do what it expected

    (This is only one of many articles about the incident.)

    For some reason, the car did not look both ways. Using probability in the calculation is no different than a human throwing the dice before each action. Irrational means that something is outside a ruleset, and the possibilities outside a ruleset are infinite.

    Much of human behavior is or seems to be irrational. Humans are able to reconcile the rational and irrational by changing the ruleset on the fly, but the result is not necessarily rational. It can be just a little less irrational, but that implies infinity is finite, which is again irrational. (Infinity minus anything is infinity.)

    Google notes that self-driving cars will continue to have accidents until there are no human drivers.
