What Happens?

An article at Newsweek puts additional heft behind a point I’ve been making for some time. Moore’s Law, the rule of thumb predicting that the density of transistors on a chip would double every two years, appears to have run out of gas:

In 1965, Gordon Moore, co-founder of Intel, came up with a theory of technology progression that held true for more than 50 years. Coined “Moore’s law,” the theory suggested that the speed of computer processors would double every two years. The transistors inside of computer chips would continue to decline in cost and size but increase in power. Those predictions held true for decades, but a new study suggests that Moore’s law may have finally run out.

The study, published in Nature Electronics, suggests that technology can no longer get any smaller and innovators will have to figure out a new way to make it better. What this new way is, no one yet knows. As outlined in the new study, the future of microprocessors, the tiny computer chips that help run our lives, is complicated.

“The underlying science for this technology is as of yet unknown, and will require significant research funds—an order of magnitude more than is being invested today,” said Hassan Khan, a researcher at Carnegie Mellon University who specializes in engineering and public policy, Tech Explore reported.

And there’s no guarantee that the investment would pay off. Technology doesn’t necessarily advance geometrically or even continuously but in fits and starts. Contrary to what you might have heard, there haven’t been many basic discoveries over the last half century; most advances have been elaborations of existing understanding.

It seems to me that the most obvious consequence of an end to Moore’s Law is that lousy computer software would no longer be as easy to tolerate as it is now. Software developers could no longer count on faster hardware to save them.
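
To make that concrete, here’s a minimal sketch in C (my own illustration, not anything from the article) of the sort of carelessness that ever-faster hardware has been covering for: two ways of building the same string, one quadratic because it rescans the buffer on every append, one linear because it remembers where it left off. On today’s machines the first version is merely sluggish at scale; without routine hardware gains, it stays slow.

/* Illustrative only: sloppy vs. tight string building in C. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Quadratic: strcat walks the whole growing string to find the end
   on every append. */
void join_sloppy(char *dst, const char *word, int n) {
    dst[0] = '\0';
    for (int i = 0; i < n; i++)
        strcat(dst, word);
}

/* Linear: remember where the string ends and copy there. */
void join_tight(char *dst, const char *word, int n) {
    size_t len = strlen(word);
    char *p = dst;
    for (int i = 0; i < n; i++) {
        memcpy(p, word, len);
        p += len;
    }
    *p = '\0';
}

int main(void) {
    enum { N = 50000 };
    const char *word = "chip";
    char *buf = malloc((size_t)N * strlen(word) + 1);
    if (!buf) return 1;
    join_sloppy(buf, word, N);   /* noticeably slow at this size */
    join_tight(buf, word, N);    /* effectively instant */
    printf("%zu bytes built\n", strlen(buf));
    free(buf);
    return 0;
}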

And to whatever extent autonomous vehicles, and the robot apocalypse more generally, depend on rapidly improving hardware, they may become practical later than has been anticipated.

2 comments
  • TastyBits

    There is also a thermal limitation, and some years ago heat became more important than speed. In addition, core functionality has been transferred to separate, specialized sub-systems. An array of high-end video cards can rival mainframes in computing ability.

    I use desktop systems, and I stopped worrying about speed long ago. I am more concerned with the architecture, abilities, and features.

    If all development were done in C, assembly, or machine language, the code would be tighter and more elegant, but few of today’s applications would exist.

  • Most people don’t know how to estimate performance in a Windows environment. The advice I’ve given is that you can probably estimate it based on just two factors: the speed of your mass storage and the speed of your video card. All sorts of operations you wouldn’t expect to use the video card do, e.g. memory-to-memory moves.

    As to C, it requires knowledge, or at least awareness, of the underlying hardware, which practically nobody has these days.
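
    A minimal sketch of what that awareness buys you (my own illustration, nothing from the article or the study): summing the same matrix two ways in C. The row-major loop follows the array’s actual layout in memory and stays in the cache; the column-major loop does identical arithmetic but strides across memory and typically runs several times slower.

    /* Illustrative only: the same sum, cache-friendly vs. cache-hostile. */
    #include <stdio.h>

    #define ROWS 2048
    #define COLS 2048

    static double grid[ROWS][COLS];

    /* Walks memory in the order C stores it: consecutive addresses. */
    double sum_row_major(void) {
        double total = 0.0;
        for (int r = 0; r < ROWS; r++)
            for (int c = 0; c < COLS; c++)
                total += grid[r][c];
        return total;
    }

    /* Same arithmetic, but jumps COLS * sizeof(double) bytes per access,
       defeating the cache. */
    double sum_col_major(void) {
        double total = 0.0;
        for (int c = 0; c < COLS; c++)
            for (int r = 0; r < ROWS; r++)
                total += grid[r][c];
        return total;
    }

    int main(void) {
        for (int r = 0; r < ROWS; r++)
            for (int c = 0; c < COLS; c++)
                grid[r][c] = 1.0;
        printf("row-major sum:    %.0f\n", sum_row_major());
        printf("column-major sum: %.0f\n", sum_col_major());
        return 0;
    }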
