A Faster Internet?

The Times of London features a column on the future of the Internet:

THE internet could soon be made obsolete. The scientists who pioneered it have now built a lightning-fast replacement capable of downloading entire feature films within seconds.

At speeds about 10,000 times faster than a typical broadband connection, “the grid” will be able to send the entire Rolling Stones back catalogue from Britain to Japan in less than two seconds.

The latest spin-off from Cern, the particle physics centre that created the web, the grid could also provide the kind of power needed to transmit holographic images; allow instant online gaming with hundreds of thousands of players; and offer high-definition video telephony for the price of a local call.

The article persistently confuses the Internet with the World Wide Web.

It’s all very hopeful and I have no doubt that someday, probably within my lifetime, there will be a substantially faster, more secure Internet. However, I doubt it will arrive within the next couple of years.

Three words: last mile problem.

The only real way to get around that is wireless and the barriers to hyperfast wireless data communications are real and substantial.

12 comments
  • All right! Faster porn!

  • Get cracking. It doesn’t download itself, you know.

  • The real problem with the Internet is not speed, but security. The dilemma Cliff Stoll wrote about in “The Cuckoo’s Egg,” that open and user-friendly systems that allowed you to do whatever you needed to get your work done were also easy for the bad guys to penetrate, applies to the Internet as well as it did to UNIX systems and later PC-based networks. Those systems were eventually locked down, making it much harder to get work done, because the alternative was too costly. Given the ratio of signal to noise in email and in open comment forums, we have the same problem with the Internet.

    The real question is, what can be done about it? In my mind, the best end state is to ensure that everyone can still send anything anywhere, but to make sure that bad actors can be identified and removed from the system. Some of the attempts so far at network security (firewalls spring to mind, particularly the poorly-administered ones, and killing off the redundancy of packet routing by not passing packets through any but backbone networks) are really a poor man’s attempt to get better system security (they didn’t fix the system security, merely moved the point of insecurity), and attempts to fight spam by filtering generally end up killing legitimate email along with the spam, or letting through excessive amounts of spam. To put security at the network level, two things are required: routine encryption and non-deniability.

    Encrypting everything that goes over the wires would make a large number of current attacks impossible, because the attacker would have to compromise the encryption keys for both ends of a communicating pair. (It would also defeat password sniffing attempts.) In other words, an encrypted communications channel is inherently non-deniable: you know who is at the other end of the connection. Since end-to-end encryption would limit the “send anything anywhere” flexibility of the network, the better implementation would be to encrypt and decrypt at each stage (i.e., me to my router, my router to my ISP’s router, and so on down to the end system). This way, you would only have to maintain the public keys of your connection points, rather than of every system you might communicate with. The overhead should be relatively small on modern processors, especially because network cards with onboard encryption hardware would quickly become ubiquitous, but if the overhead were too high, many of the benefits could still be obtained by cryptographically signing (rather than encrypting) packets; a toy sketch of that signing variant follows this comment. Note that all of this assumes larger packet sizes than are currently typical, in order to maintain efficient communications.

    The biggest advantage to such a system is that you could quickly establish webs of trust. If a given peer node keeps sending me spam or hosting hackers, I can cut it off and not route it, and that can be done in a semi-automated fashion (the second sketch after this comment illustrates the idea). That doesn’t do much for home users, who have one peer, but it does a lot for multi-homed networks like businesses and backbone providers. Within a short time, as certain paths start getting marked as untrusted, botnets would start to get killed because their traffic would no longer be accepted by their peers.

    The problem with all of this, and the reason it hasn’t been done, is that it can’t be bolted on like SSL: it really needs to be done at the IP level. The good part is that a secure IP could be routed alongside regular IP. The bad part is that we would have to replace every router (or at least its software) and network stack currently in existence.

    But at some point, if the Internet is to maintain its utility and in particular its universality, such a change (security of some kind at the IP level) must happen.

    Given the problems with address space, which IPv6 has failed to fix because it is not enough of an improvement to drive adoption, there are other reasons to reengineer the basic networking protocols. I hope that we will get it done before the Internet becomes only marginally useful.
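    A minimal sketch of the hop-by-hop signing variant described above: each node keeps a key only for its direct neighbours and re-signs the packet for the next link. Symmetric per-link keys stand in here for the public keys of connection points, and all names are illustrative; a real design would live at the IP level, not in application code.

    ```python
    # Hop-by-hop packet signing: each node verifies the previous hop's
    # signature and re-signs for the next link, so it only needs keys for
    # its direct neighbours. Illustrative sketch only.
    import hashlib
    import hmac
    import os


    class Node:
        def __init__(self, name):
            self.name = name
            self.link_keys = {}  # neighbour name -> shared per-link key

        def peer(self, other):
            """Establish a per-link key with a direct neighbour (key exchange assumed)."""
            key = os.urandom(32)
            self.link_keys[other.name] = key
            other.link_keys[self.name] = key

        def send(self, next_hop, payload):
            """Sign the packet with the key for this link only, then hand it off."""
            tag = hmac.new(self.link_keys[next_hop.name], payload, hashlib.sha256).digest()
            return next_hop.receive(self.name, payload, tag)

        def receive(self, prev_hop, payload, tag):
            """Verify the previous hop's signature; a bad signature drops the packet."""
            expected = hmac.new(self.link_keys[prev_hop], payload, hashlib.sha256).digest()
            if not hmac.compare_digest(tag, expected):
                raise ValueError(f"{self.name}: bad signature from {prev_hop}")
            return payload


    # host -> home router -> ISP: each hop verifies, then re-signs for its own link.
    host, router, isp = Node("host"), Node("router"), Node("isp")
    host.peer(router)
    router.peer(isp)
    packet = host.send(router, b"GET /index.html")
    packet = router.send(isp, packet)
    ```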
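    And a toy sketch of the semi-automated peer cut-off: count abuse reports per peer and stop routing a peer once it crosses a threshold. The threshold and the report weighting are invented for illustration.

    ```python
    # Semi-automated peer cut-off: peers that keep sourcing spam or attacks
    # accumulate abuse reports and are eventually dropped from the routing table.
    from collections import defaultdict


    class PeeringTable:
        CUTOFF = 100  # abuse reports before a peer is no longer routed (arbitrary)

        def __init__(self, peers):
            self.trusted = set(peers)
            self.abuse = defaultdict(int)

        def report_abuse(self, peer, weight=1):
            """Called when spam or hostile traffic is traced back to a peer."""
            self.abuse[peer] += weight
            if self.abuse[peer] >= self.CUTOFF:
                self.trusted.discard(peer)  # stop accepting its traffic and routes

        def accepts(self, peer):
            return peer in self.trusted


    table = PeeringTable(["isp-a", "isp-b", "botnet-host"])
    for _ in range(120):
        table.report_abuse("botnet-host")
    assert not table.accepts("botnet-host")
    assert table.accepts("isp-a")
    ```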

  • Agreed. Notice I mentioned security in passing in the post.

    I can think of a dozen straightforward ways of improving Internet security. The spam problem is an easy one to solve and one that wouldn’t require changing out hardware or firmware. However, IMO there are lots of people who like things the way they are now. Not just the spammers themselves but storage and bandwidth vendors, for example.

  • Without changing out hardware or firmware? At that point, would you be suggesting changing out the higher-level services (blog software, mail software, etc) or protocols (which amounts to the same thing) instead?

    I’m curious what your thoughts are on that, because frankly I’m at the point where I hardly use email any more, and turned off comments on my blog (and won’t turn them back on without better protection if I go back to posting there). I’d love to see a solution.

  • The mail protocols need changing. Basically, both sending and receiving mail need to be billed for, no piece of mail should be delivered until the sender accepts his part of the cost, and receivers need the opportunity to refuse mail.

    That would solve the problem completely but, as I said above, a lot of people like the way things are.

    I get on the order of 200-300 pieces of mail daily, almost all of which is spam and a lot of which contains malware. One thing I think bears mentioning is that if spam were stamped out effectively, the current torrent of mail would slow to a trickle. I suspect that it would improve the overall performance of the net perceptibly, not just measurably.

  • Indeed. By the way, you mentioned some of the ways in which the article missed the mark, but my favorite is how they conflate grid computing (distributing computations across the nodes of the network) with the network used to link the grid. Almost as if, well, let’s just call fiber interconnects a grid, and the new buzzword means we have a new network to replace the old one. It misses the biggest, neatest feature of the Internet: it allows different network technologies and software systems to interconnect and interact essentially seamlessly.

    It’s actually pieces like this that convinced me to ignore anything said in the mainstream news media: if they get so much so wrong about computers, why would they get any less wrong about economics, politics, foreign policy or whatever?

    On to the email protocol: how would you ensure that the sender pays his part of the cost, while still ensuring the network remains neutral to content and open to connect any two nodes at will?

  • My experience has been that most people don’t distinguish among fundamentally different architectures. Network, grid, backplane, no distinction. That takes me back. Years ago I used to give classes on this sort of thing.

    Essentially what I have in mind is a protocol in which mail servers would reject mail that didn’t include a valid payment token. Tokens would be one-time use only and provided by senders (or re-senders). Yes, there would still be opportunities for fraud and a fairly complicated accounting job, but I think it would be cheaper than what we’ve got now, which, while it’s a hassle for end users, is a nightmare for hosts.
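    A rough sketch of that token check, assuming some party (an invented TokenIssuer here) sells one-time postage tokens and the receiving server redeems them before accepting delivery; the accounting and fraud handling are left out.

    ```python
    # One-time payment tokens for mail: the server refuses any message whose
    # token cannot be redeemed, and a token can be redeemed exactly once.
    import secrets


    class TokenIssuer:
        """Stands in for whoever sells postage tokens to senders (illustrative)."""

        def __init__(self):
            self.unspent = set()

        def sell_token(self):
            token = secrets.token_hex(16)
            self.unspent.add(token)
            return token

        def redeem(self, token):
            """Valid exactly once; a forged or replayed token fails."""
            if token in self.unspent:
                self.unspent.remove(token)
                return True
            return False


    class MailServer:
        def __init__(self, issuer):
            self.issuer = issuer
            self.inbox = []

        def deliver(self, message, token):
            if not self.issuer.redeem(token):
                return "550 no valid payment token, mail refused"
            self.inbox.append(message)
            return "250 accepted"


    issuer = TokenIssuer()
    server = MailServer(issuer)
    token = issuer.sell_token()
    print(server.deliver("Hello", token))        # 250 accepted
    print(server.deliver("Hello again", token))  # 550 ... token already spent
    ```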

  • Who gets the money, and can I be the middle man?

    Actually, I do have a bit of an objection to this scheme, but it’s economic rather than technical. First, I think it would reduce legitimate email use, because I could no longer (for example) use email to have my server notify me of impending problems, or have todoist send me reminders when my todo list is late (actually, this would affect a lot of web-based services). Second, it would not reduce spam, because spammers make considerably more from the spam than you could impose as a cost on legitimate businesses. Moreover, spammers almost always operate from zombie computers, networks of which are extraordinarily profitable, so token forging schemes would proliferate unless the protocol required a central, trusted clearing house for tokens. That sort of violates the Internet’s concept as a peering system: such a clearing house would have to be more powerful than DNS in order to answer queries on token validity and maintain the CRL, yet it would have to be centralized. I simply don’t think the cost structure that this drives is practical.

  • You may be right although I think the general contours of my approach will probably be what eventually materializes.

  • Could be. Whoever figures out practical micropayments will be wealthy beyond the dreams of avarice.
