Transitioning to a new Internet would be difficult, but research continues


NEW YORK -- Transitioning to a next-generation Internet could be akin to changing the engines on a moving airplane.

Routers and other networking devices will likely need replacing, and personal computers could be in store for software upgrades. Headaches could arise because it won't be possible to simply shut down the entire network for maintenance: companies, groups and individuals depend on it every day.

And just think of the costs -- potentially billions of dollars.

Advocates of a clean-slate Internet -- a restructuring of the underlying architecture to better handle security, mobility and other emerging needs -- agree that any transition will be difficult.

Consider that the groundwork for the IPv6 system for expanding the pool of Internet addresses was largely completed nearly a decade ago, yet the vast majority of software and hardware today still use the older, more crowded IPv4 technology. The clean-slate initiatives are far more ambitious than that.
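The scale gap between the two addressing systems is easy to quantify: IPv4 addresses are 32 bits long, while IPv6 addresses are 128 bits. A quick back-of-the-envelope calculation:

```python
# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
ipv4_total = 2 ** 32   # roughly 4.3 billion addresses
ipv6_total = 2 ** 128  # roughly 3.4 x 10^38 addresses

print(f"IPv4: {ipv4_total:,} addresses")
# IPv6 offers 2^96 times as many addresses as IPv4
print(f"IPv6 has {ipv6_total // ipv4_total:,} times as many")
```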

But researchers aren't deterred.

"The premise of the clean-slate design is, let's start by saying, 'How should it be done?' independent of 'Can we retrofit it?'" said Andrea Goldsmith, an electrical engineering professor at Stanford. "Once we know what the right thing to do is, then we can say, 'Is there an evolutionary path?'"

One transition scenario is to run a parallel network for applications that truly need the improved functions. People would migrate to the new system over time, the way some are now abandoning the traditional telephone system for Internet-based phones, even as the two networks run side by side.

"There's no such thing as a flag day," said Larry Peterson, chairman of computer science at Princeton. "What happens is that certain services start to take off and attract users, and industry players start to take notice and adapt."

That's not unlike the approach NASA has in mind for extending the Internet into outer space. NASA has started to deploy the Interplanetary Internet so its spacecraft would have a common way of communicating with one another and with mission control.

But because of issues unique to outer space -- such as a planet temporarily blocking a spacecraft signal, or the 15 to 45 minutes it takes a message to reach Mars and back -- NASA can't simply slap on the communications protocols designed for the earthbound Internet. So project researchers have come up with an alternate communications protocol for space, and the two networks hook up through a gateway.
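The "store and forward" idea behind such a gateway can be sketched in a few lines. This is a toy illustration, not NASA's actual protocol; the class and message names are invented:

```python
import collections

class DTNGateway:
    """Toy delay-tolerant gateway: it holds messages while the link is
    blocked and forwards them when a contact window opens."""
    def __init__(self):
        self.queue = collections.deque()
        self.link_up = False

    def send(self, message):
        self.queue.append(message)  # store every message first
        return self.flush()         # forward immediately if possible

    def flush(self):
        delivered = []
        while self.link_up and self.queue:
            delivered.append(self.queue.popleft())  # forward in order
        return delivered

gw = DTNGateway()
gw.send("telemetry-1")   # link down: message is held, not lost
gw.send("telemetry-2")
gw.link_up = True        # the planet no longer blocks the signal
print(gw.flush())        # both stored messages are forwarded in order
```

The key design choice is that a blocked link never causes a message to be dropped, only delayed, which is what distinguishes this model from the earthbound Internet's assumption of a mostly continuous connection.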

To reduce costs, businesses might buy networking devices that work with both networks -- and they'd do so only when they would have upgraded their systems anyhow.

Some believe the current Internet will never go away, and the fruits of the research could go into improving -- rather than scrapping -- the existing architecture.

"You can't overhaul an international network very easily and expect everyone to jump on it," said Leonard Kleinrock, a UCLA professor who was one of the driving forces in creating the original Internet. "The legacy systems are there. You're not going to get away from it."

Internet design challenges lead to work on rebuilding network

Government and university researchers have been exploring ways to redesign the Internet from scratch. Some of the challenges that led researchers to start thinking of clean-slate approaches:

THE CHALLENGE: The Internet was designed to be open and flexible, and all users were assumed to be trustworthy. Thus, the Internet's protocols weren't designed to authenticate users and their data, allowing spammers and hackers to easily cover their tracks by attaching fake return addresses onto data packets.
THE CURRENT FIX: Add-on tools such as firewalls and spam filters attempt to control security threats. But because such techniques don't penetrate deep into the network, bad data still get passed along, clogging systems and possibly fooling the filtering technology.
THE CLEAN-SLATE SOLUTION: The network would have to be redesigned to be skeptical of all users and data packets from the start. Data wouldn't be passed along unless the packets are authenticated. Faster computers today should be able to handle the additional processing required within the network.
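One way to picture in-network authentication is to require every packet to carry a cryptographic tag that routers check before forwarding. The sketch below uses Python's standard hmac module; the packet format and the shared key are invented for illustration, and a real design would need a key-distribution scheme:

```python
import hmac, hashlib

SHARED_KEY = b"network-issued key"  # hypothetical key a sender is provisioned with

def make_packet(source: str, payload: bytes) -> dict:
    # Tag covers both the claimed source and the payload
    tag = hmac.new(SHARED_KEY, source.encode() + payload, hashlib.sha256).hexdigest()
    return {"source": source, "payload": payload, "tag": tag}

def router_accepts(packet: dict) -> bool:
    # A router recomputes the tag and drops the packet on a mismatch
    expected = hmac.new(SHARED_KEY, packet["source"].encode() + packet["payload"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, packet["tag"])

good = make_packet("host-a", b"hello")
forged = dict(good, source="spoofed-host")  # fake return address, same tag
print(router_accepts(good))    # True: tag matches source and payload
print(router_accepts(forged))  # False: the spoofed source breaks the tag
```

The point of the toy example is the failure mode it closes: attaching a fake return address invalidates the tag, so the spoofed packet is dropped inside the network rather than filtered at the edge.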

THE CHALLENGE: When the Internet was designed, computers rarely moved, so numeric Internet addresses were assigned to devices based on their location. A laptop, by contrast, is constantly on the move.
THE CURRENT FIX: A laptop changes its address and reconnects as it moves from one wireless access point to another, disrupting data flow. Another workaround is to have all traffic channel back to the first access point as a laptop moves to a second or a third location, but delays could result from the extra distance.
THE CLEAN-SLATE SOLUTION: The address system would have to be restructured so that addresses are based more on the device and less on the location. This way, a laptop could retain its address as it hops through multiple hot spots.
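One way to picture a device-based address system is a lookup table that maps a permanent device identifier to its current point of attachment, so senders never see the location change. The names here are illustrative, not part of any deployed protocol:

```python
# Toy identifier/locator split: a device keeps one permanent ID,
# and the network maps that ID to whatever access point it is behind now.
locator_map = {}

def attach(device_id: str, access_point: str):
    locator_map[device_id] = access_point  # update the location, keep the ID

def route_to(device_id: str) -> str:
    return locator_map[device_id]          # senders only need the stable ID

attach("laptop-42", "cafe-hotspot")
print(route_to("laptop-42"))   # cafe-hotspot
attach("laptop-42", "office-ap")  # the laptop moves; its ID is unchanged
print(route_to("laptop-42"))   # office-ap
```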

THE CHALLENGE: The Internet was designed when there were relatively few computers connecting to it. The proliferation of personal computers and mobile devices led to a scarcity in the initial address system. There will be even more demand for addresses as toasters, air conditioners and other devices come with Internet capability, as will standalone sensors for measuring everything from the temperature to the availability of parking spaces.
THE CURRENT FIX: Engineers expanded the address pool with a system called IPv6, but nearly a decade after most of the groundwork was completed, the vast majority of software and hardware still use the older, more crowded IPv4 technology. Even if more migrate to IPv6, processing the addresses for all the sensors could prove taxing.
THE CLEAN-SLATE SOLUTION: Researchers are questioning whether all devices truly need addresses. Perhaps sensors in a home could talk to one another locally and relay the most important data through a gateway bearing an address. This way, the Internet's traffic cops, known as routers, wouldn't have to keep track of every single sensor, improving efficiency.
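The gateway idea can be sketched as local sensors reporting to a single addressed device, which relays one summary upstream. Everything here (the names, the address, the summary format) is invented for illustration:

```python
# Toy home-sensor gateway: the sensors have no Internet address of their own;
# only the gateway does, and it relays a digest of their readings upstream.
class Gateway:
    def __init__(self, address: str):
        self.address = address   # the only routable address in the house
        self.readings = {}

    def local_report(self, sensor_name: str, value: float):
        self.readings[sensor_name] = value  # sensors talk to the gateway locally

    def relay(self) -> dict:
        # One upstream packet summarizes every sensor, so routers track
        # a single address instead of dozens of devices.
        return {"from": self.address, "summary": dict(self.readings)}

gw = Gateway("203.0.113.7")
gw.local_report("thermostat", 21.5)
gw.local_report("parking-sensor", 1.0)
print(gw.relay()["from"])   # 203.0.113.7
```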

