Before the Internet, there was the ARPANet, a closed computer network that pretty much shut down on weekends and over holidays. In 1983, Paul Mockapetris, then a computer scientist at the University of Southern California’s Information Sciences Institute, proposed opening up the network beyond academia to anyone with a computer and modem. Over the next three years, he went on to develop the Domain Name System architecture, which in turn established the principle of a distributed and dynamic network that could hook up to any computer. It was a radical idea for a government-funded project.
Today, by Mockapetris’s calculation, there are more than 10 billion domain names in use. SMS messages, tweets, e-mail, streamed video, and music also travel through the DNS layer of the Internet. There’s a dark side, too. Cybercrime gangs increasingly operate in the DNS layer to launch denial-of-service attacks, send a barrage of spam, or set malware booby traps. DNS enables them to hide their tracks and make it seem like the attack is coming from legitimate sites or from everyday Internet users. Mockapetris, chief scientist and chairman of the board at network security firm Nominum, wants to use what he calls DNS forensics—analyzing Web traffic down to the domain look-up level to sniff out nefarious online activity—to turn the tables on the bad guys. Bloomberg Businessweek caught up with him recently to talk about the last 30 years, and the next 30.
We’re nearing the 30-year anniversary of DNS and the open Internet. Did you foresee the demand for this level of distributed communications power?
I’m not going to tell you that I predicted there would be an iPhone. But predicting that every computer we have will have some communications capability—yeah, that was a no-brainer to me.
That was radical back then.
Perhaps. There will be more computation in 10 years than there is today. That sounds pretty safe to me. We have never had more data and more ability to process data than we have today. And it’s pretty clear to me that both of those things are going to expand even further.
How are we going to accommodate the next billion Internet users?
The real question is, what do you have to throw overboard to get the next billion online? As far as the DNS scalability goes, I’m not worried about it. Going forward, how do we get to a better Internet? One of the big things is a digitally secure DNS or DNS-like system. It might be time for DNS 2.0. I am perfectly willing to admit that.
What would that look like?
We need to get to the next level of naming, which combines authentication with, more importantly, a reputation system. I may use TripAdvisor and Michelin to decide which restaurants to go to, and Interpol can tell me, beware, this is a child porn site. But I think schools should be involved, too, to tell me which sites are best for my kids to do their homework. We should be able to get a list of the good guys and the bad guys on the Internet. I think the digital reputation field, coupled with authentication of the naming system, is a powerful concept for moving us forward.
Like a neighborhood policing model for the Internet?
Yeah, or the bouncer at the door, whichever metaphor you want to use. We already see this with e-mail. Everyone accepts spam filtering today. Everybody uses naming technologies, whether it is URLs or domain names or e-mail addresses to filter what they want. Having that same concept as part of the first level of defense in any security system makes complete sense to me. That’s what I think we will see incorporated into the network.
And what role does DNS play?
Right now one of the things I am personally most involved in is taking a look at DNS usage—this is like a big data problem—and figuring out what's wrong with a particular network. If I take a look at the DNS traffic of a particular user, I can pretty much tell whether or not their computer has been infected with malware such as DNSChanger or Conficker, or a variety of bad stuff. Doing that level of DNS forensics today is important. DNS forensics technology, I think, will be as ubiquitous in five years as anti-spam filtering is today.
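One concrete signal such DNS forensics can use: malware families like Conficker generate domain names algorithmically (DGAs), and those names tend to look random. The sketch below is a toy illustration of that idea, not Nominum's actual method; the entropy threshold and the sample domains are assumptions for demonstration only.

```python
import math
from collections import Counter

def label_entropy(domain: str) -> float:
    """Shannon entropy of the second-level label (e.g. 'google' in
    'google.com'), a rough randomness score for a domain name."""
    label = domain.lower().rstrip(".").split(".")[-2] if "." in domain else domain
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_generated(domain: str, threshold: float = 3.5) -> bool:
    # High entropy is a weak but useful heuristic for DGA-style names.
    # Real classifiers combine many features: label length, n-gram
    # frequencies, NXDOMAIN rates, query timing, and so on.
    return label_entropy(domain) > threshold
```

A resolver operator could run a check like this over each client's query stream and flag machines that repeatedly look up high-entropy names, which is one way "looking at your DNS traffic" reveals an infection.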
Through DNS forensics, you can root out spammers?
Yes. For example, using DNS monitoring you can go to an individual user and say: Look, we don’t think you are capable of sending 1 million valid e-mail messages per day, but you are. Botnets are another one. A botnet is essentially an Internet application. Internet applications use the DNS to run their activities, so we can tell, looking at your DNS traffic, whether you are talking to a host that wants to infect your computer. We can tell whether or not you are downloading infections. We can tell whether or not that infection is running in your machine, and then we can interfere with its operation if you like.
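The spam-bot example above amounts to rate analysis on DNS query logs: a machine sending a million messages a day has to do a correspondingly huge number of mail-related (MX) lookups. A minimal sketch of that check, with a made-up log format and threshold:

```python
from collections import Counter

def flag_heavy_mailers(query_log, mx_threshold=1000):
    """Given (client_ip, query_type) records from a resolver log,
    return clients whose MX-lookup volume suggests bulk mailing,
    e.g. a spam bot. The log format and threshold are illustrative."""
    mx_counts = Counter(ip for ip, qtype in query_log if qtype == "MX")
    return {ip for ip, count in mx_counts.items() if count > mx_threshold}
```

In practice the same log supports the other checks Mockapetris mentions: queries for known command-and-control or infection-hosting domains indicate downloads or a running bot, and those queries can then be blocked at the resolver.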
How would you like to see DNS change?
From my personal point of view, I would like to see a DNS filtering service that an Internet user can personally control, can personally say what they want. They can set their own defaults, and, if they want to, they can say, I want recommendations from my church or my school. It can incorporate those sources of reputation from whomever I choose, and my choices are not observable to the outside world. That can be accomplished by doing the DNS filtering in the end system. It may be more practical to outsource it to the ISP and have them do it. That is the world I would like for us to go to.
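The filtering model described here is, at its core, a lookup-time check against reputation feeds the user has opted into. A minimal sketch, assuming hypothetical feed contents and domain names; a real resolver would also refresh feeds and keep the user's choices local, as Mockapetris stipulates:

```python
def make_filter(blocklists, allowlist=frozenset()):
    """Combine the reputation feeds a user chose (e.g. one from a
    school, one from Interpol) into a single per-lookup check.
    A personal allowlist always wins over the blocklists."""
    blocked = set().union(*blocklists)

    def allowed(domain: str) -> bool:
        d = domain.lower().rstrip(".")
        return d in allowlist or d not in blocked

    return allowed
```

Running this check in the end system keeps the chosen feeds private to the user; pushing it to the ISP's resolver trades that privacy for convenience, which is exactly the trade-off raised in the answer above.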
It sounds as if the original open model of the Internet that you helped introduce would be one of those things we throw overboard, particularly as the next billion Internet users come online.
There is no problem providing the next billion with Web and e-mail access. But the more generative aspect, where people can run their own servers if they want to, talk end-to-end if they want to, and have end-to-end security? Preserving the original open model, where anything goes from any endpoint to any other endpoint, will be a little bit harder.