I have spent the three middle days of this week at the kick-off event of a new EU project that commenced on January 1st. The experience compels me to write this longish blog entry. I must start with a bit of background, so bear with me.
The Internet, as we know it, has by now been around for some 15 years, or 20 years for old hands such as myself. As we know, its growth has been phenomenal in the past and is likely to continue in the foreseeable future (which on the Internet is not necessarily a long time).
As a result, the Internet is now being used for many purposes its original designers never imagined. Measured in raw bytes transferred, the Net of Nets is now mainly used for distributing digital content such as videos and music. It is used by more than a billion people, nearly half of whom live in Asia. It reaches an ever-widening variety of devices, from mainframes and desktop computers to all kinds of mobile and embedded systems, over an equally wide variety of links, from WLAN and satellites to 10 gigabit Ethernet.
Given this tremendous change, it may come as a surprise that the Internet manages to work in a useful fashion at all. Indeed, its continuous build-up and maintenance has required a massive effort from the computer science and telecom engineering communities. This investment has been made feasible by the growing added value that the Internet generates for societies at large, and for various kinds of individual actors in them. The rest, as the saying goes, is history.
At present, however, many otherwise sane and respectable people have started to express grave concern that the Internet is reaching its limits as a result of outgrowing its original design scope so dramatically. One obvious case is mobility. The original Internet was built on the assumption that the devices connected to it (the "hosts") can be designated by fixed addresses, the "IP numbers", that can be used to route messages to them. This assumption broke down with the advent of laptop computers that people carry around and connect to the Internet at various points. At present, it is challenged further by mobile multimedia phones that should preferably be connected to the Internet at all times. Of course, we have by now learned to live with the resulting difficulties by resorting to solutions such as VPNs or Mobile IP, which make a node appear to the Internet as if it were connected at a fixed point somewhere else. Unfortunately, this is living on borrowed time.
The most fundamental problem of today's Internet nevertheless relates to its very basic primitives. In the original design, every host has a publicly known IP number to which every other host can send messages, even without the consent of the recipient. This might have been fine in a network where all communicating partners share a common ethos of acceptable behaviour, including that senders do not obfuscate their identities and do not keep sending unsolicited traffic. Unfortunately, the present Internet is hardly like that anymore. As a result, we have been forced to introduce various means of limiting undesirable traffic, such as firewalls and Network Address Translators (NATs), and so to spend increasing amounts of resources on basically wasteful things such as virus protection and spam filtering. Again, the future progress of the Net does not seem to rest on a sustainable basis.
So what? The imminent demise of the Internet has been predicted almost from the start of its phenomenal growth, and so far the doomsayers have turned out to be "somewhat premature" with their obituaries. Could this not continue in the future as well?
That the predicted "Net meltdowns" never became reality is, in my view, because the community responsible for developing Internet technology took heed of the warnings (well, some of them) and launched corresponding evasive actions. So far this has worked. Unfortunately, each new layer of Band-Aid taped onto the crumbling skeleton of the Internet reduces the flexibility that used to be its most compelling attribute. Already, the introduction of every new technology on the Net is made painful by the need to maintain interoperability with the plethora of fixes and kludges that the present Internet has become.
And this brings me, finally, to the project we launched this week. It is called "Publish-Subscribe Internet Routing Paradigm", and has the tongue-twisting acronym "PSIRP". (At least to Finnish ears, the acronym sounds like the chirp of a little bird, so I expect the project logo will feature some small feathery animal.)
Lurking behind this mostly innocent name is a call for revolution. "Publish-subscribe" denotes a departure from the most fundamental idea of the present Internet: that hosts have publicly known names that other nodes can use. Instead, we propose a network that works in the completely opposite way: those hosts that wish to submit information ("Publish") must have a public name, whereas those nodes that wish to receive information ("Subscribe") can be nameless or anonymous. Thus, no more spam, no more unwanted traffic: only messages that are explicitly subscribed to will reach their destination.
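To make the inversion concrete, here is a toy sketch of the primitive in Python. It is purely illustrative, not PSIRP's actual design or API: the names `Rendezvous`, `publish`, and `subscribe` are my own. The point is simply that delivery happens only where a subscription exists, the inverse of IP's "anyone can send to anyone".

```python
from collections import defaultdict

class Rendezvous:
    """A minimal in-memory rendezvous point: data reaches a receiver
    only if that receiver has explicitly subscribed to the topic."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> delivery callbacks

    def subscribe(self, topic, callback):
        # Subscribers stay anonymous to publishers: only a callback
        # is registered, never a receiver address.
        self._subscribers[topic].append(callback)

    def publish(self, topic, data):
        # A publication with no subscribers is simply dropped --
        # unsolicited traffic never reaches anyone.
        for deliver in self._subscribers.get(topic, []):
            deliver(data)

inbox = []
net = Rendezvous()
net.subscribe("weather/helsinki", inbox.append)
net.publish("weather/helsinki", "-5 C, snow")  # delivered to subscriber
net.publish("spam/offers", "buy now!")         # no subscriber: dropped
print(inbox)  # ['-5 C, snow']
```

Real publish-subscribe networks face the hard parts this sketch ignores: naming publications at Internet scale, routing subscriptions, and caching, which is precisely what the project sets out to tackle.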
While this alone is already a significant effect, it is the ripple effects of the publish-subscribe paradigm that really make it cool. In effect, the new paradigm changes the division of work between the network and the hosts: a number of functions and properties that are now provided by hosts' operating systems (e.g., file systems) or server-side software (e.g., Internet search) will be assigned to the "network". This, if it happens, will wreak havoc on the existing value chains and business models of many major players in the field, and thereby open new opportunities for others, including newcomers.
The idea itself, radical as it seems, is nevertheless not novel; rather, it captures how existing services such as P2P systems and distributed event systems essentially work. What is new in our project is that we aim to push the idea deep down the network stack, to the packet routing level (and perhaps deeper). We claim not only that this is possible but, more importantly, that it results in a simpler and "cleaner" overall architecture that will work at the scale of the present and future Internet. We aim to demonstrate the claim by actually implementing the needed functionality, not once but twice: both as an overlay network that will permit rapid experimentation and take-up, and as a very clean native implementation running directly on top of the link layer.
These claims and objectives are bold indeed. Thus it was with a certain degree of apprehension that we went to the kick-off event: will we really be able to pull it off? These feelings were intensified by the fact that HIIT is the co-ordinator of the project, and thus bears the overall responsibility for its success both to the EU Commission (and EU taxpayers) and to our European partners.
The main characters of the coming play, Dr. Arto Karila, who leads the project, and the project co-ordinator Marja-Leena Markkula, concluded that such a tall order would require unconventional means. Indeed, the design of an architecture is a "wicked problem" that cannot be neatly broken into subproblems and distributed to parallel teams. Instead, it requires the integrated effort of several clear heads, rapidly translated into running code that can be investigated and criticised. This, in turn, requires that the experts from different countries, companies, and universities develop a high level of mutual trust and respect on the basis of a shared vision and genuine excitement.
As a result of such considerations, Arto and Marja-Leena decided to spend the entire first extended afternoon of the event in work sessions designed to make explicit the basic thoughts and feelings each of us had in front of the 2.5-year trek we endeavour to make together. I must confess that I expressed a curious combination of excitement and angst: excitement, because this project is indeed what we have wanted to do for a long time, and have already worked long and hard to make happen; angst at the challenge ahead, and at whether we can kindle our own excitement in the other consortium members.
To our great relief, our apprehension turned out to be a case of pre-curtain jitters that just gave an edge to the performance. There was little need to kindle excitement that already burned bright. While all of us were surely aware of the challenges ahead, we were also confident in our ability to deal with them. Moreover, we realised that we do not stand alone: there are other groups around the world investigating similar themes and sharing the load.
With this mindset, the rest of the event went smoothly and we were able to get the first planned activities in motion.
Will we prevail? Only the future will tell. Maximal success would be a big leap, comparable in networking terms to the original introduction of packet switching to replace circuit switching. Thus the biggest fish in the pond is a whale. Fortunately, there are other success scenarios as well, in which our work will influence future progress by more indirect routes.
Our biggest challenge, after all, may be to find and motivate a few good researchers who really understand networking architecture and are passionate about expressing their understanding in the kind of clean, crisp code whose beauty approaches poetry in an engineer's eyes. I think we will not need to search far: at the kick-off, the room was full of them.