When Vanhanen's second government last spring included a mention of a "top university" in its programme, the topic received quite a lot of attention. The mention, made in the same context, of creating "centres of strategic top-level expertise" (strategisen huippuosaamisen keskittymä, or SHOK) has, by contrast, stayed out of the public eye. Yet it is a rather significant matter, worth writing something about, and perhaps worth reading about too.
Behind the term is a report published by the Science and Technology Policy Council of Finland in June 2006. The report was preceded by preparatory work that had begun already in 2005 and in which HIIT also took part. It proposed establishing consortia of industry, research institutes and universities that, on the basis of jointly agreed research programmes, would aim at world-class research and significant impact. In the first phase, five SHOKs would be founded, with energy and the environment, metal products and mechanical engineering, the forest cluster, health and well-being, and the ICT industry and services as their thematic areas. The SHOKs were envisioned as long-term (10 years) entities reaching critical mass (100-200 person-years per year).
The report, however, left rather open how the SHOKs should actually be organised. When the forest industry already in late 2006 opted for founding a company owned by industry and universities, Metsäklusteri Oy, to coordinate the cluster's research, this structure soon became the starting point for developing the other SHOKs as well. The company did receive its cluster status in the summer of 2007. It does not, however, appear to have launched any actual research activity yet.
The preparation of the information and communications technology (ICT) SHOK began in early 2007 under the lead of TEKES and gained further momentum with the new government programme. So we finally arrived at June 1st, 2007, when TEKES convened an open meeting with the intention of handing responsibility for the preparation over to the actors of the ICT sector themselves. I remember my thoughts well as I arrived at the TEKES auditorium that morning and found it packed with people from various companies, universities, research institutes and elsewhere. "This is not going to be quite so easy," I sighed.
It was obvious from the very beginning that the full spectrum of the ICT sector could not be forced into the mould of a single research structure without considerable violence. Since nobody had the will to use such force, the solution was to divide the subject matter of ICT research into a few thematic areas that would be prepared in parallel and coordinated according to common principles. Two working groups were formed: one started to consider the thematic areas to be selected, led by Tatu Koljonen of VTT, and the other the formal and administrative structure of the centre, led by Kimmo Ojuva, director of Dimes ry. Overall responsibility for leading the preparation went to Erkki Ormala of Nokia, with Reijo Paananen of Elektrobit as his deputy.
The content group started its work at a truly brisk pace, with the evident intention of presenting the slower-waking parties with a fait accompli. That is why I decided to take part in its work.
For many reasons (already discussed in this blog), it seemed essential that the ICT SHOK should include a line of research directed at the future Internet. The most important argument, however, was that the interests of large companies such as Nokia, Nokia Siemens, Ericsson and the telecom operators converge within it, and the more strongly so the longer the time span of the impact the research aims at. I received support from industry for this idea, and Future Internet was indeed chosen as one of the thematic areas of the follow-up work at a meeting held on June 20th. Raimo Vuopionperä of Ericsson took the lead for the area, and I became the secretary of the preparation group. The other areas selected were digital services, intelligent traffic, and the awkwardly named Device Interoperability Ecosystem, which approaches ubiquitous technology from the angle of mobile terminal devices.
After a summer holiday that felt all too short, the preparation of the research content continued at a furious pace. The end of October had been set as the goal, so that the ICT SHOK could receive its official "SHOK status" in November and start operating at the beginning of 2008. My own task as secretary was fortunately fairly straightforward, since the researchers of the key companies took the ball and had already assembled the skeleton of the area's research programme by the end of August. I considered this essential, so that the programme would become sufficiently focused and the commitment of the key companies could be secured from the very beginning.
When the first open planning meeting of the thematic area was held at the end of August, the other parties invited to it thus came to a table that was already partly laid. It was nevertheless important to get enough of the best researchers from the various universities on board, so the doors were opened for their proposals for the duration of September. The proposals had, however, to fit thematically and content-wise into the foundation already prepared. By fitting suitable pieces together, we had an almost finished result by the deadline.
The original schedule had to be abandoned, however, because the preparation of the other thematic areas, and especially of the governance model of the ICT SHOK, had progressed considerably more slowly. The preparation of the governance model was complicated in particular by the fact that some of the actors considered it expedient to form the company coordinating the ICT SHOK out of the already existing research company RTT Oy. Besides arousing suspicion among the other parties, this required a rather complicated arrangement with a two-stage share issue. As the preparation was also carried out in a somewhat secretive and clumsy manner, nothing came of it. Working out the contractual issues also proved sticky.
Straightening out these kinks ultimately took until the beginning of December. The Future Internet group used this time not only for the final polishing of its research programme but also for organising a seminar aimed at small and medium-sized enterprises, held at NRC on November 2nd, 2007.
To keep up the momentum, we decided at this point that the "preparation group" would transform itself into a "programme" right away. For this purpose an "interim steering group" was formed, consisting of representatives of the key companies. Its task was to produce a research plan that could be delivered to TEKES for review right at the beginning of 2008. The lead horses were also changed: Pasi Sarolahti of Nokia took on the role of Interim Programme Leader, and Kristiina Karvonen of HIIT became the secretary of the steering group. The preparation was done in several parallel working groups and coordinated with a shared wiki. The final content polishing was done in the gap between Christmas and New Year, and the draft text was delivered to TEKES at the beginning of January.
The final breakthrough on the administrative and legal problems was achieved only at the ICT SHOK general meeting held on January 11th, 2008, where Outi Krause, vice rector of TKK, vigorously cleared the logjam blocking the way to a balanced subscription of the ICT SHOK company's shares. It is now evident that Tivit Oy, the company coordinating the SHOK activity, can begin operating in February 2008. It should receive its official SHOK status on January 23rd, 2008.
This story thus continues, and is in fact only at its beginning. If no new surprising difficulties appear, the research activity can begin in late spring 2008. I would be surprised if Future Internet were not among the first projects to start. Before that, there are still many questions to think through, above all how the research can be organised so that the qualitative goal of the whole endeavour, top-level strategic research, can be achieved. This is important especially from the point of view of the Innovation University, which I believe will pick areas covered by the SHOKs as the spearheads of its research. For the ICT SHOK this concerns, besides Future Internet, especially the digital services thematic area.
Why this story? At the very least it tells how much arduous wrangling ultimately lies behind new ideas, and perhaps also how hard it is to keep one's eyes on the horizon when one constantly has to dodge the obstacles appearing in front. It may be that the ICT SHOK is turning out to be a tinder pouch instead of an overcoat, as in the old tale where the planned garment shrinks at every step. Nor is the opinion entirely without grounds that the ICT SHOK now being born is neither "strategic", "top-level", nor a "centre". I would, however, urge anyone making such a claim to take a closer look at the Future Internet research agenda in particular. Of course, agendas are only paper; only researchers do research.
Can the endeavour succeed? Yes it can, but it will still require effort and fine-tuning of this new instrument. I have a feeling that the progress of the Innovation University in particular will interact with the evolution of the ICT SHOK activity in a kind of symbiotic coexistence. For HIIT this is naturally a very interesting direction of development, and we will be making initiatives and taking whatever other actions are needed to help it come true. Some are already in the thinking cap.
The biggest bottleneck will ultimately be directing our very limited pool of experts to the most important and most fruitful questions. It is also clear that we should be able to inspire young students to seek their way into research in the ICT SHOK area. The message that needs to get through is that information technology is still a hot field, perhaps hotter than ever before. Great things are happening, and great deeds are there to be done.
Saturday, January 19, 2008
Thursday, January 10, 2008
The New Internet
I have spent the three middle days of this week at the kick-off event of a new EU project that commenced on January 1st. The experience compels me to write this longish blog entry. I must start with a bit of background, so bear with me.
The Internet, as we know it, has by now been around for some 15 years, or 20 years for old hands such as myself. As we know, its growth has been phenomenal in the past and is likely to continue in the foreseeable future (which on the Internet is not necessarily a long time).
As a result, the Internet is now being used for many purposes its original designers had no idea of. If we measure the raw number of bytes transferred, the Net of Nets is now mainly used for distributing digital content such as videos and music. It is used by more than a billion people, nearly half of whom live in Asia. It reaches a widening variety of devices, from mainframes and desktop computers to all kinds of mobile and embedded systems, over an equally wide variety of links, from WLAN and satellites to 10 gigabit Ethernets.
Given this tremendous change, it may be considered a surprise that the Internet manages to work in a useful fashion at all. Indeed, its continuous build-up and maintenance has required a massive effort from the computer science and telecom engineering communities. This investment has been made feasible by the growing added value that the Internet generates for societies at large, and for various kinds of individual actors in them. The rest, as the saying goes, is history.
At present, however, many otherwise sane and respectable people have started to express grave concern that the Internet is reaching its limits as a result of outgrowing its original design scope so dramatically. One obvious case is mobility. The original Internet was built on the assumption that the devices connected to it (the "hosts") can be designated with fixed addresses, the "IP numbers", which can be used to route messages to them. This assumption broke down with the advent of laptop computers that people carry around and connect to the Internet at various points. At present, it is challenged further by mobile multimedia phones that should preferably be connected to the Internet at all times. Of course, we have by now learned to live with the resulting difficulties by resorting to solutions such as VPNs or Mobile IP that make a node appear to the Internet as if it were connected at a fixed point somewhere else. Unfortunately, this is living on borrowed time.
The most fundamental problem of today's Internet nevertheless relates to its very basic primitives. In the original design, every host has a publicly known IP number to which every other host can send messages, even without the consent of the recipient. This might have been fine in a network where all communicating partners shared a common ethos of acceptable behaviour, including that senders would not obfuscate their identities and would not keep sending unsolicited traffic. Unfortunately, the present Internet is hardly like that anymore. As a result, we have been forced to introduce various means of limiting undesirable traffic, such as firewalls and Network Address Translators (NATs), and to spend increasing amounts of resources on basically wasteful things such as virus protection and spam filtering. Again, the future progress of the Net does not seem to rest on a sustainable basis.
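To make the point concrete, here is a tiny Python sketch (purely illustrative; the addresses and port number are made up) of the primitive in miniature: anyone who knows an address can push packets at it, and filtering out the unwanted ones is left entirely to the receiving side.

    # Today's Internet primitive in miniature: knowing an address is enough
    # to send, no consent required. Illustrative only; runs on localhost.
    import socket

    # "Receiver": binds an address and gets whatever arrives, wanted or not.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 9999))

    # "Sender": needs no prior contact or permission, only the address.
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"unsolicited hello", ("127.0.0.1", 9999))

    data, addr = receiver.recvfrom(1024)
    print("received", data, "from", addr)  # dropping it afterwards is the receiver's problem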
So what? The imminent demise of the Internet has been predicted almost from the start of its phenomenal growth, and so far the prophets of doom have turned out to be "somewhat premature" with their obituaries. Could this not continue in the future as well?
That the predicted "Net meltdowns" never became reality is, in my view, because the community responsible for developing Internet technology took heed of the warnings (well, some of them) and launched corresponding evasive actions. So far this has worked out. Unfortunately, each new layer of Band-Aid taped on top of the crumbling skeleton of the Internet reduces the flexibility that used to be its most compelling attribute. Already now, the introduction of every new technology on the Net is made painful by the need to maintain interoperability with the plethora of fixes and kludges that make up the present Internet.
And this brings me, finally, to the project we launched this week. It is called "Publish-Subscribe Internet Routing Paradigm" and has the tongue-twisting acronym "PSIRP". (At least to Finnish ears, the acronym sounds like the chirp of a little bird, so I expect the project logo will feature some small feathery creature.)
Lurking behind this mostly innocent name is a call for revolution. "Publish-subscribe" denotes a departure from the most fundamental idea of the present Internet: that hosts have publicly known names that other nodes can use. Instead, we propose a network that works in the completely opposite way: hosts that wish to submit information ("publish") must have a public name, whereas nodes that wish to receive information ("subscribe") can be nameless or anonymous. Thus, no more spam and no more unwanted traffic: only messages that have been explicitly subscribed to will reach their destination.
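For readers who think best in code, the following toy Python sketch shows the flavour of the primitive. It is my own simplification, not the PSIRP design: publications are named, receivers register interest, and anything nobody has subscribed to is simply dropped instead of being delivered.

    # A toy publish-subscribe "network" (illustrative simplification, not PSIRP itself).
    from collections import defaultdict

    class ToyPubSubNetwork:
        def __init__(self):
            # rendezvous state: publication identifier -> set of subscriber callbacks
            self.subscribers = defaultdict(set)

        def subscribe(self, publication_id, deliver):
            """A (possibly anonymous) receiver declares interest in a named publication."""
            self.subscribers[publication_id].add(deliver)

        def publish(self, publication_id, data):
            """A named publisher pushes data; it reaches only explicit subscribers."""
            receivers = self.subscribers.get(publication_id, set())
            for deliver in receivers:
                deliver(data)
            return len(receivers)  # zero means the traffic died unsolicited

    net = ToyPubSubNetwork()
    net.subscribe("weather/helsinki", lambda d: print("got:", d))
    net.publish("weather/helsinki", "-7 C, snow")   # delivered to the subscriber
    net.publish("spam/offer", "buy now!")           # dropped: nobody asked for it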
While this alone is already a significant effect, it is the ripple effects of the publish-subscribe paradigm that really make it cool. In effect, the new paradigm changes the division of work between the network and the hosts: a number of functions and properties that are now provided by hosts' operating systems (e.g., file systems) or "server side software" (e.g., Internet search) will be assigned to the "network". This, if it happens, will wreak havoc on the existing value chains and business models of many major players in the field, and thereby open new opportunities for others, including newcomers.
The idea itself, radical as it seems, is nevertheless not novel; rather, it captures how existing services such as P2P systems or distributed event systems essentially work. What is new in our project is that we aim to push the idea deep down the network stack, to the packet routing level (and perhaps deeper). We not only claim that this is possible but also, more importantly, that it results in a simpler and "cleaner" overall architecture that will work at the scale of the present and future Internet. We aim to demonstrate the claim by actually implementing the needed functionality, not once but twice: both as an overlay network that will permit rapid experimentation and take-up, and as a very clean native implementation running directly on top of the link layer.
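To give an idea of how an overlay variant could be bootstrapped on top of today's Internet, here is one conventional technique, rendezvous by hashing, sketched in Python. This is a hypothetical illustration with invented node names; PSIRP's actual overlay mechanisms may well differ.

    # Hypothetical rendezvous-by-hashing sketch for a pub-sub overlay (not PSIRP's design).
    import hashlib

    OVERLAY_NODES = ["node-a.example.org", "node-b.example.org", "node-c.example.org"]

    def rendezvous_node(publication_id: str) -> str:
        """Map a flat publication identifier to the overlay node responsible for it."""
        digest = hashlib.sha256(publication_id.encode("utf-8")).digest()
        index = int.from_bytes(digest[:8], "big") % len(OVERLAY_NODES)
        return OVERLAY_NODES[index]

    # Subscriber and publisher compute the same rendezvous point independently,
    # so subscriptions and publications meet without the publisher ever learning
    # who, or where, the subscribers are.
    print(rendezvous_node("weather/helsinki"))

The native implementation would have to achieve the same meeting of publications and subscriptions directly on top of the link layer, without leaning on today's IP routing at all.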
These claims and objectives are bold indeed. Thus it was with a certain degree of apprehension that we went to the kick-off event: will we really be able to pull it off? These feelings were intensified by the fact that HIIT is the co-ordinator of the project and thus bears the overall responsibility for its success, both to the EU Commission (and EU taxpayers) and to our European partners.
The main characters of the coming play, Dr. Arto Karila, who leads the project, and the project co-ordinator Marja-Leena Markkula, concluded that such a tall order will require unconventional means. Indeed, the design of an architecture is a "wicked problem" that cannot be neatly broken into subproblems to be distributed to parallel teams. Instead, it requires the integrated effort of several clear heads, rapidly translated into running code that can be investigated and criticised. This, in turn, requires that the experts from different countries, companies and universities develop a high level of mutual trust and respect on the basis of a shared vision and genuine excitement.
As a result of such considerations, Arto and Marja-Leena decided to spend the whole first extended afternoon of the event in work sessions designed to make explicit the basic thoughts and feelings each of us had in front of the 2.5-year trek we endeavour to make together. I must confess that I expressed a curious combination of excitement and angst: excitement, because this project is indeed what we have wanted to do for a long time and have already worked long and hard to make happen; angst at the challenge ahead and at whether we could kindle our own excitement in the other consortium members as well.
To our great relief, our apprehension turned out to be a case of pre-curtain jitters that just gave an edge to the performance. There was little need to kindle excitement that already burned bright. While all of us were surely aware of the challenges ahead, we were also confident in our ability to deal with them. Moreover, we realised that we do not stand alone: there are other groups around the world investigating similar themes and sharing the load.
With this mindset, the rest of the event went smoothly and we were able to get the first planned activities in motion.
Will we prevail? Only the future will tell. The maximal success is a big leap, in networking terms comparable with the original introduction of packet switching to replace circuit switching. Thus the biggest fish in the pond is a whale. Fortunately, there are other success scenarios as well, in which our work will influence future progress by more indirect routes.
Our biggest challenge, after all, may be to find and motivate a few good researchers who really understand networking architecture and are passionate about expressing their understanding in the kind of clean and crisp code whose beauty approaches poetry in an engineer's eyes. I think we will not need to search far: at the kick-off, the room was full of them.