Peer-to-Peer Conference: The First Day

by Andy Oram

People at the O'Reilly P2P conference are telling me something over and over, but I can't quite hear it yet. There's some unspoken thread running underneath their fervent descriptions of the projects in which they believe some peer-to-peer model is critical to success. It's something about people having to customize a service so complicated that it could never be formalized by a central staff. Whether it's a matter of customizing a large order or storing digital content for later retrieval, peer-to-peer seems to offer an escape hatch where traditional centralization would tie everybody in knots.


This illustrates what's so great about the conference: it gives one a chance to re-examine the fundamental premises and purpose of one's work. Many conferences feature people standing in tight clumps arguing about the best strategy for integrating RMI and COM or some such narrow topic. While you get plenty of that here, you also get questions like "What can allow people to make the best use of the Internet?" or "What could people accomplish if they could share all the resources on a million computers?" Not in a fluffy or superficial way, but as a serious exploration of a serious technical question teeming with social implications.


What follows are just a few impressions of the sessions I saw today.


The keynote by Clay Shirky (along with a background introduction by Tim O'Reilly) was complemented beautifully by a late-morning panel on "P2P in the Enterprise." Tim and Clay painted broad, rainbow-colored visions of the future. The panel that followed basically covered the question, "How does anybody make a living doing this?"


Clay gave yet another in his evolving definitions of P2P ("aggregating resources at the edges of the network"), offered several reasons why Napster works (I'm sure he'll publish those on his own), and warned us of the ominous centralization represented by the databases at sites like AOL Instant Messenger and Napster. If P2P applications each set up their own name service, as they currently do, not only does the world of P2P applications become fragmented, but a public resource becomes subject to the whims of whoever controls the database.
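

To make the fragmentation concrete, here is a minimal sketch in Python (my own illustration, not anything presented at the conference) of what happens when every application keeps its own name database. All the names below are hypothetical; the point is that the same person must register separately in each namespace, and the operator of each database can revoke names at will.

    class NameService:
        """One application's private name database (an IM or file-sharing network)."""

        def __init__(self, operator):
            self.operator = operator   # whoever controls this database
            self.registry = {}         # name -> (host, port)

        def register(self, name, address):
            if name in self.registry:
                raise ValueError("'%s' is taken in %s's namespace" % (name, self.operator))
            self.registry[name] = address

        def lookup(self, name):
            return self.registry.get(name)   # None if unknown in this namespace

        def revoke(self, name):
            # The operator can remove a name unilaterally -- the "whims" problem.
            self.registry.pop(name, None)

    # The same person has to register separately with every application:
    aim = NameService("AOL")
    napster = NameService("Napster")
    aim.register("alice", ("192.0.2.1", 5190))

    print(aim.lookup("alice"))      # ('192.0.2.1', 5190)
    print(napster.lookup("alice"))  # None -- the namespaces are fragmented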


In another blue-sky session that followed, Johnny Deep of Aimster suggested that its combo of instant messaging and file downloading could take care of the last-mile problem that infrastructure providers haven't been able to solve economically.


I couldn't attend most of the afternoon sessions, but the final CTO panel on distributed computation pointed up the amazing spread of approaches that all fall under the P2P buzzword. Take the issue of security, for instance. When you're downloading software to do computations, you have to worry whether it's malicious (or just buggy), while the person giving you the computation to solve has to worry whether you'll substitute your own software and return bad results, a major security issue in both directions. On the other hand, projects like Aimster and Groove base their security on the trust among a small group of collaborators who know each other.
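

None of the panelists' actual code was shown, but one common defense against substituted software is redundancy: hand the same work unit to several independent volunteers and accept an answer only when enough of them agree. Here is a minimal sketch in Python, with every name hypothetical:

    from collections import Counter

    def verify_by_redundancy(work_unit, volunteers, quorum=2):
        """Dispatch work_unit to every volunteer and accept the most common
        answer only if at least `quorum` volunteers produced it."""
        results = [volunteer(work_unit) for volunteer in volunteers]
        answer, count = Counter(results).most_common(1)[0]
        return answer if count >= quorum else None

    # Hypothetical volunteers: the honest ones do the real computation,
    # while a cheater returns a canned result without doing the work.
    honest = lambda unit: sum(unit)
    cheater = lambda unit: 0

    print(verify_by_redundancy((1, 2, 3), [honest, honest, cheater]))  # 6: honest majority wins

Note that voting like this assumes honest volunteers outnumber colluding cheaters, and it does nothing for the opposite worry, that the software you download is itself malicious or buggy.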


The lack of popularity of public key infrastructure (PKI) came up and was ascribed to the trouble it takes to get people to set up their keys. This ties back to Shirky's keynote, because he stressed that successful systems hide the complexity of the Internet and offer something anybody can use--and that just works.
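

To see why key setup is such a hurdle, consider the bare minimum a user must get through before exchanging a single secure message. A sketch using Python's third-party cryptography package (my illustration, not any particular product's procedure):

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Step 1: generate a key pair.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Step 2: encrypt the private key with a passphrase the user must choose,
    # remember, and never lose.
    pem_private = private_key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.BestAvailableEncryption(b"a passphrase"),
    )

    # Step 3: export the public key, which in a real PKI must still be
    # certified by some authority and distributed before strangers can use it.
    pem_public = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )

Every one of those steps is pure overhead from the user's point of view, which is exactly the friction the discussion pointed to.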