P2P+WS Conference, Day 1: Clay Shirky
by Rael Dornfest
[Please excuse misspelings ;-) and other herky-jerkiness; these are my live notes from the conference floor.]
1994-1998: The Great Wiring. We assumed that everything would continue being wired as client-server.
1998. One of the reasons we could spin out business as quickly as we could was that the framework was already set up; the browser as the only software you'd need.
1999: The beginning of the Post-Web World. Napster, SETI@Home, ICQ changed all that.
Today: P2P techniques of local resources, group formation, and novel addressing are disappearing from the forefront, becoming simply part of the way we build applications.
Short term problems:
Client-server is not being replaced, but rather displaced as the only way. But I like to think of client-server as the definition of a transaction rather than the definition of a node. Web Services can absolutely move into a world where, because it's SOAP/XML-RPC end-to-end, the architecture can be flexible.
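A minimal sketch of that idea (my illustration, not from the talk), using Python's stdlib XML-RPC modules: any node can expose methods and call them, so "client" and "server" name roles in a single transaction rather than fixed kinds of machines. The port and method name here are arbitrary.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def echo(message):
    """A trivial method any peer might expose."""
    return f"peer says: {message}"

# One node exposes the method over XML-RPC...
server = SimpleXMLRPCServer(("localhost", 8900), logRequests=False)
server.register_function(echo)
threading.Thread(target=server.serve_forever, daemon=True).start()

# ...and any other node (or the same one) can consume it.
# The caller is "the client" only for the duration of this call.
proxy = ServerProxy("http://localhost:8900")
print(proxy.echo("hello"))
```

Because the wire format is the same in both directions, nothing stops this process from also calling methods on its peers; node roles are symmetric.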
Protocols vs. APIs. A protocol determines how an application will be accessed; it is owned and created by a large group and defined outside the software. An API is defined by a small group, owned by a small group, and closely tied to the software. APIs break over time in ways protocols don't. Classes of API breakages: (1) Unintended consequences, (2) Underestimating the value of backward compatibility and overestimating the coolness factor of new features, (3) ...
How much control can and should be ceded from the application designers to the pool of users? What we've learned from the Network is that the more this is shared, the more it scales.
The value of group-forming networks. The bad news: attempts to aggregate hardware only succeed in the short term. The lower you are in the grouping stack, the more danger you are in of being commoditized. There are no longer business models for general aggregation of hardware; they have always failed, and that's no less true today.
The big problem: we're starting to see a need for an operating system that really mediates between the user/application and the place where these resources really lie. If we've distributed the server, we suddenly have the problem of aggregating those bits to perform a task. The more stuff there is, the harder it is to figure out which point-to-point connections you should be making. The pressure of brokering connections becomes harder and harder over time. "The Chicago Solution" (named after the Chicago Board of Trade's problem with wheat being in Kansas, but the market being in Chicago). Leave the stuff out at the edges of the network, but move the valuable bits that need to be brought in contact with one another to the center. It works for wheat; it works for MP3s.
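The Chicago Solution is essentially the Napster architecture, and it can be sketched in a few lines (a hedged illustration; the class and peer names are my invention): the bulky resources stay at the edges, and only the small, valuable bits of information, namely who has what, move to the center, where the matchmaking happens.

```python
class CentralIndex:
    """The 'Chicago' of the network: a market for locations, not for the goods themselves."""

    def __init__(self):
        self.listings = {}  # resource name -> set of peer addresses

    def announce(self, peer, resource):
        """A peer at the edge tells the center what it holds."""
        self.listings.setdefault(resource, set()).add(peer)

    def locate(self, resource):
        """The center brokers the match; the actual transfer then
        happens peer-to-peer, out at the edges."""
        return sorted(self.listings.get(resource, set()))

index = CentralIndex()
index.announce("peer-a:6699", "song.mp3")
index.announce("peer-b:6699", "song.mp3")
print(index.locate("song.mp3"))  # ['peer-a:6699', 'peer-b:6699']
```

The design choice is exactly the wheat market's: the index grows with the number of listings, not with the size of the files, so the center stays cheap while the edges carry the bytes.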
The big issue for the Internet operating system: is the local model or the global model right? The local model says we'll blow up the value of the PC and make the Internet into one great big computer; it's characterized by extremely high coordination costs. The global model says we'll bring the whole network into the operating system: assume everything's remote/global; the problem is that you lose out on the value of local resources.
There is no solution to this problem, even in principle. There are only trade-offs.