The executable Web

by Bruno Pedro

A while ago, I published a teaser on this blog about using the Web as a whole as a data storage object. At that time I said that "the Web right now is cut down into a million pieces that don't talk to each other properly". Almost two years have gone by since that article and it looks like not much has changed.

One of the early questions was how interoperable Web services can be when they're not envisioned and created by the same company. This problem led to a number of initiatives that are trying to push forward Web service creation standards. DataPortability, for example, is evangelizing a number of different standards that will create a more interoperable Web:


  • end user authentication through OpenID;

  • inter-application authorization through OAuth;

  • information syndication and distribution through RSS, RDF and OPML;

  • information meaning and automatic extraction through microformats;

  • user attention profiling through APML;

  • messaging and information brokerage through XMPP.



This collection of standards and best practices becomes truly valuable once a large number of companies adopt it. For us developers, it means that by following these standards our Web services will be interoperable with all other Web services that use the same standards. It means that creating a Web service now is much easier than it would have been two years ago.

What about the end users? How can they take advantage of this interoperability? I'm not just talking about Web services that let you consume data, because that problem was solved a long time ago by aggregators. Aggregators are a good example of a class of Web application that survives because there's a de facto standard in place: RSS.
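At its core, an aggregator's job is simple: fetch an RSS feed, parse it, and present the entries. A minimal sketch using only Python's standard library (the feed content is inlined here for illustration; a real aggregator would fetch it over HTTP):

```python
# Minimal sketch of an aggregator's core: parse an RSS 2.0 feed
# and list its entries. The sample feed is inlined for illustration.
import xml.etree.ElementTree as ET

RSS_SAMPLE = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example blog</title>
    <item><title>First post</title><link>http://example.com/1</link></item>
    <item><title>Second post</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

def parse_items(rss_xml):
    """Return (title, link) pairs for every item in an RSS 2.0 feed."""
    root = ET.fromstring(rss_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in parse_items(RSS_SAMPLE):
    print(title, "->", link)
```

Because RSS is a de facto standard, this same handful of lines works against any compliant feed, which is exactly why aggregators survive.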

So, my point is, how can end users take advantage of Web services that let you publish, transform and assemble information? We're moving to a point where a number of emerging services give you a one-to-many publishing approach:


  • Ping.fm and HelloTxt publish your status across multiple services, like twitter, jaiku and Pownce;

  • Typepad's Blog It publishes blog articles across different platforms and also announces them on different status services;

  • twitxr publishes your pictures across different services like flickr and Picasa.



Is it just me, or is there a pattern emerging here? Users see value in these services because they save precious time by automating repeatable actions, like publishing a picture across different services. One thing to notice, though, is that these services only provide half of what's possible with the existing Web.

All these services let you choose among a number of services and then broadcast your information to all of them. Setting aside minor format and content adaptations, they don't give you the possibility of programming the flow of your information. It's one thing to shoot a picture and send it to different services; it's another to let users define how that picture flows through those services.

One service that offers the capability of configuring this flow of information is switchAbit. It evolved from an original idea by Dave Winer: that you could grab your pictures from flickr and post a tweet for each one of them. Quoting Dave's original post:


The SwitchABit platform was developed because we noticed that an ever more complex flow of ideas and information is being facilitated by editorial systems and aggregators such as Flickr, Facebook, Twitter, FriendFeed, Seesmic, Qik, Ustream, YouTube, BlogTalkRadio, Disqus, Wordpress, Tumblr, TypePad, Blogger, etc.


switchAbit is basically an RSS-to-publish mechanism. It's built around the pub-sub paradigm, which means it will get your information from a number of services, filter it according to your instructions and publish part of it to other services.
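The subscribe-filter-publish pattern described here is easy to picture in code. A hedged sketch (the names are illustrative, not switchAbit's actual API):

```python
# Sketch of pub-sub routing: pull items from a source, keep only those
# matching a user-defined filter, and hand them to a publisher.
# All names here are hypothetical, for illustration only.
def route(items, keep, publish):
    """Send every item that passes the filter to the publish callback."""
    for item in items:
        if keep(item):
            publish(item)

# Example: only items mentioning photos get republished.
incoming = ["new photo: sunset", "meeting notes", "new photo: harbor"]
published = []
route(incoming, keep=lambda text: "photo" in text, publish=published.append)
print(published)
```

In the real service the `items` would come from an RSS subscription and `publish` would post to Twitter or another destination; the routing logic in between is the part the user configures.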

With this approach you'd still have to publish your information on at least one supported service, so that switchAbit can grab it and route it somewhere else. Another approach is to act like a reverse aggregator, extending the functionality of Ping.fm and others by adding the possibility of configuring the information flow.

You could, for instance, add a watermark or a copyright notice to the picture, extract EXIF geo-location information and send it to Fire Eagle, publish the transformed picture on a number of services, and announce it to your contacts on some social networks. And this is just an example of what can be done in the near future.
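A workflow like that is just a sequence of transformation steps applied to a document. A sketch, assuming each step is a plain function from document to document (the step names are hypothetical; real services would sit behind each one):

```python
# Sketch of a transformation workflow: each step takes a document and
# returns a modified one. Step names are made up for illustration.
def add_copyright(doc):
    """Append a copyright notice to the picture's caption."""
    doc["caption"] = doc.get("caption", "") + " (c) 2008"
    return doc

def extract_location(doc):
    """Pretend the EXIF data was already parsed into lat/lon fields."""
    doc["location"] = (doc.pop("exif_lat"), doc.pop("exif_lon"))
    return doc

def run_workflow(doc, steps):
    """Apply each step in order, feeding its output to the next."""
    for step in steps:
        doc = step(doc)
    return doc

photo = {"caption": "Harbor at dusk", "exif_lat": 38.7, "exif_lon": -9.1}
result = run_workflow(photo, [add_copyright, extract_location])
print(result["caption"], result["location"])
```

After `extract_location` runs, the location could be forwarded to Fire Eagle while the transformed picture goes on to the publishing steps.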

I've been working since January on such an application. It has an interface similar to Yahoo! Pipes, but it lets you compose the flow of information from a starting point through a set of Web services that live in the cloud. Because of the obvious similarity of this concept to the familiar UNIX pipe, it's called tarpipe. Quoting the original post on tarpipe's blog:


tarpipe will also create an ecosystem where Web applications and services will be able to receive and transform media content. Users will take advantage of this ecosystem by defining delivery and transformation workflows for their documents.


With tarpipe you can direct the output of one Web service into the input of another. This makes different services virtually interoperable, even if they're not able to talk to each other directly. It also gives end users the ability to compose flows of actions (or workflows) for their information. It currently accepts information sent through email and through a REST endpoint, meaning you can extend your own application by connecting it to tarpipe.
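The pipe idea can be shown in miniature: the output of one stage becomes the input of the next, just like a UNIX shell pipeline. A sketch with made-up stage names (this is the concept, not tarpipe's actual API):

```python
# The pipe concept in miniature: compose stages left to right so the
# output of each becomes the input of the next. Stage names are
# hypothetical, for illustration only.
from functools import reduce

def pipe(*stages):
    """Return a function that runs data through all stages in order."""
    return lambda data: reduce(lambda acc, stage: stage(acc), stages, data)

# Two toy "services": one shortens a URL, the next announces the result.
shorten_url = lambda text: text.replace(
    "http://example.com/photos/1234", "http://x.am/ab12")
announce = lambda text: "New photo! " + text

workflow = pipe(shorten_url, announce)
print(workflow("See http://example.com/photos/1234"))
# -> New photo! See http://x.am/ab12
```

Swap the toy lambdas for calls to real Web services and you have exactly the kind of user-composed workflow described above.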

So, my initial thought that "the Web right now is cut down into a million pieces that don't talk to each other properly" is not so true anymore. There are ways of making the Web more interoperable, like following de facto standards and creating programmable service adapters.