While you can take the pulse of computer users from O'Reilly's Open Source Convention and other well-publicized meeting places for the Open Source community, it's the Ottawa Linux Symposium where you should go for developers' concerns. I spent four days there last week while a lot of my O'Reilly colleagues and authors were partying in San Diego. As it turned out, I got to do a fair amount of partying myself in wild and rowdy Ottawa, Canada, but I'll discuss some of the substantive issues of the conference before I describe the atmosphere.
In this article:
- Why Linux Reaches into Areas That Are Off-Limits to Windows
- Heads Up to Firewall Administrators
- Ximian Gets a Lot of Play
- The Politics of the Computing Conference
- The Atmosphere of the Computing Conference
The mighty Nimrod was as arrogant as he was evil. Since the whole world was united in his time, he organized everyone to build a tower that would reach into heaven and to wage war against it. According to the legend, God foiled his plan by splitting the world into many nations.
Regarding the modern Nimrod, the U.S. Court of Appeals in Washington D.C. has declined to play God. But the Windows operating system may be splintering of its own accord. Microsoft has had to significantly cut down the Win32 libraries to make Windows CE, and this embedded version still hasn't caught on because—according to many critics—it's too large and unwieldy. At the other end of the computing spectrum, attempts to beef up Windows 2000 and make it a contender for high-availability servers handling massive loads have also fallen short. The "Windows everywhere" campaign is frustrated in its goals.
What prevents Linux from suffering the same fate? Several people at the conference seemed confident that Linux had succeeded in scrunching down small enough to fit in one's palm even as it shoots up to rule mighty computer servers.
Karim Yaghmour, an embedded systems developer, claimed that Linux has won the battle because it is so flexible in its choice of libraries, windowing toolkits, filesystems, and other open source components. Essentially, the very aspect of Linux and open source that creates a frustrating experience for current users—the semi-random agglomeration of software components from many different projects that need a lot of work to fit together—also gives it the beautiful flexibility that enables different system integrators to assemble the precise system they need.
Furthermore, Yaghmour said, Linux was pulled into new platforms and domains by people who tinkered with it for their own benefit, rather than being pushed into such domains the way Windows was pushed by Microsoft. I also heard from conference attendees that Linus Torvalds is very strict in what he lets into the core kernel, and he usually rejects features aimed at narrow environments or solutions if they make the kernel any bigger or slower.
Yaghmour is well-known in the embedded Linux world, thanks to a nice tool he developed called the Linux Trace Toolkit. It lets you watch every hardware interrupt and every kernel response to it—great for rainy Sunday afternoons, not to mention embedded system development.
He gave an intriguing presentation of the Real-Time Application Interface (RTAI) and RTNet on the last day of the conference; while the presentation was sparsely attended, the many questions asked showed that there were several developers there with a serious interest in embedded real-time systems. RTAI, like the alternative RTLinux, runs underneath the Linux kernel by trapping interrupts and turning the processor over to a privileged real-time process. But RTAI goes further by allowing a regular Linux process to turn itself into a real-time process. This feature, part of the LXRT subsystem, was presented by Yaghmour with the appropriate drum rolls and flourishes as the only such capability provided by any real-time operating system. (He did point out that RTLinux provides a limited form of user-space services through a customized version of sigaction.)
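The LXRT API itself isn't reproduced here, but the flavor of the feature can be suggested with a sketch. Under standard Linux (a soft-real-time analogue of what LXRT provides, not the LXRT API, and normally requiring root privileges), a plain process can ask the kernel to promote it to real-time scheduling:

```python
import os

# Sketch: a regular process promotes itself to real-time (SCHED_FIFO)
# scheduling. This is NOT RTAI's LXRT interface -- LXRT gives hard
# real-time guarantees underneath the kernel -- but it illustrates the
# same idea of an ordinary process "turning itself into" a real-time
# process. sched_setscheduler normally requires root (CAP_SYS_NICE).

def go_realtime(priority=50):
    """Request SCHED_FIFO for this process; return True on success,
    False if the kernel refuses (e.g., we lack privileges)."""
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        return True
    except PermissionError:
        # Unprivileged processes are refused; stay on normal scheduling.
        return False

if __name__ == "__main__":
    promoted = go_realtime()
    print("promoted to real-time:", promoted,
          "current policy:", os.sched_getscheduler(0))
```

Under LXRT, by contrast, the promoted process gains access to the same hard real-time services as kernel-space real-time tasks.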
Yaghmour culminated his presentation with an actual demo—a rare treat at this symposium—of real-time networking. He also predicted that RTAI would spread further through more ports, a real-time RAM filesystem, and a Flash-based filesystem.
There's a new transport-layer protocol in town. Get ready to update all your filter tables because TCP and UDP have been joined by the Stream Control Transmission Protocol (SCTP).
SCTP can deliver multiple streams of short messages between sockets. One of its big wins is that it permits multiple IP addresses to be associated with each connection; if the path to one address fails, the streams automatically fail over to another. The core technology for SCTP came out of Motorola, whence La Monte Henry Piggy Yarroll came to discuss the pending Linux API.
"SCTP should prove useful for anything involving multiple, independent, complex exchanges of messages, and anything that needs to tolerate network failures," Yarroll said. Examples include a high-volume, high-availability database, and providing IP-based connections to devices that monitor patients at a clinic.
Yarroll's lucid presentation explained how the SCTP protocol works and how to program it in the Linux implementation. Testing has shown that the protocol involves less overhead than comparable applications using TCP (at least at the user level and in CPU time; kernel performance is expected to be better too); this is attributed to the protocol's careful alignment of data on 4-byte word boundaries.
A single new call, bindx, has to be added to the standard socket API; it works like bind but can also add or remove addresses on an existing connection. The stream ID can be obtained from ancillary data. Recommended Web sites are www.sctp.org and www.sctp.de. The Linux implementation is the lksctp project on SourceForge.
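As a quick sketch (this uses the plain socket interface, not the lksctp library that provides bindx), a Linux application can already request an SCTP socket through the ordinary API where the kernel supports it; SCTP is IANA protocol number 132:

```python
import socket

# IANA assigns SCTP protocol number 132; Python exposes it as
# socket.IPPROTO_SCTP on platforms that define it.
IPPROTO_SCTP = getattr(socket, "IPPROTO_SCTP", 132)

def try_sctp_socket():
    """Return a one-to-one style SCTP socket, or None if the kernel
    lacks SCTP support (the module may not be loaded)."""
    try:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                             IPPROTO_SCTP)
    except OSError:
        return None

if __name__ == "__main__":
    s = try_sctp_socket()
    print("SCTP available:", s is not None)
    # bindx itself -- adding or removing extra local addresses on the
    # connection -- lives in the user-space lksctp library, not in the
    # plain socket calls sketched here.
    if s is not None:
        s.close()
```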
Speaking of firewalls, there are a lot of new features on the way from the team developing Netfilter. Some of these seem to me to be reasonable extensions that bring the treatment of various network parameters up to the level of the ones currently recognized. Others scream "Bloat! Bloat!"
Harald Welte, who presented the changes at Friday's BOF (birds of a feather) on Netfilter, distanced himself from many of the changes while defending others. The audience vociferously recommended that the team devote itself to creating a robust test framework and recruiting testers. This sounds like good advice to me, but coding new features is fun while testing is merely indispensable.
The upcoming features that are certain to be released soon include:
Extending the "expectation" feature (which looks inside packets to determine the connection's state with respect to the application running over it) to support multiple expectations. This is useful for complex applications such as IRC.
Stateful failover: If the firewall machine fails, the rules will be all ready and up to date on the box that takes over.
Allowing the tracking of connections to be restricted to particular interfaces, so you don't suffer the overhead of tracking an interface where you haven't installed any rules.
Greater efficiency when a rule is changed: the kernel won't have to reload the entire table containing all the rules.
Of the other changes, the only one that sounds both imminent and of widespread interest is a logging facility called ulog. Currently logging involves an expensive formatting and writing of a message at the kernel level. Ulog, by contrast, will perform initial filtering and send data to a user process, which can then do any desired kind of time-consuming or complex processing.
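The contrast can be sketched with a pair of firewall rules (a hypothetical SSH-logging example; both commands require root, and the ULOG target needs the matching kernel support, so treat this as an illustration rather than a recipe):

```shell
# Kernel-level logging: the kernel itself formats and writes a message
# for every matching packet -- the expensive path described above.
iptables -A INPUT -p tcp --dport 22 -j LOG --log-prefix "ssh: "

# ulog: the kernel only copies the packet to a netlink multicast group;
# a user-space daemon (ulogd) does the costly formatting, filtering,
# and storage at its leisure.
iptables -A INPUT -p tcp --dport 22 -j ULOG --ulog-nlgroup 1
```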
Ximian programmers working on the GNOME architecture and applications got a pretty enthusiastic reception here (as well as the expected grilling about standards support and reliability). They have triumphantly picked up Miguel de Icaza's crusade to transform the expectations of the Unix community, and they implicitly (or sometimes gleefully) criticize the usability of old applications from the X Consortium or the GNU project.
The trend they promote is away from ASCII and toward rich, displayable, typed objects; away from rc files and toward run-time discovery of interchangeable components. For backward compatibility and portability, the GNOME Helix set-up tool still queries and stores information in traditional Unix configuration files, but new facilities tend to use XML and the Wombat central database. Evolution, which is both a platform and an application suite, attempts to provide all the applications that office staff need daily; its object model extends down to a single column in a table. Each column knows whether it holds a number, a string, a date, or something else, and can be displayed and sorted appropriately.
GNOME hacker Federico Mena Quintero insisted that applications should be "consistent and pretty"; that it's ridiculous in this day and age for anyone to use a mailer that doesn't display HTML or handle MIME types; and that basic activities like printing should offer a full-featured interface, including niceties such as paper selection. He declared, "People who come from a proprietary world come to expect certain features in their applications."
Despite GNOME's unabashed admiration for the look and feel of Microsoft Office and for the success of Microsoft's component architecture, its programmers are doing things that should go far beyond anything Microsoft has imagined. The Bonobo CORBA interface should enable rich collaborative applications. There are some interesting side features, such as the index used by several applications to speed up searching, which Ettore Perazzoli demonstrated during his Evolution presentation. Thanks to pre-indexing, a word search on a huge mail folder or an address book finished so fast I didn't even have a chance to see the screen update.
Most conference attendees seemed to find Ottawa a relief from the heat found in the U.S., of both the meteorological and the legal kind. But every conference on free software these days is turning into a political event, as I've repeatedly noticed at O'Reilly's Open Source and Peer-to-Peer conferences.
When I got to Canada and told customs officials I was "attending a computing conference," they searched my bag and questioned me at length about my business. I fear this may be a trend worldwide as governments hear about the transformative power of software, hardware, and the Internet—particularly in the frightening guise of arrests and corporate lawsuits. Next time I'll say I'm going to an AIDS forum or an anti-G8 demonstration.
The customs officials did not know that Hugh Daniel of the FreeS/WAN project would harangue the assembled conference attendees about the politics of software at a dinner the following evening, because Daniel himself did not know until he arrived in Ottawa and was asked to give a speech. Despite the short notice, Daniel—whose FreeS/WAN development team is dedicated to protecting privacy and freedom of expression by providing a free software virtual private network solution using IPSEC from desktop to desktop or LAN to LAN—was eloquent and galvanizing. He covered the travesties of the DeCSS case and the arrest of Dmitry Sklyarov, urging people to vote, to discuss issues with their friends, to follow new bills introduced in their governments, and to protest like hell against bad bills. (Significantly, I didn't hear Daniel say we should argue in favor of good bills.) Most of all, he ordered the audience, "Do not stop writing code that annoys my government" (U.S.).
Canadian customs officials also did not know that I would meet for two and a half hours with a lawyer who is a leading advocate of privacy and civil liberties on the Internet in Canada. He drew an activist's lesson for me from the passage of the Digital Millennium Copyright Act, which has so blackened the reputation of the U.S. in such matters as the Sklyarov arrest. "Civil libertarians find a bill with 30 bad clauses and fight it down to 3 or 4; they consider it a victory and feel they did pretty well. Not true! You have to fight every bad clause."
Politics also emerged in the keynote by Theodore Ts'o, called "Ten Years of Linux." It was really a practical guide to the development of open source software in general, and focused a lot on the social issues that could have an impact on our software future. Ts'o, like Hugh Daniel, urged programmers to become political. He mentioned the Canadian DMCA-like proposal. He also warned against Microsoft, both the "Shared Source" initiative (which he said could "taint" any programmer who looks at the code) and the Hailstorm/Passport attempt to dominate e-commerce.
Nevertheless, Ts'o claimed that proprietary software played a useful role and encouraged open source developers to make it easy for proprietary companies to provide applications—"or at least be neutral; don't hurt them." This call led to a long and contentious discussion over the appropriateness of allowing device drivers to be distributed in binary-only form, and whether kernel developers should try to keep driver interfaces stable (at least within a major release of the kernel) to spare companies the pain of distributing new binaries.
The Ottawa Linux Symposium is the tangible offspring of a 4 a.m. thought in the mind of Andrew Hutton in 1999 during another Linux conference. Over the years he had noticed the growth of commercialism at Linux conferences and he was afraid that the people actually creating Linux, the people responsible for its health and future, would be shut out. The Linux Symposium, a grassroots effort by dedicated Ottawa Linux users, was his guarantee that developers would always have a place once a year where they were in control and could talk about things that really mattered to them.
Now in its third year, the Ottawa Linux Symposium has quickly become the primary forum for discussing Linux technical topics, or as organizer Craig Ross says, "a canvas for open source development." If you think of most boating conferences as being for yachtsfolk, the Ottawa Linux Symposium is a conference for the people who design and build hulls.
Four hundred and fifty people crammed into a basement of the Ottawa Congress Center make for a high-pressure atmosphere. You certainly couldn't fit in a single dancer in a penguin suit. I found the setting claustrophobic after a couple of hours, but there's no doubt that the restricted setting facilitated intense personal interaction. As at many conferences, the most promising work went on in clumps of informal talkers outside the presentations.
The speakers were uniformly impressive in their knowledge, but not every topic was worthy of a presentation, in my opinion. Just because someone has a novel idea does not mean he or she has a viable innovation; I reserve "innovation" for something that achieves widespread use over the long term. Some projects that looked exciting on paper turned out to be just clever hacks when I heard the presentation. "Linux contains lots of clever hacks," said a fellow attendee, but I believe there's a qualitative difference between the technologies we've come to depend on and some of the new things being proposed. A couple of presentations were so dull that I was left deciphering the code on the T-shirt of the person sitting in front of me.
The range of topics—which varies a great deal from year to year—was somewhat restricted this time. While low-level operational features got lots of attention—cache mechanisms, memory handling, hot-pluggable devices (known to the rest of the world as Plug-and-Play), filesystems, Flash chips—there was nothing about such application-layer aspects as Apache, Samba, Mozilla, or Perl. The only high-level components that made their way into presentations were KDE and GNOME. The latter, heavily pushed by Ximian (a sponsor of the conference), drew many attendees and lots of supportive interest. I think Linux developers sense that the fate of projects like GNOME and KDE will determine much of open source's success in meeting user needs.
The conference drew about 50 fewer people than last year's, and many were obviously missing because their companies had folded or fallen on hard times. In addition, fewer of the well-known Linux developers were present. But Hutton says they will be back next year, and he points out, "We met people who will be key in the future." Attendees were quite focused and clearly intended to carry the flame. The atmosphere was pretty loose at the final party hosted by Ximian. (AMD also threw a party earlier in the week, and I should not forget to say that O'Reilly & Associates was one of the dozen sponsors.)
Copyright © 2009 O'Reilly Media, Inc.