OSCON 3.3: Current State of the Linux Kernel

by Geoff Broadwell

Related link: http://conferences.oreillynet.com/cs/os2005/view/e_sess/6378

Greg Kroah-Hartman managed to fit a lot about the historical issues surrounding release engineering and source control for Linux into his 45 minutes, and still had time to explain the various solutions the kernel team has tried. I'll give the highlights of those issues before relating how the kernel team is currently trying to fix them.

Up until 2.5.3, Linus accepted changes to the Linux kernel only as emailed patch files, and eventually the poor scaling of this approach came to a head. At that point, he switched to BitKeeper, a proprietary distributed revision control system; while the license made people crazy, processes improved significantly, making kernel life generally better for a while. But the license issue came back to haunt them when BitKeeper's terms changed in a way the kernel team could no longer work under.

At the same time, release engineering was pretty poor, and as the time between successive releases grew longer and longer in the 2.6 timeframe, end users and distributions started taking exception to the wait (especially for security fixes).

Over the past few months, the kernel team has implemented several new tools and processes to deal with the outstanding problems.

First, a bugfix-only patch tree was created, with very strict requirements for accepted patches. It is used to produce 2.6.x.y patch releases that fix critical issues only. Each .y series is dropped and restarted from the mainline with each new .x release: for example, fixes to 2.6.11 appeared as 2.6.11.1, 2.6.11.2, and so on, and once 2.6.12 shipped, a fresh 2.6.12.y series began. A .y series usually contains around three orders of magnitude fewer changes than a .x release, making people feel very safe in following the .y patch releases.

To deal with the problem that .x "release candidates" were nothing of the sort, a new policy was created recently: after each .x release, a one-week window is opened for changes (the "patch flood"). After that week, rc1 is released, and it really is a release candidate; from that point on, all accepted patches must be bug fixes only, with no new or changed functionality. Once the bug fixes die down, a new .x release is made, and the cycle repeats.

Finally, when the BitKeeper license changed, Linus and crew were left holding the bag, so they investigated the available options and found them all lacking. There was only one thing to do -- they created their own. More than one, actually. The main one right now is git, but some subsystems are using Mercurial instead. Greg mentioned others, but I have forgotten them.

These new SCMs share two major attributes -- they are lightning fast at importing patches, and they are distributed by design. They are also all very young, and Linus has said that in another three months he will see how far each project has gotten and choose one for himself (and presumably his core team).

Switching into whiz-bang mode, Greg talked about a number of cool things recently merged, or soon to be merged, into the mainline kernel, including the Xen virtualization technology, lots of new filesystems, improved internal APIs, and so on. He also proudly announced that Linux now supports more devices on more platforms than any other operating system ever (Linux passed NetBSD last year, an impressive achievement). In fact, there are now a number of operating systems that use Linux drivers directly so that they won't have to recreate the whole driver corpus.

Finally, he talked about the stability of APIs. Internal kernel APIs are never going to be stable, but external APIs should remain so -- though he admitted that this only applies to syscalls, not to sysfs and procfs; the stability of the latter two is a subject of discussion these days. He pleaded with vendors to get out-of-tree drivers into the mainline so that they can be magically fixed every time internal APIs change, and pointed people to the stable_api_nonsense.txt file in the kernel tree for more details.

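To make the in-tree versus out-of-tree point concrete, here is a minimal compilable sketch. The names in it (struct widget, register_widget) are invented for illustration and are not real kernel interfaces:

    struct widget {
        int id;
    };

    /*
     * Suppose one kernel release exported this internal helper:
     *
     *     int register_widget(struct widget *w);
     *
     * and the next release adds a flags argument:
     */
    int register_widget(struct widget *w, unsigned int flags)
    {
        (void)w;                          /* sketch: no real work to do */
        (void)flags;
        return 0;                         /* pretend registration succeeded */
    }

    /*
     * Every in-tree call site gets updated in the same patch that
     * changes the signature, so in-tree drivers keep building:
     */
    int in_tree_driver_init(void)
    {
        struct widget w = { .id = 1 };
        return register_widget(&w, 0);    /* call site fixed along with the API */
    }

    /*
     * An out-of-tree driver still calling the old one-argument form
     * simply stops compiling against the new tree and stays broken
     * until its vendor catches up -- the burden Greg wants vendors to
     * shed by merging their drivers upstream.
     */
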
I asked if these changes meant that a 2.8 series may never come, and he said that the new processes were forcing developers to do a much better job, no longer ripping out and replacing humongous chunks of code, but rather incrementally improving things until each major change was completed. They are discovering that they may not ever really need a new "pure development" kernel series, just more happy 2.6.x releases for years to come.

As an end user, that's just fine with me.

What do you think of these process and tool changes?

2 Comments

hubertf
2005-08-06 18:59:18
Linux vs. NetBSD - a few questions
Please see the entry in my blog for a few questions about the comparison between Linux and NetBSD:
http://www.feyrer.de/NetBSD/blog.html#20050807_0340

Feedback welcome to hubert@feyrer.de.

- Hubert

Geoff_Broadwell
2005-08-07 11:17:17
Linux vs. NetBSD - a few questions
I was actually (closely) paraphrasing what Greg KH had said. Whenever I pointed out in the past how extremely portable Linux is, someone always told me, "Well, NetBSD is more so." That has long given me a great deal of respect for the NetBSD folks (which I still have). It was interesting to hear someone deep in the know say that Linux has actually managed to surpass NetBSD, and when.

As for what exactly he was referring to, I suspect he means the kernel and associated modules. To be fair, Debian (as a complete OS with kernel and userland) supports somewhere around a dozen architectures officially, plus a pile more in special vendor variants, experimental trees, "official in next stable release", and so on.

In fact, I've heard that a large part of the portability of *nix userland is due to the combined efforts of NetBSD and Debian, who between them are the most rigorous proponents of portability in the *nix world. Debian, I know, is ruthless in getting packages to compile in environments the authors never tried or even intended. I suspect NetBSD is similar, though I don't know first hand.