Linus at Fermilab
Author's note: Slashdot posted this page on their site, but the article really starts at http://ssadler.phy.bnl.gov/adler/Torvalds/comdex99.html . That's an introductory page which puts my FNAL and Comdex write-up into perspective. If you are only interested in what Linus had to say at FNAL, then just read on.
April 19th, the day of Linus's talk at FNAL, dawned as a gorgeous day on Long Island. I was to fly Southwest on the 1:20pm flight through Baltimore, transfer to the Chicago Midway flight, and arrive in Chicago at 4:30pm. Linus's talk was scheduled for 5:30pm. Getting my reservation set up to fly out to Chicago was a mess. Originally, Linus was scheduled to talk at 7:30pm, and I planned my flights around that. (4:30pm arrival, 7:30pm talk, no problem.) That changed when I got a message about Linus's talk being rescheduled, and by then I had no choice but to brave the tight timetable. So I had a relaxing morning, enjoying some quality time with my wife. Flight time came, and off to the airport I went. With such nice weather, all flights were on time. (The Free Software Gods were looking after me...)
Linus starts by saying he does not like podiums and thus will not stand behind one for this Q&A session. He has a wireless mike which Dan has hooked him up with. I also notice that the FNAL media guys are recording this session for posterity, so if you don't like my write-up, you can contact them to get a full playback of Linus's talk. In any case, Linus starts off with a very brief history of Linux. It was 1992(?), he had a PC, but there was no Unix available for it. Since, and I quote, "he was the best programmer since Jesus," he would fix that: he would write his own Unix-like OS. So off he went and wrote it. The concept that need fosters development was key in getting the Linux kernel going and has been key throughout all of its development. And then he did something which was, as he says, the most important decision of his life. He posted the code on the Internet, via some newsgroup, and asked for feedback. That he got. He expected people to download his code, run it, and tell him whether it worked or not. "Linus, this really sucks!" He got some of those responses; but more importantly, he got code back in the form of patch fixes and enhancements. And from then on it was history. With that he ends his introductory talk and starts in on the questions.
Dan Yocum starts it off by asking about the 2.3 kernel and plans for large file systems (i.e. file system journaling). A good question, since in High Energy and Nuclear Physics there is now a big need for this type of file system. Petabytes of data will soon be recorded, and file systems which can handle this type of data load will be necessary. (Maybe not a petabyte file system, but terabyte file systems will be a must.) Linus's answer was that up to this point, large file systems had not been an issue. He reminded us that back in the days when he was starting the kernel, there was a 64 Meg partition limit which he had to solve. He then said something about how new users bring new problems, and how this is the "development model" for the kernel.
At this point my notes get rather fuzzy so I'm just going to paraphrase from what I can decipher from them.
Someone asked about security issues with Linux. Linus said that people keep right on top of the bug fixes. From my personal experience with Linux and the Red Hat distribution, this is the case.
Someone asked about addressing more than 2 Gigs on a 32 bit system. His answer was to use a 64 bit machine. Linux is fully 64 bit compliant.
There was a complicated SMP question, to which the answer was that 2.0, and to some extent 2.2, are really single-spinlock SMP implementations. Linus will work on making the locking more fine-grained.
There was a question about capabilities. I believe this is like splitting up the super user function into separate users through access control lists. Theoretically it's a good idea, but in practice it's too complex. Most of the time, one sets up the system in the wrong way, making it less secure. He claimed it's a feature which needs to be added to Linux just so that one can check it off on the "Linux can do this" matrix, but then have a README on how to disable it.
Someone asked the copyright question. Linus talked about the license he released his original kernel code under. Basically, its intent was that anyone could use it, distribute it, and modify it, but the modifications had to be freely distributable as well. People were starting to sell the Linux kernel at computer shows by charging a couple of bucks for the floppies, and they asked Linus if this was OK. Linus said it was obviously OK, since he wanted the code to be distributed and could not expect people to lose money on the distribution cost. So he modified his license. I'm not sure whether he modified his license further, but the fact is that he eventually switched over to the GPL. He said that it was an awful piece of legalese but it fulfilled all his requirements. Also, the one bit of software which Linux really depended on was the GNU C compiler, and that played a role in the adoption of the GPL for the Linux code. Again, the main emphasis was that the source code had to be available to the "community," as well as the modifications, which were brought back into the Linux source repository.
A question on the Merced was asked. Linus said he would not sign any Non-Disclosure Agreements. The reason is that he does not want to be put in a situation where he cannot release his source code due to conflicts with an NDA. A very wise choice on his part. He lets others sign the agreements instead; notably, there are some people at CERN who are working on the Merced port. Linus defended Intel's move in asking for NDAs to be signed. It's done so that Intel can keep control over the flow of technical information into the public domain. Once the CPU has been fully released by Intel into the "market," then they certainly want everyone to know how to use it. But before that, it's clear that they need to keep their specs under wraps to keep the competition at bay. The big problem with the Merced is in the compiler technology sector. All the kernel needs is a version of gcc that will generate Merced executables; it's up to the gcc guys to get it to generate Merced instructions. Linus is confident that once gcc is ready, which should be by the time the Merced is released, the Linux port will follow within a couple of days or weeks.
Someone asked what is better, one really fast CPU or many not so fast CPUs. Linus's answer was that the best SMP system for the Linux kernel is a dual CPU one. If one were to build a Beowulf type cluster, one should do so using a set of dual CPU systems.
There was a question about SVGAlib and what its viability was for the future. Linus's response was that 2 or 3 days after working with X11, he decided never to go back to console mode. All he needs, graphics-wise, is to have 15 xterms open with the kernel compiling in one of them. He kept reminding the audience that all he really likes to do is compile the kernel. The fvwm2 window manager coupled with 15 concurrently opened xterms was all the graphics functionality he needed. The question was really directed towards games. He said that there was a good OS for running games called Windows. He claimed that MS admitted to the fact that they could not write an OS very well and basically kept out of the way of the games developers by letting them take over the system when the game app was active.
A question was asked about how he decides whose code is to be included in the kernel. He said that drivers were no-brainers. Since the code sits outside the kernel, he tends to include them without much thought. When it comes to adding something that exists in kernel space, then his main requirement is that there be at least one person who will take charge in maintaining it. My take on this is that items like the TCP stack or the kernel version of NFS etc. are coordinated and maintained by someone besides Linus.
Someone asked him if he ever has talked with Bill Gates. His reply was that, no he has not, but if he did, he would "be talking money." (His palms rubbed together as he was finishing his answer.)
More questions on benchmarks. His conclusion was that the best benchmark is your own application. It's not easy, since this requires the vendors to give you access to their hardware and you have to do some porting, but the bottom line is that your own application is truly the best benchmark.
Someone asked about frame buffers or rather how one could get a DVD app ported to Linux. Linus said that most of the work is in setting up the hardware. Once done, the hardware takes care of getting the DVD imagery onto the screen. The trick is to get this to interface to X11. He didn't seem to have any immediate plans on taking on this project. Also he mentioned that DVD encryption is a trade secret. I assumed this means that an open source application would be difficult to implement.
Someone who works at Lucent asked a question related to drivers for modems made by Lucent. The question led to a discussion about how one can get companies to release the specs of their hardware. Linus made the point that sometimes it's not a question of keeping the engineering design behind some gizmo a secret, and thus keeping a market advantage, but rather of keeping secret the bad engineering that went into making the gizmo. He hypothesized an example of a gizmo where, in order to get it to run, you need to write to xyz registers in some specific order, then toggle some interrupt lines, followed by holding the reset bit in the CSR high for 30 clock cycles, etc. This kind of kludgey design is the real reason behind not releasing specifications. It's all hidden in the binary version of the driver.
Someone asked about UDI, the Uniform Driver Interface. Linus replied that it's in the Nice Theory stage, but he is keeping an open mind about the idea.
A question came up about GUIs. He has no interest in GUI design or interfaces, and has no influence in the GUI theological discussions ongoing right now. (My guess is that this refers to GNOME vs. KDE type friction.) He is happy using fvwm2 and his 15 xterms to apply patches to the kernel and rebuild it again and again.
I asked a question about how he maintains the Linux source repository. I wanted to know if he used CVS. His reply was that he has his own method. I should think of it as lovingly hand-crafted maintenance of the kernel source. He does not use CVS because he does not need it. He is the only one who applies patches or updates the source code, and he does not care to use the history logging mechanism CVS provides. He does use CVS at work, so he knows what it's capable of doing, but chooses not to use it.
By this time we had started to run out of time, and a few more questions were asked. From these questions, the following general statements were given by Linus. Windows is a good OS for running games. The bottleneck in the development cycle of the kernel is the users. A project should never grow beyond the scope of what can be kept in one person's head. My take on this is that the kernel is broken up into many "projects," each one with a leader in charge of it. And whatever that one person is in charge of, he must keep the whole concept and source code layout/structure/functionality in his head. Keeping "things" modular is the Unix way.
Developers grow linearly, while users grow exponentially. The users of Linux have grown by 7 orders of magnitude, and his goal of global domination is only 2 orders of magnitude away. "What's 2 orders of magnitude after growing 7..." (Global domination is in reach.) Avoid black-and-white thinking when trying to solve a problem; there is never a silver bullet which can be applied to a project or problem to "fix it."
Linus concluded with the statement that there has always been a physical invariant in building his kernel: 12 minutes. It always took 12 minutes to compile the kernel. When he started out with his 386, it took 12 minutes; by the time he moved up to a 66MHz 486, the code had grown such that it still took 12 minutes. The growth of the code and the speed-up of Intel's technology kept pace with each other, so that a kernel compile always took 12 minutes. This has changed recently. With his quad CPU development system, it now takes him 73 seconds to build the kernel. He admitted that hardware development has recently outpaced his software (kernel) development.
With that, a physicist from FNAL named G P Yeh, who is one of FNAL's strongest Linux advocates, closed the session by thanking Linus for all his work. FNAL is now using Linux in a big way to process all the data coming out of the large collider detectors that will start taking data within a year or so. The data rate from these detectors is expected to increase 200 fold from the last time they took data. This is due to an upgrade to the Tevatron called the Main Injector. It's designed to increase the proton flux by a lot, and thus 200 times more data will flow out of the detectors. Linux will play a big part in analyzing all this data. (I can attest that Linux is playing a big role at BNL as well. It will be used on about 500 processors to analyze the data coming out of the 4 detectors being built for the Relativistic Heavy Ion Collider. The RHIC is scheduled to turn on this summer, and by this coming winter the Intel Linux farm will start its first production data processing.)
I left Linus and Mad Dog behind in Ramsey. My plan was to stay at FNAL for the night and drive in early to catch the opening keynote at Comdex. Bill Gates was giving this keynote. From Linus to Bill, this was going to be a real contrast.
- Back to the main page
- On to Spring Comdex 99, Chicago, Day 1
- On to Spring Comdex 99, Chicago, Day 2
- Photos of the Linux Pavilion
This page was proofread by Tim Chambers. Thanks Tim.
Copyright 1999, Stephen Adler