Jul 25 2011

Hi again and welcome to another chapter of the Foss Archeologist. My apologies for the delay in posting this, but there was a death in the family and I took some time off from blogging afterward. Now it's time to get back into it and finish this series.

Midnight Commander (or 'mc', as the program launcher was called) was one of the first truly modern file managers in the free software world – all the more spectacular because it was a console app, not a graphical one. The two-pane interface with hotkey control was copied from the old DOS program Norton Commander (hence the name), but Midnight Commander would come to be far more powerful than its ancestor in many ways.

MC had its own built-in shell, compatible with bash, though it didn't support tab-completion (because tab was the hotkey for switching between panes). It could mass-select, copy, paste, rename and delete – all the standard functions you want in a file manager – but it went further. It implemented the first file-manager virtual filesystem on a Unix-like platform, allowing you to work directly with the contents of tarballs or to browse and manage files on Samba and FTP servers. In fact, when working on the console, MC provided one of the best ways to transfer files to and from network servers, including automatic resuming of transfers, full and proper handling of Unix permissions, and more.
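To give a feel for how that worked in practice (the exact VFS path syntax varied between mc versions, so treat the paths below as illustrative rather than authoritative, and the server name is made up): from mc's built-in command line you could change directory straight into a remote server, and pressing Enter on an archive in a panel opened it as if it were an ordinary directory.

    # browse an FTP server as if it were a local directory (old-style mc VFS path)
    cd /#ftp:user@ftp.example.com/pub

    # pressing Enter on backup.tar.gz in a panel opens the tarball as a directory,
    # so individual files can be copied out with F5 without untarring anything

Once inside, the normal copy, move and delete keys worked exactly as they did for local files.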

The brains behind MC was Miguel de Icaza – nowadays better known as the head of the Gnome project. This was before he became "the great pragmatist", back when he was one of the most hardcore believers in the ethics of free software. De Icaza, of course, was not alone, and many other programmers worked on, and expanded, MC over the years – it was, after all, a tool they loved as much as their sysadmin brethren did.

One of those programmers is worthy of a special mention. Paul Sheer developed Midnight Commander's built-in text editor, mcedit. It could also run standalone and was in many ways the most user-friendly text editor available on the Linux console – and still is. Unlike the other programmers' editors it eschewed dual-mode editing and power features in favour of a simple arrow-controlled editing environment reminiscent of the editors popular in the DOS world at the time.

It is quite a surprise to me that mcedit is not the editor the Ubuntu documentation suggests by default for system text editing; nano is recommended instead, even though nano is a much harder editor to use, with much more difficult keystroke controls and an interface that is unintuitive, to say the least. I can only surmise that mcedit fell out of favor because it depends on mc itself, which is a fairly bulky program, and because these days, with our graphical file managers and graphical editors, mc is no longer generally included in a distribution's default packages (though it's almost always available in the repositories, since so many older users still love it).
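If you want mcedit back as your default, switching is quick. The sketch below assumes a Debian/Ubuntu-style system where mc's packaging registers mcedit with the alternatives mechanism; adjust to taste on other distributions.

    # make mcedit the editor used by programs that respect $EDITOR
    export EDITOR=mcedit            # e.g. in ~/.bashrc

    # on Debian/Ubuntu, pick mcedit from the system-wide 'editor' alternatives
    sudo update-alternatives --config editor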

Part of why I wanted to mention mcedit is that Paul Sheer is a South African, one of the founding members of Obsidian Systems (and the first trainer in their then newly-opened training department), and mcedit counts as one of the very first (if not the first) major contributions from Africa to a (then) incredibly important free software project.

Midnight Commander's look and feel has been replicated by numerous graphical file managers, but for the most part none of them are very popular and none has ever become a default. The two-pane view, which is so incredibly useful on the console, is cumbersome to work with in a world controlled by a mouse. Nevertheless its legacy lives on: the GTK view of mc (which was never its default) was the first official file manager of the Gnome project, and some of its code directly inspired early code in Nautilus.

When you browse an SFTP server, or dig deep into a tarball, in Nautilus today (or for that matter in a KDE file manager), this is done using virtual-filesystem support – a concept that was largely pioneered by mc and which lives on as a lasting legacy in the free software world, even for those of us who no longer use mc itself.

Having said that, unlike most of the other projects in this series, mc is very much alive and well, and I highly recommend every Linux user install it. Even if you never work on the console, the one time you have to, it could make your life a million times simpler and (in a major break with tradition) it could do so while increasing rather than decreasing your productivity.

Jul 11 2011

In the early part of the noughties a patch was added to the mainline kernel: DEVFS, written by Richard Gooch. DEVFS broke one of Linux's oldest mechanisms – device files as actual files on the filesystem – and replaced it by turning /dev into a virtual filesystem (much like /proc or the later /sys).

It had many advantages: device files only appeared once their drivers were loaded, and those files were correctly configured. It also changed the default names of many devices (though it came with a compatibility mechanism that provided the old names as symlinks).
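To illustrate the renaming (this is from memory, so take the exact paths as approximate rather than gospel): the first IDE disk's first partition, traditionally /dev/hda1, showed up under devfs in a hierarchy describing the hardware, with the devfsd daemon able to recreate the old names as compatibility symlinks.

    /dev/hda1   -> /dev/ide/host0/bus0/target0/lun0/part1   (compat symlink)
    /dev/ttyS0  -> /dev/tts/0                               (compat symlink)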

It was a major step forward for Linux and especially for distribution developers. Userspace could tell what drivers the kernel had loaded (i.e. what supported devices were available) just by glancing at the filesystem. At the same time, however, HAL was in early development, and HAL had a major desktop (Gnome) backing it.

DEVFS was disabled by default, because enabling it would have broken systems that did not have the compatibility measures in place, and it never really took off.

HAL took off instead – though it didn't do nearly as good a job. Ironically, although devfs was cut from the kernel by 2005, it wasn't just removed – it was replaced. The new tool was UDEV – now standard in all distributions – and all UDEV did was take what DEVFS did and move it into userspace. That's what the name means: userspace /dev.

UDEV had all the difficulties of DEVFS – but since it wasn't directly in the kernel the work-arounds were simpler, and it took off. It became integrated with HAL and then surpassed it. Recently KDE switched away from HAL entirely, relying directly on UDEV for hardware support – exactly the promise of Gooch's original design.
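A tiny example of what that userspace approach looks like in practice (the rule below is purely illustrative; the serial string and symlink name are made up): udev matches the attributes of a newly appeared device against rules in /etc/udev/rules.d/ and then creates the nodes, symlinks and permissions you asked for, all without touching the kernel.

    # /etc/udev/rules.d/99-backup-disk.rules  (hypothetical example)
    # when a block device with this serial appears, add a stable
    # /dev/backupdisk symlink and make it group-writable
    SUBSYSTEM=="block", ENV{ID_SERIAL}=="Example_Disk_1234", \
        SYMLINK+="backupdisk", MODE="0660", GROUP="disk"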

While devfs was never enabled by default, many of us old-time Linux geeks had it configured on our systems and found it a delight to have around. As UDEV took over, nobody could deny the heritage: it all began with Richard Gooch's idea, even if it wasn't his implementation that won the day.

In fact he has a history of being ahead of his time and creating the designs on which future mainstream concepts would be based. His other major work was a revamped version of the Linux init system that could start many services safely in parallel and boot up much faster. It wasn't SysV-compatible, however. In recent years the same idea was rehashed by several developers, and with a bit of compatibility added on it became the norm – Ubuntu's Upstart system is a direct ideological derivative of Richard Gooch's init system.
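As a rough sketch of how that idea ended up looking in Upstart (a hypothetical job file; the service name and paths are made up): instead of a numbered SysV script, each service declares the events it depends on, and the init daemon starts everything whose dependencies are satisfied in parallel.

    # /etc/init/exampled.conf  (hypothetical Upstart job)
    description "example daemon"
    start on (filesystem and net-device-up IFACE=lo)
    stop on runlevel [016]
    respawn
    exec /usr/sbin/exampled --no-daemon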

Jul 06 2011

E-mail wasn't always something that was done on the internet. Most networks since the late 70s had e-mail services, and early ISPs often had mail services that only fed their own subscribers. These e-mail systems, which allowed you to send mail only to people on your own company network, were themselves predated by systems that only worked within a single machine.

That wasn't as useless as it sounds – back then computers were generally large backroom mainframes, and the various users sat at terminals all over the building or campus – so it had all the advantages of modern corporate mail between colleagues (if not with outside customers and such).

Modern e-mail systems and their protocols can be directly traced back to those early versions – and they were mostly built on Unix, with a set of systems that Linux inherited and later greatly expanded. The early GNU/Linux killer app was Apache, and it made Linux the operating system of choice for ISPs – which made it the OS on which much of the internet's e-mail came to run.

At the heart of those early in-machine mail systems was sendmail. Sendmail was basically a command to send mail from one user to another: mail was delivered to queues in /var, to be picked up by the user's mail client. Later it became network-aware and could send from itself to the sendmail on another machine using SMTP – the other machine's sendmail would again deliver the message into the user's queue. This process actually still gets used by almost all ISP mail systems – though sendmail is hardly ever the server anymore.
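That command-line interface is still there today, and it shows how simple the original model was (the address below is obviously a placeholder): sendmail reads the message, headers and all, from standard input and either delivers it locally or queues it for a remote machine.

    $ /usr/sbin/sendmail -v alice@example.com <<'EOF'
    Subject: queue test

    Hello from the local queue.
    EOF

    $ mailq     # inspect whatever is still waiting in the queue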

As internet mail became popular, people needed a way to download a single user's mail queue to another machine – that is what ISP mail requires. POP was developed as a protocol to do exactly that, and fetchpop was written as a tool to collect your mail from another machine and deliver it to your own queue on your own machine.

Fetchpop's development had stagnated by the mid-90s, however, and Eric Raymond took it over and developed it further. In doing so he added support for newer protocols like IMAP and renamed the program fetchmail to reflect the change.

That was internet mail by the mid-90s: sendmail to get mail out, using the network when it moved between machines, and fetchmail to collect it onto individual machines in the post-mainframe age. Fetchmail could collect entire queues, so some companies even had a single queue at an ISP which would be collected regularly and delivered to their local server, from which individual users would then collect their own mail.

Such setups sound cumbersome, but to a skilled Linux admin at the time they were easy and commonplace; many even used more obscure protocols like UUCP to transfer those mail queues.
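Easy indeed: for the simple single-user case, a minimal ~/.fetchmailrc looked something like the snippet below (server name and credentials are of course placeholders); the multidrop setups that collected a whole domain's queue used the same file with a few extra options.

    # ~/.fetchmailrc  (illustrative)
    poll pop.example.net protocol pop3
        username "alice" password "secret"
        keep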

These days ISP POP mail probably still uses a queue system exactly like that, but fetchmail has fallen into disuse: most machines are now genuinely single-user, so people don't want their POP/IMAP mail going through a local multi-user-capable queue – they just download it directly into a mail client like Evolution or KMail.

On the sending side, sendmail got hit in the late 80s with several massive security exploits. The famous worm of 1988, which spread partly through a sendmail hole, actually led to the infant internet of the time being shut down for 48 hours (it was still small enough to do so) to let the worm choke itself to death without connections. And that worm was, in a sense, an accident – its author never intended the damage it caused; he was doing research on self-replicating programs.

It got worse: sendmail was unlike almost all later mail systems in how it was configured. Rather than using typical Unix text config files as such, it used a configuration written in the M4 macro language, which had to be compiled into the actual sendmail.cf file before it could be used. Massively powerful as this was, it made the configuration very complex, hard to get right – and very easy to make very insecure.
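To give a sense of it, a minimal sendmail.mc along these lines (the smart-host name is a placeholder and the cf.m4 include path varies by distribution) would be run through m4 to produce the real sendmail.cf:

    include(`/usr/share/sendmail/cf/m4/cf.m4')dnl  path varies by distribution
    OSTYPE(`linux')dnl
    define(`confSMART_HOST', `mail.example.net')dnl
    MAILER(`local')dnl
    MAILER(`smtp')dnl

    # then compile it:  m4 sendmail.mc > sendmail.cf

That compiled sendmail.cf, not the readable .mc file, is what sendmail actually parsed, which is a big part of why the configuration was so easy to get subtly wrong.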

Sendmail insecurities flourished and it became the plague of the internet. By the late 90s, however, other mail servers like Postfix were starting to appear and to mature. They had simpler but equally powerful configurations and were built with designs that took into account the security issues which had plagued sendmail.

Sendmail's security issues shouldn't be held against it too much. It was written in an age of small networks, and the problems were largely a matter of scale: as the internet exploded, things happened that could not reasonably have been foreseen when it was designed. It wasn't that the designers didn't care about security – they simply had no reference for the future that was coming.

Ultimately, today most Linux users will never run sendmail or fetchmail, and even most ISP servers no longer depend on them – but once they were a core part of how even home Linux users worked with e-mail. It was just the way it worked – and even such standards-incompliant absolute trash as Exchange is built on the foundation that was laid by sendmail and fetchmail, the e-mail programs that once drove the internet.

Both programs are still actively developed, though the userbase has shrunk, and they can be found in the repositories of the vast majority of GNU/Linux distributions – especially those with a server focus – but these days other programs have become the default tools of choice.

Jul 05 2011

Back in the year 2000 a group of researchers started predicting that the future of computer interfaces was in three dimensions. Windows floating far behind other windows, or sitting sideways – in fact, who needs windows at all when you can have three-dimensional surfaces through which applications interact with users?

These researchers were very familiar with the top-of-the-line virtual reality hardware of the day (such as SGI's cubes) and were developing for a time when virtual reality would not just be for gaming. This was to be the future of computer interfaces – and they were building the infrastructure to support it, not as a part of X, but as a replacement for it.

Free and open source, the software they wrote ran on almost every variety of Unix, though the last releases still had some shortcomings in their design – for example, to get proper 3D acceleration on Nvidia cards at the time, you had to run 3DWM (the X replacement) inside X itself, since the Nvidia drivers didn't support acceleration on the console.

3DWM had one problem in those early research versions – there were hardly any apps that could make use of it, and while it could run on home hardware, its true usefulness depended on expensive setups that most people never had, and still don't (especially since SGI, for all intents and purposes, no longer exists). It was also, by default, incompatible with older software designed to run in 2D windows.

A later version tried to offer a compatibility work-around: a 3DWM-compatible viewer for VNC. You could set up a VNC server, connect to it from inside 3DWM, and run your apps on the flat surface of the viewer, which you could then spin around and twist in three dimensions (though the apps themselves remained 2D).

It was… not an elegant solution.

Today, 3D interfaces are in fact becoming a reality: modern interfaces make use of 3D for special effects, and beyond that a new generation of 3D hardware is emerging. For now that hardware belongs to the gaming market, with 3D available on consoles (with glasses) and in handhelds (without them). Crucially, the extra hardware needed for a console is just a pair of glasses and a high-definition television (which is still lower resolution than a computer screen anyway) – so no massively expensive new investment is needed to make use of it.

We could predict that future desktop interfaces will start to make use of the same capabilities, but this is in fact rather unlikely, since desktop computing is moving ever more toward hand-held devices and traditional computers are becoming the playthings of gamers and programmers only. In this world new rules of design are needed; if 3D is to become a feature of hand-held devices it will use the same sort of technology that lies beneath the Nintendo 3DS.

What we aren't likely to see is a future where we do our day-to-day computing in a virtual three-dimensional environment. 3DWM was doing groundbreaking research to prepare the way for a future that never happened, but it wasn't meaningless.

The ideas and technologies they developed strongly influenced future desktop designs. Even though modern KDE and Gnome systems are 2D (actually Vista and Windows 7 belong here too), they use 3D for effects, for viewing tasks at a glance, for switching windows and so on – all using ideas that 3DWM had implemented nearly a decade earlier.