May 20 2009
 

Pat has finally done it: the first official slackware port in years, slackware64-current, is public as of today. This is of course great news for kongoni… although it means that kongoni x.13.0 is going to be a lot more work than I thought.

The problem is this: the official 64-bit version of slackware follows the proper LHA standards, which makes sense since the majority of the work was done by Eric Hameleers of slamd64 fame. This is a good thing, but there was a reason why kongoni64 used Bluewhite64 as its upstream.
The reason is that doing what Eric does requires the slackbuild scripts to be rather heavily modified, while the bluewhite64 version basically just needs the ARCH variable to be set right.
For the most part that meant we could maintain a single ports tree, with only minor architecture-specific handling in the slackbuild scripts.
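To give an idea of the difference, the 64-bit-style scripts typically branch on ARCH to pick compiler flags and the library directory. A minimal sketch of that pattern (the flags and configure line are generic, not taken from any specific kongoni port):

    # Typical architecture handling in a 64-bit-aware SlackBuild
    if [ -z "$ARCH" ]; then
      case "$(uname -m)" in
        i?86) ARCH=i486 ;;
        *) ARCH=$(uname -m) ;;
      esac
    fi

    if [ "$ARCH" = "x86_64" ]; then
      SLKCFLAGS="-O2 -fPIC"
      LIBDIRSUFFIX="64"
    else
      SLKCFLAGS="-O2 -march=i486 -mtune=i686"
      LIBDIRSUFFIX=""
    fi

    CFLAGS="$SLKCFLAGS" ./configure --prefix=/usr --libdir=/usr/lib${LIBDIRSUFFIX}

A bluewhite64-style script, by contrast, mostly just needs ARCH set to x86_64 and otherwise builds the same way as its 32-bit counterpart.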

The first question, of course, is: what will bluewhite64 do? I posted a forum question about this; since bluewhite64 is more than the port and also has a very wonderful live-DVD project, I could see a future for the project focusing on that.

Now the upside is that post-13.0 we can safely expect the major slackbuild sites like slackbuilds.org to try and support this platform… but what about until then? More importantly – how do we as kongoni move forward?
The obvious answer is that we ought to move kongoni64 to slackware64 when we go to x.13.0 – after all, that will be the upstream maintained most closely in step with 32-bit, and it is more standards compliant.
This does mean some work though. We will need to investigate the ports – many of them will need to be significantly hacked – and the bigger issue… this may mean we need to, like slackware, maintain two separate trees, and can no longer get away with a single one that installs on either platform.

Not an impossible task – just double the work.
Now a lot of the work in there will be made redundant by slackware 13 anyway, as KDE4 is now official and so are its dependencies, so I have no intention of maintaining those after the slackware 13 release. I’ll focus on our customizations.
Or does it…
Well… I know 32-bit slackbuilds won’t build on 64-bit slackware – but perhaps they will build the other way around; after all, the ports tree only kicks in for add-ons. So if we write them right… we may be able to build kongoni’s x.13.0 ports tree in such a way that our slackbuilds build correctly regardless of platform, by following the structure of the 64-bit scripts.
At this stage, I’m not going to worry too much about it – just keep an eye on it; I suspect official slackware 13 is still a while away.
What is clear to me now is that the degree of change means that doing a kongoni 1.13.0 is out of the question – we will just about have 1.12.2 out of the door in a month or two. There is no way we can incorporate this level of change (good change) inside our stabilization cycle.
So the first 13.0-based kongoni will probably be 2.13.0 – and we can expect to start working on it very shortly after 1.12.2 is officially released.

I guess the pressure is on to get Nietsche out of the door as soon as possible, eh? We’ve got a nice big buglist to sort out before then though – so let’s get to it, guys.

UPDATE: A look at the 32-bit sources had an interesting result – the slackbuilds there seem to be built according to the same standard now coming to 64-bit… so that suggests we could still do a single ports tree, which would be good.

Oct 29 2008
 

Today I posted this message to several of the LUGs in South Africa. I am reposting it here without edits.

Hi Everybody,
Sorry for the cross-post, I promise it’s a once-off but this is a bit of a special circumstance.
In the grand tradition of GNU and later the Linux kernel, I am beginning with a mail to announce
my intentions, and a request for anybody who shares my vision to help out.
The interest in my CLUG talk about distribution creation some time ago left me thinking that
perhaps there are enough people out there (particularly here in South Africa) who may feel up to
the fun and work of helping to create something special. Having spent 5 years creating a
successful commercial distribution, I believe I have the skills for such a project to be workable, though this one is meant to be very different as you’ll see.

Starting in the next few weeks I want to create a GNU/Linux distribution called kongoni. Kongoni is the
Shona word for a gnu (wildebeest) and this represents the origins of the system: firstly it is African,
secondly it is meant to be a truly free distribution of the GNU operating system.
The name, in other words, translates literally as: GNU Linux :) (I rather like the wordplay as well).

Fundamental to the design will be an absolute commitment to free software only. That means we will not include in the installer, the ports tree, or any other officially distributed packages any piece of software that is not under an FSF-approved license.
Some degree of the workload can be shared by utilising (and contributing back to) Gnewsense’s list (and blacklist).

Development releases will have a kernel compiled with the no-taint flag – not allowing any non-free drivers to load – which will be very useful for auditing purposes; where possible we will provide free alternative drivers.
UPDATE: I should have been clearer here. I mean ONLY development releases will have no-taint; official releases will not restrict what users can or cannot load.

Where possible I want the system to actively contribute to high-priority free software projects like GNASH and Nouveau, not least by providing automated scripts in the packages to allow even non-technical users to file automated bug reports to those projects, with the information useful for their needs. This could increase the number of testers exponentially, and the improvements that arise will in turn benefit all free software users and developers.
The system will never be commercial. I have no problem with commercial free software (in fact I run a commercial free software company), but this project would benefit most from being a true community project. If the need arises to formalize structures, I pledge that it will be done by registering a charity organisation, or joining an existing one – not by starting a company. If people some day want to start companies that sell services related to the system, however, more power to them.

Now on to the initial technical details. First off, I don’t think there is any room in the market for yet another Ubuntu
respin. Ubuntu is a nice system in many ways, but the need is met – and Gnewsense already provides a fully free alternative
to fans of Ubuntu. Instead I believe there is room for new ideas and new thinking.
To this end I want to start with a slackware/bluewhite64 baseline initially targeting x86_32 and x86_64 platforms.
Slackware has many advantages as a baseline and offers enormous power of (easy) customization, letting us give the system a real identity of its own while staying true to standards.
The biggest catch is addressing slackware’s number one shortcoming for desktop users: the limited package manager.
To address this, and also minimise the workload of supporting multiple platforms, I intend to use portpkg to provide a ports tree that is fully tracked for dependencies. Among my first coding tasks will be a full graphical frontend for portpkg, as well as a series of patches to portpkg itself to allow us to maintain our own ports trees as the default. These will consist of license-audited and dependency-mapped clones of the slackware/bluewhite64 repositories for upstream, and source-only ports for 3rd-party packages. It is important to maintain our own ports tree since, unfortunately, all the default ports available in portpkg include non-free software in their package lists. While we cannot (and should not) prevent users from adding those repositories and installing such proprietary packages, we should not give this action any official support.
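To make the idea a little more concrete, I picture each port as a small self-contained directory; purely as an illustration (this is not portpkg’s existing format, and the file names are placeholders), something like:

    wicd/
        wicd.SlackBuild     # the build script, architecture-aware
        wicd.info           # source URL, checksum, maintainer
        deps                # one dependency per line, read by the frontend
        license             # notes from the license audit (FSF-approved only)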

The initial default desktop will be KDE4, with the intention of including KDE4.2 (due in February) in the first stable release if possible. OpenOffice.org 3.0 is on the standard package list, and if the promised GNU/Linux port of Chromium is available by release time it will be the default browser; otherwise one of the free Firefox forks.
An absolute must is a powerful and complete system administration and configuration tool, utilising things like darkstarlinux’s ALICE suite to complement a full kit for user administration, advanced Xorg configuration (like multi-head setups) and other common admin tasks. To ensure seamless wireless and wired network roaming, wicd will be a default package (and madwifi with the new free ath5k HAL for older cards, and the newly GPL’d HAL from Atheros, as well).

It is quite possible that, if we have enough volunteers and resources, future releases could include parallel versions for GNOME, Xfce, Enlightenment etc., and I am happy to include these in the ports tree if somebody helps create the ports.

In terms of project admin I wish to set up a suite of easy-to-use web apps for contributing, auditing and approving ports (the first open to all, the latter two to trusted testers only). These should make contributing in this manner not only as simple as possible but as quick as possible, so that those who choose to give their spare time to the project can spend as much of that time as possible doing fun stuff and as little as possible doing drudge work.

The focus of the project is home and desktop users. There are other distros aiming at this market, but precious few with a stated mission to be completely free, in both senses of the word.
After freedom, our second most important design principle should be one of “it just works”.

Now of course, as I type this Kongoni is vapourware; the first line of code has yet to be written (though I’ve done significant amounts of research to make the decisions above, and I have written an installer).
Normally it isn’t my style to announce something until the first pieces are written, but in this case I find it crucial to the very concept that other people be involved from the start. I have proposed a vision (not a technically unchangeable one) and I want to see who shares that vision and would like to contribute to its realisation. I will be happy to fund hosting for the project and contribute much of my free time to its realisation, but I would like to have as many people helping as possible so that this is not just my vision, but our vision.
People who can suggest ideas and improvements, people who can help realise those ideas and help with the
large workload ahead.

If just a few people say “I’m in” – then that’s a go-ahead as far as I’m concerned.

The most useful skills right now will be:
*Web-app programming and web-design
*Ports builders and co-maintainers of the tree
*Graphic design
*Testers

These will likely get official lieutenants appointed on a first-come, first-served basis.
There is much more to do so if you feel that you can contribute something please feel free to speak up.
If any of the mirror maintainers would be willing to host local mirrors of the ports tree and ISOs when we get to release time, please let me know, as I have learned from hard experience how even a small distro release can hit a server.

May I request that those who wish to contribute also reply to me directly, as I do not want any names to get lost in the noise as people discuss the idea.
Finally, I would like to suggest that those who are in Cape Town (once we have a list) meet up for a face-to-face planning session. Perhaps over coffee on Saturday somewhere in Rondebosch?

Thank you for reading this far :)
I hope to hear from you.

Ciao
A.J.

Oct 08 2008
 

I got my hands on a copy of Neverwinter Nights for Linux the other day, and I’ve been playing it whenever I have spare time at night – what a great RPG. Now before the flame comments start, I’m on record as saying I don’t think it’s ethically crucial that games be free software, because they aren’t software to begin with – they are art. At least, the art part is far more important than the programming part.
Which is not to say it isn’t very good (and certainly a lot better) when games are free software, but as with music, it’s good when it happens and not evil when it doesn’t.

So back on topic: I really enjoy NWN. Its rules are familiar to anybody who knows even the basics of DnD, or has played Nethack for that matter, and it’s filled with tremendous flexibility of gameplay (as befits an RPG). I must admit I haven’t tried the online version at all, but the single-player version is really nice: a compelling storyline with the kind of environment that allows you to live that storyline out.

NWN is, of course, 32-bit only, but I had no real trouble running it on Bluewhite64. All I had to do was grab the 32-bit SDL packages from slackware.com, install them into a temporary root and copy the usr/lib files into /usr/lib32; it has worked fine ever since.
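For anyone wanting to do the same, the whole operation is only a few commands; roughly the following (the package filename is just an example, and depending on your pkgtools version you may need the ROOT variable or a -root option):

    # install the 32-bit SDL package into a scratch root rather than the live system
    mkdir -p /tmp/sdl32
    ROOT=/tmp/sdl32 installpkg sdl-1.2.13-i486-2.tgz
    # copy just the libraries across to the 32-bit library directory
    mkdir -p /usr/lib32
    cp -a /tmp/sdl32/usr/lib/* /usr/lib32/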

I did find one nasty – it doesn’t play (no pun intended) nicely with TwinView, putting itself in the middle of the two screens, spanning halfway onto each. With Xinerama, it works perfectly. Of course Xinerama on NVidia means no Compiz effects, but I have also found that with TwinView enabled my system is really slow and unstable; using Xinerama instead is much faster and works way better under KDE4.

I made one change though: I don’t run it under KDE at all. Seeing as I have two screens, KDE needs to keep managing the one NWN is not on, and it’s not as if I can multitask that way since the mouse is trapped inside NWN, so that was just a waste of resources. Instead I created a .desktop file to launch NWN by itself and copied it into /usr/share/xsessions; now when I want to play I just select “Neverwinter Nights” from the session menu on the login screen and log in, and when I exit the game I’m back at the login screen. I tend to do this with most resource-heavy games anyway and I highly recommend it. Being able to completely switch off your desktop while playing games is just part of the real power that GNU/Linux, with its immense customization, offers over other OSes.
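For the curious, the session entry itself is only a handful of lines; mine looks roughly like this (the Exec path is whatever launches NWN on your system – the one below is only an example):

    [Desktop Entry]
    Type=XSession
    Name=Neverwinter Nights
    Comment=Run NWN on its own, without a desktop environment
    Exec=/usr/local/games/nwn/nwn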

Sep 05 2008
 

Today has been a good day. First off, I just uploaded an updated set of lancelot packages. These packages are now compiled against KDE4.1.1 (they’ll probably work on older versions but don’t quote me on that) and contain a fix in the slack-desc file (thanks to Kenjiro Tanaka for pointing that out).

Secondly, I’m now running KDE4.1.1 for bluewhite64 mere days after the official release, with another big thank you going out to Kenjiro once more, as he was the one who built these packages. I love how alive the community around BW64 is :)
KDE4.1.1 is quite impressive for a mere maintenance release and it definitely has a slicker feel to it. Things just work that tiny bit better, especially in Plasma. It still runs slow on my system though, and I still blame NVidia: despite the fact that it now renders at a good speed, kwin still uses massive amounts of CPU even with effects turned off.

Finally, I also discovered wicd, courtesy of Robby Workman – at last, a lovely userspace tool for network management that doesn’t bugger up the slackware philosophy, handles WPA with sweet beauty and doesn’t depend on half of GNOME like NetworkManager does.

Aug 26 2008
 

When I saw the blog post for the 1.0 release of lancelot today, I felt like making a small contribution, so I created a slackbuild script for it and built a bluewhite64 package from that. The slackbuild should work just fine on normal 32-bit slackware versions, and maybe even on the official SPARC version, and I am actively using the bluewhite64 package.

More details and downloads here:

Aug 20 2008
 

I am canceling the slamDUNK project; sorry to anyone who was hoping for it. My reasons are simple: slamd64’s multilib setup is just too far from slackware. For most packages it’s fine, but for KDE4.1 it’s a disaster and makes things very hard – three days later and I still cannot get QT to build.

Add to that, slamd64 is just not up to date enough; there isn’t a -current tree, and this meant it was a backport too. It also means that, since Fred is low on time, I have no idea when the next slackware will become slamd64.
Some slamd64 users have flamed bluewhite64 as a ripoff, and it’s true that the early versions used a lot of Fred’s work – but the fact is, bluewhite64 is up to date (meaning KDE4.1 packages are already there). A pure 64-bit distro can be much closer to its 32-bit cousin, so it’s also easier to port packages. So tonight I’ll be replacing slamd64 with bluewhite64. I know I’ll pay a price, as the ia32 emulation layer for 32-bit apps is not complete (you need a multilib setup to do it right), but everything I use either already has a native 64-bit version or I can build one.

Ultimately, it was just more work than I could handle, and too long without a KDE desktop to get there.
I would rather donate my time to hacking my box and working on KDE, and since this is all hobby stuff I do not want it to be bogged down with hundred-hour efforts that don’t seem to get anywhere.
Kudos to Fred for his great work, but if he cannot maintain the current tree he should allow somebody else who has time to do it, even if he maintains an oversight role – that’s just my feeling. I was a distro developer for five years and if you develop a distro you take on a responsibility to the users who depend on you. If you are not able to maintain that responsibility anymore you should quit and let somebody else take over. Sometimes, stepping aside from your own pet project is just a part of the way free software works.

Aug 19 2008
 

The announcement of KDE4.1 in slackware-current was the push I needed to go back to my old friend. But, woe is me, slackware does not support 64-bit platforms natively, and 32-bit software on a 64-bit CPU is slow.
Even if that software is slackware :p

So what to do? Well, there are two major unofficial slackware ports for 64-bit platforms: slamd64 and bluewhite64. I chose slamd64 primarily because it’s not a pure 64-bit OS and has 32-bit compatibility libs included – a lot of work by one very cool student named Fred. Now, just one major dev may be scary with some distros, but it didn’t bother me too much with slamd64; after all, slackware itself only has one major dev and it’s the oldest surviving distro in the world! And after all these years, still cool.

But there is one catch: Fred has not had time to keep up with slackware-current, so right now slamd64 doesn’t have KDE4.1 available, since the testing tree doesn’t exist.
I decided to do something about it. Now, I don’t have that much time either, so I sure wasn’t going to port the entire slackware-current to the slamd64 structure – but I did want KDE4.1.

I decided to build it. I am about halfway now, with a really proper QT4 package finally compiled. It’s going to take a few days still as every package needs to be compiled and tested many, many times as I hack at the slackbuilds.

I used the slackware current sources to base my packages on, but I have made some crucial decisions.
1) Since I didn’t want to port the whole slackware-current alone – or live without KDE for that many weeks – my packages are not only being crossported to x86_64, but also backported to slamd64 12.1. Only where a library absolutely has to be upgraded am I building anything outside the testing/kde tree. So far the only libraries upgraded beyond pure slamd64 as it comes from the disks are fontconfig and freetype – needed for nice antialiasing in the new QT.
2) My packages do not include a QT3 backward-compatibility lib as a separate package. I tried a dozen times, and the changes Pat made in the compat package just aren’t compatible with the slamd64 way of doing things; it keeps breaking all your libs, meaning lots of reinstalls. Instead, I enabled qt3support in QT4, which is mostly SOURCE backward compatible. This means it will take some more work than usual to get KDE3 packages to play nice, because if they were compiled against stock QT3 they are not binary compatible. Sorry folks, nothing I can do about that – I spent hours trying. If somebody else feels like giving it a go once the packages are out, I’ll be happy to include it. I’m going to compile my kdelibs3 against the backward-compatible QT4 though, for what it helps. To make a package work, you will need a dedicated KDE3 machine to build it on (as with Pat’s packages), but you will need to install my QT4 package on it – which means it has to be a 64-bit machine running slamd64 – and then you will have to change the QTDIR path to make sure you link against my QT in your SlackBuild script (see the sketch after this list).
3) This is not an official part of slamd64 and I have no expectations of support from Fred, he works hard enough already, it’s something I am doing because I want it badly, and I’ll be sharing the results because that way I may save some other people from having to either forgo KDE4.1 or change distros (equal tragedies methinks).
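To illustrate the QTDIR change from point 2: in the KDE3 package’s SlackBuild, before configure runs, point the build at my QT4 instead of the stock QT3 (the path below is only an example – use wherever my QT4 package ends up installing):

    # link against the qt3support-enabled QT4 rather than stock QT3
    export QTDIR=/usr/lib/qt4          # illustrative install path
    export PATH=$QTDIR/bin:$PATH
    export LD_LIBRARY_PATH=$QTDIR/lib:$LD_LIBRARY_PATH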

Once all the packages are built, I’ll create a proper slapt-get-friendly repo and put it up on this site for others to use. I am calling my project slamDUNK, which stands for slamd64-UNofficial-K (the K of course for KDE).
Thanks to Fred personally for his advice right at the start which got all this going in the first place.
The packages are not dependency tracked (sheez, I only have so much hobby time) but I will add a metapackage which will have slapt-get dependencies on all the others in the right order so that you can install KDE4.1 as easily as possible from slamDUNK.
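For those who haven’t used slapt-get dependencies before, the metapackage trick comes down to shipping a slack-required file inside the package that lists everything else; something along these lines (package names and versions here are only illustrative):

    # install/slack-required in the kde4 metapackage (sketch)
    qt4 >= 4.4.0
    kdelibs >= 4.1.0
    kdepimlibs >= 4.1.0
    kdebase >= 4.1.0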

I also intend to add a few other interesting packages I build to the repo over time, though they will be in an extras directory, as this repo will remain primarily focussed on providing KDE packages. I will also share my modified SlackBuild scripts, but beware – they are UGLY right now (I am pushing for time here – I’ll clean them up for round 2); hopefully they will help others who wish to expand the project.
And yes, this is going to be an ongoing project for some time – at least until KDE4.X is part of a mainstream slackware release with an official slamd64 port. Even then I will seriously consider doing weekly packages from trunk or something for bleeding-edge people if there’s enough demand.

That’s the great thing about free software. Sometimes you do it because you should. Sometimes you do it because it’s fun and challenging – and sometimes… it’s both – and you have a great time with not nearly enough sleep.
I have been using end-user-aimed desktop GNU/Linux distros for so long that, without realizing it, I had begun to get bored with GNU/Linux… things always just working is convenient, and was important when I ran a company – but now my computer at home is mostly a place to play and learn again, and it took going back to slackware to remind me how much I actually like fiddling, making hard things work and figuring out tough challenges.
Using KDE is easy – packaging it is challenging, but some of us enjoy that challenge. The lovely thing about GNU/Linux is: whichever you are, we have a distro for you.

I’ll keep everyone posted on the progress. I am hoping to have a working system in another day or two, then just some testing, and if all goes well the repo should be up by the weekend.

Nov 16 2007
 

Thin-client technology has massive advantages in many usage cases. Pretty much everybody knows I think that. Linux also has enormous power to do things which would generally not even be possible on (most) other operating systems. Combine the two (as with LTSP) and you have one sweet solution for many tasks. Call-centers, schools, cybercafes and community centers are primary examples of what people regularly use this for – and with good reason.

But thin-clients do have some pitfalls as well (the same applies to many similar technologies, such as 4-in-1 style setups). The advantages of running all the apps on a central powerful server are many, but they come at a price – all the apps are running on a central powerful server. This isn’t usually any problem at all, but it does become one when you are dealing with more advanced networking issues. Most specifically, you cannot do per-station firewalling, for example. Even that is of no real concern; after all, the kind of environment where thin-clients work tends to have pretty much identical use-cases for all the machines anyway. But one thing does get lost, and it has been an essentially unsolved problem for many people for a long time now: traffic accounting for LTSP users.

The core problem is this: on LTSP all your network apps appear to have a single source address and interface – this makes traditional traffic accounting tools useless as they cannot differentiate the origin.

Luckily, there is an answer, which I found after several months of struggling. In short, to do traffic accounting for LTSP you have to force your network packets to come from a different source – specifically, from the LTSP terminal itself.

Here’s how (this is a conceptual explanation – the exact implementation will vary from site to site and is left as an exercise for the reader). I have done one, so I know it works, but there are several variations you could use.

1) Get the redir package for slackware 9.1 from linuxpackages. The reason we use this older package build is to make sure the glibc version matches that of LTSP. Extract the tarball and copy the /usr/bin/redir file into your LTSP tree.

2) Set up squid on the LTSP server, listening on port 3128.

3) Copy redir into your /opt/ltsp tree and write an LTSP bootscript to start it, listening on port 8080 and forwarding to port 3128 on your LTSP server’s internal IP (see the sketch after this list).

4) Set up your browsers to use a proxy at the LTSP client’s IP, on port 8080 (you may have to use static IP assignments in DHCP to do this).

5) Voila, you can now do source-based traffic accounting for all your web protocols.

6) Other protocols are easier: just set up a redir to listen on the port for the service and forward to the provider of the service (for example a jabber server) on the same port. Then modify the client applications to connect to their LTSP client addresses instead.
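As a concrete illustration of step 3, the bootscript on each terminal boils down to one redir invocation; something like the sketch below (the server address, ports and script location are examples, and your redir build’s option names may differ slightly):

    #!/bin/sh
    # Illustrative LTSP client bootscript: listen on the terminal itself (port 8080)
    # and forward everything to squid on the LTSP server, so the traffic
    # leaves the network with the terminal's own source address.
    /usr/bin/redir --lport=8080 --cport=3128 --caddr=192.168.0.254 &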

 

The approach as described here is far from complete: it works by source IP, which has to be set up in the clients at the user level – not perfect, so there is much room for expansion and improvement (at the site where I have done it the terminals have automatic logins, so this is not an issue) – but it at least gives a viable way to overcome the ‘same source’ problem.