Oct 28 2014

If you ask any geek about their browser, you'll get one of several answers, but if you ask about addons
there is one consistent theme: all of them use some kind of adblocker. Technically savvy people don't
see ads on the web anymore, and generally this has made browsing much more pleasant for them.
It has also reduced their risk of spyware and other malware infections.

So far so good, but could there be a downside to this? Not seeing ads means most engineers don't
see how targeted they've really become, don't experience the amount of data collection that
this reveals – and thus have no itch to scratch on the underlying data collection itself.

Private companies collecting data for targeted marketing have been shown not to be trustworthy
with that data; we know they've been happy to sell it to third parties – including governments
and government agencies like the NSA.
Some geeks have been warning about this for ages – Richard Stallman predicted it in 1983,
30 years before Edward Snowden revealed it was happening – and the organisation he started
to fight for free software was in part motivated by trying to prevent this risk.
It is still one of the organisations at the forefront of fighting to reclaim our privacy, with
projects like diaspora and MediaGoblin (which I wrote a short piece about last week).

But for some reason, even now, after Snowden's revelations, these FSF projects aren't getting
mainstream traction among geeks. There is still not enough drive behind them. It's becoming
ever more clear that there is no political solution to this issue – yet the technical ones
are struggling due to a lack of contributors.
Many of the very best engineers are actually working for the biggest culprits!

Why is this? Why do engineers not feel the need to contribute to, make use of, and drive
technologies to end these corpo-government intrusions into our private lives? I think it's in
part because even good things can have unintended consequences. It's just possible that,
unlike everybody else, the one group who could appreciate the visible evidence of data
collection and infer the scale required to do it are not seeing that evidence, because
years ago they started blocking the channels it exists on (since those channels are annoying).

Now I would never advocate that we stop using adblockers; if anything, I would advocate that
we get them more widely used (if enough people use them, the advertising market would
collapse and a lot of the monetary reasons for data collection would disappear) – but in its
current state as something mostly used by tech-savvy geeks and engineers, adblocking may actually be
having a negative side effect: making those most capable of finding solutions to these issues
less aware of the problem and less motivated to solve it.

So, no, don't uninstall your adblocker, but remember why you wanted it in the first place and
help us bring about a new, truly peer-to-peer internet. Let's contribute to the FSF projects fighting
to change the way people share things online so that, once again, users can control what
they share with whom.

Nov 05 2010

So Fedora 14 coming out meant I wanted to try it. I've been running F13 on three machines so far: my work laptop, my media-player machine and my gaming desktop. On my work laptop the upgrade went smoothly and it runs beautifully; the reasons why I first switched to it (resemblance to the RHEL systems running on the servers) still apply – and I have gotten pretty adept at Fedora's little quirks, so I'll keep it there – it works wonderfully in the office. The media machine is barely affected by the choice of distro, because once set up the only software on it that matters much is XBMC, so I won't be installing any upgrades on it soon – it's not like it's ever going to be at risk of security breaches – all it does is play movies.

My gaming desktop however was another matter. From Fedora 13 to Fedora 14 there was a regression in usability, on the kind of setup I have there, so extreme that I couldn't bear it. Upgrading failed miserably, leaving the system barely functional, so I did a clean install… and the problems didn't go away (I suppose not using the live media made it harder, but Fedora's design means that if you want to save bandwidth by reusing the download you already did, you can't use live media at all) – either way, the nouveau driver, while coming along nicely, is simply not good enough yet at the primary task (accelerating 3D) to use for gaming. Bugger. That's where things got hectic. It took hours of figuring out and googling to get the nvidia driver to work at all – and then it would only work on one screen at a time – so much for the lovely dual-screen setup I've used for nearly 3 years now!

Fedora's pulseaudio has been my biggest annoyance with it ever since F12, as I still think pulse is a solution looking for a problem, not finding it, and thus creating a whole bunch of new ones instead. Fedora 14 however proved to be a massive headache on every level. I don't much blame Fedora for the nvidia difficulties – that's nvidia's fault for not having a free driver, and the third-party packagers' for doing the worst job they ever did with it – but yum and packagekit reached new levels of broken integration, and the upgrader originally didn't bother to update my repositories (not even the official Fedora ones) to know I'd changed releases… basically, I'm sorry, but F14 is the worst desktop release Fedora has ever done, and it's completely useless for my home desktop. It seems to work fine for the business-oriented usage of my laptop however; if that's all Fedora developers care about, then that's all I'll use their work for.

By 10pm last night I was simply too frustrated to keep fighting with it – I actually had other things I wanted to do on my computer this week and I wasn't getting any of them done. So I decided it was time for a new distribution – fast. I decided it was time to see how far kubuntu had come since I last saw it. Now my history with Canonical's distribution(s) has been shaky. Five years ago I got a copy of the first ubuntu release and it's safe to say I couldn't see what the hype was about. OpenLab was a far more advanced distribution both in terms of ease of installation and ease of use at the time, and ubuntu's massive resources made this inexcusable – I was one man and I outdid them. Yes, I'll back that up. Just one example: ubuntu came on two CDs – one live disk and one install disk (which was text-only…). OpenLab came on a single CD, an installable live CD (in fact it was the very first distribution ever to do so; it had been possible to install earlier live disks like knoppix manually, but OpenLab had an easy graphical installation built into the very CD from version 4 – which came out at the same time as the first Ubuntu).

Over the years I would sporadically try the Canonical systems again. Kubuntu, the KDE version, developed a reputation among KDE users and developers as the worst choice of distribution for KDE users – it had barely any resources compared to the many in Ubuntu, and was buggy, slow and badly configured, with horrible theming and broken defaults. Well, I tried it again last night – and credit where it's due. After 5 years, Canonical has finally impressed me. This is one solid distribution; kubuntu finally doesn't suck – and in fact it worked more smoothly than Fedora by a massive margin. I had everything set up to my liking in under an hour, including the custom things that I usually want to do. The old "thou shalt not touch" policy has been abandoned and instead the system made it easy to find out how to change what I needed to get what I wanted. I had my chosen display setup in seconds. The only glitch was with nvidia-settings not wanting to save the changes, but that was easy to fix (copy the preview xorg.conf file into a text editor, save it, and copy it into place). When the only bug I found is in software that Canonical cannot fix even if they want to (though it's odd that I've never seen the glitch anywhere else before), it's not their fault.
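The workaround boils down to a couple of file operations. A minimal sketch, with stand-in paths so it can run anywhere – in practice the target is /etc/X11/xorg.conf and requires root:

```shell
# Sketch of the manual workaround when nvidia-settings won't save:
# capture the preview config it shows, then copy it over xorg.conf.
# Paths here are stand-ins; the real target is /etc/X11/xorg.conf (root).
set -e
workdir=$(mktemp -d)

# Simulate the preview nvidia-settings displays (normally you'd paste
# this from the preview window into a text editor and save it).
cat > "$workdir/xorg.conf.preview" <<'EOF'
Section "Device"
    Identifier "nvidia"
    Driver     "nvidia"
EndSection
EOF

# Copy the saved preview into place as the live config.
cp "$workdir/xorg.conf.preview" "$workdir/xorg.conf"
grep -q 'Driver.*"nvidia"' "$workdir/xorg.conf" && echo "config installed"
```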

It gets better.

I can't find any sign of pulseaudio anywhere. Despite their initial bullying "you will like it because we tell you to" attitude about it (which led to at least one Ubuntu MOTU resigning), Canonical seems to have finally listened to the masses of users telling them that pulse is broken, doesn't add significant value and makes our lives harder. Pulse is gone! I am back to good old works-every-time ALSA sound and it's a thing of beauty! Chromium is in the default repositories – so no need to go download it manually like I had to on Fedora – and Amarok seems to work a lot better than it did on Fedora (read: it was so bad there that I ended up using rhythmbox in KDE rather than deal with it!).

Well done Canonical – you finally built a distro as good as the one-man projects out there. You actually, finally, seem to have let your squadron of ubergeeks listen to your users, listen to your community, and you've built not only the best release I've ever seen from you but, in my opinion, one of the best distributions currently on the market. I still think it's a major issue that you don't meet the FSF's criteria, especially because you are at a point where everything works so well that I think most users could actually cope just fine if you did – you'd not be sacrificing any major functionality anymore. A few edge cases (like hardcore gamers) may want or need something that you wouldn't be able to support in repositories anymore – but then, those edge cases are almost by definition quite capable of figuring out how to add just the one bit they need. You've got an amazing distribution – it took you five years of lagging behind almost every other unsung desktop distribution (PCLinuxOS kicked your butts for years, Mint has outdone you every time, Kongoni was a better desktop distribution – and that was targeted at hardcore geeks of the gentoo-on-a-desktop variety) – but you've finally built a distribution that deserves the market-leading position you are in.

I admit it: Canonical did a damn good job on Kubuntu with 10.10, and I will, for the first time ever, be comfortable recommending it to newbies. Well done to the developers – and keep up the good work.

Oct 29 2008

Today I posted this message to several of the LUGs in South Africa. I am reposting it here without edits.

Hi Everybody,
Sorry for the cross-post, I promise it’s a once-off but this is a bit of a special circumstance.
In the grand tradition of GNU and later the Linux kernel, I am beginning with a mail to announce
my intentions, and a request for anybody who shares my vision to help out.
The interest in my CLUG talk about distribution creation some time ago left me thinking that
perhaps there are enough people out there (particularly here in South Africa) who may feel up to
the fun and work of helping to create something special. Having spent 5 years creating a
successful commercial distribution, I believe I have the skills for such a project to be workable, though this one is meant to be very different as you’ll see.

Starting in the next weeks I want to create a GNU/Linux distribution called kongoni. Kongoni is the
Shona word for a gnu (wildebeest) and this represents the origins of the system: firstly it is African,
secondly it is meant to be a truly free distribution of the GNU operating system.
The name in other words translates literally as: GNU Linux :) (I rather like the wordplay as well).

Fundamental to the design will be an absolute commitment to free software only. That means we will not
include in the installer, nor in the ports tree or any other officially distributed packages, any piece of
software that is not under an FSF-approved license.
Some degree of the workload can be shared by utilising (and contributing back to) gNewSense's list (and blacklist).

Development releases will have a kernel compiled with the no-taint flag – not allowing any non-free drivers to load –
which will be very useful for auditing purposes; where possible we will provide free alternative drivers.
UPDATE: I should have been more clear here. I mean ONLY development releases will have notaint; official releases will not restrict what users can or cannot load.
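The no-taint idea builds on a mechanism stock kernels already expose at runtime: the taint mask in /proc/sys/kernel/tainted, whose bit 0 (flag 'P') is set once a proprietary-licensed module loads. A small sketch of checking it:

```shell
# Check whether the running kernel has been tainted by a proprietary
# module: bit 0 of the taint bitmask (flag 'P') is set when a module
# with a non-free license is loaded.
taint=$(cat /proc/sys/kernel/tainted)
if [ $(( taint & 1 )) -ne 0 ]; then
    echo "kernel tainted by a proprietary module"
else
    echo "no proprietary modules loaded"
fi
```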

Where possible I want the system to actively contribute to high-priority free software projects like Gnash and Nouveau,
not least by providing automated scripts in the packages to allow even non-technical users to file automated
bug reports to the projects, with useful information for their needs – thus possibly increasing the number of testers
exponentially. The improvements that arise will in turn benefit all free software users and developers.
The system will never be commercial. I have no problem with commercial free software (in fact I run a commercial free
software company), but this project would best benefit from being a true community project. If the need arises to
formalize structures, I pledge that it will be done by registering a charity organisation, or joining an existing one
– not by starting a company. If people some day want to start companies that sell services related to the system, however,
more power to them.

Now on to the initial technical details. First off, I don't think there is any room in the market for yet another Ubuntu
respin. Ubuntu is a nice system in many ways, but that need is met – and gNewSense already provides a fully free alternative
for fans of Ubuntu. Instead I believe there is room for new ideas and new thinking.
To this end I want to start with a slackware/bluewhite64 baseline initially targeting x86_32 and x86_64 platforms.
Slackware has many advantages as a baseline and offers enormous power of (easy) customization to give the system a real
unique identity while staying true to standards.
The biggest catch is addressing slackware’s number one shortcoming for desktop users: the limited package manager.
To address this, and also minimise the workload of multiple platforms, I intend to use portpkg to provide a ports tree
that is fully tracked for dependencies. Among my first coding tasks will be a full graphical frontend for portpkg as well
as a series of patches to portpkg itself to allow us to maintain our own ports trees as default. These will consist
of license-audited and dependency-mapped clones of the slackware/bluewhite64 repositories for upstream, and source-only
ports for 3rd-party packages. It is important to maintain our own ports tree since unfortunately all the default ports
available in portpkg include non-free software in their package lists. While we cannot (and should not) prevent users
adding those repositories and installing such proprietary packages – we should not give this action any official support.
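As a rough illustration of what a dependency-mapped ports tree buys: given pairwise dependency records, a valid build order falls out of a topological sort. The package names and file format below are hypothetical, not portpkg's actual metadata; coreutils' `tsort` does the ordering.

```shell
# Illustrative only: derive a build/install order from dependency pairs.
# Package names and the "dependency dependent" line format are
# hypothetical, not portpkg's real metadata layout.
set -e
workdir=$(mktemp -d)

# Each line reads: "<dependency> <package that needs it>"
cat > "$workdir/deps" <<'EOF'
zlib libpng
libpng gtk
glib gtk
gtk gimp
EOF

# tsort emits every package with dependencies before dependents,
# i.e. a valid order in which to build the ports.
tsort "$workdir/deps"
```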

The initial default desktop will be KDE4, with the intention of including KDE 4.2 (due in February) in the first stable release
if possible. OpenOffice.org 3.0 is on the standard packages list, and if the promised GNU/Linux port of Chromium is available by
release time it will be the default browser; otherwise one of the free Firefox forks will be.
An absolute must is a powerful and complete system administration and configuration tool,
utilising things like darkstarlinux's ALICE suite to complement a full kit for user admin,
setting up advanced Xorg settings (like multihead) and other common admin tasks. To ensure
seamless wireless and wired network roaming, wicd will be a default package (along with madwifi with the new free ath5k HAL for older cards, and the newly GPL'd HAL from Atheros as well).

It is quite possible that, if we have enough volunteers and resources, future releases could include parallel versions for
Gnome, Xfce, Enlightenment etc., and I am happy to include these in the ports tree if somebody helps create the ports.

In terms of project admin, I wish to set up a suite of easy-to-use web apps for contributing, auditing and approving
of ports (the first should be open to all, the latter two to trusted testers only), designed to make the task
of contributing in this manner not only as simple as possible, but to minimize the time needed as far as possible – so
that those who choose to contribute their spare time can spend as much of that time as possible doing fun stuff
and as little as possible doing drudge work.

The focus of the project is home and desktop users. There are other distros aiming at this market, but precious few
with a stated mission to be completely free, in both senses of the word.
After freedom, our second most important design principle should be "it just works".

Now of course, as I type this, Kongoni is vapourware; the first line of code has yet to be written (though I've done
significant amounts of research to make the decisions above, and I have written an installer).
Normally it isn't my style to announce something until the first pieces are written, but in this case I
find it crucial to the very concept that other people be involved from the start. I have proposed a vision
(not technically an uneditable one) and I want to see who shares my vision and would like to contribute to its
realisation. I will be happy to fund hosting for the project and contribute much of my free time to its realisation,
but I would like to have as many people helping as possible, so that this is not just my vision but our vision:
people who can suggest ideas and improvements, people who can help realise those ideas and help with the
large workload ahead.

If just a few people say “I’m in” – then that’s a go-ahead as far as I’m concerned.

The most useful skills right now will be:
*Web-app programming and web-design
*Ports builders and co-maintainers of the tree
*Graphic design

These will likely get official lieutenants appointed on a first-come, first-served basis.
There is much more to do so if you feel that you can contribute something please feel free to speak up.
If any of the mirror maintainers would be willing to host local mirrors of the ports tree and ISOs when
we get to release time, please let me know, as I have learned from hard experience how even a small distro
release can hit a server.

May I request that those who wish to contribute also reply to me directly, as I do not want any
names to get lost in the noise as people discuss the idea.
Finally, I would like to suggest that those who are in Cape Town (once we have a list) meet up
for a face-to-face planning session. Perhaps over coffee on Saturday somewhere in Rondebosch?

Thank you for reading this far :)
I hope to hear from you.


Oct 10 2006

By nature of what I do, I get to read literally hundreds of research projects funded by various bodies about the viability of FOSS usage in Africa and the potential problems with it. I assume that similar research is going on constantly in other developing nations – doing it has certainly become a cash cow here. So why are they all exactly alike? The obvious answer – that they all discovered the same basic facts – just doesn't hold water, because they all ignore the same facts as well, and they all seem to make the same basic fatal flaws.

The first fatal flaw is to judge economic effect over a very short term. Any economist can tell you that the economic effect of anything takes an average of at least five years to become truly visible. When you're talking about something like FOSS in Africa – where its usage has only really started recently – not only is the timeframe far too small for measuring economic impact, the sample space is far too small to measure anything. Each failed project seems grossly out of context in such a small sample space. But the figures are, if anything, much better than they were in the early years of the FSF! If you do a comparative study like that, African FOSS projects have a remarkably high rate of success, and such impacts as can be measured are very hopeful. The projects to roll out in schools, for example, have the potential for a massive economic impact – but what that impact will actually be cannot be empirically determined before the first of those children leave school. There is absolutely no doubt that in a country where very soon the majority of computer-literate school-leavers will be specifically literate on FOSS systems, this impact is going to be measurable.
But even there the eventual results won't be visible in the first year. At first, in fact, expect growing pains, since most companies there are not using FOSS yet – their next potential workforce will be skilled in, and hopefully advocating for, something those companies are not yet familiar with. In a few cases the companies will switch. In many they won't, and some of those first school-leavers will be sucked back into a proprietary world. In five years though – when just about every potential employee only knows FOSS – it will not be viable for companies to insist on proprietary systems, or to retrain them all, and most likely almost all will be forced to switch (never underestimate the power of the workforce).

What will be the economic benefit of this? Who knows. Potentially, the country can become a major contributor to the international software market, an export income it sorely lacks at this stage. At the very least it will have the capacity to become self-sustainable in its software needs, something very few countries (in fact, just one) can claim. Of course the proprietary vendors always downplay this – in fact they tell us that we need to strengthen patent laws and copyright penalties, and write things like DRM into law, to grow our IT economies. There is absolutely no evidence to support their claims in this regard, of course – and even if they were true, they could only be half-truths. Doing so will help foreign software providers make bigger profits – and maybe a few local companies will develop international-quality software in a vacuum and export it. At best we may see one or two local big corps benefiting. There is no way these laws help SMEs, though – and that is where growth lies. Wealth comes from entrepreneurship. The developing world needs a lot more entrepreneurs, and laws made in such a way as to make it easier for people to start new enterprises and make them successful; such laws can have a positive economic impact.
And the realities are, as a software developer in a small FOSS company, I can state unequivocally that our costs are dramatically smaller than they would have been in the proprietary world. Not paying for licenses saves us initial expense. Using hardware for longer saves us running cost. But more than any of that, being able to build on the works of others saves us massive R&D costs. We could have created the OpenLab (http://openlab.getopenlab.com/) OS, OpenBook (http://openbook.getopenlab.com/), eduKar (http://edukar.getopenlab.com/) and such entirely alone – but there is no way we could have done so with our small staff in just four years! We could do it because in each of these products there is a massive amalgamation of code by people all over the world – we just built on top. That is an incalculably huge saving. People want proof that FOSS developers can create better products in a shorter space of time? Analogous papers on the subject show this better than I could. So when will we see research on that? Or on how best to serve the human rights issues highlighted by the FOSS community? For once, the developing world has the opportunity to lead rather than follow – to be champions of a forgotten human right. Will we grasp this opportunity? Shouldn't the research papers be working out the best ways we can?
