Feb 28, 2013
 

I've been an occasional, but eager, Dungeons & Dragons player for many years. However, I did not get the full view of what being a dungeon master truly entails until recently, when I agreed to become DM of a new campaign I launched with some friends. It was hard work – there are so many rules, dice rolls, modifiers and class bonuses that, if you don't have a lot of experience, they are a nightmare to keep track of.

Now of course, I'm a programmer, and programmers never do a difficult thing twice if we can get a computer to do it for us. Some googling turned up quite a few useful tools for tabletop gaming and DM-ing, but none of them were what I really wanted – something I could run on a netbook at the table to ensure things like combat go correctly, with all the rules and modifiers applied, but that would not get in the way of storytelling. It shouldn't even require that you use computer dice.

So, I set out to write one. EZDM is written in Python, using JSON files for all data – JSON stores the rules, the character sheets, the modifiers – everything. The current system contains all the data needed for DM-ing an AD&D 2E game. I have no plans to add other rule sets, but if anybody wants to, it should be quite simple, provided you can operate a text editor and do a quick study of the JSON format (you would also need to adjust the code, as a few bits are still hard-coded – this may change in future).
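To give a feel for the data-driven approach, here is what a character sheet in such a scheme might look like. To be clear, the field names below are my own illustrative guesses, not EZDM's actual schema:

```python
import json

# A purely illustrative character sheet -- EZDM's real field names may differ.
sheet_json = """
{
    "name": "Thalia",
    "class": "fighter",
    "level": 3,
    "hitpoints": 24,
    "thac0": 18,
    "armour_class": 5,
    "saving_throws": {"death": 12, "spells": 15}
}
"""

sheet = json.loads(sheet_json)
print(sheet["name"], "THAC0:", sheet["thac0"])
```

The point is that anything expressible as plain JSON like this can be edited with nothing more than a text editor.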

 

The system's data needs are cut down by its very core design of "not getting in the way" – nothing is enforced more than absolutely required. So, for example, the DM can choose to create custom modifiers for an attack on top of (or instead of) what the program knows about, to account for things that came out of the story. The system can use either automated or manual dice with equal ease as well.

EZDM provides a simple character sheet creator. These sheets don't store all the data about a character (things like inventories remain on the paper copies), but they do store everything needed to operate combat sequences correctly, including, for example, multi-turn spell-casting and interruptions. By simply creating sheets for all your players and whatever monsters you have planned, you can have every combat go exactly as it should.

 

There is a specialized viewer for the character sheets included to let you quickly glance at them, and the character sheet maker also works well for editing sheets if you aren't comfortable editing JSON by hand.

The next tool grants XP to characters. It automatically adds the proper XP and levels characters up when they reach the correct points for their class – it then ensures they get the right number of hit dice added to their hitpoints, and finally reminds you to check the DM guide for updates to their other abilities (I may add these in a future version so they can be displayed, and possibly even stored).
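The level-up logic amounts to walking a per-class XP table. A minimal sketch of the idea – the thresholds below are illustrative placeholders, not the real 2E tables, and the helper names are mine, not EZDM's:

```python
import random

# Illustrative XP thresholds for one class -- NOT the actual AD&D 2E tables.
FIGHTER_XP_LEVELS = [0, 2000, 4000, 8000, 16000]

def level_for_xp(xp, thresholds=FIGHTER_XP_LEVELS):
    """Return the character level implied by total XP."""
    level = 0
    for threshold in thresholds:
        if xp >= threshold:
            level += 1
    return level

def grant_xp(character, amount):
    """Add XP; for each level gained, roll a hit die and add it to HP.
    Returns the number of levels gained."""
    old_level = level_for_xp(character["xp"])
    character["xp"] += amount
    new_level = level_for_xp(character["xp"])
    for _ in range(new_level - old_level):
        character["hitpoints"] += random.randint(1, 10)  # fighters roll a d10
    return new_level - old_level
```

Anything beyond hit dice (new spell slots, improved saves and so on) is where the tool instead points you back at the DM guide.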

The final tool is the quickcombat tool, which works in a very simple linear manner. You load up all the characters (and monsters) involved in the combat, choosing which ones use automatic dice and which use manual dice. The system then handles initiative rolls and lets the combat commence; each character in turn chooses to attack, cast, flee or heal. Remember that this is a tool for dungeon masters, so players must not use it. With the heal option, a dungeon master can account for things like a player drinking a healing potion during the fight – tell it the dice type to roll and the system will restore the rolled HP to the character. The flee option is completely without rules: the DM simply informs the system whether the flight is successful or not. If it is, the character is removed from combat (if it is not, the DM will probably give an "attack from rear" modifier to the next person who attacks him). Casting simply asks the number of turns (and correctly accounts for spells that take rounds rather than turns) and the target of the spell (characters can target themselves), then reminds the DM on each turn that the character is still casting; when the count reaches zero, the "complete spell" option becomes available. Choose this and it will roll for spell success based on the character's correctly calculated spell failure rate. The DM can then choose between basic spell effects: healing spells (which operate exactly like the heal option, but heal the target, whether or not that's the caster) and damage spells (which ask for the maximum damage and roll it, handle the target's saving throw against spells, and subtract the right hitpoints if all goes well).

Finally, the attack option. When choosing to attack, the DM can immediately choose which specific modifiers apply to this turn. Has the enemy turned his back? Has the attacker reached higher ground? You can add any of the standard DM guide modifiers automatically, or create a custom modifier (with just the number) to account for anything else that affects the attacker's chance to hit and is created by story rather than a specific rule (like I said, this is a DM helper tool, not a replacement for DM-ing properly). Then it will do an attack roll, calculate the attacker's THAC0 and the defender's AC – including all appropriate modifiers – and work out if the attack succeeds. If it does, it will likewise handle damage rolls and possible saving throws (including saves against death).
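The core of a 2E to-hit check is simple enough to sketch in a few lines. The rule itself is standard AD&D 2E; how EZDM actually structures this internally is my assumption:

```python
import random

def attack_hits(roll, thac0, target_ac, modifiers=0):
    # AD&D 2E to-hit rule: the attack lands if the modified d20 roll
    # is at least THAC0 minus the target's armour class.
    return roll + modifiers >= thac0 - target_ac

# Manual dice: the DM types in the physical roll.
# THAC0 18 against AC 5 needs a 13 or better, so this one lands.
print(attack_hits(roll=14, thac0=18, target_ac=5))

# Automatic dice: let the computer roll, with a +2 story modifier
# (say, the defender has his back turned).
print(attack_hits(random.randint(1, 20), thac0=18, target_ac=5, modifiers=2))
```

Both the standard DM-guide modifiers and ad hoc story modifiers just fold into that single `modifiers` term.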

All this can be done in the console, or with a GUI – and you can choose :D 

The program is now at version 0.0.5, and this is the first stable public release of a program I have been working on extensively for weeks. At this stage I am not aware of any remaining bugs in the extant features. It is released under GPLv3 as free and open source software, for anybody who can use it or would like to help improve it. It has been mentioned to me that this code could easily be used as the basis of a computerized RPG, and I do have some rather fun ideas in that regard – but more on that later. The next major feature is a map editor/viewer which will store maps as smart JSON files with object references. The code for that is fully written in my brain but still needs to be coded and tested, and I didn't want to further hold up the public release of the current program while working on that.

The program should work fine under Windows, but as I have no Windows machines I cannot package it for that platform – if somebody feels up to doing so, please respond to the issue on GitHub. For GNU/Linux users, you can grab and install the sources directly from GitHub, or for Ubuntu/Debian/Mint users there is a regularly updated PPA you can grab it from. If somebody helps build packages for other distros, that would be great.

May 03, 2012
 

Inspired by Pinkhairgirl's blogpost today, I felt I should share the method Caryn and I use to keep our household budget in the green and control our spending. Many years ago my grandmother taught me that to do a budget you draw two columns: in one you write your expenses and in the other your income, then you add them up and compare them.

All well and good, but rather limited. It didn't allow for easy projections (if you're saving for something, it doesn't let you see how much you will have saved in six months, for example). It also doesn't easily deal with two of life's other major realities:

- Sometimes there are unplanned expenses you could not know about; you need to keep track of them so that you can see their impact in the following months.

- Sometimes you are forced to spend more than you planned, and other times you are able to spend less – you need an easy way to keep track of these differences.

 

So I started to set up a spreadsheet that would meet my needs. It evolved over several years into its current form. The spreadsheet as it stands has three columns for each expense: the actual money spent, the planned expenditure and the difference. It also splits expenses into regular and once-off expenses, and provides a series of formulas to allow for extensive forward projections. Each month is a tab in the sheet.
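The bookkeeping behind those three columns is simple. Here is a sketch of the idea in code rather than spreadsheet formulas – the category names and figures are made up for illustration, and the linear projection assumes your surplus stays roughly constant:

```python
# Each expense row: planned vs actual, with the difference derived.
expenses = {
    "groceries": {"planned": 3000, "actual": 3250},
    "transport": {"planned": 800,  "actual": 650},
}

for name, row in expenses.items():
    row["difference"] = row["planned"] - row["actual"]

# A positive difference means you spent less than planned that month;
# a negative total means the month ran over budget.
monthly_surplus = sum(row["difference"] for row in expenses.values())

def projected_savings(current, monthly, months):
    """Naive forward projection: savings after n months if the surplus holds."""
    return current + monthly * months

print(projected_savings(current=5000, monthly=monthly_surplus, months=6))
```

In the spreadsheet the same thing is just a subtraction formula per row and a sum per tab, carried forward from month to month.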

I took my own and stripped out all my personal budget data, then populated it with a very simple set of sample data (just enough to show how it is used – but easy to remove/replace). Now I would like to share it with others. Please feel free to make use of it, adapt it or improve it under a share-alike kind of concept. 

If you would like to have a look at it, you can download a copy here.

Nov 05, 2010
 

So Fedora 14 coming out meant I wanted to try it. I've been running F13 on three machines so far: my work laptop, my media-player machine and my gaming desktop. On my work laptop the upgrade went smoothly and it runs beautifully; the reasons I first switched to it (resemblance to the RHEL systems running on the servers) still apply, and I have gotten pretty adept at Fedora's little quirks, so I'll keep it there – it works wonderfully in the office. The media machine is barely affected by the choice of distro, because once set up the only software on it that matters much is XBMC, so I won't be installing any upgrades on it soon – it's not like it's ever going to be at risk of security breaches; all it does is play movies.

My gaming desktop, however, was another matter. From Fedora 13 to Fedora 14 there was a regression in usability, on the kind of setup I have there, so extreme that I couldn't bear it. Upgrading failed miserably, leaving the system barely functional, so I did a clean install… and the problems didn't go away (I suppose not using the live media made it harder, but Fedora's design means that if you want to save bandwidth by reusing the download you already did, you can't do so with live media at all). Either way, the nouveau driver, while coming along nicely, is simply not good enough yet at the primary task (accelerating 3D) to use for gaming. Bugger. That's where things got hectic. It took hours of figuring out and googling to get the nvidia driver to work at all – and then it would only work on one screen at a time. So much for the lovely dual-screen setup I've used for nearly 3 years now!

Fedora's pulseaudio has been my biggest annoyance with it ever since F12, as I still think pulse is a solution looking for a problem, not finding it, and thus creating a whole bunch of new ones instead. Fedora 14, however, proved to be a massive headache on every level. I don't much blame Fedora for the nvidia difficulties – that's nvidia's fault for not having a free driver, and the third-party packagers' for doing the worst job they ever did with it – but yum and packagekit reached new levels of broken integration, and the upgrader originally didn't bother to update my repositories (not even the official Fedora ones) to know I'd changed releases… Basically, I'm sorry, but F14 is the worst desktop release Fedora ever did, and it made it completely useless for my home desktop. It seems to work fine for the business-oriented usage of my laptop, however; if that's all Fedora developers care about, then it's all I'll use their work for.

By 10pm last night I was simply too frustrated to keep fighting with it – I actually had other things I wanted to do on my computer this week, and I wasn't getting any of them done. So I decided it was time for a new distribution – fast. I decided it was time to see how far kubuntu had come since I last saw it. Now, my history with Canonical's distribution(s) has been shaky. Five years ago I got a copy of the first ubuntu release, and it's safe to say I couldn't see what the hype was about. OpenLab was a far more advanced distribution at the time, both in terms of ease of installation and ease of use, and ubuntu's massive resources made this inexcusable – I was one man and I outdid them. Yes, I'll back that up. Just one example: ubuntu came on two CDs – one live disk and one install disk (which was text-only…). OpenLab came on a single CD, an installable live CD (in fact it was the very first distribution ever to do so; it had been possible to install earlier live disks like knoppix manually, but OpenLab had an easy graphical installation built into the very CD from version 4 – which came out at the same time as the first Ubuntu).

Over the years I would sporadically try the Canonical systems again. Kubuntu, the KDE version, developed a reputation among KDE users and developers as the worst choice of distribution for KDE users – it had barely any resources compared to the many in Ubuntu, and was buggy, slow and badly configured, with horrible theming and broken defaults. Well, I tried it again last night – and credit where it's due. After 5 years, Canonical has finally impressed me. This is one solid distribution; kubuntu finally doesn't suck – and in fact it worked more smoothly than Fedora by a massive margin. I had everything set up to my liking in under an hour, including the custom things that I usually want to do. The old "thou shalt not touch" policy has been abandoned, and instead the system made it easy to find out how to change what I needed to get what I wanted. I had my chosen display setup in seconds. The only glitch was with nvidia-settings not wanting to save the changes, but that was easy to fix (copy the preview xorg.conf file into a text editor, save it and copy it into place). When the only bug I found is in software that Canonical cannot fix even if they want to (though it's odd that I've never seen the glitch anywhere else before), it's not their fault.

It gets better.

I can't find any sign of pulseaudio anywhere. Despite their initial bullying "you will like it because we tell you to" attitude about it (which led to at least one Ubuntu MOTU resigning), Canonical seems to have finally listened to the masses of users telling them that pulse is broken, doesn't add significant value and makes our lives harder. Pulse is gone! I am back to good old works-every-time ALSA sound and it's a thing of beauty! Chromium is in the default repositories – so no need to go download it manually like I had to on Fedora – and Amarok seems to work a lot better than it did on Fedora (read: it was so bad there that I ended up using rhythmbox in KDE rather than deal with it!).

Well done, Canonical – you finally built a distro as good as the one-man projects out there. You actually, finally, seem to have let your squadron of ubergeeks listen to your users and your community, and you've built not only the best release I've ever seen from you, but in my opinion one of the best distributions currently on the market. I still think it's a major issue that you don't meet FSF criteria, because you are at a point where everything works so well that I think most users could actually cope just fine if you did – you'd not be sacrificing any major functionality anymore. A few edge cases (like hardcore gamers) may want or need something you wouldn't be able to support in repositories anymore – but then, those edge cases are almost by definition quite capable of figuring out how to add just the one bit they need. You've got an amazing distribution. It took you five years of lagging behind almost every other unsung desktop distribution (PCLinuxOS kicked your butts for years, Mint has outdone you every time, Kongoni was a better desktop distribution – and that was targeted at hardcore geeks of the gentoo-on-a-desktop variety), but you've finally built a distribution that deserves to be in the market-leading position you are in.

I admit it – Canonical did a damn good job on Kubuntu with 10.10, and I will for the first time ever be comfortable recommending it to newbies. Well done to the developers – and keep up the good work.

Nov 04, 2010
 

In a way, I could say every book I've ever read has changed at least some of my views on some things. It would be a pretty piss-poor book that didn't make me question at least some of my ideas, even if the changes it brings about are minor.

But I will focus on one that fundamentally changed my views about something I thought I already had right – and has been a guiding principle in my career and life ever since. In a break with tradition, I actually met the author of this book before I ever read any of his works. Back in 2001 I was a champion of the open-source idea. I spoke about the technical power that can be unleashed by sharing work and sharing eyeballs. I spoke about the security benefits – and even the fun of being able to customize. I avoided closed-source stuff, but I didn't actually think it was wrong – just… lesser.

Then I went to the first Idlelo conference to deliver a paper on a distributed educational content delivery network I had been developing. At the time it was groundbreaking stuff, which is why I got invited. The keynote speaker was Richard Matthew Stallman – a man I had long held in awe for his programming skill, for founding the GNU project and for writing the GPL, but who I felt lacked the bit of pragmatism that "linux" would need to become mainstream.

Sitting through his talk, though, the passion with which he spoke resonated with me. I found myself agreeing with him, and coming to the same conclusion he did: that free software isn't a nice-to-have, it's a right – perhaps, more practically speaking, an obligation of programmers to provide it. I was sold. I also laughed out loud and caught every joke in the Saint IGNUcius comedy routine, and frankly I think those rare few young "geeks" who freaked out about it a year ago are simply proving that they are utterly out of touch with the culture that created the very movement and software they claim to be passionate about – with its playful, anti-authoritarian nature, and with the fact that this nature is more crucial to its very existence than all the programming skill in the world.

If you can't take and make a joke, you don't belong in free software development; we can get better code out of a worse programmer who has a sense of humor.

Of course a lot of people wanted his time, so my initial opportunities to engage with him one-on-one were limited. Then I learned that he has a deep love of mountains, and offered to take him on a drive around the mountains of the Western Cape winelands. We spent the trip having deep and intense discussions; mostly I was listening like a student at the feet of a master, but sometimes I disagreed, and he could debate quite graciously (granted, none of the disagreements were about software freedom issues, about which I believe he is rather unyielding).

By the end of our trip and talk, he gave me a signed copy of a book containing his collected essays. I treasure that book. I reread it every now and then. I know every argument by heart and I have spent the past decade living by them. I am in occasional e-mail contact with him. While I was leading Kongoni development, whenever we had to make a judgement call about a piece of software I would mail him for his input and take it as a major voice. It was by his encouragement that Kongoni included software to play DRM'd media – software that is illegal to distribute in the USA or violates patents there. My country doesn't have software patents, and his advice was clear: do not give the bad guys more power than they have; you still have freedom from patents, you don't have a DMCA – let the tools people need to not be ruled by it be in your system.

A champion of open source became an unyielding advocate for free software, and I can say with pride that Kongoni was a fully free distribution under my leadership – and recognized as such by the FSF. When I handed leadership over to another, I did it on condition of his promise that he would maintain that status (of course, he is not under any legal obligation to – all I have is his word, but he's kept it so far). I had a look at the most recent release the other day – it's quite sweet; he's really done good work using what I built and building on top of that.

Believe it or not, I'm more proud of what was built out of my creation than of the creation itself. That I could write the first versions of that code is a matter of pride; that somebody else could write something better, because he could start where I left off, is a matter of greater pride. Newton spoke of standing on the shoulders of giants. The giant whose shoulders I stand on is Richard Stallman, and the values I adopted from him have allowed me to be a giant on whose shoulders somebody else could stand.

It really is a great book, by a great man.

 

Sep 04, 2010
 

Today I undertook the task of installing CyanogenMod 6.0 on my HTC Desire. The process can seem quite convoluted if you read the various docs, and what’s worse is that they are all quite Windows-centric. In fact, as I went along I found it to be much simpler than the docs make it seem, and thought I would document how I did it.

Rooting your phone and loading a different Android OS on it automatically voids the warranty. If these steps cause your phone to explode and blind you, then I take the same responsibility as the CyanogenMod developers do: which is to say, none whatsoever. You have been warned. Note that some parts of this were taken directly from the cyanogenmod wiki – small bits where I really couldn’t add anything useful.

The one mistake I did seem to make was buying a new spare 2GB microSD card for the experiment. In retrospect, I did nothing with it that I couldn’t have done with the 4GB (though that is qualified by the fact that I don’t keep important data on the SD card).

The first step was to back up my contacts. I went into the “People” application, hit Menu and then “export to SD card”; I did this for both Google contacts and SIM contacts. I then connected the phone to my PC and copied these files to my hard drive so they would be easy to recover if the data got lost somehow.

Most howtos suggest you will have to have a microSD reader – I am not so sure of this; I used one, but you can probably get away without one. I will say that it helped to bypass the phone for some steps, however. Even so, the small microSD card reader I bought cost me all of R70, so it was hardly a major expense.

You will need to download the rootkit for the phone, as well as make a goldcard.

Let’s start with the goldcard. There are many docs out there that use the Android SDK and make you do difficult steps that aren’t distro- or OS-neutral… basically, it’s a schlep – here’s the easy way. With the SD card you want to use, go into the Android Market and install GoldCard Helper. When you run it, this app will produce the code you need, then let you copy it to the clipboard, open the website, paste it and have your goldcard.img mailed to you in a very simple set of steps. Why do it the hard way when there’s a nice automated tool to do it for you and protect you from typing errors?

When you have the goldcard image, stick the SDcard in your card-reader, and copy the image file onto it with dd:
dd bs=512 if=goldcard.img of=/dev/sdd
Note that your disk device name may not be sdd – replace it with the right device name; running dmesg right after you plug the card in should show it to you.

Once done, mount the SD card and copy the update.zip from the rootkit I linked above onto it (note that this is the rootkit for bootloader 0.80 – if you aren’t sure what this means, read up on that first, as earlier versions need a different rootkit).

Now power down your phone, then boot it again while holding down the back button, making sure it’s connected via USB. You’ll get to the fastboot menu.

In the directory where you extracted the rootkit run this on your PC: ./step1-linux.sh

Go to Bootloader|Recovery

You’ll get to a black screen with a red triangle on it. Hold down the volume button and tap the power button. This brings you to the recovery screen. First run through “Wipe data”, then when this completes run through “Apply sdcard:update.zip” – make sure you are connected to the PC the whole time.
The process takes a while, but once it completes, pull out the battery and boot back up – your system is rooted. The Desire will run through its initial setup screen at this stage (all your settings having been wiped – I did warn you).

That’s the first phase: your phone is now rooted.

Now go to the Market and install Rom Manager. Once in Rom Manager, first install ClockworkMod, which Rom Manager uses to select boot images.

This takes a while.

Use the Backup option to back up your current rom.

This will also boot you into the ClockworkMod recovery system (Rom Manager lets you autoboot in here anytime). And this is where I got stumped – and had to google for an answer. Unlike HTC’s own recovery menu, ClockworkMod does NOT use the power button for select; you still move around menus with volume, but you select with the trackball.

Download the latest version of the radio (5.09.05.30_2).
Place the radio update.zip file on the root of your SD card.
Boot into the ClockworkMod Recovery.
Once you enter ClockworkMod Recovery, use the side volume buttons to move around, and the trackball button to select.
Select Install zip from sdcard.
Select Choose zip from sdcard.
Select the radio update.zip.
Once the installation has finished, select Reboot system now. The HTC Desire’s baseband version should now be 5.09.05.30_2.

When you boot, you will first see a weird icon, and the HTC will appear to hang for several minutes – don’t panic, it boots up eventually.

Boot the phone and run rom manager again.

Go to Download Rom. CyanogenMod should be right at the top of the screen. Tap “Stable Release” and wait for the download to complete. Rom manager has an option (which will pop up now) to automatically add the google apps to cyanogenmod (which the primary distribution of it cannot include for licensing reasons) – add them if you want them.

Once the ROM is finished downloading, it asks if you would like to Backup Existing ROM and Wipe Data and Cache.

If Superuser prompts for root permissions, check Remember and then Allow.

The phone will now reboot into recovery, wipe data and cache, and then install CyanogenMod. When it’s finished installing it will reboot into CyanogenMod.

Aug 17, 2010
 

Just a few years ago, the running geek joke was that Larry Ellison was second rate, knew it, loathed it and suffered under the withering scorn of Microsoft, which was at that stage "outcompeting" Oracle on every front. Heck, MS-SQL was even outselling Oracle Database, as impossible as that may sound today.

Well, the joke is over, that’s for sure. Larry spent the last few years on a drive of targeted acquisitions that usually meant buying companies for their products and putting most of the employees who created those products out of a job (that, by the way, for those of you who worship the "invisible hand of the market", does NOT count as economic growth – bigger companies with FEWER employees is bad for everybody, including customers), culminating now in the acquisition of SUN.

Unlike most such acquisitions, Oracle did not need to fire most of SUN’s top engineers – they almost all walked out on the first day in protest. These were people who worked for what was once perhaps the closest thing to noble any corporation could be – a company once rated the best I.T. company in the world to work for, founded by engineers. Oracle’s culture is almost the polar opposite – for them it has always and forever been about one thing only: how much money can we make.

Oracle’s purchase of SUN gave them control over a number of major technologies – the SUN hardware business being, practically speaking, the least of them. SUN may not in recent years have been very good at monetizing its assets, but the software technologies it owned were nonetheless disruptive, innovative and major forces in the market – and now Oracle owns them all.

They own MySQL – a database that was rapidly chewing away at their market share. Most analysts never realized just how huge a threat to their primary bottom line MySQL really was. In a few more years, MySQL may have supplanted Oracle as the market leader in databases, and the number two spot would have belonged to PostgreSQL. If you thought Oracle’s competition was Ingres and IBM’s DB2, they looked untouchable – but while this fooled analysts (and Oracle was happy to keep them fooled), it wasn’t a true picture. Oracle knew very well that MySQL and PostgreSQL had the capacity to take over the database market from them, and knew the inevitability of that success, which is practically built into any successful FOSS business model.

So Oracle bought SUN to get MySQL. The other major technology they wanted was Java – that most beloved of academic languages that somehow never took off on the desktops or the web it was supposedly created for. It didn’t take off on desktops because, frankly, the story of Java the web-language was a bit of marketing. James Gosling and his team had designed Oak: a language created for mobile and embedded systems, to capitalize on a coming revolution. SUN wasn’t wrong in predicting said revolution – they were just 15 years too early. So in the meantime they reinvented Oak as Java, called it a web-language and got it out there, getting a stable of developers ready.

As Java expanded, it became a darling of back-end services and application-server systems (tomcat is a lovely example). It became a cornerstone language in the market for many tasks (developing user-facing desktop applications was never its strong suit, but there’s a lot more to the programming world than those) – and when the embedded revolution did come, Java was its darling.

It still is: J2ME is the most widely usable phone development platform there is. Android apps are written in a slight variant of desktop Java (but Android can also run J2ME apps through a compatibility layer). Even Windows 7 phones support Java apps. The only exception is Android’s biggest rival: the iPhone.

People talk about Steve Jobs’s refusal to allow Flash on the iPhone, but much more important is his continued prevention of Java as a language. Both are blocked for one reason only: it makes the iPhone a walled garden, whose apps run on nothing else, and which cannot run apps developed for anything else. Such deliberate blocking of interoperability is bad for consumers and gets worse in the long run – in fact, it’s a classic Microsoft business technique (less so nowadays, because Microsoft is frankly not as powerful as it once was and cannot get away with it so easily).

Android is the great thorn in Apple’s side – a platform that gives comparable features while being open and interoperable breaks down the value of their walled-garden approach. Apple, however, never had the guts to sue Google – instead they sued HTC and other handset manufacturers, their hope being to scare the handset makers away from Google’s stack with the very real threat of patent litigation.

So far nobody has backed down, so I think Apple’s plan isn’t working very well for them. Larry Ellison, however, did not sue HTC. Larry went after Google itself. He doesn’t have much choice, really – Oracle’s claim is that Google’s adapted desktop JVM on a phone (rather than a desktop computer) violates the Java licensing (those parts that aren’t GPL’d, at least) and patents. Patents which recent posts by people like James Gosling reveal to have been filed for absolutely no other reason than to build SUN a defensive position, after other companies sued the once patentless company over trivial patents and won. Patents created through a "let’s see who can get the stupidest patent granted" competition among the staff!

Now those patents belong to one of the most unscrupulous businessmen in I.T. today. The suit against Google is about one thing – firmly cementing Oracle as the dictator over Java: they who shall decide which Java features are available on which platforms. Google perhaps has some room for a defense based on stretching the definition of "desktop computer". Android is pretty close to a desktop OS as it is, and tablets will bring it even closer (much as they did for Apple). As the line between "phone" and "PC" has gotten blurrier, perhaps the legal separation of the concepts isn’t so clear anymore either. I’m no lawyer, so I won’t debate the viability of this, but it’s worth considering that when J2ME was created with its smaller feature-set (a feature-set not good enough for Android’s capabilities), phones (and the apps they could run) were far less powerful than they are now. My HTC Desire has more processing power than any of my first 5 computers. It just happens to fit in my pocket.

Oracle wants control over Java at that level. Sun already gave us the core Java technologies under the GPL, which limits what Oracle can do with the code itself – but here they are showing the power of patents. The Harmony class libraries from Apache are an independent, cleanly re-implemented version of the Java class libraries, and Android’s runtime is based on Harmony – yet Oracle is asserting a power that free software licenses specifically deny: the power to control where and how the code may be run. Harmony remains an uncertified Java implementation, because getting certified requires complying with an additional license whose restrictions remove almost all of those freedoms.

Oracle didn’t go after Harmony – at least, not yet. They went after Google, and they have one goal in mind here: to take back control over Java. Ironic, because it’s exactly the fact that Sun grew ever more relaxed about controlling Java over the years that allowed its continued growth. It remains one of the few parts of Sun’s software business that was actually profitable right to the end.

But control Java, and you control a huge section of the software market – particularly the part where Oracle is strongest. If you destroy it in the process? So what. Oracle DB will only get stronger if that happens – they would much rather lose the Java revenue and protect their database market at all costs.

So does this mean the end of Java? This lawsuit already has companies scrambling to start moving their code from Java to other platforms, which has a largely negative knock-on effect on everybody (and ultimately hits consumers worst), so it has already done terrible harm. It is likely to get worse. If Google prevails, or comes out with a good settlement, then mobile Java may yet survive – it’s too huge a market to die easily. If they fail, even that is dead.

But Java as we know it died the day Larry Ellison filed that lawsuit. It will spend quite a few years in involuntary muscle spasms as the case drags on – but it’s dead. In the interest of consumers, corporates and everybody else outside Oracle, it is now truly vital to viably replace all of Java with a truly free alternative. The good news is that the core Java technologies ARE GPL’d. Java may be dead – but it is now time to resurrect it, in a new form without corporate control. Use the GPL’d code that Sun gave us before its demise and rebuild the rest from the ground up. We weren’t far from it even before – nothing should stop us now.

I propose this as the new number-one entry on the FSF’s important-projects list. We need a free J2ME, a free JVM, a free servlet engine. I write as somebody who learned Java at university and has never voluntarily used it since. I despise the language – I find it clunky, hard to read and harder to build with, and I much prefer leaner and cleaner languages like Python myself – but I recognize the value Java and its position have brought to computing, and I recognize the harm that handing this power back to a single corporate entity can do. In fact it will be far worse now. Java is much more powerful, and it’s not Oracle’s primary product – for them it is nothing BUT a means of control – so they will fight to control it entirely, and with it a thousand companies, a million developers and a hundred million users.

I may not like Java – but I know we cannot let that happen.

Jul 07 2010
 

My name is pretty well known. I have spent more than ten years as a contributing developer of free software and GNU/Linux, and as such my name is stuck in GPL copyright notices on several thousand downloadable files on the internet, along with the e-mail addresses I’ve used over that time.
I also happen to share a name with a celebrity (at least a local South African one) – and if you google my name, he and I make up the entirety of the first three pages. My name is at the top of this blog. It is also not hard to link my name to my WoW characters; I tweet their achievements and write roleplay stories about them on this blog on a regular basis.

My name being well known has simply never bothered me. But I do understand that it bothers people. The odds of any given person being stalked may be incredibly low, but they are non-zero, and I most fervently believe that people should get to choose who they share their name with online. Blizzard’s decision to not only link posts to a RealID but to put real names with each post on the forums in future removes this choice.

As a way to combat trolls, it’s among the stupidest things I’ve ever heard of. If they can’t moderate the forums, then start building a decent system for users to do it. No need to even invent one: Slashdot has perfected one over the past decade and, hell, their source code is available for free download. User-moderated, to the point where trolls nowadays just drop below the radar before you ever even have to see them.

The fact is, it doesn’t matter how real or merely perceived the risks are here – something much more crucial is at stake. This is a case where people’s right to choose who they want to share what information with is being removed by a company. Isn’t the customer supposed to always be right? This probably comes close to violating the privacy laws of some countries. Even where it doesn’t, it’s a matter of principle that people should get to choose these things.

As well known as my name is in my particular fields of endeavor, and as much as I have never made an effort to hide it (I’m a writer and an artist and a creator of software – on the contrary, it’s to my ADVANTAGE to get my name as widely known as possible; for a large part of my career, exposure was a very valuable TOOL for my own use), I won’t be using the official forums once this goes live. Because I don’t like not getting to choose, and out of solidarity with everybody who really doesn’t WANT their names known. As much as I never considered my name (or even my phone number) particularly private, I CHOOSE which communities it is known in and when I prefer to talk under a pseudonym.

For Blizzard to turn the forums into some kind of WoW social network is for me to permanently stop using them. So it is for a LOT of other people – privacy is a right, and only we have the right to choose when and where to claim it. The backlash from this has been tremendous and Blizzard is fighting hard to save face now – but frankly, there is none to be saved. Undo this, Blizzard.

Having single sign-on via your battle.net account is fine; mapping that to SHOWING your real name is not. Not everybody has a common name; not everybody shares one with a well-known mainstream media figure. I choose who and where I put my name down – you don’t get to decide for me. If you don’t give me the option to post anonymously when I so choose, then I choose not to post at all.

UPDATE: Since this post appeared, Blizzard has rescinded its decision, so the post no longer applies.

May 05 2010
 

It’s vital to note that the two companies pushing the hardest to promote H264, rather than the open format Ogg/Theora, as the video codec of choice for HTML5 are Microsoft and Apple – both members of the MPEG-LA consortium, which holds the patents on H264 and has already outright threatened to demand royalties from users of the format, while actively pushing it on devices and in browsers.

The general public has barely noticed, as sadly it usually doesn’t. Somehow, people just don’t believe that anybody will demand money from them for taking a home video and putting it on their blog. But history, and not even all that distant history, offers a perfect example for us to learn from.
In the ’90s, as the web was exploding, the GIF format became incredibly popular for displaying simple animated images. Banner ads and bouncing icons all used it. Webmasters couldn’t live without it. Then … wham: those webmasters started getting letters. The letters were from a company called Unisys, indicating that the company held a patent on the GIF format and that, for using it, these webmasters were now liable to pay a (massive) royalty fee. They went after big and small sites alike and made plenty of money, while every site that could got rid of its GIF images.

This was about ten years ago, so most of the people now online weren’t around for it. Back then the tech crowd made up a much larger percentage of the net’s population, and while there was no effective way of fighting back, we could get the word out and get most sites to drop their GIFs before they got those letters of demand.

The problem didn’t go away until 2006, when the GIF patent finally expired. Well, the patent problems with H264 are not going to go away that fast – it’s early in the process. They could truly milk the web and its users dry. Wouldn’t Microsoft and Apple just love to demand nice big royalty checks from Google for YouTube?
No wonder Google is pushing VP8 and even decided to open-source it. VP8 comes from On2, the same company whose earlier VP3 codec Theora was based on, and its creators have made a no-patent-royalties promise. Google recently acquired On2 and promised to make VP8 free software, in an effort to push back against the threat of a core technology of the web once more being subject to patent problems.
Animated GIFs were a big but not insurmountable loss to websites a decade ago. Video today is going to be far more important – the web has changed over these years, and it’s crucial that it remains open and free for all to develop on if the innovation and world-changing technologies we have already seen are to continue.
The web and the internet have broken every rule of sociology and changed the world in ways that would never have been possible before. Primarily, they did this by giving everybody a voice. Many corporations would love nothing more than to turn the web into a world of few speakers and many listeners, like television. For corporations, that means increased profits. But our world does not exist just to make corporations rich. The technology of the internet has already lived up to its potential to bring massive positive change to the lives of all people, and it’s only in its infancy.
Allowing it to lose the very attribute that makes it such a powerful force for change would be to destroy the very reason it exists, and to deprive ourselves and our children of whatever better future it can promise. Once the battle was over animated GIFs. Now it’s over online video. Make no mistake – the time will come when it will be over the very concept of actually interacting online instead of just consuming. To keep the web free and open always was, and always will be, the only way to win that fight.

*Note: the exact story about GIFs is a bit more complex, and I simplified the history a little to show the parallels. In truth, for example, GIF was actually covered by TWO separate patents, one belonging to Unisys and one to IBM. At one point Unisys had declared using GIF images to be “doable freely for non-commercial purposes”; at another they had demanded royalties; and at yet another time they demanded royalties only from programmers who wrote applications that create GIFs. I left these details out above, not to obscure or mislead, but simply because they were irrelevant to the point I was making. That the history of the GIF fiasco was a major event in Internet history, one which has largely been forgotten and which we currently risk repeating, stands without question to me. That this history was convoluted, complex and full of strange side-tracks only proves that history is never simple.
This article has a nice, detailed version of the history if you are curious: http://www.freesoftwaremagazine.com/node/1772 – all in all, though, the fact that the H264 case is simpler only makes it worse. We know they are going to demand money; they’ve said so openly. If we manage to end up in the same boat as we did with GIF despite having been warned, it would truly be the stupidest event in the history of the web.