HTML 5 Intro

With all the recent HTML 5 noise, and some pretty flashy webapps being demoed with it, I got a little concerned that the HTML we know and love was going to turn into some RIA scripting beast.  Thankfully there is not much to be concerned with so far; this little tutorial starts with very small baby steps that people with limited CSS, HTML and JS exposure can appreciate.

Starting with a cool intro to the geolocation feature, then diverting to a JavaScript library that makes HTML 5 feature-detection code more readable, it's currently up to a modest part 3, covering the new input types in the classic form element.

Testing Presentations

There was a presentation I watched last year that absolutely changed my opinion on how I test and how I design.  It was one of those presentations that just made sense, and I can't believe I haven't blogged about it until now.

Integration Tests are a Scam by Joe Rainsberger

This was my favourite presentation from last year. It talks about writing the right kind of unit tests to get fast results and reduce the need for integration tests, which are generally slower and require a lot of maintenance. He makes a compelling argument that piling more integration tests into your system doesn't improve the sense of security you get from your tests by the same amount. He then talks about what needs to be unit tested on the collaborator side and on the contract side of an interface, and how doing so gives you a better picture of which tests you need to feel secure with quick feedback. The blurb explains it better:

Integration tests are a scam. You’re probably writing 2-5% of the integration tests you need to test thoroughly. You’re probably duplicating unit tests all over the place. Your integration tests probably duplicate each other all over the place. When an integration test fails, who knows what’s broken? Learn the two-pronged attack that solves the problem: collaboration tests and contract tests.
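
To make the two prongs concrete, here is a minimal sketch of my own (not code from the talk) in JUnit and Mockito, using a hypothetical PriceFeed interface and CatalogService collaborator:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

// Hypothetical interface sitting between the collaborator and its implementations.
interface PriceFeed {
    int priceInCentsFor(String sku);
}

// Hypothetical class under test that collaborates with a PriceFeed.
class CatalogService {
    private final PriceFeed feed;
    CatalogService(PriceFeed feed) { this.feed = feed; }
    int displayPriceInCents(String sku) {
        return feed.priceInCentsFor(sku) * 110 / 100; // add 10% tax
    }
}

// Collaboration test: exercises CatalogService against a mocked PriceFeed,
// so it stays fast and never touches a real implementation.
public class CatalogServiceTest {
    @Test
    public void addsTaxToTheFeedPrice() {
        PriceFeed feed = mock(PriceFeed.class);
        when(feed.priceInCentsFor("book-1")).thenReturn(1000);

        assertEquals(1100, new CatalogService(feed).displayPriceInCents("book-1"));
    }
}

// Contract test: every real PriceFeed implementation extends this and must pass
// the same checks, proving it honours what the collaboration tests assumed.
abstract class PriceFeedContractTest {
    protected abstract PriceFeed createFeed();

    @Test
    public void returnsAPositivePriceForAKnownSku() {
        assertTrue(createFeed().priceInCentsFor("book-1") > 0);
    }
}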


TDD of Asynchronous Systems

This presentation is by Nat Pryce, co-author of the new book “Growing Object-Oriented Software, Guided by Tests”. It talks about testing at the integration/functional level and techniques for getting around all the painful ‘flickering tests’ and ‘false positives’ that occur when you have them. The examples cover testing legacy systems, Swing GUIs and JMS queues.

One of the key ideas is that you must find a point to synchronise on before executing your assertions, and those assertions must wait (and retry if not yet true) until the state has changed or a timeout occurs (at which point the test fails).

As a user of UISpec4J, with its assertions that have a built-in timeout, and having seen some perilous test code where someone innocently mixed JUnit assertions into a UI test, I could relate to this really well.
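
A rough sketch of that wait-and-retry idea, my own simplification rather than anything from the talk or from UISpec4J:

import java.util.function.BooleanSupplier;

// Poll a condition until it becomes true or a timeout expires, then fail the test.
public final class WaitFor {
    private WaitFor() {}

    public static void assertEventually(BooleanSupplier condition, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.getAsBoolean()) {              // retry the probe...
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError("Condition not met within " + timeoutMillis + "ms");
            }
            Thread.sleep(50);                            // ...after a short pause
        }
    }
}

// Usage against a hypothetical asynchronous OrderProcessor:
//   processor.submit(order);
//   WaitFor.assertEventually(() -> repository.contains(order.id()), 2000);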

Seven Groovy usage patterns for Java developers

A colleague at work recently added a Groovy console to one of our applications, and it was this that reminded me of the ‘keyhole surgery’ pattern from a presentation I watched last year by Groovy in Action author Dierk König…

Seven Groovy usage patterns for Java developers

Dierk König, Canoo Engineering, Switzerland

Groovy is the new dynamic language for the Java platform that any Java project can benefit from. Learn when and how you can use Groovy as a “Super Glue”, heat up your Java application with a “Melted Core”, alleviate pains with “Keyhole Surgery”, adapt it to local needs with “Smart Configuration”, let it achieve “Unlimited Openness”, enjoy subserving “House-Elf Scripts”, and provide early feedback from a “Prototype”.

via Viddler.com – Øredev 2008 – Cool Languages – Seven Groovy usage patterns for Java developers – Uploaded by oredev.
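
The ‘keyhole surgery’ idea behind that console is to open a small scripting hole into a running Java application so you can inspect or patch live objects without a redeploy. A minimal sketch of my own using GroovyShell directly (not our actual console, and with made-up state to poke at):

import groovy.lang.Binding;
import groovy.lang.GroovyShell;

import java.util.HashMap;
import java.util.Map;

public class KeyholeSurgeryExample {
    public static void main(String[] args) {
        // Stand-in for some live state inside a running application.
        Map<String, Integer> cache = new HashMap<String, Integer>();
        cache.put("orders", 42);

        // Expose it to ad-hoc Groovy scripts through a binding.
        Binding binding = new Binding();
        binding.setVariable("cache", cache);
        GroovyShell shell = new GroovyShell(binding);

        // In a real console this script would be typed in at runtime.
        Object result = shell.evaluate("cache['orders'] = cache['orders'] + 1; cache");
        System.out.println(result); // prints {orders=43}
    }
}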

How to Start Getting Professional about Software Development

A great blog post that plainly states a management philosophy and the flak you may experience when trying to apply even simple dev practices.  It relates to keeping yourself grounded and patient, and trying out new practices where you can.  The final point in the post, which I'm going to paste here, is pure gold.

The grass is always greener on the other side. Switching jobs might be the right thing to do when you get a great offer, or if your current job resists all attempts to change it to the better. But consider this: When you have a real interesting software development position to fill. Whom would you consider? The guy complaining about the last job where he wasn’t allowed to work professionally and therefore hasn’t any real experience in writing tests, never worked with a CI system or a build tool. Or would you prefer someone who introduced testing into a project team, show initiative by setting up hudson on his own machine and is now looking for an employer who supports this kind of attitude. I know what kind of coworker I am looking for. So make sure you tried to make your job a better one, before you leave.

Schauderhaft » How to Start Getting Professional about Software Development.

Now reading


The JavaFX 1.2 Application Development Cookbook arrived for me to review today.

It promises to provide some worked examples of using JavaFX in the real world, which is always handy.  After all, why re-invent the wheel? I'm keen to see how it fares in the correctness and usefulness of its samples, as well as how they have weathered the newest JavaFX 1.3, and whether the author's site addresses any issues substantially.

Keep your eyes peeled for a review, real soon now.

Tempus library for programmatic thread dumps

Turns out that doing a CTRL+Break (or the equivalent kill <signal> <pid> on *nix) programmatically is a bit harder than I thought.  This thread talks about ways to do it, but the easiest was to use the Tempus library, which has a lot of threading tools, including a simple

ThreadDump.dumpThreads(yourPrintstream);

to get a stack trace of all running threads in the current environment.
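
For comparison, the nearest do-it-yourself equivalent using only standard JDK APIs looks something like this (my own sketch, not how Tempus does it internally):

import java.io.PrintStream;
import java.util.Map;

public class ManualThreadDump {
    // Print the current stack trace of every live thread to the given stream.
    public static void dumpThreads(PrintStream out) {
        Map<Thread, StackTraceElement[]> traces = Thread.getAllStackTraces();
        for (Map.Entry<Thread, StackTraceElement[]> entry : traces.entrySet()) {
            Thread thread = entry.getKey();
            out.printf("\"%s\" state=%s%n", thread.getName(), thread.getState());
            for (StackTraceElement frame : entry.getValue()) {
                out.println("    at " + frame);
            }
            out.println();
        }
    }

    public static void main(String[] args) {
        dumpThreads(System.out);
    }
}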

MongoDB

Watched this presentation about how SourceForge chose MongoDB for their customer-facing webapp.  You know, the one you go to download Azureus and all those open source apps from ;)

SourceForge chose Mongo because it offered them high read performance; write performance was sucky, but their app didn't need it.  They learnt that even though you can put everything in one document, you don't need to retrieve it all at once, and they learnt how easily they could use up all the network bandwidth between the web and Mongo servers.  Additionally, they found they didn't need memcache between their persistence layer and the web servers, as Mongo was fast enough to serve out the data as it was.
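
That "don't pull back the whole document" lesson maps to field projections in the driver. A minimal sketch with the current MongoDB Java driver (a much newer driver than SourceForge would have used, and with hypothetical database, collection and field names):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Projections.excludeId;
import static com.mongodb.client.model.Projections.fields;
import static com.mongodb.client.model.Projections.include;

public class ProjectionExample {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> projects =
                    client.getDatabase("forge").getCollection("projects");

            // Ask for just the fields the page needs instead of the whole document,
            // keeping the payload between the web and Mongo servers small.
            Document summary = projects
                    .find(eq("unix_name", "azureus"))
                    .projection(fields(include("name", "summary", "download_count"), excludeId()))
                    .first();

            System.out.println(summary);
        }
    }
}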

Net Neutrality

Recently a colleague sent me a link to an online petition about the concept of Net Neutrality being under threat from a recent deal between Verizon and Google: http://www.commoncause.org/site/pp.asp?c=dkLNK1MQIwG&b=1234951

The dialogue signals a prioritisation of internet traffic for the rich: fast lanes for large corporations who can pay for them, as well as the ability for ISPs to block traffic and prevent legitimate uses of the internet.

I found this ‘cause’ to be more fear-mongering than fact, and whilst some points were correct, the practicalities of what was being proposed were apparently lost in fear, uncertainty and doubt. So, switching into soapbox mode, here was my reply:

There are some valid concerns, but I think there is a bit of FUD going on here. They are trying to say ISPs have these tools to control traffic and shouldn't be allowed to use them, but I think those same tools are necessary to keep the network working for all users.

What got my back up in particular was the bit where they claim that a corporation can buy a fast lane to its websites and supposedly lock smaller sites into slower speeds.  I wonder how, technically speaking, they could do this feasibly and economically.  It has always been possible for an ISP to route traffic whichever way it chooses, to prioritise some data over other data, and to shape the speed of traffic, but I can't see how a router could handle slowing ALL traffic except the packets going to a certain address to create this artificial highway.  I don't think network admins would want the headaches of shaping all their users en masse.

Given that my previous working life was all about building products to shape users who had gone over their download limits, as well as products for prioritisation of data (Quality of Service), I understand this area a fair bit.  Unfortunately I see it being taken out of hand.  To implement this practically, router load has to be considered: the more shaping and prioritisation you do, the hotter a router runs, and because you are applying features to thousands of users, not just a small handful who are paying for particular features, you increase the chances of bugs and other weird stuff occurring.  E.g. more calls to Cisco to have them diagnose your equipment because you are using a feature in a way they didn't envisage.

Another thing about the practicalities of this is what Google (primarily a content provider) and Verizon (one bandwidth provider in a market of many) can actually do together to influence the speed of internet surfing for those outside their networks… pretty much zip, nada, zilch from a technical perspective. Maybe Google can use its weight to make other ISPs pay a Google tax for content leaving its network, just as the big 4 ISPs of Australia (Telstra, Optus, AAPT, Primus) do to the littler ISPs, but all that does is affect the cost of providing the service.  And the fact that we still have plenty of ISPs in Australia, even though the industry is in a period of consolidation, means that such network access costs can't be prohibitive, otherwise we'd only have 4 internet providers in Australia.  Ingenuity comes in, people change providers, companies build other networks to connect countries together, and a market develops whereby if an ISP, or an end user of an ISP, doesn't like the service they are getting, they can switch to another.  How would these laws distinguish between an ISP making an effort to shape your traffic and the existing efficiencies used to deliver you a cheap internet service, one that would otherwise be more expensive if you had your own 24/7 line direct to the ISP?  (They probably could, but they would have to be worded carefully and require auditing of ISPs on a scale not seen before, to ensure the lack of service you were getting was down to an infrastructure or product limitation rather than ISP governance.)

In my opinion the thing they have been trying to fight has been happening in the industry since the days of dialup. Ever been with a popular all-you-can-eat dialup ISP that was oversubscribed?  You'd get shaped without knowing it if you were a big kahuna user, and your phone line would get disconnected and you'd have to dial back in again.  It's unfair that an ISP oversells its services so that its paying customers' experience is pretty weak, and it stops decent ISPs from offering more value-added services for free because they are too busy cutting costs to compete with these guys.  In an ideal world Net Neutrality could come into play here, forcing the industry to guarantee bandwidth and making it accountable to external auditors to ensure there is enough bandwidth for subscribers, but it's not a simple equation – your internet has always been oversubscribed from day one.  Even your DSL or cable link is shared with your neighbours on the same exchange – you've been contending for bandwidth with your neighbourhood and probably didn't know it.  The business model assumes that not all users are on the network 100% of the time.  Days of unusually large internet traffic, like when Obama was inaugurated, do happen, but usually there is enough bandwidth to go around to support them.

There is some ISP filtering that is good, for example blocking Windows port 445 so that common viruses don't get into PCs and spread, which prevents even more network traffic and headaches for users and admins alike.  And as much as users hate getting shaped when they exceed their monthly bandwidth, it's probably a good thing they aren't slowing things down for everyone else.  I have seen flooded links, and network engineers sweating, when a particular part of Australia goes overboard.  The thing I remember about internet protocols is that they generally don't cope well when a link gets saturated: a link is fine at about 70-80% capacity, but beyond that performance takes a huge nosedive.  Having some control so everyone gets their fair share is a good thing, although always unpopular… who likes water restrictions?

Everything else they say about blocking services, like the example about dropping VoIP packets so people use the old-school telephone instead of cheaper internet calls, is a perfectly valid concern.  It's a restriction of trade.  Could you imagine the outcry if a business (or group of them) were denied access to amenities such as water or electricity that were used to produce an income, just because the utility company was unregulated or just evil?

So, I agree with the concept of net neutrality, but I don't want to see it taken out of hand, removing the network administration safeties that keep users free of issues, or removing the restrictions that stop some users clogging up bandwidth for those with more legitimate internet needs.