The FingerNails and the Chalkboard

Yesterday I spent the day with my good friend and colleague from my group, working on some coding for the project. Let me say from the outset that I respect this colleague for their technical prowess and hard work ethic, and nothing in this blog changes that opinion.

The day started with my colleague explaining how he had taken our design and got a very early prototype of the program working. He explained the tribulations with the Gentle framework and the MyGeneration tool that he and a member of the other team used last week to turn our database design into living, breathing business classes and database persistence code.

Unfortunately, it doesn't take me long to find that there has been a cost. And let's just say that cost makes a kind of high-pitched 'screeching sound' in my brain whenever I hear the details.

First up, the table and column names have been renamed: 'Gentle is sensitive to case'. Okey doke, so we inherit a naming convention that gets rid of MS SQL's stupid mixed-case, spaces-allowed convention. Hooray!

Then, the database foreign key columns have been renamed to include the _id suffix. This goes against the great pains I took last week to explain why I intentionally left the _id suffix off. Foreign key references in a class are supposed to be fields of the related object's type, with a descriptive name, not primitive types that just store the ID.

              Bad                               Good
  DB column   person_id of datatype number      Person of datatype number
  Class       person_id variable of type int    person variable of class Person

The DB column name doesn't matter too much, since you can configure Gentle to use any column name. But the end result is what concerns me.

The Good way means you can just do something like membership.getPerson().getName(). The person is already loaded into memory in one swoop when you load the membership, and a decent generation tool will pick up on this aspect when it makes your code. (Well, perhaps any tool except MyGeneration.)

The Bad way means you have to go Person.Retrieve(membership.getPersonId()).getName(), after you've already done a Membership.Retrieve(). Two retrieve calls instead of one.

Any database framework worth its pinch of salt should, in my mind, load the entire object and all of its related collections (the objects on the many end of a one-to-many relationship), not just persist the key and make it the developer's responsibility to keep the references correct. In my code I want to work with my objects; in my database I want to work with keys. The guys have missed the whole point of using a framework here.
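
To make the difference concrete, here is a rough sketch of the two shapes in C#. The class and property names are purely illustrative, and this is not Gentle's actual generated code; it's just the pattern I'm arguing for:

    using System.Collections.Generic;

    // The Bad way: the class only stores the foreign key as a primitive.
    public class MembershipWithId
    {
        public int Id { get; set; }
        public int PersonId { get; set; }   // just the key, nothing more

        // Calling code has to do a second retrieve itself:
        //   var person = Person.Retrieve(membership.PersonId);
        //   var name = person.Name;
    }

    // The Good way: the class holds a reference to the related object,
    // which the framework loads (or lazily fetches) for you.
    public class Membership
    {
        public int Id { get; set; }
        public Person Person { get; set; }  // the related object itself

        // One hop, no second explicit retrieve in calling code:
        //   var name = membership.Person.Name;
    }

    public class Person
    {
        public int Id { get; set; }
        public string Name { get; set; }

        // The many end of the one-to-many comes along as a collection.
        public IList<Membership> Memberships { get; set; }
    }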

Apparently, Gentle or MyGeneration (they use the terms interchangeably, you'd think it's one program; another minor screech) needs to see the _id, otherwise it won't build list methods from it.

Sccccccccrrrrrrrrrreeeeeeeeaaaaaaaaaachhhhhhhhhhhhh

This is no biggie, I hold my tongue at this stage. They have missed a golden opportunity, I think. Hey I could be arrogant and wrong…. or just arrogant.

Oh, and there is another thing: Gentle/MyGeneration needs the primary keys to be marked as IDENTITY columns so that they autonumber. (I later learn that MS SQL doesn't have a CREATE SEQUENCE syntax like some other DBs do.) That's fair enough, so they set a few of those up.

“We couldn’t figure out how to do it in TOAD, so we just changed the database directly and uploaded the database image backup to SVN… we don’t need the database diagram anymore”

Sccccccccrrrrrrrrrreeeeeeeeaaaaaaaaaachhhhhhhhhhhhh

I start fuming inside, given that the version of the database diagram they used to generate the code was already out of date. It lacked the columns and renames I'd done over the last week. Those amendments were minor, but they were based on the feedback Dave gave us when Andy took the design for review.

Scccccccc—-rrrrrrrrrr——-eeeeeeeeaaaaaaaaaa———chhhhhhhhhhhhh

“Why didn’t you upload your file to SVN?” I ask.

“I couldn't, I kept getting this error.” I look at his laptop and see a small yellow exclamation mark next to the database schema file. He then goes to commit. The error message, to me anyway, says that his checkout is out of date and he needs to do an update first. He does this. He then gets another error telling him that the file is in conflict with another recent change (mine) and he needs to merge it. He brushes off the error, reiterating that we don't need Toad anymore since we've started developing.

schhhhhhhhhhhhhhhhhhwwwwwwwww
eeeeeeeeeaaaaaaaaaaccccccccccccchhhhhh
hhhhhh. 
TScshhhhwwwwweeeeeeewwwweeeeeeeee
eeeeeeaaaaaaaaaaatttttccccccccccccccccccc
cchhhhhhhhhhhhhhhh.

I then proceed to explain that we still need those diagrams. For one, they are the documentation that will appear in our data dictionary and be handed on to the poor souls doing work for this client next semester. For another, I have spent some time adding notes and column descriptions to the Toad model, and that documentation will help us, or the future maintainers of the system, later on.

My colleague recognises that those are important points but can't see their weight given the short timeframe we have; 'the client's indecision has "forced" us into taking these shortcuts'. The changes I've made are relatively minor, so why don't we just add them to the uncommitted, 'more up-to-date' file that my colleague has?

Toad has a useful compare function, so we can quickly view the differences. It becomes apparent that not much of the Toad schema has been modified at all; the changes needed to get the MyGeneration tool working have all been done at the database level. The names in the model are still mixed case whereas the database is now lowercase, and there is the obvious lack of identifier fields. There are now dozens of differences because of this. My colleague has also added another table to represent the username and password fields that I had already added to the person entity, in the version of the schema he couldn't see because of the merge conflict he didn't know how to get rid of.

Sccrrreeeeech. (One for me; I take some responsibility for only showing them the basics. But I did give them links to some really good guides, which tell you what that fricken conflict icon means in a second… Why are people afraid to read manuals?)

Twwwwwwwwwwwwwwwwwwwwwwiiiiiiiiiiiii

ieeeeeeeeeeeeeiiiinnnnnccccccccccccccccchhhh

hhhhhhhhhhhhhhh
(If only your parents had enrolled you in some team sports as a kid, or made you get off that computer, not that I can talk, so you could express yourself more clearly and realise the importance of keeping everyone on the same page… wait a sec, am I getting mad at him, or is it just that he reminds me of a younger version of me???)

The Toad file and the database schema that our only working code runs on are completely different. This leaves us with only one option: reverse engineer the database into Toad, then add the changes I made. The only problem is that now we not only have the Toad file to worry about, we also have to update the column and class names that were autogenerated by MyGeneration. And since we've started modifying those classes, running MyGeneration again to get the code back in sync is out of the question, since it would write over the business classes and their custom logic!

Tsccccccchwwwwwwwwwwwwwnnnn

nnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn

Whilst I still keep my comments to myself, my expression has changed considerably. My colleague, noticing my frustration, asks what the matter is. It's hard to express my disappointment, because I wasn't there when they did this work. They did what they could with the tools they had, and they thought they did a reasonable job, considering. My colleague offers that the changes they made weren't the ideal way of doing things, but that the looming time constraints made the shortcuts justifiable.

I can appreciate this position, sort of, but my experience, not that it's much broader than theirs if at all, tells me that this sort of mashing is a great way to breed errors and bugs in the code. I can't give this my blessing, yet I can't find the words, nor the justification, to scrap the work they've already done and rewrite the business layer and persistence in a more reusable, TESTABLE!!!!, maintainable way. All I see and hear is shortcuts to get the code to compile and the program to run. In the long run I see the client not signing off on UAT because the thing falls over, or worse, they sign off and the thing breaks after it's deployed, if we're lucky enough to keep this jalopy afloat long enough to reach the deploy stage. This goes against what I've seen as a software developer in the workforce, and what I've been taught as a uni student.

Q?: If you were on the Titanic, would you have told the guy who asked to increase the speed of the ship, which ultimately left it unable to avoid the iceberg, to piss off with his 'let's get to America faster to show off this new boat' request…

Thoughts of the risk seminar, the RMIT student admin project disaster and Dave Grant's question during that seminar ring in my mind. If they know better, why do they revert to using crap practices???

SC Ignorance REE is EEC bliss CHH

It's been about two hours since we started. We haven't even begun the coding we planned. Grudgingly, I agree to hold my tongue so we can get any work done at all. I ask to take a look at the code…

NO CLASS INHERITANCE HAS BEEN IMPLEMENTED, BECAUSE THE DATABASE PERSISTENCE FRAMEWORK, OR PIECE-OF-CRAP AUTOMATIC CODE GENERATOR, CAN'T HANDLE SUBCLASSES. THERE WILL BE NO POLYMORPHISM IN OUR CODE (and none of the code reuse benefits that come from it) BECAUSE EACH CLASS MAPS DIRECTLY TO ONE DATABASE TABLE. WHEN WE CODE, WE MUST WRITE THE LOGIC THAT LINKS THE RELATED SUB/SUPERCLASSES TOGETHER OURSELVES. (Why don't we just write the program in C?) OUR METHODS WILL TAKE THE SUPERCLASS PLUS THE LOWER-LEVEL CLASS AS PARAMETERS, THEY WILL HAVE SWITCHES, IFS AND LOOPS TO HANDLE EACH DIFFERENT TYPE OF SUBCLASS, AND THEY WILL REQUIRE THREE TIMES THE TEST CASES TO COVER ALL THE EXTRA CODE WE NEED TO WRITE TO DO THINGS THIS WAY. (So we won't bother with unit testing, we'll just write a little local application we can use to manually test the control layer functions in a system-testing fashion.)
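
For the record, here's roughly what I mean, sketched in C#. The membership types and fee logic are made up for illustration; the point is the shape of the code, not our actual domain:

    using System;

    // With inheritance and polymorphism: each subclass carries its own rules,
    // and calling code never has to ask what type it is dealing with.
    public abstract class Membership
    {
        public int Id { get; set; }
        public abstract decimal AnnualFee();
    }

    public class IndividualMembership : Membership
    {
        public override decimal AnnualFee() { return 100m; }
    }

    public class GroupMembership : Membership
    {
        public int Seats { get; set; }
        public override decimal AnnualFee() { return 80m * Seats; }
    }

    // One class per table, no inheritance: every caller ends up with a switch
    // (or a pile of ifs) that must be updated, and re-tested, every time a new
    // membership type is added.
    public static class FeeCalculator
    {
        public static decimal AnnualFee(string membershipType, int seats)
        {
            switch (membershipType)
            {
                case "individual": return 100m;
                case "group": return 80m * seats;
                default: throw new ArgumentException("Unknown membership type: " + membershipType);
            }
        }
    }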

All because of the time restrictions placed on us by the client. It's not how we would do it in an ideal world, but let's do it in a completely stupid way that will take longer in the end, because we don't have enough time in the first place. 2 + 2 = 5. We don't need to write unit tests, we don't have the time.

Yeah, since you put it that way, that sounds perfectly fine to me. I forgot to add one thing….  

TTTTTTTTTTTTTTTTTTWWWWWWWWWWWWWWWWSSSSSSSSSSSSSSSSSSSSCWWWWW
WWWWWWWWWCCCCCCCCCCCCCCCWWWWWWWWWWWWWWWWEEEEEEEEEEE
EEEEEEEETTTTTTTTTTTTTTCCCCCCCCCCCCCCCCHHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHEEEEEEEEEEEEEEEEEEEEEEE
EEEEEEEWWWWWWWWWWWWWWWWWWRTTTTTTTTTTTTTTTTTTTTTTTTTTWWWWWW
WWWWWWWWWWWWCCCCCCCCCCCCCCCCCCCCCCCCCSSSSSSSSSSSSWWWWWWW
WWWWWEEEEEEEEEEEEEEEEEEEAAAAAAAAAAAAAAAAAAAAAA
AAAAACCCCCCCCCCCAAAAAAAAAAAAAAAAAHHHHHHHHHHHHHHHHHHH
HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

(Yes, I've been told off for using sarcasm; it's a poor man's humour, I know, not funny, and I'm sorry.)

What I did after that was look for the nearest chalkboard. Perhaps I could get my colleague to understand if I could demonstrate the sound in my ears. The closest thing I could find was an electrical fuse box, so I stood on my chair and started scratching at its panel.

————————————————————————

Believe it or not, we did actually get some work done after that. We now have a new database policy: work only on the source Toad file and propagate the changes from it, automagically, without touching the database directly. It is everyone's responsibility to keep in sync with the changes and to commit their own as soon as possible.

As I read over this blog, I realise the bitterness has taken over; I have become what I vowed I wouldn't. Then again, I got the new Nine Inch Nails record last week. This depressing emo attitude could somehow be linked to the new music I've been digesting. Ha!

Where for art thou taxes?

Our client, an IT industry group, commissions a system and in the end has no place to put it.

It doesn't bother me too much. Mistakes happen; people make arrangements they don't fully understand and then come to realise those arrangements aren't suitable. That's exactly what happened with us.

But then I ask myself: what is the cost of these mistakes and omissions, and more importantly, who ends up paying for the system in the end?

Let's see: we are getting paid a grand total of $0 per hour. But this project is funded by the state government, and the state government is funded by the taxes we pay through our normal jobs.

In fact, I am not working for free; I'm paying them, via the taxes that are spent on this project. And this project has led me to be dicked around because, funnily enough, an IT industry group can't find a web host. Insert tones of Fred Flintstone grumbling.

The fact that I've hit this rock bottom, scraping up excuses and likenesses to Fred Flintstone, is not helped by the lack of communication, or by the client's zero take-up of responsibility to champion the cause and find another solution where their initial one failed.

Alright, enough of this smellfest; let's put it in perspective.

What could we have done from the start to alleviate the problem?

  1. Set the technology from the get-go. No matter how ridiculous it sounds, having something instead of nothing leads to an open discussion about reasonable alternatives and makes the client understand their commitment from the beginning. The solution to all hardware requirements is a chess-champion-beating mainframe. After that, let the client come back and negotiate with us if they think our selected platform is too expensive.
    1. Get commitment early on about the platform, just as you would get consensus with the client about the intent of the system you are to build. The platform is tied into the end product, and the features follow on from it. But rather than saying 'we need to know the platform, and we can't do X, Y & Z without it', tell the truth: we want this platform and we'll push and shitfight all the way, because X, Y & Z could be done by any talented coder in any language, but we know we can lazily do X, Y & Z with more ease if we strongly push our preferred one. It's like going for a job: a Java developer won't necessarily go for an ASP.NET role if they've never programmed in it before. If jobs were scarce and they were cluey, they'd bring themselves up to speed in .NET and have a go at it, but not without looking at the Java options first.
  2. We had the opportunity to pick the platform at the beginning, and we didn't go about it the right way. What we ended up doing was taking time to find out each other's allegiances and the technologies each of us was comfortable in. More importantly, we thought we had to work with the client's existing host, and in our eagerness to impress by integrating, we wasted time waiting for specs that would never come.
  3. Initially this point was going to be about picking a platform for your team, sticking with it and forcing your choice on the client, but after writing a few paragraphs justifying that thought, I realise we have a mole in our ranks. Someone who is pushing one particular platform. Maybe a nest of moles. The rest of the team don't hold an allegiance per se, but they do hold a resistance to learning new platforms, and I fucking hate it. Out of what I thought was fairness, I supported the choice of platform that appealed to everyone's need for familiarity. But really, as people get more experienced (and I think third-year uni students have enough experience), they should be grabbing new platforms like Eve taking apples from the serpent. Yum, yum, stuff the consequences.
  4. If you know your team are going to be shitty if you push the above point, then just pick the consensus platform and push that. It's similar to the push-a-mainframe scenario in point 1, except that instead of the client coming back saying it's too much, it'll be the developers crying because those last pieces aren't fitting together as easily as they thought. At least when the seams start to appear they will have learnt a valuable lesson, and future projects will benefit from it.

So in summary, if you don't want problems with your platform, be rich, obnoxious and proud. Then you'll just have problems with yourself, not the wider stakeholder community.

Edit: Today there was good news on the hosting front. The client, not our direct liaison but a member of that organisation all the same, took the ball and gave us the commitment on hosting that we needed to start developing.

Flexible Reporting

I took the opportunity to play back our interview notes from the first two meetings. Since I volunteered to do mock-up reports to find out what extra data we hadn't yet considered for the 'domain model', it made sense.

Shock horror, our tutors were right. There is a plethora of things the client wants; they just don't know how to ask for them. But they are concerned with the demographics of memberships.

On a good note, the client has been good and got in touch with our host over the last week. The host has assigned the client a new account manager, and I've been able to forward our requests, which are now sitting with their 'technical department'.

And now, less of the same…

Yesterday began with adventures in HiFi.

This was good because the drones had alleviated my need to extrapolate the beginnings of a fine art form.

Then 4 wise men appeared. They thanked us for the food and left. Their gifts were going to some baby. I later heard only 3 of them reached their destination. The 4th guy spilt the oil.

If only time had a place within my spice rack. It would sit well against the tins of SPAM underneath. Why not, because? Yesterday, Ok!

HiFi, yeah!….. Your team member has lost the plot. Put it in the risk report.

Toothache

I took the time to read the personal blogs of Jake and Dave. Oh my goodness, what a fricken whinge-fest. If this is how they want us to write blogs, then read on. It's great to know that you are tired as hell from taking on too many responsibilities, while maintaining the illusion that you are living life to its fullest because you are too busy to do stuff all the time. Yeah, that excuse of an approach worked well for me throughout my time at Monash Caulfield; just look at my dismal grades because of it. Everywhere I look, not just at these blogs but in my personal and professional life, everyone is working more hours than they should. Taken a Labour Day holiday recently? What happened to 8 hours of work, 8 hours of rest, and 8 hours of play? (That adds up to 24 quite nicely.) People are getting pushed by their managers to do all this overtime, and regardless of whether it is remunerated or not, their personal lives, relationships, health (yes, it is unhealthy if you don't get enough sleep) and their own existence take a dive. I've done it for too long (I even go by the moniker neversleepz on many internet message boards because of it), and I woke up one day to realise that 4 hours of sleep a night isn't living, it's slavery. I feel as outspoken as a vegetarian saying anything about it, though. I wonder how you stop a junkie employee from thinking that their drug, work, is the most fruitful thing in their life?

Next whinge: I also couldn't believe the nerve of Jake to blog that "2 teams with the same client didn't think of sharing data". Since I worked with members of the other team last semester, I did approach them a couple of weeks ago with the idea. All I got back was 'we already asked, the client wants the systems to be separate… I don't think we have stuff in common anyway'. Granted, some members of my team were of the persuasion that it was additional risk, but after Jake and Dave made the point that data could be shared, and I drew on the board that we share a common point of collecting customer contact data that we could develop together, we agreed that this is something that should be persisted. If only we could get hold of the client and run it past them…

Whinge 3: The manager of the client's organisation, the only paid employee, finally emailed Andy and me about the current host of their CRM, their only online presence, being the host for our project(s). She began a week and a half of annual leave on Friday, but advised in an email on Thursday night that she was super busy (this is true, she wears a few different hats… again, see opening rant) and would contact the host on Friday (her day off, how nice) and let us know. I'm just about to walk into our Monday morning meeting, only to report that our client must be well and truly on her holiday and we are still in the dark about where to put our system. What we do know is that the client contact for the other team working for the same organisation has no jurisdiction over this kind of thing; they only deal with their program. The organisation is still young and they don't have much integration yet. I see Jake and Dave's point about marrying the data, but the organisation itself isn't very married to each other. It's great that these IT projects can facilitate this, but it's a shame we haven't been able to get a warning light out to them to say, 'hey, your next business problem is that one hand doesn't talk to the other; you are going to need to pull things in and work more closely if you want this organisation to run effectively.' They probably already know this; I hope they see the opportunity in these teams to help achieve it.

Whinge 4: I was stood up on Wednesday and Thursday with regard to going over our database design. I was to meet another team member over MSN, but I haven't heard from them. Yes, errors aplenty on both our parts (like no phone call to ask where the other one was), so this won't be a long whinge. It does mean we will need more collaboration today, which will cut into our time and into my time to find a dentist (see next whinge).

Whinge 5: On Saturday afternoon it felt like my wisdom tooth was pushing into the tooth next to it. Due to my bad habits of skipping breakfast and being best mates with the vending machine at work, I think I've exacerbated a long-standing problem. A few teeth away from said wisdom tooth is what looks like a couple of cavities. The dentist, Dr Danny Lam, as rude as he was when I saw him a couple of years ago, may get my business after all, because I don't know if I'll have the time to travel to any other locality and look up a new dentist (refer to opening rant, again).

How’s that? Do I sound like enough of a baby boomer for ya? Feel a bit low, eh? Now go and read Conversations with God, and find out that this negative ranting does nobody any good.

Or don’t, but heed my advice: Never underestimate the power of positive thinking.

It’s not cheating if you blog using OneNote 07

Wow. What a stupidly technological week. If doing I.T. was like eating fruit, I’d have a severe case of the runs.

Database Modelling

Ok, so the first interesting item this week was our Domain Model. A domain model, for all you B Comp. students, is essentially a class diagram; it's just what the B. Info Systems students call 'em. We needed to prepare our Domain Model as the first step of our design for the vicICT Membership System.

A membership system sounds fine and dandy until said membership system requires you to facilitate group memberships and changes from one membership type to another. One thing I've learnt is that regardless of what walk of life you come from, you will always argue about the database schema.
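
Just to show why this gets argumentative, here is one possible shape for it, sketched in C#. This is not the model we settled on, purely a hypothetical to illustrate the group-membership and type-change wrinkles:

    using System;
    using System.Collections.Generic;

    // Hypothetical sketch only: one way to let a membership cover a group of
    // people and change type over time without rewriting history.
    public class Member
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class Membership
    {
        public int Id { get; set; }

        // A group membership simply has more than one member attached.
        public List<Member> Members { get; } = new List<Member>();

        // Changing type becomes a new period rather than an overwrite.
        public List<MembershipPeriod> Periods { get; } = new List<MembershipPeriod>();
    }

    public class MembershipPeriod
    {
        public string MembershipType { get; set; }  // e.g. "individual", "group", "corporate"
        public DateTime From { get; set; }
        public DateTime? To { get; set; }           // null = current period
    }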

So, Tuesday morning started with the B Comp. students in my team all standing at a whiteboard, arguing about the particulars of cardinalities of relationships and basic stuff to do with ER modelling. Then, to assist, along came our tutors, who both disagreed with our approach, and then with each other. Apart from being humorous, in a Schindler's List sort of way, this was a sign, along with all the other signs encountered in previous group work, that system design and modelling is a great divider amongst men. I guess that's why the majority of DBAs I know are women!

Anyhow, as if the torture of the morning wasn't enough, we decided to press on and find a meeting room in the uni containing a whiteboard, tucked away in N block where no IT student normally ventures. After we'd had some time to think more about the data and the problem space, we proceeded to draw our designs. This was followed by a series of drawings over each other's designs, intertwined with a lot of explaining of our thought processes and of basic database concepts to other members.

By the afternoon we were all bored and frustrated with each other, but we had learnt an important lesson:

The safest thing when coming up with a DB design is to have each person go away and draw their own model. That way everyone has thought about the perils and at least made a few mistakes in their own revisions. They have also tested their schema in their head against different scenarios, which can then be used to challenge the alternate designs other team members have come up with. They can go and find tools and fill in their own knowledge gaps before we all get together and waste each other's time. After that, they can regroup, compare ideas, merge models and go forward. I guess in the real world the analogy would be that someone calls a meeting about a design, and in preparation for this meeting you:

  1. Actually prepare.
  2. Find your database mojo. If you haven't had much exposure to databases (i.e. you are still a uni student), go and look at your database textbooks. Make sure you don't come to the meeting proposing silly things that can't be normalised.
  3. Have a go at as much of the design as possible. Make sure you challenge it and find a few things wrong with it, then fix those things. If your design comes out smelling like roses on the first go, please go sniff a dog's bum and ensure your nose is in good functional order.
  4. Break the design, and find test scenarios to break other people's designs too. Write these scenarios down; they will make the basis for some clever test cases when you are implementing your system (see the sketch below).
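
To show what I mean, here's a rough sketch of one of those written-down scenarios turned into a unit test. It assumes NUnit and a made-up Membership class; none of this is our actual code, just the shape a scenario could take:

    using NUnit.Framework;

    // Minimal stand-in so the sketch compiles; the real class would come from our domain layer.
    public class Membership
    {
        public string Type { get; set; }
        public int MemberCount { get; set; }

        public bool IsValid()
        {
            // Hypothetical rule: a group membership must have at least one member.
            return Type != "group" || MemberCount > 0;
        }
    }

    [TestFixture]
    public class MembershipScenarioTests
    {
        [Test]
        public void GroupMembershipWithNoMembersIsRejected()
        {
            var membership = new Membership { Type = "group", MemberCount = 0 };

            // The scenario we scribbled on the whiteboard becomes an assertion.
            Assert.IsFalse(membership.IsValid(),
                "A group membership with no members should not validate.");
        }
    }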

Office 2007

My new toy this week was the new Microsoft Office. Surprisingly, my three-year-old laptop runs it faster than Office 2003. The user interface redesign in Word, Excel and PowerPoint (among others) was a risk, but it has certainly paid off. A lot of thought has gone into which controls are used most often, and common toolbar icons pop up over your document when you highlight something and right-click, so you are closer to the editing controls. Some of my senior colleagues think the interface is shit and the ribbon is too big, but I'm very impressed with it. You can turn it off later if you like, anyhow.

OneNote is still my fave app, and the extra features show that M$ are certainly pushing it. Some cool new features are the text recognition in pictures and audio recordings, so that they can be searched just like typed and handwritten notes.

The best feature so far is the blogging tool, though. It can integrate with my two faves, WordPress and Blogger, and it's a great welcome to the new suite.

Office Ultimate is still available at the ridiculous uni student price of $75 per license. You will need one license for each computer you want to install it on (per my reply from M$ when I emailed them about it), but it's still a bargain compared to the academic pricing in the shops for a not-so-Ultimate version (~$250). All you have to do is go to http://www.itsnotcheating.com.au and mention referral code MSP9.

That Linux Box that'll run a version control server and upstairs heater

Fun and games on the Linux front. The Cobalt RaQ 4 server is now running Fedora 5. All it took was upgrading the firmware on the Cobalt, pulling out the old hard drive, installing Fedora on a normal PC, replacing the kernel and modules with premade ones (plus the code to run the LCD), configuring the serial port and terminals to run on startup, putting the hard drive back in the unit, configuring the unit to boot from the hard drive rather than the network, and telling Fedora that it's the /boot partition, rather than /, that I want it to start from when it boots from disk.

Yes, I am being sarcastic; it took an age and a day (two days, in fact) to get this thing up and running.

My friend, who provided the hardware, has met with much frustration in Linux land, particularly with user and group permissions, but has managed to resolve these, install SVN and Apache 2, and have them serve our repositories nicely.

We have a backup strategy, and he has also written a script to do an incremental backup of the repository onto a series of large rotated USB disks. One thing that still puzzles us is how to back up both the repository configuration and the versioned files in the repository; svnadmin dump captures the versioned data, but not the hooks and configuration that sit alongside it.

“By the power of SSH tunneling, I am HE-MAN”, aka “now I can escape my country's Internet censorship proxy by finding an SSH server to bounce off”

We had the box running solo through the end of the week while we were at uni/work. Something interesting was the number of connection attempts from overseas: people from China, France and Russia trying dictionary attacks on usernames and passwords. On a bad day we had 600 connection attempts, of which only a couple were legitimate. Thankfully, all of the successful logins came from our own networks in Aus, but it's still interesting to see how many people just sniff ports on the web in search of a place they can squat and do all kinds of nasty stuff from. Side note: if we had been hacked, the hackers could have edited the logs to make us believe no one ever came in :S

And the winner is Sydney, Australia

As for hosting the server, it was sitting at my house for most of the week, but the spare room is small, and along with another computer, a router, two monitors, a laser printer and god knows what else, the room heats up really quickly. It's also not suitable since my partner runs a home business from the same room!

We were going to secure a place in a Sydney datacenter, but our contacts advised that the operators of said datacenter had a habit of unnecessarily unplugging things they decided, on a whim, didn't belong. My friend, on the other hand, is keen to host the server at his place, which I'm glad about. It's a win/win, since my friend has a dedicated internet connection just for the server (big family, two internet accounts), a personal tolerance for loud PSU fans, and a large bedroom that gets really cold.

The only limitation of hosting it there is the bandwidth involved; 64k upload will mean slow checkout times, but once a checkout is done, doing updates shouldn't suck too badly. We can consider a speed upgrade, and since it's a wireless connection the provisioning lead time to upgrade shouldn't be long at all.

And after all our hard work, I managed to fubar it all by doing a yum update. In doing so, it updated the kernel and heaps of other software. Great, we have patched many known security holes. However, it must have installed packages intended for the i686 architecture instead of i386, and now every command returns an illegal instruction message. I'll use this opportunity to reformat the thing as a RAID device, and also set up our repositories using the FSFS back end rather than the Berkeley DB one. Berkeley DB is tried and true, but much less resilient to unexpected shutdowns. I read on one of the Cobalt pages how to configure yum/rpm to only look at i386 packages, so I'll take that on board next time so we can get back up and running.

I'm so haX0r. Give me admin rights so I can install developer tools

Good news (I hope): the 'development managers' have met with the 'internal services' people at Monash and heard our request to install TortoiseSVN in the student labs. The dev managers seem quite happy with the idea. I believe that on Monday morning (a few hours away) I'll be meeting with one of the internal services people in charge to run through Tortoise and determine whether it can run on the Novell setup and talk to our servers, great and small.

There could be a few issues with this. Tortoise can accept a proxy configuration, but the problem is that it stores said configuration in its configuration file. This means that if another student logs in, they could potentially inherit the proxy password settings of the previous user. We would need a logout script to clear this value (something like the sketch below), or have Tortoise ask for the password each time and not save credentials. Or it could be that the student's Documents and Settings, where all this stuff is stored, is wiped on logout anyway, in which case worrying about this may not be much of a hassle.
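
If we do end up needing that logout script, something along these lines would probably do. I'm assuming Subversion's usual per-user settings location under %APPDATA%\Subversion; the paths, and the idea of wiping them wholesale, are assumptions I haven't tested on the lab image:

    using System;
    using System.IO;

    // Minimal sketch of a logout cleanup: wipe Subversion's cached credentials and
    // the 'servers' file (where proxy settings, including the password, end up) so
    // the next student doesn't inherit the previous user's details.
    // Assumption: Tortoise/Subversion keep this under %APPDATA%\Subversion.
    class ClearSvnUserSettings
    {
        static void Main()
        {
            string svnConfigDir = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                "Subversion");

            string authDir = Path.Combine(svnConfigDir, "auth");
            if (Directory.Exists(authDir))
            {
                Directory.Delete(authDir, true);   // true = delete recursively
            }

            string serversFile = Path.Combine(svnConfigDir, "servers");
            if (File.Exists(serversFile))
            {
                File.Delete(serversFile);
            }

            Console.WriteLine("Cleared Subversion user settings under " + svnConfigDir);
        }
    }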

The other issue is that TortoiseSVN appears on every context menu in Windows Explorer. This can be switched off, I believe; I just have to find out how.

It all ends with V

Shit, it's 1:33 AM and I've already had my daily recommended intake of V. Time to sign off this stupidly long blog.

Till next time

-K

Of server racks and Linux distros

Ok, so for my industrial experience project, an opportunity to have a server housed in an ISP’s datacenter for our source control system was made available.

What an excellent proposition: no bandwidth charges, and a fast link for everyone concerned (no more sharing my precious torrent upload bandwidth, ha!). The only catch: you have to install your software on an aging Cobalt 4i rackmount server.

Just to give you a rundown, the Cobalts run an OS based on Red Hat Linux 7. Not too bad; they also have an aging pkg format for installing stuff, but there aren't many packages available for it nowadays.

So, the unit itself is great, but I do want to get SVN running, along with Apache 2 so we can commit over port 80. That meant it was time for a new distro.

Basically, getting a recent Linux distro to run on it is possible, but there are a couple of hoops you have to jump through.

  1. The ROM

    The box we had came with a 2.3.35 ROM installed. The first step was to go to the SourceForge Cobalt-ROM project and update to the 2.10.3-ext3-1M ROM, in order to make the bulk of the available Linux distros an option for us.
    Since the original Cobalt site has been down for a long time, so too are the instructions for how to install the flash ROM. Although it's work-out-able, flashing ROMs when you are going by guesswork greatly increases the chances of bricking the unit. Thankfully the bottom half of this page explains the steps involved, especially making a backup of the existing ROM first, just in case things go awry.

  2. Weapons of Mass Panic

    W00t! Power cycling the unit brought up some nice new LCD graphics (ok, Kon, get a grip, they are just mono-colour pixel graphics). This let me guess that the flash was successful, but after a little waiting all I had on the screen was a clock icon, with Sun Cobalt written at the top and a Knight Rider-esque cursor moving from left to right and back again. Oops! Maybe it's taking a while to get used to its new life on the new ROM? Once ten minutes had passed and it was still the same, I recalled reading that the boot ROM defaults to booting from the network, which means there must be a way to configure it. Here was my saving grace: hold down the S key on bootup and you'll get to the boot options menu, where you can choose to boot from disk instead of the network. The network option, I guess, will come in handy later.

  3. Re-Conception

    So now we have our ROM upgraded, the next step is to prepare a new distro. There are a couple of options for this. One is to use a slave PC to install the Linux distro of your choice on, along with some additional packages to allow the boot-time stuff to kick in. One distro in particular, CentOS, has a BlueQuartz system which is basically the open source equivalent of the Cobalt's current web UI and server config management (along with the server packages themselves). This may be the most ideal option. To get a Linux distro onto the system, you can do one of the following (there may be more options, but these seemed the most appropriate):

    1. Pay AUD $85 and buy Strongbolt, the CD installer for CentOS + BlueQuartz. Strongbolt is a bootable ISO that will install the updated ROM and CentOS over Ethernet.
    2. Install onto a PC, then transfer the hard drive across. Cheap, a bit of a muckaround, but doable. If you want BlueQuartz, then Nuonce have the distro as a free (and slow) download.
    3. Use a null modem serial cable (does anyone have these anymore?).
    4. I've seen Gentoo, Fedora, Debian and other distros with Cobalt installation pages dedicated to them.

I'm going with Nuonce, because of the additional packages they have available and the much smaller amount of stuffing around. Beware that the installer will assimilate any hard drive in the PC you put the CD into; my friend unfortunately found this out the hard way.