Friday, November 07, 2008

When to inherit, when to compose?

I suppose if you're reading this you've read some of the material out there on when to inherit and when to compose. You're also probably familiar with the "is-a", "has-a" principles...

My brother Peter, who is a .NET programmer, commented on the issue and came up with a useful principle...

Don't do it.

Or more politely... when you think you need to inherit, you probably don't.

That is probably a good best practice approach - I need to post about best practices and why I say this is one. The problem is that inheritance is abused, and the naive rule of "is-a", "has-a" is responsible for too many situations where inheritance was not the answer - composition would have been more than adequate, and without the constraints and complexity that inheritance introduces.
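
To make the point concrete, here is a minimal sketch (class and method names are my own, purely for illustration) of composition doing the job people often reach for inheritance to do. A Stack "is-a" List is tempting, but extending ArrayList exposes insert-anywhere methods that break the stack contract; a Stack that "has-a" List keeps the contract honest:

      // A minimal sketch of composition over inheritance (illustrative names).
      // Raw types used deliberately - we're a JDK 1.4 shop.
      import java.util.ArrayList;
      import java.util.List;

      public class Stack {
          private final List elements = new ArrayList(); // a delegate, not a superclass

          public void push(Object o) {
              elements.add(o);
          }

          public Object pop() {
              return elements.remove(elements.size() - 1);
          }

          public boolean isEmpty() {
              return elements.isEmpty();
          }
      }

If a caller genuinely needs List behaviour you can still expose it explicitly - a conscious decision, rather than an accident of inheritance.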

It's a little like threading. If you can do it in one thread, then do it in one thread - you don't want the concurrency problems.

Wednesday, October 22, 2008

Developers do what they see...

This is probably the first really big project I have been on and I have learnt a lot. One of the key lessons is that the average developer is a sheep. He (or she) will do what they see, and very seldom will they go against that.

In fact, even if it's wrong, they will do what they see.

It has been quite remarkable.

So changing habits is not easy, especially on a complex enterprise project that has been going for 18 months with a very fast development cycle. There are _many_ different ways of doing almost anything in the code, so the poor developer has no idea which approach to follow.

Changing habits takes time, so it has been quite encouraging to see some habits changing - I introduced Commons Collections into the code base, used it in my own code, and showed a few devs what it offers - and I'm now finding it used more and more.

The other good thing about developers doing what they see is that if what they see is good and right, those habits persist. The opposite also applies, of course: if what they see is low quality, they will extend that low quality.


Thursday, October 16, 2008

Hibernate best practices...

I have been on a very large enterprise project using hibernate for the persistence. Our data model is upwards of 200 objects. It is a clustered web application that requires high performance, so we've implemented caching and have all our objects and relationships set to lazy load. The typical database interaction is lots of reads and few writes. The data of one user does not affect the data of another.

Here are some of my best practices which I would have loved to have had in place from the beginning. They would have saved a lot of time and reduced bugs...

Use id based Equality
This is contrary to a lot of the typical recommendations, but I don't see a problem with it. It means you have a working equality method without any further effort. Furthermore, you have a guaranteed equality - the same equality the database uses. There are some issues with this when it comes to one-to-one relationships, but those can be resolved by adjusting the equals method. The other point in its favour is that there are instances where there simply is no business-based unique key to use instead.
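
As a sketch, id based equality looks something like this (Person is one of our entity names, but the body here is my own illustration). Note it only behaves well because, per the next practice, the id is assigned before the object ever goes into a set:

      public class Person {
          private Long id; // the database identifier, assigned up front

          public boolean equals(Object obj) {
              if (this == obj) {
                  return true;
              }
              if (!(obj instanceof Person)) {
                  return false;
              }
              Person other = (Person) obj;
              return id != null && id.equals(other.id);
          }

          public int hashCode() {
              return id == null ? 0 : id.hashCode();
          }
      }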

Don't use auto generated Keys
So set the id in your model classes yourself. This means you need another field as a persistence marker. The fundamental reason for this, coupled with the fact that you're using the id for equals, is that you have immediate equality, both prior to persisting the object and in your tests. With auto generated keys you have to wait until the object is persisted before its equals method works. You therefore cannot use it in a set prior to persistence, and when you run tests you have to manually set the id every time you create the object.
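
A minimal sketch of what I mean (the flag and its name are illustrative; one way to wire the marker up is a custom Hibernate Interceptor whose isTransient() consults it, though the details will vary):

      import java.io.Serializable;

      public class Person implements Serializable {
          private Long id;
          private boolean persisted; // the persistence marker; false until first save

          public Person(Long id) {
              this.id = id; // equals()/hashCode() work immediately, before any save
          }

          public Long getId() {
              return id;
          }

          public boolean isPersisted() {
              return persisted;
          }
      }

How you generate the id (UUID, hi/lo, a sequence fetched up front) is a separate decision; the point is that it exists from construction.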

Use instrumentation
Hibernate has an issue (a big issue IMHO) when you execute the following code (Person and its relationships are mapped to load lazily):

      Person p = (Person) session.load(Person.class, id);
      ContactDetails cd = p.getContactDetails();

If, on the database, that particular object is actually a subclass of ContactDetails called AddressDetails, the real type of the variable cd will be ContactDetails and _not_ AddressDetails as you would have expected. To get the real type you first have to ask hibernate for the object's actual class and then call load again with it. So for cd to be the correct type you need the following code:

      Person p = (Person) session.load(Person.class, id);
      ContactDetails cd = p.getContactDetails();
      // ask Hibernate for the real class, then load again with it
      cd = (ContactDetails) session.load(Hibernate.getClass(cd), id);

An extra line is required.

Instrumentation, however, injects code into the compiled class of ContactDetails and intercepts the call to getContactDetails to load the real type. It's a small ant job that runs within the context of our IDE and sorts it all out.

Use Field access over method access
Unfortunately, I only discovered too late that hibernate can act directly on the fields rather than going via the getter/setter methods. It would have let me set up some nice validation and/or side effects on the setting of a value in a hibernate managed object. I'm sorry I didn't do it earlier. This is a practice I have therefore not been able to test; it does sound like a good idea, but maybe there are issues I do not know about.
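
Here is a small sketch of the idea (names made up; assumes the property is mapped with field access, e.g. access="field"). Hibernate hydrates the field directly, so the setter is free to validate without getting in the way of loading:

      public class Person {
          private String name; // Hibernate reads/writes this field directly

          public String getName() {
              return name;
          }

          public void setName(String name) {
              // validation only applies to application code;
              // Hibernate bypasses it when hydrating from the database
              if (name == null || name.trim().length() == 0) {
                  throw new IllegalArgumentException("name must not be empty");
              }
              this.name = name;
          }
      }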

Think hard about your Cascades
Hibernate is not good at saving a whole object tree in one go. The problem is that as soon as any sql is executed on the database, it is validated against the state of the database, so any constraint violations are raised immediately. If you think hard about your cascades, the ability to save whole trees is significantly improved, though not totally enabled. One obvious practice: if a relationship carries a not-null constraint, the cascade must be set. If it is not, then unless you save the foreign object before the local one you will get a constraint violation. The objects go together (hence the not-null), so the cascade should be on.
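
A sketch of the not-null case (entity names from above, the rest illustrative; assumes the Person-to-ContactDetails foreign key is not-null and the association is mapped with cascade="save-update"):

      import org.hibernate.Session;

      public class CascadeExample {
          public void savePerson(Session session, Long personId, Long cdId) {
              Person person = new Person(personId);
              person.setContactDetails(new ContactDetails(cdId));
              // with the cascade set, one save is enough and Hibernate orders
              // the inserts to satisfy the constraint; without it, you must
              // remember to save the ContactDetails yourself first
              session.save(person);
          }
      }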

The above recommendations are fairly generic to any project that uses hibernate in the real world. There are a few I've learnt on this project that are specific to a high performance, high concurrency web application...

Prefer trawling the object model over running queries
The kind of application we have is one where a user logs in and spends on average 10 - 20 minutes on the web site, so we would typically already have a lot of the objects they require in the cache. Thus, when you need to find some data for the person concerned, it is a lot better to simply trawl the objects than to run a hibernate query (a sketch follows the list below). The reason for this is twofold:
  • Querying in hibernate always causes a flush. Data is therefore written to the database at a time when it shouldn't necessarily be. Even if you know the query does not touch any of the pending writes, the flush will still happen.
  • A database connection is made. On our application we gained significant performance and concurrency improvements by removing queries - the objects were already in the cache in any case. Where we had deadlocks, we simply removed the query from the equation (changed to trawling the model) and the deadlocks were significantly reduced.
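
The sketch promised above (names made up): instead of a query, walk the graph from the object you already have.

      import java.util.Iterator;

      public class OrderFinder {
          // No flush is triggered and no connection is needed if Person and
          // its orders are already in the session / second-level cache.
          public Order findOrder(Person person, Long orderNumber) {
              for (Iterator it = person.getOrders().iterator(); it.hasNext();) {
                  Order order = (Order) it.next();
                  if (orderNumber.equals(order.getNumber())) {
                      return order;
                  }
              }
              return null; // the query alternative: "from Order where number = :n"
          }
      }
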
Make everything lazy
On our application there isn't a single relationship which is not lazy. We can't think of a case where having lazy off would be useful to us. Yes, I understand it means the first read will be slow, but after that the object is in the cache, so it will be fast. If lazy is off, the related object will be loaded every time, even if it only comes from the cache.

In other words, say you have an object A with a reference to object B, and the reference is specified as non lazy. When you load A it will _always_ load B, even if you never go near B. Even if B is in the cache, it will still be loaded from the cache. On our application we have a lot of static data. This static data was initially set to be lazy disabled (prefetching), and we would then preload it all. We found the preload was very slow - even though it was coming from the cache - because it had to load each root object plus all the non lazy relationships stemming from it.
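
To spell out the lazy case as code (A and B as in the paragraph above, otherwise illustrative):

      import org.hibernate.Session;

      public class LazyExample {
          // With the A-to-B association mapped lazy, loading A does not touch B.
          public void demonstrate(Session session, Long id) {
              A a = (A) session.load(A.class, id); // B not loaded yet
              B b = a.getB();                      // still only a proxy
              b.getName();                         // now B is resolved - from cache if present
          }
      }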

Thursday, October 09, 2008

What Every Development Shop cannot do without!

For those of you who have not heard of hudson: where have you been? It is the new kid on the block in the Continuous Integration space, and IMO it stands head and shoulders above the competition.

It has added an incalculable amount of value to our development environment - enough that we depend on it as much as we depend on our IDE. Furthermore, it has single-handedly raised the quality of the code, such that I am disappointed we did not have it in place from the beginning of our project.

Why do I say all of that?

To my mind, the killer feature of hudson is that it supports plugins. This is the feature that sets it apart.

And the killer plugin, the one which has contributed most to code quality, is a fairly simple one that allows the results of various code quality metrics to be summarised and tracked in hudson.

We now have continuous metrics for the following...
  • Checkstyle - a tool which examines source code against a number of rules. It checks, for instance, that formatting is correct, plus cyclomatic complexity and NPath complexity, as well as simple class/method lengths. It flags a warning if a rule goes outside the allowable scope.
  • FindBugs - a class-level code checking tool. It can find things like equals being called against different types.
  • CPD (Copy-Paste Detector) - a cool tool that checks for places where code is the same and thus was probably copied and pasted.
  • PMD - another class-level checker.
Now, I could have run these code checking tools without hudson. Hudson, however, allows me to track changes over time and to know when new violations have been added. And the running of these tools has already avoided many potential bugs.

So if you haven't checked out Hudson and its Plugins, it's never too late. The longer you leave it the worse your code is going to get.

If you want to know more, then let me know.

Tuesday, October 07, 2008

JDK 1.4 is being retired...

RIP ... JDK 1.4

If you haven't heard, JDK 1.4 is officially entering its End of Service Life. Over at dzone, Alex Miller comments on it, and what I find most interesting is that its retirement is not going to make much difference to the software community. They are going to happily continue using it. If they're still using 1.4 now, I doubt that it reaching End of Service Life is going to push them over the edge.

For one thing, software shops which use WebSphere are typically still on 1.4 - the big bear IBM controls the VM version in those shops, and it's going to be a long time before WPS on Java 5 sees the light of day - so though WebSphere 6.1 supports Java 5, WebSphere Process Server 6.0 does not. So spare a thought for people like us who are still stuck on Java 1.4, for whom the news of its retirement will be a non-event.

Thursday, October 02, 2008

Testng is Cool but flawed

A few months ago we took a closer look at our testing strategy. We assessed testng and noticed it had a number of features we thought we'd find useful, e.g. data driven testing, the ability to run only the tests that failed, and configurable testing in xml based on annotations and/or xdoclet tags.

Based on the features and a little prototyping we decided to use testng as our standard test framework. We set up the tests to run on our CI server (hudson) and we were good to go...

And things moved along smoothly; we used a lot of the extra features of testng and also found that having an xml file that indicated what was a test was quite useful.

But then issues started to surface...

1. Testng cannot run each test in a different VM. The makers of testng do not give you the option to run your tests in a totally new VM, so you can run into issues when doing integration-like tests requiring the hibernate session factory and database. In junit I have the option of running each test case in a totally new VM.

2. Testng doesn't really work with 1.4 xdoclet annotations. Unfortunately we are still on 1.4 and thus had to use the xdoclet annotation mechanism for tagging a test. This was problematic almost all the time. I wouldn't be surprised if our issues with testng would have been far fewer had we been able to use class annotations.

3. Testng tries to skip tests when the setup fails, i.e. it doesn't fail fast. This was the major issue. The problem is that when a test's setup fails and the test is skipped, testng struggles to identify how many tests were supposed to run. Thus on our CI server we found that the number of tests kept changing. Permanently. It became very difficult to get predictable results and a good idea of what is going on in the tests.

4. The Eclipse plugin for testng is not as good as junit's. The developers in my shop were unhappy partly because of this; when I announced that we were moving back to junit they were happy. Some of them had continued to subclass TestCase because they felt the assertions provided are a lot richer that way (and they are). With testng, because you are not required to subclass anything when creating a test, you generally use the java keyword "assert".

So now we are moving off testng back onto the old faithful, junit. There are however tests built with the data provider and/or parameter subsystems which testng offers, and these will stay testng tests (an example of the kind of test I mean is below). We have no plans to move these off testng. And if our developers really need the extra functionality provided by testng they can use it, within reason - our testng CI run has not totally gone away.
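
For the record, the kind of data driven test that earns its keep on testng looks something like this (a made-up example, shown with Java 5 annotations for brevity - on 1.4 the same thing is expressed with javadoc tags; run with -ea so the assert keyword is live):

      import org.testng.annotations.DataProvider;
      import org.testng.annotations.Test;

      public class AdditionTest {
          @DataProvider(name = "sums")
          public Object[][] sums() {
              return new Object[][] {
                  { new Integer(1), new Integer(2), new Integer(3) },
                  { new Integer(2), new Integer(2), new Integer(4) },
              };
          }

          @Test(dataProvider = "sums")
          public void addsCorrectly(Integer a, Integer b, Integer expected) {
              assert a.intValue() + b.intValue() == expected.intValue();
          }
      }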

The approach now is: if the test only requires the basic features which junit3 has, use junit. If you need testng functionality, get permission before you use it.

Tuesday, September 30, 2008

The Developments at SpringSource

The blogosphere has been abuzz with the news that SpringSource is going to charge for support and for maintenance releases. SpringSource is becoming a more traditional, more mainstream enterprise. Rod Johnson, one of the progenitors of Spring, has seen Spring become like hibernate: though it is not a standard, it might as well be.

Now SpringSource, the entity which drives Spring development, is looking at ways to extract value from that popularity, but also, I think, to take Spring to the next level.

And the developments are not unusual, nor should they have been totally unexpected. How many other open source products have gone a similar route? Red Hat Linux, JBoss, to a lesser extent hibernate... How else do you monetize an open source application apart from selling support etc.?

From all that I've read, the change to the release system is that "official" maintenance releases will only be made freely available to non-paying customers for three months after a major release. So 2.6.2 will only be made freely available if it is released within the three-month period after version 2.6.

The bug fixes that go into any release will still be put into the source code, and the licensing of the source code has not changed. So why can't some developer check out the source, build it, and make that freely available to all the other developers? I guess the only drawback is that it's not an "official" release. It's like a new car: it's only a new car if you drive it off the showroom floor.

Personally, I don't have a major issue with where Spring is going. I think it's an acceptable way to build a viable business model out of their operation. Though it may alienate purists, it might also bring in people on the fringes.

Thursday, September 25, 2008

Programming is Hard

I can remember my brother once remarking that "Programming is Hard", and I'm not sure he realised the importance and relevance of what he was saying.

On our large project - about 20 Java devs - I monitor the checkins and do some basic QA on them. The developers in this environment, who are by all accounts fairly good, still make schoolboy errors.

Fortunately we have hudson set up to do all the checks possible - checkstyle, pmd, findbugs and cpd - so I can find most of the basic mistakes, copy-and-pasted code for instance. But often I find more complex issues - just bad design, for example - and I've often thought about why that is.

I also maintain that computer programming is engineering. There is debate about this, but I think there are enough similarities with "classic" engineering for it to be labelled engineering (and I think it helps quality if you call it that).

Why then does computer programming not have the same aura that classic engineering has, or the same obsession with quality?

Two reasons...

The first one is safety. When building a computer program you're not building a bridge, so you don't have to worry about people dying if it fails. A "bug" in a bridge could prove very costly, not only in terms of money but, more importantly, in terms of people's lives. There have been many instances where bridges have failed and their failure was put down to human error - something that would have been called a "bug" in a computer program.

The second one is that the programming world is abstract, and as a result far less limited. You're not constrained by anything physical when building a computer program. This means the barrier to entry is a lot lower - it's much easier to get into - and it makes programming seem "easier" than other engineering. Easier in quotes, because I think the quality problem we have when building computing systems exists precisely because people think it is easier.

Programming with quality is IMHO every bit as hard as building a bridge successfully or designing and building an alarm system. If we want to produce quality we need to put as much effort into it as a qualified and certified engineer would put into building a bridge. We cannot think that our often mediocre, slapdash efforts measure up.

Problem is, if you're reading this, you probably do see yourself as on a par with an Engineer.

Thursday, August 14, 2008

Spring and JEE

A helpful article I found on the value add of Spring...

Why do J2EE applications need Spring? The core of Spring explained...

It says most of the things I would say and doesn't "get ahead of itself".

I would strongly agree with points 1 and 2. Point 3, though it is a feature, is not used as much as people think. The reason I say that is: how often do you actually have two implementations of the same interface and switch between them? In our application we have loads of DAO objects and loads of service interfaces, and yet, without fail, we have a one-to-one mapping between each interface and its implementation. The interfaces, IMO, are useless fuzz. What I would prefer is for them all to be accessed directly (no interface); then, if an interface is required, I can quickly and easily - with the tools we have today - extract one, and it's abstracted. I only have the fuzz where it is absolutely necessary.
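
For what it's worth, this is all I would want in the common case (names made up): the service depends on the concrete DAO directly, and if a second implementation ever appears, today's IDEs can extract an interface in seconds.

      public class PersonService {
          private PersonDao personDao; // the concrete class - no one-to-one interface fuzz

          public void setPersonDao(PersonDao personDao) { // setter injection, Spring-style
              this.personDao = personDao;
          }

          public Person findPerson(Long id) {
              return personDao.load(id);
          }
      }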

Though I must admit that we do use this feature and it does add value. I just don't think it is the killer feature of the Spring Framework.

One of these days I think I should write a "why I like Spring Article..."

Monday, August 11, 2008

JEE take up could be better...

I work on a fairly large Java Enterprise Edition application, deployed onto Websphere. The company is a large corporate and the application is a publicly available web site, involving a .net front end, java back end and a database.

The application runs on websphere and is deployed by a team of people who manage the server environment. There are lots of similar applications which they manage.

We have our own "dev" application servers - these are little more than developer-spec boxes running linux. They are used for basic testing and to validate the application before it gets promoted. And, because we manage these environments, we have carte blanche access.

The amazing thing is that the application always works just fine in this environment, but when promoted it suddenly no longer works. And it is no coincidence that it fails precisely in the environments we have no access to.

Now, when the application does not work, we get blamed. The onus always falls on us, the developers, to fix the problem, even though we have little to no access to the environments the application is deployed on. The fact that the application works in the environments we do manage suggests the problem lies in the fact that we cannot configure the other environments ourselves.

What does this cause?

It causes us developers to build our applications with as little dependency as possible on the environment they're running in. Thus we use as little of the JEE spec as possible, because those are the bits that need to be configured on the running application server.

I am convinced that if we had better access to the test servers, we would be more inclined to use the rich functionality provided by the JEE specification.

Wednesday, June 04, 2008

Setting Coding Practices

Let's face it, anyone that has worked in development for any length of time will know that there is always change, and the "quality" of the code is a key area where this change is seen.

When you start on a project, you're not too familiar with the technology, you're not familiar with the problem space and you have a lot to learn. So you code in a certain way, and you use particular techniques to achieve your goals in code. Over time you learn better techniques, but the problem is you don't retrofit your old code to use the new techniques.

Recently I had to work on code that was written right at the beginning of my current project. Wow! Was it legacy.

This change and development is a good thing in the sense that it is a sign things are improving; however, it presents a challenge, because developers are very good copycats. They will typically do things the way they see them done (even if that way is wrong) and, as a rule, won't go against it. In a sense that is also a good thing, because we don't want a group of cowboys all doing their own thing.

The challenge is how to accept this change and improvement - allow for it, even encourage it - and at the same time communicate the best practice approach to doing things. Furthermore, at some point we need to limit improvements (there are always trade-offs), because any change that is made needs to be tested. In our code base we still have all the various ways that things have been done from the beginning until now. When a project has been going for two years, this is inevitable.

Tuesday, June 03, 2008

The Continuous Integration Game!

If you use hudson, then you really should take a look at the CI game plugin. It's actually hilarious!

It stems from an idea originally (I think) by Daren Cummins described here.

The way the game works in hudson is that you get points according to your build activity: plus points if you do something positive, like checking in on a successful build, and minus points if you do something negative, like a checkin that breaks the build.

Other examples of ways to score (or not): changing test failures/successes, or adding or removing code-checking violations (checkstyle, pmd etc).

It's been a great source of humour and a little bit of rivalry amongst our devs. Obviously you can't read too much into it, because it's easy to game, but it has added some value.

The plugin home page is here.

Firefox 3 Rocks!

If you haven't downloaded and aren't using firefox 3, you are missing out BIG time.

It is unbelievably fast - the whole browsing experience is super fast, and the javascript engine is about 5 - 10 X faster. Even faster than Opera's.

The memory leak issues have also been fixed. With 17 tabs open, Task Manager reports 237 megs used. I don't think I could have functioned with 17 tabs in firefox 2.

Friday, April 25, 2008

An Improved Skill level indicator

With reference to my previous post on skill level indicators, I have come up with a new version... which I think is better...

It works in terms of "problems", since when employers ask about skill what they're really asking is: how useful are you to me with this technology?


  1. Problems, what problems? (iow, I haven't had any problems because I haven't used it enough)
  2. Somebody else solves my problems
  3. I solve my own problems but sometimes have to ask someone else to help
  4. I solve other people's problems, though sometimes someone brings a new problem I have to work out
  5. I solve other people's problems and there is never a problem that I don't know how to solve immediately

Saturday, April 19, 2008

A working definition of the skills scale

Have you ever done a "skills matrix"?

It entails writing down all your "skills" - APIs, technologies, languages, whatever might be relevant - and putting a number from 1 to 5 next to each, and maybe the length in years of your experience on the topic.

It is to give a prospective employer an idea of how valuable you could be.

The problem, however, has always been: what does "1 to 5" mean? What is the reference point? You might regard yourself as an expert in Swing because you've built some funky table structure where you can edit the cells and render them according to their contents, so you might put down 4, maybe 5 if you're brave. But what if one of the interviewers is a contributor to the Swing API? In comparison to them you're only a 2...

I have thus always been skeptical of someone who puts down 4's and 5's, and I'm going to make sure they're backing that up with something real. In fact, I'd love to interview someone who puts 4 or 5 for hibernate; it could be fun to show up their lack of skills.

Now, what if we come up with a more absolute definition of what the numbers 1 to 5 actually mean, so they're more than mere numbers? Here is my stab at it...

1 – only read about and played a little with; indirect knowledge or experience (e.g. was on a project where it was used, but I was not involved with it).

2 – shallow understanding; have worked with it by duplicating other work.

3 – becoming familiar; able to work with it and beginning to understand what is going on underneath. Extensive experience programming with it.

4 – understand the inner workings of the technology, and thus can solve problems where they appear. Extensive, deep experience working with it.

5 – extensive understanding of the whole technology; totally knowledgeable and experienced with everything about it. Possibly spotted and fixed bugs in the API, if applicable.

Is this helpful, or relevant? If you applied this grading system to some of the technologies you know, would that change your score? I don't think any ranking method would be perfect; maybe people can comment on how it can be improved. Maybe there's a totally different solution.


Thursday, April 17, 2008

Open Source and the cost you don't see

The story on The Register about Sun moving towards charging customers for certain enterprise features included the following quote from research house "The Standish Group":

"Open Source software is raising havoc throughout the software market. It is the ultimate in disruptive technology, and while it is only 6 per cent of estimated trillion dollars IT budgeted annually, it represents a real loss of $60bn in annual revenues to software companies."®
The full report is available here.

This supports something I've been saying for a long time: companies who use open source should not, in fact, see it as a free ride. They are getting a significant amount of value out of the open source software they use - $60bn worth. Personally, I think that understates the real value - think how many instances of apache are running out there (and compare the price of an IIS license and its features).

I'm not demanding, however, that companies start paying for the open source software they use - that would be like the good guy (open source) turning into the bad guy, just not visibly so. The open source community is not in it for the money.

Companies should give back to the open source community. They could donate cash if they so wish, but a better idea is to let their developers work on open source projects on company time. Those very companies are using the open source software and will thus benefit from the work their developers do on it, because the developers will work on the features they require and the bugs they have found.

The company at large will not see money going out to open source projects, even though value is being added to them. There will thus be no nominal effect on the "bottom line". They will have happier developers (what developer does not want to work on an open source project?), they will gain a lot more street cred, and the open source software they use will be improved.

It's a win-win situation.

Tuesday, April 08, 2008

Programming: where Humility is a genuine virtue

You often hear of sports players being called arrogant, and typically it is not a compliment. The phrase is "He's an arrogant, good sports player". You never hear, "He is not a good sports player; pity he is not arrogant."

The point I'm getting to is that a sports player's arrogance, or lack thereof, does not make a material difference to his performance on the pitch. Compare two top football (soccer) players, Thierry Henry and Cristiano Ronaldo. Both artists on a football pitch; the one, Ronaldo, supremely arrogant (IMO), and the other the picture of sophistication, humility and just plain decency (he's got that thing that only a Frenchman can have).

But the arrogance, or lack of it, makes no material difference to their performance. No one ever said of Henry that he needed to be more arrogant to improve his performance on the soccer field.

However, it is not like that in software development.

In software development, humility is a virtue that can make a good programmer into a great one.

The other day I had to make some changes to improve performance on the application I work on, and once those changes were made I deployed them to the clustered WAS server to test them - quite an involved process. I figured it was okay, there wouldn't be errors in them, so I didn't bother to test them on my own development machine.

Well, it turns out the code did in fact have bugs, which I only discovered once it was on the clustered server - and it took many tries and many days of pain before I finally got it to run.

So the time saved at the beginning by not testing locally turned into much more time wasted in the long process of getting the buggy code onto the clustered environment to test it.

I was arrogant to assume that my code changes would be bug free, and I lost more than a day because of it. If I had been humble I would have checked my work before deploying to the server and saved a lot of time.

So unlike in sports, where arrogance makes no difference to performance, in software development arrogance is a liability. In a sense it's counter-intuitive. Even the best make mistakes (they've said as much themselves).


Is Software Art?

I was reading a post the other day - I can't quite remember where - on the issue "is software Art?".

Probably the typical response to that is no: computer software is utilitarian and thus cannot be construed as Art. But that approach belies a limited view of Art and what constitutes it. To say something is functional and thus not art is to limit the world of Art significantly, but also to limit the potential and possibilities of the functional world.

Let me explain... take a knife. You can create a knife that "cuts", i.e. it fulfills its function. However, you can also get a knife which _really_ performs its function: it is well balanced, has a sharp, true blade, has a firm, easy-to-use handle and is aesthetically pleasing. Now, this better knife was not produced only by an engineer using his "engineering" faculties but possibly by an artist type person as well (it could have been the same person). Thus artistic elements were introduced. Furthermore, though the simple knife that "cuts" fulfills its function just as the more "arty" knife does, anyone who uses both knives would admit the latter is better - and not necessarily for a reason that can be defined in merely utilitarian terms; they both "cut".

And to my mind, a similar dynamic occurs in Software.

Have you ever presented your end product to the client and been frustrated that they don't truly appreciate the elegance of the application? Why not? They are simply looking at the software from a functional point of view, i.e. the non-art point of view. However, you know that the software is a lot more than just utilitarian. It has an elegance that is hard to explain to people who don't "get it".

The question is, is that elegance non-functional? If a picture is beautiful, its beauty has no functional value; what is interesting about computer software is that elegance follows function. The elegance may not be leveraged now, but the good design decisions taken earlier can easily lend themselves to better functionality later. When I look at a good program, I respond to it the same way I'd respond to art - it activates the same receptors. Furthermore, there is truth in the idea that the way to write good software is to study good software, similar to the way to paint well or write good poetry.

Very often I have struggled to sell the benefits of a particular approach to non-techies - the benefits of refactoring to your manager, for instance. When your manager looks at your application they just see a bunch of brush strokes; when you look, you see a Rembrandt.

Tuesday, February 12, 2008

Dynamic Languages, Wow!

The other day I went to a presentation on Dynamic languages. The case studies were python, ruby and groovy.

Well, it was nothing short of wow. The potential of dynamic languages is _huge_. I was blown away by what you can do when you take away the compile step: an application that can _change_ itself, at will; an application that can write itself.

An already built application that you don't like bits of, you can change.

On the flip side: not having a clue whether your program runs until you run it - all the pre-run step can really do is check that your basic syntax is correct! There is, of course, no compile-time checking.

It was also interesting to realise how much we actually depend on the compiler when using static languages.

I think it would be like being a child again were I to start using dynamic languages - there is just so much you can do. You can, of course, shoot yourself in the foot much more effectively as well. It's a context where unit tests and test-driven development are simply non-negotiable.

Thursday, January 03, 2008

Certification, Is it any good?

Recently a discussion has sprung up on the local java user group here in Cape Town (CTJUG) as to the value of certification.

A number of the members have expressed disappointment that, though they have certification, they still struggle to find work, and so they ask on the forum for help in this regard. They've noticed that most software companies look for a tertiary qualification rather than certifications, and because of this they get despondent - I can fully understand why.

The issues around this are numerous. If you search the net you'll find much discussion on this very subject.

Certification does have value, but more or less of it depending on the context.

Personnel/human resource people tend to value it highly, as it could be all they have to go on. In other words, because they don't understand the context, they put more value on certifications - which probably means that in larger companies, where human resources are involved in hiring, you'll find your certification more valuable.

However, in my experience, I don't find them valuable, unless it's all you have.

The fundamental issue here is that if a company hires me only because I have a certification, then they are hiring me as a mercenary, just to do a particular job. To be honest, I'd rather not work in that environment, because they have not understood the true value that I can add. I'm only speaking for myself here, but I think you'll soon see why I say this. The value that I add is problem solving, research into new technologies, finding new ways of doing things, putting in new applications etc... I'm not a person who sits on his hands and waits for someone to tell him what to do. A person who punts his certification, IMO, is.

A person who punts his certification also does not understand that the real value in developers comes from adding value in technologies they do _not_ know. Because the industry is moving all the time (I'd rather say moving than changing), developers need to move with it. If you don't have developers who can move with it, you're going to be left behind - and certifications tend to be dated. "I'm certified for Java 1.4", you might boast; well, what happens when Java 1.5 comes along? How does the company know you'll be able to handle what 1.5 throws at you? Generics, for instance, which are not elementary.

And furthermore, I did a project for a large retail company where I had to "finish off" a program written by another developer who was certified (!) in Java for the Web. The application he wrote was _crap_, of the smelliest, most disgusting kind. I'm afraid I've been skeptical ever since. Besides, a company is never going to take you on the basis of your certification alone.

In closing, however, I would like to point out one perspective from which certification is important, and that makes it infinitely valuable: if you do not have a lot of experience, say less than two years, and you don't have a formal tertiary qualification like a degree or diploma, then a certification might be the difference between getting the interview and not getting it - and for that reason it could be _very_ important.


