Thursday, April 15, 2010

Findbugs, Hudson and Pizza Driven Development (PDD)

Original post at the kaChing Eng Blog

As you may know, kaChing is a test-driven engineering organization. Test-driven development is not an option, it's a must. We move fast and push code to production a few dozen times a day on a five-minute release cycle, so we must have high confidence in our code.
In complex systems there is no end to testing: each test system is another line of defense which eventually gets broken, but the more you have, the lower the chance that bugs reach production. We do not have a QA team and do not want one; the reasoning is that if a human is involved in testing there is a higher chance of missing things, and you simply can't test the whole site dozens of times a day.

Lately we decided to add yet another line of defense: static code analysis (e.g. Findbugs, PMD and CPD). We decided to start with Findbugs, which has a great ANT task and Hudson plugin (we use both). The problem with these tools is that they produce tons of warnings, and most organizations end up ignoring them since they're too noisy to deal with.

David V. recommended "Pizza Driven Development" (aka PDD), which works as follows:
Step one: Order pizza. Most engineers will commit to doing something if it will get them pizza. For the minority who cannot be seduced by pizza, good old-fashioned violence will do.
Step two: Give each member of the team two cards. Go over the list of rules with the team and have them vote on each one, using the cards as follows:
  • No cards: I think the rule is stupid and we should filter it out in the findbugsExclude.xml
  • One card: The rule is important but not critical.
  • Two cards: The rule is super important and we should fix it right away.
The test sheriff then creates three lists from the votes, and the team works on fixing the "must fix" list, which should have no more than a few dozen issues so it can all be fixed in a couple of hours. Once that is done, the Findbugs build is green and we're ready to go.
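For the rules voted off the list, the exclusion goes in findbugsExclude.xml. A minimal filter file might look like this (the bug patterns and class name here are illustrative examples, not kaChing's actual choices):

```xml
<FindBugsFilter>
  <!-- A rule the team voted "no cards" on: filtered out project-wide -->
  <Match>
    <Bug pattern="SE_NO_SERIALVERSIONID"/>
  </Match>
  <!-- A rule suppressed for a single class only -->
  <Match>
    <Class name="com.example.LegacyParser"/>
    <Bug pattern="DM_DEFAULT_ENCODING"/>
  </Match>
</FindBugsFilter>
```

Per-class matches keep the rule active everywhere else, which fits the "fix or explicitly exclude" workflow described below.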

The next step is having Hudson run Findbugs on every commit, so the build is considered broken if a new Findbugs issue is introduced. The engineer who introduced the issue must either filter the class from that rule in the xml file or fix the bug as a first priority. Since engineers get a notification a few minutes after the commit, they are probably still working on that code and it's easy for them to fix it on the spot.
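Wiring this into the build is a standard FindBugs ANT target; a sketch along these lines (the paths and target names are assumptions, not kaChing's actual build file) fails the build whenever a warning survives the exclude filter:

```xml
<taskdef name="findbugs"
         classname="edu.umd.cs.findbugs.anttask.FindBugsTask"
         classpath="${findbugs.home}/lib/findbugs-ant.jar"/>

<target name="findbugs" depends="compile">
  <!-- warningsProperty is set only if warnings remain after filtering -->
  <findbugs home="${findbugs.home}"
            output="xml" outputFile="build/findbugs.xml"
            excludeFilter="findbugsExclude.xml"
            warningsProperty="findbugs.warnings">
    <sourcePath path="src"/>
    <class location="build/classes"/>
  </findbugs>
  <fail if="findbugs.warnings"
        message="New FindBugs issues introduced; fix them or exclude them."/>
</target>
```

The Hudson FindBugs plugin can then pick up build/findbugs.xml and trend the warning counts per build.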

Over the next few weeks we are adding a new rule from the "not critical" list every few days. The goal is to adopt all the rules we think are important without the common "it's too noisy, let's ignore it" approach. Only after we're done with that will we add the next static analysis tool to the build. The good thing about these tools and Hudson is that you can run them in parallel with the unit/integration tests, on another machine, so they won't slow down the overall release cycle.

Testing with Hibernate

Original post at the kaChing Eng Blog

Some of the unit tests I wrote lately involve setting up data in the DB and testing code that uses and mutates it. To be clear, the DB is an in-memory DB, so the setup is super fast, with no IO slowdowns. It's very easy to do the setup using Hibernate, but the problem comes when you persist a large collection of objects and a "java.sql.BatchUpdateException: failed batch" is thrown. The frustrating part is that Hibernate won't tell you what exactly went wrong, even if you set your logger to the "trace" level.

In order to solve it you can add the following to the system properties of that test:
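One way to get this behavior, assuming the standard Hibernate property names (Hibernate also reads its settings from system properties), is to disable JDBC batching, and optionally echo the SQL, in the test's setup:

```java
public class HibernateBatchDebug {
    // Call this from the test's setUp(); assumes standard Hibernate property names.
    public static void enableDebugProperties() {
        // batch_size=0 disables JDBC batching, so statements run one by one
        // and the failing statement surfaces with a precise exception.
        System.setProperty("hibernate.jdbc.batch_size", "0");
        // show_sql=true logs every statement Hibernate executes.
        System.setProperty("hibernate.show_sql", "true");
    }

    public static void main(String[] args) {
        enableDebugProperties();
        System.out.println(System.getProperty("hibernate.jdbc.batch_size"));
    }
}
```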

It will execute the statements one by one and give more precise details of what went wrong.
Note: you do NOT want to use these properties in production or in the continuous integration (CI) environment.

Creative Commons License This work by Eishay Smith is licensed under a Creative Commons Attribution 3.0 Unported License.