Original post at the kaChing Eng Blog
In complex systems there is no end to testing; each test system is another line of defense which eventually gets broken, but the more you have, the lower the chance that bugs reach production. We do not have a QA team and do not want one: if a human is involved in testing there is a higher chance of missing things, and you simply can't test the whole site dozens of times a day.
Lately we decided to add yet another line of defense: static code analysis (e.g. Findbugs, PMD and CPD). We decided to start with Findbugs, which has a great ANT task and Hudson plugin (we use both). The problem with these tools is that they produce tons of warnings, and most organizations end up ignoring them since they're too noisy to deal with.
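For reference, wiring Findbugs into an ANT build looks roughly like this. This is a minimal sketch: the property names, paths and target names are hypothetical, while the `findbugs` task itself ships with the Findbugs distribution:

```xml
<!-- Register the Findbugs ANT task (bundled with the Findbugs distribution). -->
<taskdef name="findbugs"
         classname="edu.umd.cs.findbugs.anttask.FindBugsTask"
         classpath="${findbugs.home}/lib/findbugs-ant.jar"/>

<target name="findbugs" depends="compile">
  <!-- Analyze the compiled classes; emit XML the Hudson plugin can pick up. -->
  <findbugs home="${findbugs.home}"
            output="xml"
            outputFile="build/findbugs.xml"
            excludeFilter="findbugsExclude.xml">
    <class location="build/classes"/>
    <sourcePath path="src"/>
  </findbugs>
</target>
```

The `excludeFilter` attribute is what ties the build to the team's voted rule list described below.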
David V. from testdriven.com recommended "Pizza Driven Development" (aka PDD), which works as follows:
Step two: Give each member of the team two cards. Go over the list of rules with the team and have them vote on each one. Voting is done using the cards, where:
- No cards: I think the rule is stupid and we should filter it out in the findbugsExclude.xml
- One card: The rule is important but not critical.
- Two cards: The rule is super important and we should fix it right away.
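A rule voted out with no cards can be silenced globally in findbugsExclude.xml. The `<FindBugsFilter>`/`<Match>`/`<Bug pattern>` structure is Findbugs' standard exclusion-filter syntax; the specific bug pattern here is only an illustrative example:

```xml
<FindBugsFilter>
  <!-- A rule the team voted out entirely: suppress it everywhere. -->
  <Match>
    <Bug pattern="SE_NO_SERIALVERSIONID"/>
  </Match>
</FindBugsFilter>
```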
The next step is having Hudson run Findbugs after every commit, so the build is considered broken if a new Findbugs issue is introduced. The engineer who introduced the issue must either filter the class from that rule in the exclusion file or fix the bug as a first priority. Since engineers get a notification a few minutes after the commit, they are probably still in the middle of that code and it's easy for them to fix the issue on the spot.
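When an engineer chooses to filter rather than fix, the exclusion is scoped to the offending class instead of the whole rule, so the rule keeps firing for new code. A sketch, with a made-up class name and an example bug pattern:

```xml
<FindBugsFilter>
  <!-- Suppress one rule for one legacy class only; the rule still applies elsewhere. -->
  <Match>
    <Class name="com.kaching.example.LegacyWidget"/>
    <Bug pattern="EI_EXPOSE_REP"/>
  </Match>
</FindBugsFilter>
```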
Over the next few weeks we are adding a new rule from the "not critical" list every few days. The goal is to end up with all the rules we think are important, without the common "it's too noisy, let's ignore it" approach. Only after we're done with that will we add the next static analysis tool to the build. The good thing about these tools and Hudson is that you can run them in parallel with the unit/integration tests, on another machine, so they won't slow down the overall release cycle.