Saturday, January 09, 2010

Complement TDD with MDA

Original post at the kaChing Eng Blog.

Test Driven Development (aka TDD) is on the rise. Good developers understand that code with no proper testing is dead code. You can't trust it to do what you want, and it's hard to change.
I'm a strong believer in Dijkstra's observation that "Program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence."

Dijkstra's statement doesn't contradict TDD. A test exercises a limited state machine, one we hope covers the bloody battlefield of production, where the code confronts live data from users. When we find that users did something unexpected which broke our software, we add a test emulating the users' behavior and fix the problem.

Introducing Monitoring Driven Architecture (aka MDA)!
MDA is a second line of defense behind TDD. MDA means baking monitoring into your architecture: once the software is written in a monitorable (it is a word) way, you get faster detection of problems and can automatically roll back faulty code. Without it, it is not uncommon for a small number of users to suffer from a problem that manifests itself as an NPE thrown into one of the logs once in a blue moon, and for the operations team to find out about it only after a long while.

This is why I'm so excited about John's new Flexible Log Monitoring with Scribe, Esper, and Nagios deployment. It means that when we do find a problem we'll of course fix it, but in addition make sure we express it in the logs and have our monitoring tools pick it up and send alerts about its existence, without counting on anyone to manually look at the logs.
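As a sketch of the idea (the log path and the exception pattern are illustrative, and our real pipeline goes through Scribe and Esper rather than raw grep), a Nagios-style check can be as simple as counting exceptions in the application log and turning any match into an alert:

```shell
# Nagios plugin convention: return 0 for OK, 2 for CRITICAL.
# Log path and pattern are placeholders for this sketch.
check_npe() {
    log=$1
    # grep -c prints the number of matching lines (0 when none match)
    count=$(grep -c 'NullPointerException' "$log" 2>/dev/null)
    count=${count:-0}
    if [ "$count" -gt 0 ]; then
        echo "CRITICAL - $count NPE(s) found in $log"
        return 2
    fi
    echo "OK - no NPEs in $log"
    return 0
}
```

Nagios would run a check like this periodically and page the operations team on a CRITICAL, instead of anyone having to tail the logs by hand.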

Wednesday, January 06, 2010

Outbrain's Password Recovery Mail

I forgot my Outbrain password and got this nice recovery email in return:
From: Outbrain CEO <****>
Hi eishay,

Click on the following link to reset your password:****
You will be prompted to choose a new password for your account.

Feel free to drop me a note with any questions you have.

Yaron Galai,
outbrain CEO
It's the first time I've noticed a "system" email coming from a human you can actually reply to, no less than the CEO of the company. It's an interesting concept, especially since I'm not a paying customer. Yaron explained their take on it in this tweet:
@eishay Cool!... we have a rule here @Outbrain - all system emails must go out from an email of a human being. I hate all the info@ BS...

I found the approach very appealing, but can it scale? What is the max # of users it can support?
I assume it's about the type of users as well.

Reflection on 2009

This post is only to myself. Nothing interesting, just a few personal notes.

The last year was very interesting!
Working at both LinkedIn and kaChing was/is a great experience. These are two fantastic companies with very bright futures, each leading in its own field and full of amazingly talented engineers.

The Silicon Valley CodeCamp '09 was a lot of fun. Had a full room, got high speaker evaluations, and even some candidates for kaChing (yes, we're still hiring).

The other talk was at QCon, which was exciting. There were so many people that we had to move to a larger room, which was packed as well :-)
Got some nice feedback via Twitter.

It was great to do three sessions with the Reversim podcast (in Hebrew) about Scala, scalability and startups.

The open source serialization comparison project has grown significantly: it now has a dozen committers who contributed at some point in time and are still helping to keep the project up to date.
Overall I had lots of fun, and I feel like 2010 is going to be even better!

Saturday, January 02, 2010

Subversion Backup

Posted on the kaChing Eng Blog.

Yes, we're using Subversion. I know that distributed version control systems (e.g. Git) are cool and we might get there sometime, but for miscellaneous reasons we're still using SVN. For the record, some of us are using git-svn, and since we work and release from trunk (part of the lean startup methodology), branching and merging are less of an issue.
I did some work to migrate our repository and spent some time setting up our SVN repo. Here are some bits and pieces I collected from scattered sites or made up myself to facilitate SVN backup. Hope it helps anyone starting from scratch.

For the backup I'm using the great svnbackup script. Here are parts of our script (launched by crontab):

now=$(date +%F)
# run the svnbackup script (invocation per our setup)
svnbackup.sh --out-dir $OUT_DIR --file-name $FILE_NAME -v $REPO_LOCATION
RETVAL=$?
if [ $RETVAL -ne 0 ]; then
    mail -s "ERROR: SVN backup on $now" $KACHING_OPS
    exit 1
fi
Then the script syncs the backup directory up to S3 and verifies that the content of the last_saved file matches the last revision from SVN, which it gets using
last_revision=$(svn log -q --limit 1 file://$REPO_LOCATION | head -2 | tail -1 | cut -c 2-6)
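The verification itself can then be a simple comparison; here is a hedged sketch (the last_saved file name comes from svnbackup, the rest is illustrative):

```shell
# Compare the revision number recorded by svnbackup in last_saved
# with the newest revision reported by the live repository.
verify_backup() {
    saved=$(cat "$1")   # path to the last_saved file
    live=$2             # newest revision number from the live repo
    if [ "$saved" != "$live" ]; then
        echo "MISMATCH - backup at r$saved, repository at r$live"
        return 1
    fi
    echo "OK - backup is current at r$saved"
    return 0
}
```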
Backup is not enough; we must constantly test that when the time comes we'll be able to use it. Therefore we added a script, triggered by Nagios, that runs on another machine and tries to do a full repo rebuild from scratch.
The first thing the script does is brute-force clean up the repo:
rm -rf $SVN_REPO
svnadmin create $SVN_REPO
Then it does an S3 sync to get all the backup files and loads them into the SVN repo in the right order:
for file in $(ls $SVN_BACKUP_FILES_DIR/*.bzip2 | sort -t '-' -k 4 -n); do
  bzip2 -dc $file | svnadmin load $SVN_REPO
done
The next step is fetching a few revisions and checking that their attributes (e.g. comments) match in both the live and the restored test repos.
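A minimal sketch of such a spot check (repository paths and the sampled revision are placeholders; it simply diffs the `svn log` output of one revision between the two repositories):

```shell
# Compare the log entry (author, date, comment) of a single revision
# between the live repo and the freshly restored one.
check_revision() {
    rev=$1
    live=$(svn log -r "$rev" "file://$SVN_LIVE_REPO")
    restored=$(svn log -r "$rev" "file://$SVN_REPO")
    if [ "$live" != "$restored" ]; then
        echo "CRITICAL - revision $rev differs between live and restored repos"
        return 2
    fi
    echo "OK - revision $rev matches"
    return 0
}
```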

Just because I'm paranoid, we also have svnsync replicating to an SVN slave server in our second data center, where every commit is backed up on the fly and some of our systems (e.g. WebSVN) read from it.
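For reference, setting up such a slave looks roughly like this (a sketch with placeholder paths and URL; the empty pre-revprop-change hook is needed because svnsync records its bookkeeping in revision properties on the mirror):

```shell
# Create a mirror repo and let svnsync mirror the master into it.
setup_mirror() {
    mirror=$1       # local path for the slave repository
    master_url=$2   # URL of the live repository
    svnadmin create "$mirror"
    # svnsync must be allowed to set revision properties on the mirror,
    # so install a hook that accepts every revprop change
    printf '#!/bin/sh\nexit 0\n' > "$mirror/hooks/pre-revprop-change"
    chmod +x "$mirror/hooks/pre-revprop-change"
    svnsync init "file://$mirror" "$master_url"
    svnsync sync "file://$mirror"
}
```

After the initial sync, a post-commit hook on the master (or a cron job) runs `svnsync sync` again to replicate each commit on the fly.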

Creative Commons License This work by Eishay Smith is licensed under a Creative Commons Attribution 3.0 Unported License.