Wednesday, March 04, 2009

Thrift vs Protocol Buffers in Python

I've read Justin's post about thrift and protocol buffers and verified the results. I also found it hard to understand why protobuf is considerably slower than thrift.
In the example, Justin did not add the line

option optimize_for = SPEED;
but adding it appears to have no effect on performance. A bit strange, since the option definitely appears in the protobuf Python docs.
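For reference, the option goes at the top level of the .proto file, next to the message definitions. A minimal sketch (the message and field names here are made up for illustration, not the schema used in the benchmark):

// benchmark.proto -- illustrative schema only; the message layout
// is a stand-in, not Justin's actual one.
option optimize_for = SPEED;

message Record {
  required string name  = 1;
  optional int32  value = 2;
}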
Anyway, as stated in the Java protobuf/thrift post, it seems that at least in Java protobuf's performance is better than thrift's, and that there is a great performance improvement with the "optimize_for" option.

The test without speed optimization (see the sketch after the second result set for what each label measures):
5000 total records (0.577s)

get_thrift (0.031s)
get_pb (0.364s)

ser_thrift (0.277s) 555313 bytes
ser_pb (1.764s) 415308 bytes
ser_json (0.023s) 718640 bytes
ser_cjson (0.028s) 718640 bytes
ser_yaml (6.903s) 623640 bytes

ser_thrift_compressed (0.329s) 287575 bytes
ser_pb_compressed (1.758s) 284423 bytes
ser_json_compressed (0.067s) 292871 bytes
ser_cjson_compressed (0.075s) 292871 bytes
ser_yaml_compressed (6.949s) 291236 bytes

serde_thrift (0.725s)
serde_pb (3.156s)
serde_json (0.055s)
serde_cjson (0.045s)
serde_yaml (20.339s)
And with speed optimization:
5000 total records (0.577s)

get_thrift (0.031s)
get_pb (0.364s)

ser_thrift (0.275s) 555133 bytes
ser_pb (1.752s) 415166 bytes
ser_json (0.023s) 718462 bytes
ser_cjson (0.028s) 718462 bytes
ser_yaml (6.925s) 623462 bytes

ser_thrift_compressed (0.330s) 287673 bytes
ser_pb_compressed (1.767s) 284419 bytes
ser_json_compressed (0.067s) 293012 bytes
ser_cjson_compressed (0.078s) 293012 bytes
ser_yaml_compressed (7.038s) 290980 bytes

serde_thrift (0.723s)
serde_pb (3.125s)
serde_json (0.056s)
serde_cjson (0.046s)
serde_yaml (20.318s)
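For anyone trying to reproduce the numbers, here is a minimal sketch of what each label measures, using only the stdlib json and zlib modules. The record layout, counts, and helper names are my own stand-ins, not Justin's actual harness; the thrift and protobuf rows follow the same pattern via thrift's TBinaryProtocol and protobuf's SerializeToString()/ParseFromString().

import json
import time
import zlib

# Illustrative stand-in data; the real record layout from the
# benchmark is not shown in this post.
records = [{"name": "record-%d" % i, "value": i} for i in range(5000)]

def timed(label, fn):
    start = time.time()
    result = fn()
    print("%s (%.3fs)" % (label, time.time() - start))
    return result

# ser_*: serialize every record, reporting the total serialized size.
blobs = timed("ser_json", lambda: [json.dumps(r) for r in records])
print("  %d bytes" % sum(len(b) for b in blobs))

# ser_*_compressed: the same, with each serialized record zlib-compressed.
timed("ser_json_compressed",
      lambda: [zlib.compress(json.dumps(r).encode("utf-8")) for r in records])

# serde_*: a full serialize + deserialize round trip per record.
timed("serde_json", lambda: [json.loads(json.dumps(r)) for r in records])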
As noted before, there is no noticeable difference. It would be interesting to run the same test in Java.
Anyway, the conclusion is that both the language and probably the data structure matter when deciding which serialization method to pick, and results in one language do not necessarily carry over to another.

2 comments:

cowtowncoder March 10, 2009 at 6:05 PM  

For what it's worth, in my tests on Java, PB with the "optimize" attribute was 3x faster than without.
My biggest gripe is that (a) why on earth is it not enabled by default (why not use it? the explanations in the docs do not make much sense), and (b) the data model/definition is completely the wrong place to define it
(it should ideally be a runtime option; but if not, then outside of the data def).

Eishay Smith March 10, 2009 at 6:44 PM  

On my benchmarks the difference was even bigger.
The speed optimization affects the generated code: it is much larger with the optimization on. Since it's all generated code, I don't see why anyone should care, and like you wrote, it seems like it should always be optimized for speed.
Since the option defines the way the code gets generated, it cannot be a runtime option.
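To illustrate the point: whichever way the module is generated, the calling code stays the same, so the option is invisible to users of the API. A minimal sketch (benchmark_pb2 and Record are the hypothetical names from the schema sketch above):

# The calling code is identical whether the schema was compiled with
# optimize_for = SPEED or not; the option only changes the generated
# module's internals. "benchmark_pb2" / "Record" are hypothetical
# names matching the schema sketch above.
import benchmark_pb2

msg = benchmark_pb2.Record()
msg.name = "r1"
msg.value = 1
wire = msg.SerializeToString()    # serialize to the protobuf wire format

parsed = benchmark_pb2.Record()
parsed.ParseFromString(wire)      # parse the bytes back into a message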
