Edward Capriolo

Monday Feb 06, 2012

Cassandra compression is like more servers for free!

A naive but convenient way to look at Cassandra is to think of it as a persistent cache. If your dataset is small and fits into memory you have great performance. When your data set grows it may no longer fit in memory. Depending on your usage pattern and active set, a given read might hit memory and be fast, but if the request causes a disk access it is going to be slower. Cassandra's number one weapon (to me) is that the same Linux disk cache that makes reading frequently read files like /etc/passwd fast also makes reading your Cassandra data fast.
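
If you are curious how much of your RAM the kernel is currently using as that disk cache, plain old free will show you. Nothing Cassandra specific here, just an illustration:

# the "cached" column is the Linux page cache that Cassandra reads benefit from
free -m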

Since 0.6, Cassandra's data files have been fairly terse. That is to say, for a row with 9 columns the 'row' information is not repeated on disk for each column. Cassandra 1.0 added compression, which works much like WinZip, in that data with many repeats shrinks down smaller. This has a major impact, as I will explain.
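
You can get a rough feel for this with any general purpose compressor. Purely illustrative (gzip instead of Snappy, and made-up data), but it makes the point that repetitive data shrinks dramatically:

# 10MB of a repeated string gzips down to a tiny fraction of its size
yes "rowkey=12345 col=status value=active" | head -c 10485760 | gzip -c | wc -c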

Imagine you have a machine with 44GB of RAM and 71GB of data. On average a good number of reads will 'hit' a cache. It may be a Cassandra cache like the key cache or row cache, or it might be the Linux disk cache. However some reads will miss cache and have to come from disk.
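
If you are wondering how to see how much data a node is actually holding, nodetool info reports it on the Load line, or you can just measure the data directory on disk. Swap in your own host and JMX port; I am assuming the default data directory location here:

# per-node data size as Cassandra reports it (the "Load" line)
nodetool -h <host> -p <jmx_port> info
# or measure it on disk directly (default data directory assumed)
du -sh /var/lib/cassandra/data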

Now, imagine you enable compression and that 71GB of data shrinks down to, say, 31GB. Well heck! Now all that data fits in memory again. And what does that mean? Well, you can stop imagining and guessing, because I will show you!

After upgrading my cluster to Cassandra 1.0.7 I enabled compression on all of our column families. This is super easy btw.

update column family my_stuff with compression_options={sstable_compression:SnappyCompressor, chunk_length_kb:64};
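
That runs inside cassandra-cli. If you want to double-check that the options took effect, dumping the schema is a quick sanity check. Here cdbla120 and 9160 (the default Thrift port) are just examples for my setup:

# connect with the CLI, then dump the schema at the prompt
cassandra-cli -h cdbla120 -p 9160
show keyspaces;   # the my_stuff definition should now list SnappyCompressor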

After altering the column family, new sstables will use compression as they are flushed. Current sstables will not be rebuilt automatically. Cassandra added a new command, 'rebuildsstables', which is like scrub but only rewrites the files in the new format; it does not do extensive checking.

/usr/local/apache-cassandra-1.0.7/bin/nodetool -h cdbla120 -p 8585 rebuildsstables <keyspace> <column_family>
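
The rebuild runs through the compaction manager, so (at least in my experience) you can watch its progress the same way you would watch compactions:

# the rebuild shows up alongside normal compactions
/usr/local/apache-cassandra-1.0.7/bin/nodetool -h cdbla120 -p 8585 compactionstats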

I enabled compression and you can see our data shrink.


I kicked off the rebuild at 3:20. Below is a disk IO graph. The rebuildsstables command triggered quite a lot of disk activity, however when it was done you can see what happened....

Compression limiting disk access

BAM! The IO just kept dropping and dropping.

As a comparison I am showing the other nodes in the cluster that I have not run rebuildsstables on. You can see that, steadily, as Cassandra compacts tables there is less disk IO, but since I forced the issue on cdbla120 with rebuildsstables it is already doing almost no IO.
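
If you do not have graphing set up, iostat from the sysstat package tells the same story from a shell:

# per-device stats every 5 seconds; %util and await fall off once the working
# set fits in the page cache
iostat -x 5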



What this means is that my disks are not spinning as much as they were pre-compression. If my disks are not spinning, that means I must be serving requests out of RAM! If I am serving out of RAM, my clients are seeing better performance, and it means I now have room for more data and more overhead for the future. The only cost is some extra CPU to decompress data on the fly.

To wrap up... Enabling compression essentially gave me lots of performance and disk space for free. This is a big big deal. It is like a hardware upgrade that did not require buying, installing, or configuring new hardware!

WIN! WIN! WIN! WIN!

Note: Not everything compresses well. If you are storing images or something that is already compressed, Cassandra compression will not help you much. Also, data with repeats compresses better than more random data (you might know this). Go to DataStax for a nice write-up on compression.
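
The same quick gzip test from earlier shows the flip side: random (or already compressed) bytes barely shrink at all. Again, purely illustrative:

# 10MB of random bytes stays roughly 10MB after gzip
head -c 10485760 /dev/urandom | gzip -c | wc -c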

Someone in the replies asked to see the CPU impact.

 

The impact is minimal. This is a 2-socket quad core machine. With hyperthreading and all the ra-ra it shows up as a 16 CPU system, aka max CPU utilization would be 1600%. Before compression the system held at ~100% CPU. After compression the system is at 300% CPU. Unless you have a really weak CPU in your system you will almost always become disk bound before being CPU bound. Essentially I still have ~1300% CPU free :)
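
For reference, here is how I would eyeball that from a shell. I am assuming the Cassandra process command line contains CassandraDaemon, which it does on a stock install:

# count logical CPUs (16 on this box with hyperthreading)
grep -c ^processor /proc/cpuinfo
# watch the Cassandra JVM; in top's default mode 300% means roughly 3 of the
# 16 logical CPUs are busy
top -p $(pgrep -f CassandraDaemon)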

Comments:

What impact did you observe on cdbla120 CPU usage? Could you please publish plots of CPU usage for all nodes?

Posted by Olegs Anastasjevs on February 07, 2012 at 03:39 AM EST #

I do not agree with all your conclusions. Less I/O does not necessarily mean that more data is served from RAM. Imagine you have a file with a size of 100MB on your hard disk which you want to serve to a client without caching. Now imagine that you compress this file to a size of 50MB. Serving the compressed file without caching will reduce I/O by 50% since disk access is halved -- without any caching in RAM. Compression is always a tradeoff between I/O and processor load. However, nice writeup anyway :-).

Many thanks, Michael.

Posted by Michael Jaeger on February 09, 2012 at 08:44 AM EST #

@Michael Jaeger. Yes, but the decompression time is orders of magnitude faster than disk time. If your processor is only 3/16 utilized you can afford to do more on-the-fly decompression. More commonly, people run into disks seeking all the time rather than running out of CPU.

Posted by Edward on February 09, 2012 at 06:16 PM EST #

Hi Edward Good write-up. Unrelated - what tool are you using to collect the metrics to RRD, and to graph them ?

Posted by Mina Naguib on April 13, 2012 at 04:17 PM EDT #

The tool that produced these graphs is Cacti, BTW.

Posted by edward on April 26, 2012 at 12:50 PM EDT #

