cassandra-user mailing list archives

From Michael Widmann <>
Subject Re: Cassandra to store 1 billion small 64KB Blobs
Date Sat, 24 Jul 2010 07:05:32 GMT
Hi Peter

We are still trying to figure out how much data will be coming into Cassandra
once we are in full operation mode.

Reads depend more on the hash values (the file names) for the binary
blobs than on the binary data itself.
We will try to store hash values "grouped" (based on their first byte).
Writes will sometimes be very fast (depending on the workload and the clients
writing to the system).
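The grouping idea described above could be sketched as follows. This is an illustrative sketch only, not code from the thread; the function name `group_key` and the choice of SHA-1 as the hash are assumptions.

```python
# Hypothetical sketch: derive a "group" from the first byte of a blob's
# hash, as described in the mail above. SHA-1 and the name `group_key`
# are illustrative assumptions, not details from the original post.
import hashlib

def group_key(file_name: str) -> tuple[str, str]:
    """Return (group, blob_key), where group is the first hash byte in hex."""
    digest = hashlib.sha1(file_name.encode("utf-8")).hexdigest()
    # 256 possible groups (00..ff); all blobs sharing a first byte
    # land in the same group.
    return digest[:2], digest

group, key = group_key("backup/2010/07/24/file-000001.bin")
```

One way to use this layout would be to make the two-character group the row key and the full digest the column name, spreading 1 billion blobs over 256 wide rows.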

Question: is concurrent compaction planned for the future?


2010/7/23 Peter Schuller <>

> > We plan to use Cassandra as a data store on at least 2 nodes with RF=2
> > for about 1 billion small files.
> > We have about 48 TB of disk space behind each node.
> >
> > Now my question is: is this possible with Cassandra, reliably, meaning
> > every blob is stored on 2 JBODs?
> >
> > We may grow to nearly 40 TB or more of Cassandra "storage" data...
> >
> > Has anyone done something similar?
> Other than what Jonathan Shook mentioned, I'd expect one potential
> problem to be the number of sstables. At 40 TB, the larger compactions
> are going to take quite some time. How many memtables will be flushed
> to disk during the time it takes to perform a ~ 40 TB compaction? That
> may or may not be an issue depending on how fast writes will happen,
> how large your memtables are (the bigger the better) and what your
> reads will look like.
> (This relates to another thread where I posted about concurrent
> compaction, but right now Cassandra only does a single compaction at a
> time.)
> --
> / Peter Schuller
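Peter's question about flushes during a large compaction can be put into rough numbers. All inputs below are illustrative assumptions for a back-of-envelope estimate, not measurements from the thread.

```python
# Back-of-envelope estimate for the question above: how many memtables
# flush while a ~40 TB compaction runs? All rates and sizes are assumed.
TB = 10**12
MB = 10**6

compaction_size = 40 * TB    # size of the large compaction
compaction_rate = 50 * MB    # assumed sustained compaction throughput, bytes/s
write_rate = 10 * MB         # assumed incoming write rate, bytes/s
memtable_size = 128 * MB     # assumed memtable flush threshold

compaction_seconds = compaction_size / compaction_rate  # 800,000 s
flushes = compaction_seconds * write_rate / memtable_size
print(int(compaction_seconds // 86400), "days,", int(flushes), "memtable flushes")
```

Under these assumptions the compaction alone runs for roughly nine days and produces tens of thousands of new sstables, which illustrates why memtable size and write rate matter so much at this scale.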

