cassandra-user mailing list archives

From Jack Krupansky <>
Subject Re: What's to think of when increasing disk size on Cassandra nodes?
Date Wed, 08 Apr 2015 11:26:09 GMT
The preferred pattern for scaling data with Cassandra is to add nodes.
Growing the disk on each node is an anti-pattern. The key strength of
Cassandra is that it is a DISTRIBUTED database, so always keep your eye on
distributing your data.

But if you do need to grow disk, be sure to grow RAM and CPU power as well.
More disk without more RAM AND CPU is just asking for trouble. But even
that has its limits relative to the preferred pattern of adding nodes.
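[Editor's note: adding a node, as recommended above, is mostly a matter of configuration plus one cleanup pass. A minimal sketch follows; the cluster name, seed address, and config path are illustrative assumptions, not details from this thread:]

```shell
# On the NEW node, before first start: point it at the existing cluster.
# cluster_name must match the running cluster exactly; the seeds list
# should name one or more existing nodes (10.0.0.1 is a made-up address).
sudo sed -i \
    -e "s/^cluster_name:.*/cluster_name: 'MyCluster'/" \
    -e 's/- seeds:.*/- seeds: "10.0.0.1"/' \
    /etc/cassandra/cassandra.yaml

sudo service cassandra start   # node bootstraps and streams its token ranges
nodetool status                # wait until the new node shows UN (Up/Normal)

# Afterwards, on EACH pre-existing node: drop data it no longer owns.
nodetool cleanup
```

The final `nodetool cleanup` matters: bootstrapping streams copies of data to the new node but does not delete the originals, so the existing nodes only reclaim disk space once cleanup has run on them.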

-- Jack Krupansky

On Wed, Apr 8, 2015 at 4:36 AM, Thomas Borg Salling <> wrote:

> I run a 10-node Cassandra cluster in production. 99% writes; 1% reads, 0%
> deletes. The nodes have 32 GB RAM; C* runs with an 8 GB heap. Each node has an
> SSD for the commitlog and 2x4 TB spinning disks for data (SSTables). The schema
> uses key caching only. C* version is 2.1.2.
> The cluster is predicted to run out of free disk space before long, so its
> storage capacity needs to be increased. The client prefers increasing disk
> size over adding more nodes, so the plan is to replace the 2x4 TB spinning
> disks in each node with 3x6 TB spinning disks.
> Are there any obvious pitfalls/caveats to be aware of here? Like:
>    - Can C* handle up to 18 TB of data per node with this amount of RAM?
>    - Is it feasible to increase the disk size by mounting a new (larger)
>      disk, copying all SSTables to it, and then mounting it on the same
>      mount point as the original (smaller) disk (to replace it)?
> ( -- also posted on StackOverflow
> <>
> )
> Thanks in advance.
> Med venlig hilsen / Best regards,
> *Thomas Borg Salling*
> Freelance IT architect and programmer.
> Java and open source specialist.
> :: +45 4063 2353 :: @tbsalling
> <> :: ::
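[Editor's note: the disk-swap approach from the last bullet of the question is workable if the node is taken down cleanly first. A minimal sketch, assuming the data directory is mounted at /var/lib/cassandra/data and the new disk is temporarily mounted at /mnt/newdisk; both paths and the device name are assumptions:]

```shell
# Flush memtables and stop accepting traffic, then stop the daemon.
nodetool drain
sudo service cassandra stop

# Copy all SSTables to the new (larger) disk, preserving
# ownership, permissions, and hard links.
sudo rsync -aH /var/lib/cassandra/data/ /mnt/newdisk/

# Swap the mounts: the new disk takes over the original mount point.
sudo umount /mnt/newdisk
sudo umount /var/lib/cassandra/data
sudo mount /dev/sdX1 /var/lib/cassandra/data   # /dev/sdX1: new disk's partition

sudo service cassandra start
nodetool status                                # node should rejoin as UN
```

If the node comes back within the hint window (max_hint_window_in_ms, 3 hours by default), the rest of the cluster replays hinted writes to it automatically; if the outage runs longer than that, run `nodetool repair` on the node afterwards.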
