incubator-cassandra-user mailing list archives

From China Stoffen <chinastof...@yahoo.com>
Subject Re: commodity server spec
Date Tue, 06 Sep 2011 16:33:48 GMT
>>In general, more smaller is better than fewer big. Probably go for
>>what's cost-effective.

A cost-effective solution would be a few fat servers, since that also saves on hosting costs.



>>The exception to that would be if you're truly only caring about
>>writes and have *very* few reads that are not latency critical (so
>>you're okay with waiting for several disk seeks on reads and the
>>number of reads is low enough that serving them from platters will
>>work). In such cases it might make sense to have fewer Big Fat
>>Machines with lots of memory and a lot of disk space. But... even so.
>>I would not recommend huge 48 TB nodes... unless you really know what
>>you're doing.

I want writes to be as fast as possible, but reads don't need to be in the millisecond range.

If you don't recommend 48 TB, then what is the maximum disk space per node I can go with?

----- Original Message -----
From: Peter Schuller <peter.schuller@infidyne.com>
To: user@cassandra.apache.org; China Stoffen <chinastoffen@yahoo.com>
Cc: 
Sent: Saturday, September 3, 2011 1:08 PM
Subject: Re: commodity server spec

> Is there any recommendation for commodity server hardware specs if a 100TB
> database size is expected and it's a heavily write-oriented application?
> Should I go with high-powered CPUs (12 cores), 48TB of HDD, and 640GB of RAM,
> for a total of 3 servers of this spec? Or are many smaller commodity servers
> recommended?

In general, more smaller is better than fewer big. Probably go for
what's cost-effective.

In your case, 100 TB is *quite* big. I would definitely recommend
against doing anything like your 3 server setup. You'll probably want
100-1000 small servers.
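
To illustrate where a range like that comes from, here is a back-of-envelope sizing sketch. The figures are assumptions, not from this thread: a replication factor of 3 and a conservative target of roughly 1 TB of data per node, which was common advice for spinning-disk Cassandra nodes.

```python
# Back-of-envelope cluster sizing (all figures are assumptions):
raw_tb = 100             # raw dataset size from the original question
replication_factor = 3   # assumed typical Cassandra replication factor
per_node_tb = 1.0        # assumed comfortable data load per small node

total_tb = raw_tb * replication_factor      # total data actually stored
nodes = total_tb / per_node_tb              # nodes needed at that density
print(f"total stored: {total_tb} TB -> ~{nodes:.0f} nodes")
```

With a larger per-node target (say 3 TB), the count drops toward the low end of the 100-1000 range; the point is that per-node data density, not CPU, drives the node count.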

The exception to that would be if you're truly only caring about
writes and have *very* few reads that are not latency critical (so
you're okay with waiting for several disk seeks on reads and the
number of reads is low enough that serving them from platters will
work). In such cases it might make sense to have fewer Big Fat
Machines with lots of memory and a lot of disk space. But... even so.
I would not recommend huge 48 tb nodes... unless you really know what
you're doing.
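
A rough sketch of why the read rate matters so much on platters, using assumed numbers (a 7200 rpm disk sustains on the order of 100 random seeks per second, and an uncached read may cost several seeks across index and data files):

```python
# Rough per-node read-capacity estimate for platter-backed reads.
# All three figures below are assumptions for illustration:
seeks_per_second = 100   # assumed random IOPS of one spinning disk
seeks_per_read = 4       # assumed seeks per uncached read
disks_per_node = 12      # assumed JBOD layout on a "big fat" node

reads_per_node = seeks_per_second * disks_per_node / seeks_per_read
print(f"~{reads_per_node:.0f} uncached reads/s per node")
```

A few hundred uncached reads per second per node is easily exhausted, which is why fat nodes only make sense when the read rate is genuinely tiny.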

In reality, more information about your use-case would be required to
offer terribly useful advice.

-- 
/ Peter Schuller (@scode on twitter)
