incubator-cassandra-user mailing list archives

From Franc Carter <franc.car...@sirca.org.au>
Subject Re: 200TB in Cassandra?
Date Thu, 19 Apr 2012 11:54:24 GMT
On Thu, Apr 19, 2012 at 9:38 PM, Romain HARDOUIN
<romain.hardouin@urssaf.fr> wrote:

>
> Cassandra supports data compression and depending on your data, you can
> gain a reduction in data size up to 4x.
>

The data is gzip'd already ;-)


> 600 TB is a lot, and hence requires lots of servers...
>
>
> Franc Carter <franc.carter@sirca.org.au> wrote on 19/04/2012 13:12:19:
>
> > Hi,
> >
> > One of the projects I am working on is going to need to store about
> > 200TB of data - generally in manageable binary chunks. However,
> > after doing some rough calculations based on rules of thumb I have
> > seen for how much storage should be on each node I'm worried.
> >
> >   200TB with RF=3 is 600TB = 600,000GB
> >   Which is 1000 nodes at 600GB per node
> >
> > I'm hoping I've missed something as 1000 nodes is not viable for us.
> >
> > cheers
> >
> > --
> > Franc Carter | Systems architect | Sirca Ltd
> > franc.carter@sirca.org.au | www.sirca.org.au
> > Tel: +61 2 9236 9118
> > Level 9, 80 Clarence St, Sydney NSW 2000
> > PO Box H58, Australia Square, Sydney NSW 1215
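The back-of-envelope sizing in the quoted message can be sketched as a small calculation. This is just a restatement of the arithmetic above, not anything Cassandra-specific; the helper name and the decimal TB-to-GB conversion are assumptions for illustration:

```python
def nodes_needed(raw_tb, replication_factor, per_node_gb):
    """Rough cluster sizing: raw data x replication, divided by
    per-node capacity, rounded up. Hypothetical helper."""
    total_gb = raw_tb * 1000 * replication_factor  # decimal TB -> GB
    return -(-total_gb // per_node_gb)  # ceiling division

# 200 TB of raw data at RF=3, ~600 GB usable per node
print(nodes_needed(200, 3, 600))  # -> 1000
```

Pushing per-node capacity higher (e.g. a few TB per node, which later Cassandra versions handle better) shrinks the node count proportionally, which is the usual lever in this situation.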




-- 

*Franc Carter* | Systems architect | Sirca Ltd
franc.carter@sirca.org.au | www.sirca.org.au
Tel: +61 2 9236 9118
Level 9, 80 Clarence St, Sydney NSW 2000
PO Box H58, Australia Square, Sydney NSW 1215
