cassandra-user mailing list archives

From Hontvári József Levente <hontv...@flyordie.com>
Subject Re: Using cassandra at minimal expenditures
Date Thu, 01 Mar 2012 12:33:02 GMT
For Cassandra testing I am using a very old server with a one-core 
Celeron processor and 1 GiB RAM, and another one with 4 GiB and 4 cores, 
both with two consumer SATA hard disks. Both work, i.e. there is no out 
of memory error etc. There are about 10 writes and reads per second, 
maybe more, but not more than 40. The "database" size was extremely 
small even after a few days, about 50 megabytes. The configuration is 
the absolute stock configuration; I have not changed anything except 
separating the LOG and DATA disks.

There was a noticeable load on the small server; I do not remember 
exactly, somewhere between 0.1 and 0.5. On the other hand, it was not 
noticeable on the larger server.

It was interesting that the disk IO was higher on the LOG hard disk, 
which also contained the system, than on the DATA disk.

Take these numbers with a grain of salt; my intention was to test setting 
up a cluster in two distant datacenters, not to run a performance test.




On 2012.03.01. 11:26, Ertio Lew wrote:
> expensive :-) I was expecting to start with 2GB nodes, if not 1GB, 
> initially.
>
> On Thu, Mar 1, 2012 at 3:43 PM, aaron morton <aaron@thelastpickle.com 
> <mailto:aaron@thelastpickle.com>> wrote:
>
>     As others said, it depends on load and traffic and all sorts of things.
>
>     If you want a number, 4GB would be a reasonable minimum IMHO. (You
>     may get by with less.) 8GB is about the top.
>     Any memory not allocated to Cassandra will be used to map files
>     into memory.
>
>     If you can get machines with 8GB RAM, that's a reasonable start.
>
>     -----------------
>     Aaron Morton
>     Freelance Developer
>     @aaronmorton
>     http://www.thelastpickle.com
>
>     On 1/03/2012, at 1:16 AM, Maki Watanabe wrote:
>
>>     Depends on your traffic :-)
>>
>>     cassandra-env.sh will try to allocate the heap with the following formula if
>>     you don't specify MAX_HEAP_SIZE:
>>     1. calculate 1/2 of the RAM on your system and cap it at 1024MB
>>     2. calculate 1/4 of the RAM on your system and cap it at 8192MB
>>     3. pick the larger value
>>
>>     So how about starting with the default? You will need to monitor
>>     heap usage at first.
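For reference, that default sizing can be sketched in shell roughly like this (a sketch only; the variable names are illustrative rather than the real script's, and the amount of system RAM is hard-coded here instead of being detected as cassandra-env.sh does):

```shell
# Sketch of the default MAX_HEAP_SIZE calculation in cassandra-env.sh.
# Illustrative only: we hard-code 4096MB of system RAM instead of
# detecting it from the OS.
system_memory_in_mb=4096

half_ram=$((system_memory_in_mb / 2))
if [ "$half_ram" -gt 1024 ]; then half_ram=1024; fi        # cap 1/2 RAM at 1024MB

quarter_ram=$((system_memory_in_mb / 4))
if [ "$quarter_ram" -gt 8192 ]; then quarter_ram=8192; fi  # cap 1/4 RAM at 8192MB

# pick the larger of the two capped values
if [ "$half_ram" -gt "$quarter_ram" ]; then
    MAX_HEAP_SIZE="${half_ram}M"
else
    MAX_HEAP_SIZE="${quarter_ram}M"
fi
echo "MAX_HEAP_SIZE=$MAX_HEAP_SIZE"
```

So on a 4GiB box the default heap comes out at 1024M, which is why monitoring heap usage early matters on small nodes.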
>>
>>     2012/2/29 Ertio Lew <ertiop93@gmail.com <mailto:ertiop93@gmail.com>>:
>>>     Thanks, I think I don't need high consistency (as per my app
>>>     requirements), so
>>>     I might be fine with CL.ONE instead of QUORUM, so I think
>>>     I'm probably
>>>     going to be ok with a 2-node cluster initially.
>>>
>>>     Could you guys also recommend some minimum memory to start
>>>     with? Of course
>>>     that would depend on my workload as well, but that's why I am
>>>     asking for the
>>>     minimum.
>>>
>>>
>>>     On Wed, Feb 29, 2012 at 7:40 AM, Maki Watanabe
>>>     <watanabe.maki@gmail.com <mailto:watanabe.maki@gmail.com>>
>>>     wrote:
>>>>
>>>>>     If you run your service with 2 nodes and RF=2, your data will be
>>>>>     replicated, but
>>>>>     your service will not be redundant. (You can't stop either of
>>>>>     the nodes.)
>>>>
>>>>     If your service doesn't need strong consistency (allowing
>>>>     Cassandra to return
>>>>     "old" data after a write, with possible write loss), you can use
>>>>     CL=ONE
>>>>     for reads and writes
>>>>     to keep availability.
>>>>
>>>>     maki
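The trade-off Maki describes can be put numerically: with replication factor RF, reading at consistency level R and writing at level W only guarantees seeing the latest write when R + W > RF (the read and write replica sets must overlap). A quick shell sketch with illustrative values:

```shell
# Strong consistency requires R + W > RF.
rf=2

# CL=ONE for both reads and writes on a 2-node, RF=2 cluster:
r=1; w=1
if [ $((r + w)) -gt "$rf" ]; then
    echo "ONE/ONE: strongly consistent"
else
    echo "ONE/ONE: stale reads possible"          # this branch is taken
fi

# QUORUM on RF=2 means 2 of 2 replicas, so read and write sets overlap:
r=2; w=2
if [ $((r + w)) -gt "$rf" ]; then
    echo "QUORUM/QUORUM: strongly consistent"     # this branch is taken
fi
```

This is also why QUORUM on a 2-node RF=2 cluster sacrifices availability: it needs both replicas up, whereas CL=ONE keeps working with one node down at the cost of possibly stale reads.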
>>>
>>>
>>
>>
>>
>>     -- 
>>     w3m
>
>

