cassandra-user mailing list archives

From rock zhang <r...@alohar.com>
Subject Re: OOM when Adding host
Date Mon, 10 Aug 2015 21:21:59 GMT
I logged the open file count every 10 minutes; the last record is:

lsof -p $cassandraPID | wc -l

74728

lsof | wc -l
5887913       # this is a very large number; I don't know why.

After the OOM, the open file count goes back to a few hundred (lsof | wc -l).
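
For anyone curious, a minimal sketch of that logging loop (the pgrep pattern and
the log path are assumptions, not my exact script):

while true; do
    pid=$(pgrep -f CassandraDaemon)     # Cassandra's main class name
    echo "$(date -u) $(lsof -p "$pid" | wc -l)" >> /var/log/cassandra-fds.log
    sleep 600                           # 10 minutes
done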




On Aug 10, 2015, at 9:59 AM, rock zhang <rock@alohar.com> wrote:

> My Cassandra version is 2.1.4.
> 
> Thanks
> Rock 
> 
> On Aug 10, 2015, at 9:52 AM, rock zhang <rock@alohar.com> wrote:
> 
>> Hi All,
>> 
>> Currently I have three hosts. The data is not balanced: one has 79 GB and the
>> other two have 300 GB each. When I was adding a new host, I first got a "too many
>> open files" error, so I changed the open file limit from 100,000 to 1,000,000.
>> Then I got an OOM error.
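>> 
>> For reference, the limit change itself looks roughly like this (a sketch for
>> /etc/security/limits.d/, assuming Cassandra runs as the "cassandra" user and
>> pam_limits is enabled):
>> 
>> # /etc/security/limits.d/cassandra.conf
>> cassandra - nofile 1000000    # soft and hard open-file limit
>> cassandra - nproc  32768      # thread limit, per the usual Cassandra guidance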
>> 
>> Should I change the limit to 200,000 instead of 1M? My memory is 33 GB; I am
>> using an EC2 c2*2xlarge. Ideally, even if the data is large, things should just
>> get slower, not OOM; I don't understand why.
>> 
>> I actually get this error pretty often. I guess the reason is that my data is
>> pretty large? If Cassandra tries to split the data evenly across all hosts, then
>> it needs to copy around 200 GB to the new host.
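>> 
>> (The per-node load can be checked with nodetool; the keyspace name below is just
>> a placeholder:)
>> 
>> nodetool status my_keyspace    # the "Load" column shows on-disk data per node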
>> 
>> From my experience, an alternative way to solve this is to add the new host as a
>> seed node rather than using "Add host"; then no data is moved, so no OOM. But I am
>> not sure whether data will be lost or simply cannot be located.
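>> 
>> For completeness, the relevant cassandra.yaml piece (IPs are placeholders). A
>> node that finds its own address in this seeds list skips bootstrap, which is why
>> no data is streamed:
>> 
>> seed_provider:
>>     - class_name: org.apache.cassandra.locator.SimpleSeedProvider
>>       parameters:
>>           - seeds: "10.0.0.1,10.0.0.2"
>> 
>> # note: such a node takes ownership of token ranges without streaming their
>> # data, so reads from it can miss data until a repair is run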
>> 
>> Thanks
>> Rock 
>> 
> 

