incubator-cassandra-user mailing list archives

From rohit reddy <rohit.kommare...@gmail.com>
Subject Re: Cassandra node going down
Date Fri, 14 Sep 2012 13:50:00 GMT
Hi Robin,

I had checked that. Our disk size is about 800GB, and the total data size
is not more than 40GB. Even if all the data were stored on one node, the
disk wouldn't fill up.

I'll try to see if the disk failed.

Could this have anything to do with JVM memory? This log line suggests so:
Heap is 0.7515559786053904 full.  You may need to reduce memtable and/or
cache sizes.  Cassandra will now flush up to the two largest memtables to
free up memory.  Adjust flush_largest_memtables_at threshold in
cassandra.yaml if you don't want Cassandra to do this automatically

But I'm only testing writes; there are no reads on the cluster. Will
writes alone require that much memory? A Large instance has 7.5GB of RAM,
so by default Cassandra allocates about 3.75GB for the JVM heap.
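For reference, that warning fires when post-GC heap usage crosses the flush_largest_memtables_at threshold from cassandra.yaml (0.75 by default in Cassandra of that era). A minimal sketch of the comparison, with the helper name and the 3.75GB heap figure taken as assumptions from this thread rather than from Cassandra's source:

```python
# Assumed default for flush_largest_memtables_at in cassandra.yaml.
FLUSH_LARGEST_MEMTABLES_AT = 0.75

def should_flush_memtables(heap_used_bytes, heap_max_bytes,
                           threshold=FLUSH_LARGEST_MEMTABLES_AT):
    """Return (heap fraction, whether it crosses the flush threshold)."""
    fraction = heap_used_bytes / heap_max_bytes
    return fraction, fraction > threshold

# With the ~3.75GB heap mentioned above, the logged ratio of
# 0.7515559786053904 sits just over the 0.75 default, which is why
# Cassandra flushed the largest memtables.
heap_max = int(3.75 * 1024**3)
heap_used = int(0.7515559786053904 * heap_max)
fraction, flush = should_flush_memtables(heap_used, heap_max)
print(fraction, flush)  # fraction just above 0.75, flush is True
```

So the behaviour in the log is the documented safety valve kicking in, not necessarily a failure by itself; raising the threshold only delays the flush, it doesn't reduce memtable pressure.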



On Fri, Sep 14, 2012 at 6:58 PM, Robin Verlangen <robin@us2.nl> wrote:

> Hi Rohit,
>
> I think it's running out of disk space, please verify that (on Linux: df
> -h).
>
> Best regards,
>
> Robin Verlangen
> *Software engineer*
> W http://www.robinverlangen.nl
> E robin@us2.nl
>
> Disclaimer: The information contained in this message and attachments is
> intended solely for the attention and use of the named addressee and may be
> confidential. If you are not the intended recipient, you are reminded that
> the information remains the property of the sender. You must not use,
> disclose, distribute, copy, print or rely on this e-mail. If you have
> received this message in error, please contact the sender immediately and
> irrevocably delete this message and any copies.
>
>
>
> 2012/9/14 rohit reddy <rohit.kommareddy@gmail.com>
>
>> Hi,
>>
>> I'm facing a problem in Cassandra cluster deployed on EC2 where the node
>> is going down under write load.
>>
>> I have configured a cluster of 4 Large EC2 nodes with RF of 2.
>> All nodes are instance storage backed. DISK is RAID0 with 800GB
>>
>> I'm pumping in write requests at about 4000 writes/sec. One of the node
>> went down under this load. The total data size in each node was not more
>> than 7GB
>> Got the following WARN messages in the LOG file...
>>
>> 1. setting live ratio to minimum of 1.0 instead of 0.9003153296009601
>> 2. Heap is 0.7515559786053904 full.  You may need to reduce memtable
>> and/or cache sizes.  Cassandra will now flush up to the two largest
>> memtables to free up memory.  Adjust flush_largest_memtables_at threshold
>> in cassandra.yaml if you don't want Cassandra to do
>> this automatically
>> 3. WARN [CompactionExecutor:570] 2012-09-14 11:45:12,024
>> CompactionTask.java (line 84) insufficient space to compact all requested
>> files
>>
>> All cassandra settings are default settings.
>> Do I need to tune anything to support this write rate?
>>
>> Thanks
>> Rohit
>>
>>
>
