hadoop-zookeeper-user mailing list archives

From Benjamin Reed <br...@yahoo-inc.com>
Subject Re: Zookeeper on 60+Gb mem
Date Tue, 05 Oct 2010 19:13:23 GMT
you will need to time how long it takes to read all that state back in 
and adjust initLimit accordingly. it will probably take a while to 
pull all that data into memory.

ben
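
For context, the startup timeout ben mentions is configured in conf/zoo.cfg. A minimal sketch with illustrative (not recommended) values; initLimit is counted in ticks, so tickTime * initLimit must exceed the time a server needs to load the snapshot:

```
# conf/zoo.cfg -- illustrative values only
tickTime=2000      # length of one tick in ms
# initLimit * tickTime must cover snapshot load/transfer time;
# e.g. 300 ticks * 2000 ms = 600 s allowed for initial sync
initLimit=300
syncLimit=10       # ticks a follower may lag the leader before being dropped
```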

On 10/05/2010 11:36 AM, Avinash Lakshman wrote:
> I have run it with over 5 GB of heap and over 10M znodes. We will definitely
> run it with over 64 GB of heap. Technically I do not see any limitation.
> However, I will let the experts chime in.
>
> Avinash
>
> On Tue, Oct 5, 2010 at 11:14 AM, Mahadev Konar<mahadev@yahoo-inc.com>wrote:
>
>> Hi Maarten,
>>   I definitely know of a group that uses around a 3GB memory heap for
>> zookeeper, but I have never heard of such huge requirements. I would say it
>> would definitely be a learning experience with such high memory, which I
>> think would be very useful for others in the community as well.
>>
>> Thanks
>> mahadev
>>
>>
>> On 10/5/10 11:03 AM, "Maarten Koopmans"<maarten@vrijheid.net>  wrote:
>>
>>> Hi,
>>>
>>> I just wondered: has anybody ever run zookeeper "to the max" on a 68GB
>>> quadruple extra large high memory EC2 instance? With, say, 60GB allocated
>>> or so?
>>>
>>> Because EC2 with EBS is a nice way to grow your zookeeper cluster (data on
>>> the EBS volumes, upgrade as your memory utilization grows....) - I just
>>> wonder what the limits are there, or if I am going where angels fear to
>>> tread...
>>>
>>> --Maarten
>>>
>>

