hadoop-zookeeper-user mailing list archives

From Maarten Koopmans <maar...@vrijheid.net>
Subject Re: Zookeeper on 60+Gb mem
Date Tue, 05 Oct 2010 21:59:22 GMT
Yup, and that's ironic, isn't it? The GC tuning is so specialized, as is the profiling, that
automated memory management (to me) hasn't brought what I hoped it would. I had some conversations
about this topic a few years back with a well-respected OS designer, and his point is that
we (humans) can trace almost all problems back to the fact that we keep adding complexity instead of
reducing it.

Sorry for the slight rant.... Anyway, it's one of the things I like about zookeeper (and,
e.g., voldemort): it makes a hard thing doable.

--Maarten


On 5 Oct 2010, at 23:27, Patrick Hunt <phunt@apache.org> wrote:

> Tuning GC is going to be critical, otw all the sessions will timeout (and
> potentially expire) during GC pauses.
> 
> Patrick
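
For context, a minimal sketch of the kind of JVM tuning Patrick's warning points at. The flag names are standard HotSpot options of that era, but the values and log path are illustrative assumptions, not recommendations; zkServer.sh picks up JVMFLAGS from the environment:

```shell
# Hypothetical large-heap ZooKeeper JVM settings (illustrative values).
# CMS was the usual low-pause collector circa 2010; pinning -Xms to -Xmx
# avoids heap-resize pauses, and GC logging makes pause times visible so
# they can be compared against the session timeout.
export JVMFLAGS="-Xms60g -Xmx60g \
  -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled \
  -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly \
  -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/zookeeper/gc.log"
```

The point of the GC log is to check that observed stop-the-world pauses stay well under the negotiated session timeout, otherwise sessions expire exactly as described above.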
> 
> On Tue, Oct 5, 2010 at 1:18 PM, Maarten Koopmans <maarten@vrijheid.net> wrote:
> 
>> Yes, and syncing after a crash will be interesting as well. Of note: I am
>> running it with a 6GB heap now, but it's not filled yet. I do have smoke
>> tests though, so maybe I'll give it a try.
>> 
>> 
>> 
>> On 5 Oct 2010, at 21:13, Benjamin Reed <breed@yahoo-inc.com> wrote:
>> 
>>> 
>>> you will need to time how long it takes to read all that state back in
>>> and adjust the initLimit accordingly. it will probably take a while to pull
>>> all that data into memory.
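
To make Ben's point concrete: initLimit is counted in ticks of tickTime milliseconds, and it bounds how long a follower may take to load the snapshot and sync with the leader. A hypothetical zoo.cfg fragment (the values are illustrative assumptions, and it is written to /tmp only for this sketch):

```shell
# Hypothetical zoo.cfg fragment: initLimit is in ticks, so followers get
# initLimit * tickTime ms to load state and sync with the leader.
# 600 * 2000 ms = 20 minutes here -- size this to a measured load time.
cat > /tmp/zoo.cfg <<'EOF'
tickTime=2000
initLimit=600
syncLimit=30
EOF
```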
>>> 
>>> ben
>>> 
>>> On 10/05/2010 11:36 AM, Avinash Lakshman wrote:
>>>> I have run it over 5 GB of heap with over 10M znodes. We will definitely
>>>> run it with over 64 GB of heap. Technically I do not see any limitation.
>>>> However, I will let the experts chime in.
>>>> 
>>>> Avinash
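
A rough back-of-envelope from Avinash's numbers (this extrapolation is mine, and assumes per-znode memory cost stays roughly constant as the heap grows):

```shell
# 10M znodes in ~5 GB of heap => ~536 bytes per znode on average.
bytes_per_znode=$(( 5 * 1024 * 1024 * 1024 / 10000000 ))
# At that rate a 60 GB heap might hold on the order of 120M similar znodes,
# before leaving headroom for GC and request processing.
znodes_60g=$(( 60 * 1024 * 1024 * 1024 / bytes_per_znode ))
echo "$bytes_per_znode bytes/znode, ~$(( znodes_60g / 1000000 ))M znodes in 60 GB"
```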
>>>> 
>>>> On Tue, Oct 5, 2010 at 11:14 AM, Mahadev Konar <mahadev@yahoo-inc.com> wrote:
>>>> 
>>>>> Hi Maarten,
>>>>> I definitely know of a group that uses around a 3GB heap for zookeeper,
>>>>> but I've never heard of someone with such huge requirements. It would
>>>>> definitely be a learning experience with such high memory, and I think
>>>>> it would be very useful for others in the community as well.
>>>>> 
>>>>> Thanks
>>>>> mahadev
>>>>> 
>>>>> 
>>>>> On 10/5/10 11:03 AM, "Maarten Koopmans" <maarten@vrijheid.net> wrote:
>>>>> 
>>>>>> Hi,
>>>>>> 
>>>>>> I just wondered: has anybody ever run zookeeper "to the max" on a 68GB
>>>>>> quadruple extra large high-memory EC2 instance? With, say, 60GB
>>>>>> allocated or so?
>>>>>> 
>>>>>> Because EC2 with EBS is a nice way to grow your zookeeper cluster (data
>>>>>> on the EBS volumes, upgrade as your memory utilization grows....) - I
>>>>>> just wonder what the limits are there, or if I am going where angels
>>>>>> fear to tread...
>>>>>> 
>>>>>> --Maarten
>>>>>> 
>>>>> 
>>> 
>>> 
>> 
