hbase-user mailing list archives

From "Buttler, David" <buttl...@llnl.gov>
Subject RE: Setting the heap size
Date Tue, 02 Nov 2010 15:14:58 GMT
For setting the memory on the ZooKeeper node: if you are using HBase to manage ZooKeeper,
you can simply use the heap size you set for HBase. I don't think you will need more than
1 GB, depending on what else you are using it for.
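For reference, when HBase manages ZooKeeper, the heap is typically set in conf/hbase-env.sh. A sketch of the relevant settings (values are illustrative, and exact variable support depends on your HBase version):

```shell
# conf/hbase-env.sh -- heap settings (illustrative values)

# Heap used by all HBase-launched daemons, including the ZooKeeper
# quorum peer when HBase manages ZooKeeper:
export HBASE_HEAPSIZE=1000   # in MB; ~1 GB as suggested above

# Or pass JVM options only to the ZooKeeper process HBase launches
# (check that your HBase version's bin/hbase honors this variable):
export HBASE_ZOOKEEPER_OPTS="-Xms1g -Xmx1g"
```

For a standalone ZooKeeper not managed by HBase, the heap is instead usually set via JVMFLAGS in ZooKeeper's own conf/java.env (e.g. `export JVMFLAGS="-Xmx1g"`).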


-----Original Message-----
From: Tim Robertson [mailto:timrobertson100@gmail.com] 
Sent: Friday, October 29, 2010 7:21 AM
To: user@hbase.apache.org
Subject: Re: Setting the heap size

Hi Sean,

Based on the HBase user recommendations:

It's a mixed hardware configuration.  In truth, we will likely run 1
mapper on each DN to make the most of data locality.
The 3 TT nodes are hefty dual quad-core with hyper-threading and 24 GB, but
the 9 RS are only single quad-core with 8 GB.


On Fri, Oct 29, 2010 at 4:11 PM, Sean Bigdatafun
<sean.bigdatafun@gmail.com> wrote:
> Why would you only run 9 RS and leave 3 mapreduce-only nodes? I can't see
> any benefit of doing that.
> Sean
> On Thu, Oct 28, 2010 at 2:52 AM, Tim Robertson <timrobertson100@gmail.com>wrote:
>> Hi all,
>> We are setting up a small 13-node Hadoop cluster running 1 HDFS
>> master, 9 region servers for HBase and 3 map reduce nodes, and are just
>> installing zookeeper to perform the HBase coordination and to manage a
>> few simple process locks for other tasks we run.
>> Could someone please advise what kind of heap we should give to our
>> single ZK node and also (ahem) how does one actually set this? It's
>> not immediately obvious in the docs or config.
>> Thanks,
>> Tim
