hbase-user mailing list archives

From Michael Segel <michael_se...@hotmail.com>
Subject RE: Hardware configuration
Date Mon, 02 May 2011 15:15:42 GMT

Ian,

You're not running a single JVM per node.

You have your DataNode, TaskTracker, and then however many m/r tasks you run on that
node.

With Xeon chips, depending on your configuration, you can run 8 mappers and 8 reducers.
Add in HBase, where you'll want to increase the Region Server's memory to the 4-8GB range...
you'll see your memory use going up, and that's with 8 cores. Add in the additional 4 cores
if you have 6-core CPUs and you will end up wanting 48GB of memory.
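
To make that concrete, here's a rough sketch of how the per-node memory budget adds up. The
individual heap sizes are illustrative assumptions on my part, not numbers from this thread:

# Rough per-node memory budget. All heap sizes below are assumed
# illustrative values, not recommendations from this thread.
GB = 1

datanode_heap     = 1 * GB
tasktracker_heap  = 1 * GB
mapper_heap       = 1 * GB   # per map slot
reducer_heap      = 1 * GB   # per reduce slot
regionserver_heap = 8 * GB   # upper end of the 4-8GB range mentioned above
os_and_overhead   = 4 * GB   # OS, page cache headroom, etc.

def node_memory(map_slots, reduce_slots):
    return (datanode_heap + tasktracker_heap + regionserver_heap
            + map_slots * mapper_heap + reduce_slots * reducer_heap
            + os_and_overhead)

print(node_memory(8, 8))    # 8 map + 8 reduce slots   -> 30 GB
print(node_memory(12, 12))  # 12 map + 12 reduce slots -> 38 GB, so 48GB of RAM is comfortable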

And as to the number of disks per node...

With 4 disks per node, we end up seeing disk as our limiting factor. Cloudera and others recommend
2 disks per core, and that makes some sense so we're not blocked on disk I/O. With 8 cores that's
3 disks per core. With 12 cores that's only 2 disks per core.
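
For what it's worth, the ratio is simple division; the chassis layouts below are hypothetical
examples, not a configuration anyone in this thread is recommending:

# Disks per core for a few hypothetical chassis layouts.
layouts = [(8, 12), (8, 24), (12, 24)]  # (cores, spindles)
for cores, spindles in layouts:
    print("%2d cores, %2d spindles -> %.1f disks per core" % (cores, spindles, spindles / cores))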

And while it's been pointed out that 24TB per node is a lot of disk... add 10GbE to the mix
and you won't have as much of an issue with respect to balancing.

So there's no money being wasted. 
Again... We're talking about 125-150 nodes in a cluster that has 1PB of HDFS...

If you limit yourself to 12TB of disk per node... that's 300 machines. You've essentially doubled
your power consumption and footprint in your machine room. If you've got to expand past 1PB...
you really need to plan for that density.
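
The node counts above come from back-of-the-envelope math along these lines (the 15% set aside
for OS, logs, and temp space is my assumption, not a figure from this thread):

import math

# Nodes needed to hold 1PB of logical data at 3x HDFS replication.
logical_tb   = 1000        # 1PB of data you actually want to keep
replication  = 3
usable_ratio = 0.85        # assumed headroom for OS, logs, temp space

def nodes_needed(tb_per_node):
    raw_tb = logical_tb * replication
    return math.ceil(raw_tb / (tb_per_node * usable_ratio))

print(nodes_needed(24))  # 12 x 2TB drives -> ~148 nodes (the 125-150 range)
print(nodes_needed(12))  # 12TB per node   -> ~295 nodes (roughly double)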

This is why I said that the answer isn't straightforward and that you had to plan out your
cluster appropriately. 

It goes back to the OP's initial question about starting with a heterogeneous cluster where
the nodes aren't roughly the same size and configuration.

HTH

-Mike
 


----------------------------------------
> Date: Mon, 2 May 2011 10:30:21 -0400
> From: roughley@gmail.com
> To: user@hbase.apache.org
> Subject: Re: Hardware configuration
>
> I think that there are two important considerations:
> 1. Can the JVM you're planning on using support a heap of > 10GB? If not, you're
> wasting money.
> 2. Putting more disk on nodes means that a failure will take longer to re-replicate
> back to its balanced state. I.e., given your network topology, how long will even a
> 50TB machine take: a day, a week, longer?
>
> /Ian
> Architect / Mgr - Novell Vibe
>
> On 05/02/2011 09:57 AM, Michael Segel wrote:
> >
> > Hi,
> >
> > That's actually a really good question.
> > Unfortunately, the answer isn't really simple.
> >
> > You're going to need to estimate your growth and you're going to need to estimate
> > your configuration.
> >
> > Suppose I know that within 2 years the amount of data that I want to retain is going
> > to be 1PB. With a 3x replication factor, I'll need at least 3PB of disk. Assuming that
> > I can fit 12x2TB drives in a node, I'll need 125-150 machines. (There's some overhead
> > for logging and the OS.)
> >
> > Now this doesn't mean that I'll need to buy all of the machines today and build
> > out the cluster.
> > It means that I will need to figure out my machine room (rack space, power, etc...)
> > and also hardware configuration.
> >
> > You'll also need to plan out your hardware choices. For example, you may want
> > 10GbE on the switch but not at the data node. However, you're going to want to be
> > able to expand your data nodes to add 10GbE cards later.
> >
> > The idea is that as I build out my cluster, all of the machines have the same look
> > and feel. So if you buy quad-core CPUs at 2.2 GHz today and 6 months from now you buy
> > 2.6 GHz CPUs, as long as they are 4-core CPUs your cluster will look the same.
> >
> > The point is that when you lay out your cluster to start with, you'll need to plan
> > ahead and keep things similar. Also you'll need to make sure your NameNode has enough
> > memory...
> >
> > Having said that... Yahoo! has written a paper detailing MR2 (next generation of
> > map/reduce). As the M/R job scheduler becomes more intelligent about the types of jobs
> > and types of hardware, the consistency of hardware becomes less important.
> >
> > With respect to HBase, I suspect there will be a parallel evolution.
> >
> > As to building out and replacing your cluster... if this is a production environment,
> > you'll have to think about DR and building out a second cluster. So the cost of replacing
> > clusters should also be factored in when you budget for hardware.
> >
> > Like I said, it's not a simple answer and you have to approach each instance separately
> > and fine-tune your cluster plans.
> >
> > HTH
> >
> > -Mike
> >
> >
> > ----------------------------------------
> >> Date: Mon, 2 May 2011 09:53:05 +0300
> >> From: iulia.zidaru@1and1.ro
> >> To: user@hbase.apache.org
> >> CC: stack@duboce.net
> >> Subject: Re: Hardware configuration
> >>
> >> Thank you both. How would you estimate really big clusters, with
> >> hundreds of nodes? Requirements might change over time, and replacing an
> >> entire cluster doesn't seem like the best solution...
> >>
> >>
> >>
> >> On 04/29/2011 07:08 PM, Stack wrote:
> >>> I agree with Michael Segel. Distributed computing is hard enough.
> >>> There is no need to add extra complexity.
> >>>
> >>> St.Ack
> >>>
> >>> On Fri, Apr 29, 2011 at 4:05 AM, Iulia Zidaru wrote:
> >>>> Hi,
> >>>> I'm wondering if having a cluster with different machines in terms of CPU,
> >>>> RAM and disk space would be a big issue for HBase. For example, machines
> >>>> with 12GB of RAM and machines with 48GB. We assume that we use them at full
> >>>> capacity. What problems might we encounter with this kind of
> >>>> configuration?
> >>>> Thank you,
> >>>> Iulia
> >>>>
> >>>>
> >>
> >>
> >> --
> >> Iulia Zidaru
> >> Java Developer
> >>
> >> 1&1 Internet AG - Bucharest/Romania - Web Components Romania
> >> 18 Mircea Eliade St
> >> Sect 1, Bucharest
> >> RO Bucharest, 012015
> >> iulia.zidaru@1and1.ro
> >> 0040 31 223 9153
> >>
> >>
> >>
> >
>