hbase-user mailing list archives

From "Jim Kellerman (POWERSET)" <Jim.Keller...@microsoft.com>
Subject RE: Hbase stops working
Date Thu, 26 Feb 2009 19:01:43 GMT
> -----Original Message-----
> From: Andrew McCall [mailto:andrew.mccall@goroam.net]
> Sent: Thursday, February 26, 2009 10:23 AM
> To: hbase-user@hadoop.apache.org
> Subject: Re: Hbase stops working
> 
> What is a rough minimum number of cores or machines I should be looking
> at for a development deployment? Also, where would I best put everything?

It depends on how big your data is, how intensive your map-reduce jobs are, etc.
Our test cluster is 4 nodes, each with two dual-core Opterons, 8GB RAM,
16GB swap, and four 500GB SATA disks.

We run a region server, data node, and task tracker on each machine, and one
machine also runs the name node, job tracker, and master.
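
Concretely, which daemon lands where comes down to the conf files you start
the cluster from. As a rough sketch (the node1..node4 hostnames here are
hypothetical), a layout like ours would put the same four hosts in Hadoop's
conf/slaves and HBase's conf/regionservers, one host per line:

    node1
    node2
    node3
    node4

Then running start-dfs.sh, start-mapred.sh, and start-hbase.sh from node1
makes that machine the name node, job tracker, and master, while the worker
daemons come up on every host listed in those files.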

So for your testing you could get by with one such machine, but you would
not be able to run huge jobs due to constraints such as disk space.
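
For a one-machine dev setup, the main thing is pointing HBase at your local
HDFS. A minimal hbase-site.xml sketch (assuming a pseudo-distributed Hadoop
on localhost:9000; double-check the property against your HBase version's
docs):

    <configuration>
      <property>
        <name>hbase.rootdir</name>
        <!-- where HBase keeps its data; the URL must match
             fs.default.name in your hadoop-site.xml -->
        <value>hdfs://localhost:9000/hbase</value>
      </property>
    </configuration>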

Our production cluster is 100+ machines, each with two quad-core Intel CPUs
and 16GB RAM.
