hbase-user mailing list archives

From llpind <sonny_h...@hotmail.com>
Subject Re: HBase looses regions.
Date Wed, 27 May 2009 17:49:45 GMT

Andrew Purtell-2 wrote:
> Also the program that is pounding the cluster with inserts? What is the
> hardware spec of those nodes? How many CPUs? How many cores? How much RAM? 
I'm currently running the client loader program from my local box:
Core 2 Duo P8400 @ 2.26 GHz, 3.48 GB of RAM.

I've tried a Map/Reduce job as well, but it does the same thing.  I need
help running a Map/Reduce job in a distributed manner.  The way I run it now
is by iterating over the ResultSet and doing batch updates while the row key is the
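Not stated in the thread, but a common cause of every write landing on one region is a monotonically increasing row key (e.g. a sequence number from the ResultSet). A minimal sketch of key salting follows; the bucket count of 4 and the `KeySalter` class name are illustrative assumptions, not anything from the original message:

```java
// Sketch: salting sequential row keys so writes spread across regions.
// BUCKETS = 4 is an arbitrary assumption; in practice it should roughly
// match the number of region servers.
public class KeySalter {
    static final int BUCKETS = 4;

    // Prepend a deterministic bucket prefix derived from the key itself,
    // so the same logical key always maps to the same salted key.
    static String salt(String rowKey) {
        int bucket = Math.floorMod(rowKey.hashCode(), BUCKETS);
        return bucket + "_" + rowKey;
    }

    public static void main(String[] args) {
        for (String k : new String[] {"row-000001", "row-000002", "row-000003"}) {
            System.out.println(salt(k));
        }
    }
}
```

The trade-off is that range scans over the original key order now require one scan per bucket prefix.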
Master box (htop):

  1  [                    0.0%]     Tasks: 162 total, 1 running
  2  [                    0.0%]     Load average: 0.00 0.00 0.00
  3  [||||                2.6%]     Uptime: 3 days, 19:31:54
  4  [

Quad core: Intel(R) Xeon(TM) CPU 3.00GHz

Slave box1, box2, and box3 are all the same as above, but with more hard disk space.

Andrew Purtell-2 wrote:
> The regionservers are running on the same nodes as the DFS datanodes I
> presume

Yes, that is correct.  The slaves have:
3809 DataNode
3938 HRegionServer
3601 Jps

The master has:
1293 NameNode
7363 Jps
1464 SecondaryNameNode
1568 HMaster

Andrew Purtell-2 wrote:
> Can you consider adding additional nodes to spread the load on DFS? 
Yes, if that will help.  Right now I'm not seeing any splits happening, so
I don't know how much adding more boxes will help.  The load seems
unbalanced: all writes go to a single slave, and when that box dies, they
move on to the next.
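One way to get regions spread across the slaves before any natural splits occur is to pre-split the table. A sketch of generating evenly spaced single-byte split keys follows; it assumes row keys whose first byte is roughly uniform over 0..255 (e.g. salted keys), which is an assumption, not something confirmed in this thread:

```java
// Sketch: evenly spaced split keys for pre-splitting a table into
// numRegions regions, assuming the first key byte is uniform in 0..255.
public class SplitKeys {
    static byte[][] evenSplits(int numRegions) {
        // numRegions regions need numRegions - 1 boundary keys.
        byte[][] splits = new byte[numRegions - 1][];
        for (int i = 1; i < numRegions; i++) {
            splits[i - 1] = new byte[] { (byte) (i * 256 / numRegions) };
        }
        return splits;
    }

    public static void main(String[] args) {
        for (byte[] s : evenSplits(4)) {
            System.out.println(s[0] & 0xFF); // print boundary byte unsigned
        }
    }
}
```

With four boxes, four pre-split regions would give each region server something to host from the start instead of funnelling all writes through one.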
View this message in context: http://www.nabble.com/HBase-looses-regions.-tp23657983p23747484.html