hbase-user mailing list archives

From Wellington Chevreuil <wellington.chevre...@gmail.com>
Subject Re: high load average in one region server
Date Fri, 13 Jun 2014 09:52:27 GMT
Might be worth checking the HBase UI (http://hbase-host:60010/); you will see a page with a
“Region Servers” table, where you can check whether regions are evenly spread across
your RSs. From there, you can click the link for each RS to find more information
specific to each region it manages, such as read and write requests per region
and each region's start key and end key. If some regions are receiving
many more requests than others, or their store files are much bigger, then you can consider
splitting those regions, and also changing your row-key design to spread your records. You can find
some useful information about rowkey design here: http://hbase.apache.org/book/rowkey.design.html.
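
As a minimal sketch of what "spreading records" can look like, one common approach is salting: prefix each sequential key with a stable hash-derived bucket so adjacent keys land in different regions. The class name, bucket count, and separator below are my assumptions for illustration, not anything from this thread:

```java
// Hypothetical sketch of rowkey "salting" to avoid region hotspotting.
// NUM_BUCKETS, the class name, and the "-" separator are illustrative
// choices; pick a bucket count matched to your cluster and key volume.
public class SaltedKey {
    static final int NUM_BUCKETS = 5; // e.g. roughly one bucket per region server

    // Prefix the original key with a stable bucket number derived from its
    // hash, so lexicographically adjacent keys map to different buckets.
    static String salt(String originalKey) {
        int bucket = (originalKey.hashCode() & Integer.MAX_VALUE) % NUM_BUCKETS;
        return bucket + "-" + originalKey;
    }

    public static void main(String[] args) {
        // Sequential keys that would otherwise hit one region:
        System.out.println(salt("user12345"));
        System.out.println(salt("user12346"));
    }
}
```

The trade-off is that scans over the original key range now require NUM_BUCKETS parallel scans, one per salt prefix, so this fits write-heavy workloads better than range-scan-heavy ones.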

Cheers 

 
On 12 Jun 2014, at 02:49, Ted Yu <yuzhihong@gmail.com> wrote:

> Which hbase release are you using ?
> 
> Could this be related to how your schema is designed ?
> 
> Have you run jstack for region server on mphbase2 ?
> 
> BTW the tables are not easy to read.
> If you have pictures, you can put them on some website and include links.
> 
> Cheers
> 
> 
> On Wed, Jun 11, 2014 at 6:38 PM, Li Li <fancyerii@gmail.com> wrote:
> 
>>   I have 5 region server hbase cluster. today I found one rs server's
>> load average is above 100 while the other 4 is less than 1.
>>   I use vmstat and dstat and found that this high load machine have
>> large number of read(about 30M/s) and network sent.
>>   Does that mean the cluster suffers hot spot? the slow machine is
>> mphbase2
>>   1. base statistics
>>   ServerName                    Start time                    Requests Per Second  Num. Regions
>>   mphbase1,60020,1402298228045  Mon Jun 09 15:17:08 CST 2014  586                  35
>>   mphbase2,60020,1402298228527  Mon Jun 09 15:17:08 CST 2014  539                  32
>>   mphbase3,60020,1402298228361  Mon Jun 09 15:17:08 CST 2014  966                  32
>>   mphbase4,60020,1402298159826  Mon Jun 09 15:15:59 CST 2014  518                  35
>>   mphbase5,60020,1402298228382  Mon Jun 09 15:17:08 CST 2014  442                  36
>>   Total: 5                                                    3051                 170
>> 
>>   2. storefiles
>>   ServerName                    Num. Stores  Num. Storefiles  Storefile Size  Uncompressed Storefile Size  Index Size  Bloom Size
>>   mphbase1,60020,1402298228045  35           67               11872m          11874mb                      8783k       34818k
>>   mphbase2,60020,1402298228527  32           61               11976m          11977mb                      8882k       34976k
>>   mphbase3,60020,1402298228361  32           66               18321m          18325mb                      13470k      54872k
>>   mphbase4,60020,1402298159826  35           72               13842m          13848mb                      10753k      31784k
>>   mphbase5,60020,1402298228382  36           78               15021m          15027mb                      15329k      29321k
>> 
>>   3. hdfs info (from hdfs)
>>   Live Datanodes: 5
>>   Node      Last Contact  Admin State  Configured Capacity (GB)  Used (GB)  Non DFS Used (GB)  Remaining (GB)  Used (%)  Remaining (%)  Blocks
>>   mphbase1  0             In Service   457.55                    53.28      52.23              352.04          11.64     76.94          1150
>>   mphbase2  1             In Service   457.55                    46.56      48.89              362.1           10.18     79.14          971
>>   mphbase3  0             In Service   457.55                    52.05      55.6               349.89          11.38     76.47          1128
>>   mphbase4  1             In Service   457.55                    50.25      36.88              370.42          10.98     80.96          1254
>>   mphbase5  2             In Service   457.55                    55.2       49.29              353.06          12.06     77.16          1338
>> 

