hbase-user mailing list archives

From Stack <st...@duboce.net>
Subject Re: Hmaster is OutOfMemory
Date Thu, 12 May 2011 17:23:02 GMT
As far as I can tell, what you describe below is the expected
behavior.  What would you suggest we do otherwise?  Do you find that
these HServerInfo instances are consuming lots of memory?

FYI, HServerInfo is purged from TRUNK.

Thanks Jean,
St.Ack

2011/5/12 Jack Zhang(jian) <jack.zhangjian@huawei.com>:
> In our test cluster (1 hmaster, 3 regionservers) there are 1,481 HServerInfo instances.
> So in hbase 0.90.2 I think there is a memory leak in the hmaster:
> 1. Each regionserver reports its load to the hmaster periodically, so the corresponding HServerInfo is replaced in ServerManager.onlineServers (lines 244, 337 in ServerManager).
> 2. When the hbase cluster starts up, AssignmentManager receives the RS_ZK_REGION_OPENED event and constructs an OpenedRegionHandler (line 431 in AssignmentManager) with the HServerInfo instance most recently refreshed by that regionserver.
> 3. OpenedRegionHandler then stores this HServerInfo into assignmentManager.regionOnline (line 97 in OpenedRegionHandler).
>
> So after a regionserver reports its load, the hmaster stores the new HServerInfo instance into assignmentManager.regionOnline whenever a region is opened in the meantime.
> The more regions are opened, the more memory leaks.
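The three steps above can be sketched as follows. This is a simplified illustration of the retention pattern being described, not HBase source: `ServerInfo`, `regionsOnline`, and the method names here are stand-ins for HServerInfo, assignmentManager.regionOnline, and the handlers named in the steps.

```java
import java.util.HashMap;
import java.util.Map;

public class ServerInfoRetention {
    // Stand-in for HServerInfo; object identity is what matters here.
    static final class ServerInfo {
        final String name;
        ServerInfo(String name) { this.name = name; }
    }

    // Stand-in for assignmentManager.regionOnline: region -> ServerInfo.
    static final Map<String, ServerInfo> regionsOnline = new HashMap<>();

    // Step 1: each load report builds a fresh ServerInfo instance.
    static ServerInfo reportLoad(String serverName) {
        return new ServerInfo(serverName);
    }

    // Steps 2-3: the opened-region handler stores whichever instance
    // was current at open time, keyed by region.
    static void regionOpened(String region, ServerInfo current) {
        regionsOnline.put(region, current);
    }

    // Count distinct ServerInfo instances retained across all regions
    // (default identity equals, so every `new` counts separately).
    static long distinctInstances() {
        return regionsOnline.values().stream().distinct().count();
    }

    public static void main(String[] args) {
        // 3 regionservers, 500 region opens each, a heartbeat between opens:
        for (int rs = 0; rs < 3; rs++) {
            for (int i = 0; i < 500; i++) {
                ServerInfo latest = reportLoad("rs" + rs); // fresh instance
                regionOpened("rs" + rs + "-region" + i, latest);
            }
        }
        // 1500 distinct instances retained for only 3 servers -- the same
        // order of magnitude as the ~1,481 instances observed above.
        System.out.println(distinctInstances()); // prints 1500
    }
}
```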
>
> -----Original Message-----
> From: Gaojinchao [mailto:gaojinchao@huawei.com]
> Sent: 12 May 2011 15:24
> To: user@hbase.apache.org
> Subject: Re: Hmaster is OutOfMemory
>
> Thanks, Stack.
> The heap dump is very big. I will try to dig into it and share the result.
> Then you can give me some suggestions.
>
> Could you give me some more suggestions about my cluster?
>
> My application:
> write operations use the put(List) api at about 75K Puts/s (one put is about 400 bytes)
> read operations are rare, but their latency must stay below 5s
>
> machine:
> cpu:    8 cores, 2 GHz
> memory: 24G, HBase uses 8G
> Disk:   2T*8 = 16T
>
> node number: 13 nodes
>
> dfs configure:
>
> dfs.block.size 256M
> dfs.datanode.handler.count 10
> dfs.namenode.handler.count 30
> dfs.datanode.max.xcievers 2047
> dfs.support.append True
>
> hbase configure:
>
> hbase.regionserver.handler.count 50
> hbase.regionserver.global.memstore.upperLimit 0.4
> hbase.regionserver.global.memstore.lowerLimit 0.35
> hbase.hregion.memstore.flush.size 128M
> hbase.hregion.max.filesize 512M
> hbase.client.scanner.caching 1
> hfile.block.cache.size 0.2
> hbase.hregion.memstore.block.multiplier 3
> hbase.hstore.blockingStoreFiles 10
> hbase.hstore.compaction.min.size 64M
>
> compress: gz
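For reference, settings like those listed above live in hbase-site.xml (and the dfs.* ones in hdfs-site.xml). A fragment showing three of the properties, with values copied from the list; note hbase.hregion.max.filesize is given in bytes:

```xml
<!-- hbase-site.xml fragment matching the values above -->
<configuration>
  <property>
    <name>hbase.regionserver.handler.count</name>
    <value>50</value>
  </property>
  <property>
    <name>hbase.hregion.max.filesize</name>
    <value>536870912</value> <!-- 512M, expressed in bytes -->
  </property>
  <property>
    <name>hfile.block.cache.size</name>
    <value>0.2</value>
  </property>
</configuration>
```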
>
> I am worried about some problems:
> 1. One region server has about 12K regions or more; if we raise hregion.max.filesize, parallel scalability will drop and scan latency will suffer.
> 2. If one region server crashes, could it bring down the other region servers?
>
> Can you give some suggestions about the hbase parameters and my cluster?
>
> -----Original Message-----
> From: saint.ack@gmail.com [mailto:saint.ack@gmail.com] on behalf of Stack
> Sent: 10 May 2011 23:21
> To: user@hbase.apache.org
> Subject: Re: Hmaster is OutOfMemory
>
> 2011/5/9 Gaojinchao <gaojinchao@huawei.com>:
>> My first cluster needs to store 147 TB of data. With one region at 512M or 1 GB, that will be 300K regions or 147K regions.
>> In the future, if we store several PB, there will be even more regions.
>
> You might want to up the size of your regions to 4G (FB run w/ big
> regions IIUC).
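The region-count arithmetic in this exchange works out as follows. A rough sketch only: it uses decimal GB to match the email's rounding (147 TB / 0.5 GB ~ 300K, / 1 GB ~ 147K) and ignores compression and replication, which also affect real sizing.

```java
public class RegionMath {
    // Total data divided by region size gives an approximate region count.
    static long regions(long totalGb, long regionGb) {
        return totalGb / regionGb;
    }

    public static void main(String[] args) {
        long totalGb = 147_000; // 147 TB expressed in decimal GB
        System.out.println(regions(totalGb, 1)); // 147000: ~147K regions at 1 GB each
        System.out.println(regions(totalGb, 4)); // 36750: ~37K regions at 4 GB each
    }
}
```

At the suggested 4 GB region size, the same data needs roughly a quarter of the regions, which is the point of upping hbase.hregion.max.filesize.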
>
> Do you want to put up a heap dump for me to take a look at?  That'd be
> easier than me finding time to try to replicate your scenario.
>
> Thanks Gao,
> St.Ack
>
