hbase-issues mailing list archives

From "Nicolas Spiegelberg (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HBASE-3421) Very wide rows -- 30M plus -- cause us OOME
Date Wed, 05 Jan 2011 22:56:46 GMT

    [ https://issues.apache.org/jira/browse/HBASE-3421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12978016#action_12978016 ]

Nicolas Spiegelberg commented on HBASE-3421:
--------------------------------------------

Note that you can limit the number of StoreFiles that can be compacted at one time...

Store.java#204:
    this.maxFilesToCompact = conf.getInt("hbase.hstore.compaction.max", 10);
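
If you need to tighten that bound, here is a minimal sketch of overriding the default
programmatically (assuming the stock HBaseConfiguration/Configuration API; the same property
can also be set in hbase-site.xml):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  public class CompactionLimit {
    public static void main(String[] args) {
      // Loads hbase-default.xml / hbase-site.xml from the classpath.
      Configuration conf = HBaseConfiguration.create();
      // Default is 10; lowering it caps how many StoreFiles a single compaction
      // merges, and therefore how many copies of an outlier KV are held at once.
      conf.setInt("hbase.hstore.compaction.max", 5);
      System.out.println("hbase.hstore.compaction.max = "
          + conf.getInt("hbase.hstore.compaction.max", 10));
    }
  }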

30M * 10 SF == 300MB.  What is your RAM capacity?  You are likely stuck on a merging outlier
that exists in every SF.  I would run:

bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -f <FILE_NAME> -p | sed 's/V:.*$//g' | less

on the HFiles in that Store to see what your high watermark is.
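
To make the 30M * 10 SF arithmetic concrete, here is a back-of-envelope sketch (the ~30 MB
outlier size and the default of 10 files are assumptions taken from the numbers above):

  public class CompactionHeapEstimate {
    public static void main(String[] args) {
      long outlierRowBytes = 30L * 1024 * 1024;   // ~30M outlier present in every SF
      int maxFilesToCompact = 10;                 // hbase.hstore.compaction.max default
      long worstCaseBytes = outlierRowBytes * maxFilesToCompact;
      // Roughly 300 MB held on the heap while one compaction merges that row.
      System.out.printf("worst case: ~%d MB%n", worstCaseBytes / (1024 * 1024));
    }
  }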

> Very wide rows -- 30M plus -- cause us OOME
> -------------------------------------------
>
>                 Key: HBASE-3421
>                 URL: https://issues.apache.org/jira/browse/HBASE-3421
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.90.0
>            Reporter: stack
>
> From the list (see 'jvm oom' in http://mail-archives.apache.org/mod_mbox/hbase-user/201101.mbox/browser),
> it looks like wide rows -- 30M or so -- cause OOME during compaction.  We should check it
> out. Can the scanner used during compactions use the 'limit' when nexting?  If so, this should
> save us from OOME'ing (or we need to add a max size, rather than a count of KVs, to the next call).

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

