hbase-issues mailing list archives

From "Todd Lipcon (Updated) (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HBASE-3421) Very wide rows -- 30M plus -- cause us OOME
Date Wed, 18 Jan 2012 23:02:42 GMT

     [ https://issues.apache.org/jira/browse/HBASE-3421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HBASE-3421:
-------------------------------

    Release Note: 
A new config parameter, "hbase.hstore.compaction.kv.max", has been added to limit the number of KeyValues processed in each iteration of the internal compaction code. Default value is 10.

  was:
A new config parameter, "hbase.hstore.compaction.kv.max", has been added to limit the number of rows scanner returns in next(). Default value is 10.

    
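The new parameter is an ordinary HBase site property, so it can be overridden in hbase-site.xml. A minimal sketch (the value 100 below is purely illustrative; the shipped default is 10):

```xml
<!-- hbase-site.xml: raise the per-next() KeyValue batch limit used
     during compactions. 100 is an arbitrary example value; the
     default that ships with the fix is 10. -->
<property>
  <name>hbase.hstore.compaction.kv.max</name>
  <value>100</value>
</property>
```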
> Very wide rows -- 30M plus -- cause us OOME
> -------------------------------------------
>
>                 Key: HBASE-3421
>                 URL: https://issues.apache.org/jira/browse/HBASE-3421
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.90.0
>            Reporter: stack
>            Assignee: Nate Putnam
>             Fix For: 0.90.5
>
>         Attachments: 3421.addendum, HBASE-3421.patch, HBASE-34211-v2.patch, HBASE-34211-v3.patch, HBASE-34211-v4.patch
>
>
> From the list, see 'jvm oom' in http://mail-archives.apache.org/mod_mbox/hbase-user/201101.mbox/browser;
> it looks like wide rows -- 30M or so -- cause OOME during compaction. We should check it
> out. Can the scanner used during compactions use the 'limit' when nexting? If so, this should
> save our OOME'ing (or, we need to add to the next a max size rather than count of KVs).
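The bounded-next() idea in the description can be sketched outside of HBase as follows. This is a simplified stand-in, not the real HBase `InternalScanner`/`KeyValue` classes: the compaction loop asks the scanner for at most `kvMax` cells per next() call, so even a very wide row is consumed in small batches instead of being materialized all at once.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch of a batch-limited compaction scan, mirroring what
// hbase.hstore.compaction.kv.max does for the real compaction scanner.
// KV and Scanner are hypothetical stand-ins, not HBase classes.
public class BoundedCompactionScan {
    // Stand-in for a KeyValue: just a row key here.
    record KV(String row) {}

    static class Scanner {
        private final Iterator<KV> it;
        Scanner(List<KV> kvs) { this.it = kvs.iterator(); }

        // Fill 'out' with up to 'limit' KVs; return true while more remain.
        boolean next(List<KV> out, int limit) {
            out.clear();
            while (out.size() < limit && it.hasNext()) {
                out.add(it.next());
            }
            return it.hasNext();
        }
    }

    public static void main(String[] args) {
        // A "wide row": many cells under the same row key.
        List<KV> wideRow = new ArrayList<>();
        for (int i = 0; i < 25; i++) wideRow.add(new KV("row-1"));

        int kvMax = 10; // default of hbase.hstore.compaction.kv.max
        Scanner scanner = new Scanner(wideRow);
        List<KV> batch = new ArrayList<>();
        int batches = 0, seen = 0;
        boolean more;
        do {
            more = scanner.next(batch, kvMax);
            batches++;
            seen += batch.size();
            // Peak memory is one batch (<= kvMax KVs), never the whole row.
        } while (more);

        System.out.println(batches + " batches, " + seen + " KVs");
    }
}
```

Running this drains the 25-cell row in batches of at most 10, printing `3 batches, 25 KVs`.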

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
