hbase-issues mailing list archives

From "Jean-Daniel Cryans (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HBASE-3323) OOME in master splitting logs
Date Wed, 22 Dec 2010 00:02:03 GMT

    [ https://issues.apache.org/jira/browse/HBASE-3323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12973987#action_12973987 ]

Jean-Daniel Cryans commented on HBASE-3323:
-------------------------------------------

I think the patch is missing this; at least, adding it back in HLogSplitter fixes TestHLogSplitting:

{code}
} catch (EOFException eof) {
  // truncated files are expected if a RS crashes (see HBASE-2643)
  LOG.info("EOF from hlog " + logPath + ".  continuing");
  processedLogs.add(logPath);
}
{code}
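
For context, here is a minimal, self-contained sketch of the kind of per-file read loop this catch belongs in. The Entry/EntryReader types and the class name are hypothetical stand-ins rather than the actual HLogSplitter code, and System.out stands in for LOG.info:

{code}
import java.io.EOFException;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Illustrative only: shows where the EOFException handling above sits
// relative to the read loop and the processedLogs bookkeeping.
class TruncatedLogTolerantSplitter {
  interface Entry {}
  interface EntryReader { Entry next() throws IOException; }

  private final List<String> processedLogs = new ArrayList<String>();

  void splitLog(String logPath, EntryReader reader) throws IOException {
    try {
      Entry entry;
      while ((entry = reader.next()) != null) {
        // hand the edit off to the per-region output sinks here
      }
      processedLogs.add(logPath);
    } catch (EOFException eof) {
      // truncated files are expected if a RS crashes (see HBASE-2643)
      System.out.println("EOF from hlog " + logPath + ".  continuing");
      processedLogs.add(logPath);
    }
  }
}
{code}

Without the catch, a truncated hlog fails the whole split attempt instead of being treated as a normally ended file.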

> OOME in master splitting logs
> -----------------------------
>
>                 Key: HBASE-3323
>                 URL: https://issues.apache.org/jira/browse/HBASE-3323
>             Project: HBase
>          Issue Type: Bug
>          Components: master
>    Affects Versions: 0.90.0
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>            Priority: Blocker
>             Fix For: 0.90.0
>
>         Attachments: hbase-3323.4.txt, hbase-3323.5.txt, hbase-3323.6.txt, hbase-3323.txt, hbase-3323.txt, hbase-3323.txt, sizes.png
>
>
> In testing a RS failure under a heavy increment workload, I ran into an OOME when the master was splitting the logs.
> In this test case, I have exactly 136 bytes per log entry in all the logs, and the logs are all around 66-74MB. With a batch size of 3 logs, this means the master is loading about 500K-600K edits per log file. Each edit ends up creating 3 byte[] objects, the references for which are each 8 bytes of RAM, so we have 160 (136+8*3) bytes per edit used by the byte[]s. For each edit we also allocate a bunch of other objects: one HLog$Entry, one WALEdit, one ArrayList, one LinkedList$Entry, one HLogKey, and one KeyValue. Overall this works out to 400 bytes of overhead per edit. So, with the default settings on this fairly average workload, the 1.5M log entries take about 770MB of RAM. Since I had a few log files that were a bit larger (around 90MB), it exceeded 1GB of RAM and I got an OOME.
> For one, the 400 bytes of overhead per edit is pretty bad, and we could probably be a lot more efficient. For two, we should actually account for this memory rather than simply having a configurable "batch size" in the master.
> I think this is a blocker because I'm running with fairly default configs here, and just killing one RS made the cluster fall over due to a master OOME.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

