hadoop-hdfs-issues mailing list archives

From "Colin Patrick McCabe (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-4304) Make FSEditLogOp.MAX_OP_SIZE configurable
Date Tue, 11 Dec 2012 23:17:21 GMT

    [ https://issues.apache.org/jira/browse/HDFS-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13529465#comment-13529465 ]

Colin Patrick McCabe commented on HDFS-4304:
--------------------------------------------

Here's a patch that makes MAX_OP_SIZE configurable during recovery mode and bumps the default up to 50 MB.
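The pattern the patch describes, replacing a hard-coded constant with a value read from the configuration, might look roughly like the sketch below. This is not the actual HDFS-4304 patch: the key name `dfs.namenode.max.op.size` and the exact 50 MB default are assumptions based on this comment, and `java.util.Properties` stands in for Hadoop's `Configuration` class so the sketch is self-contained.

```java
import java.util.Properties;

// Sketch of replacing a hard-coded MAX_OP_SIZE with a configurable value.
// The key name and the 50 MB default are assumptions based on this comment,
// not the actual HDFS-4304 patch; Properties stands in for Hadoop's
// org.apache.hadoop.conf.Configuration.
public class EditLogOpSizeSketch {
    // Hypothetical config key and new default (50 MB instead of 1.5 MB).
    static final String MAX_OP_SIZE_KEY = "dfs.namenode.max.op.size";
    static final int DEFAULT_MAX_OP_SIZE = 50 * 1024 * 1024;

    // Mimics Configuration.getInt(key, default): fall back to the default
    // when the key is absent, so existing deployments keep working.
    static int getMaxOpSize(Properties conf) {
        String v = conf.getProperty(MAX_OP_SIZE_KEY);
        return (v == null) ? DEFAULT_MAX_OP_SIZE : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(getMaxOpSize(conf));          // default: 52428800
        conf.setProperty(MAX_OP_SIZE_KEY, "104857600");  // operator override
        System.out.println(getMaxOpSize(conf));          // 104857600
    }
}
```

Keeping the old behavior as the default when the key is unset is what makes the change safe for clusters that never touch the new setting.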
                
> Make FSEditLogOp.MAX_OP_SIZE configurable
> -----------------------------------------
>
>                 Key: HDFS-4304
>                 URL: https://issues.apache.org/jira/browse/HDFS-4304
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 3.0.0, 2.0.3-alpha
>            Reporter: Todd Lipcon
>            Assignee: Colin Patrick McCabe
>         Attachments: HDFS-4304.001.patch
>
>
> Today we ran into an issue where a NN had logged a very large op, greater than the 1.5MB
> MAX_OP_SIZE constant. In order to successfully load the edits, we had to patch with a larger
> constant. This constant should be configurable so that we wouldn't have to recompile in these
> odd cases. Additionally, I think the default should be bumped a bit higher, since it's only
> a safeguard against OOME, and people tend to run NNs with multi-GB heaps.
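For an operator hitting the scenario the description mentions, the fix would presumably be an `hdfs-site.xml` override rather than a recompile. The property name below is an assumption based on this issue and is not confirmed by the message itself:

```xml
<!-- Hypothetical hdfs-site.xml override; the property name is an
     assumption based on this issue, not confirmed by the message. -->
<property>
  <name>dfs.namenode.max.op.size</name>
  <!-- 100 MB (104857600 bytes), for edit logs with unusually large ops -->
  <value>104857600</value>
</property>
```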

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
