hadoop-hdfs-issues mailing list archives

From "Haohui Mai (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-6102) Cannot load an fsimage with a very large directory
Date Thu, 13 Mar 2014 20:15:43 GMT

    [ https://issues.apache.org/jira/browse/HDFS-6102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13933985#comment-13933985 ]

Haohui Mai commented on HDFS-6102:
----------------------------------

It might be sufficient to put this in the release note. I agree that realistically it is quite unlikely for someone to put 6.7m inodes as the direct children of a single directory.

I'm a little hesitant to introduce a new configuration key just for this. I wonder, doesn't the namespace quota offer a superset of this functionality? It might be more natural to enforce the limit within the scope of the namespace quota.
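
For reference, a minimal sketch of what the quota-based approach looks like from the client API. The directory path and the 1,000,000 cap are made up for illustration, and the cast assumes fs.defaultFS points at HDFS; DistributedFileSystem#setQuota and HdfsConstants.QUOTA_DONT_SET are the existing client-side pieces:

{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

public class CapDirectoryNamespace {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical directory and limit, purely for illustration.
    Path dir = new Path("/user/flat");
    DistributedFileSystem dfs = (DistributedFileSystem) dir.getFileSystem(conf);
    // A namespace quota caps the total number of files and directories in the
    // subtree, which also bounds how many direct children can accumulate.
    dfs.setQuota(dir, 1000000L, HdfsConstants.QUOTA_DONT_SET);
  }
}
{noformat}

The same effect is available administratively via hdfs dfsadmin -setQuota, which is part of why an extra configuration key feels redundant.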

> Cannot load an fsimage with a very large directory
> --------------------------------------------------
>
>                 Key: HDFS-6102
>                 URL: https://issues.apache.org/jira/browse/HDFS-6102
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.4.0
>            Reporter: Andrew Wang
>            Assignee: Andrew Wang
>            Priority: Blocker
>
> Found by [~schu] during testing. We were creating a bunch of directories in a single directory to blow up the fsimage size, and we ended up hitting this error when trying to load a very large fsimage:
> {noformat}
> 2014-03-13 13:57:03,901 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 24523605 INodes.
> 2014-03-13 13:57:59,038 ERROR org.apache.hadoop.hdfs.server.namenode.FSImage: Failed to load image from FSImageFile(file=/dfs/nn/current/fsimage_0000000000024532742, cpktTxId=0000000000024532742)
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase the size limit.
>         at com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
>         at com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
>         at com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
>         at com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462)
>         at com.google.protobuf.CodedInputStream.readUInt64(CodedInputStream.java:188)
>         at org.apache.hadoop.hdfs.server.namenode.FsImageProto$INodeDirectorySection$DirEntry.<init>(FsImageProto.java:9839)
>         at org.apache.hadoop.hdfs.server.namenode.FsImageProto$INodeDirectorySection$DirEntry.<init>(FsImageProto.java:9770)
>         at org.apache.hadoop.hdfs.server.namenode.FsImageProto$INodeDirectorySection$DirEntry$1.parsePartialFrom(FsImageProto.java:9901)
>         at org.apache.hadoop.hdfs.server.namenode.FsImageProto$INodeDirectorySection$DirEntry$1.parsePartialFrom(FsImageProto.java:9896)
>         at 52)
> ...
> {noformat}
> Some further research reveals there's a 64MB max size per protobuf message, which seems to be what we're hitting here.
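
For context, the exception text above already points at the parser-side knob: protobuf's CodedInputStream enforces a 64MB per-message limit by default, and setSizeLimit() raises it. A minimal sketch of what raising that limit looks like; the 256MB cap is chosen arbitrarily for illustration and is not taken from the HDFS loader:

{noformat}
import java.io.InputStream;
import com.google.protobuf.CodedInputStream;

public class LenientProtobufStream {
  // Wraps an input stream in a CodedInputStream whose per-message size limit
  // has been raised above protobuf's 64MB default. The 256MB value is an
  // arbitrary example, not a value used by the HDFS code.
  static CodedInputStream wrap(InputStream in) {
    CodedInputStream coded = CodedInputStream.newInstance(in);
    coded.setSizeLimit(256 * 1024 * 1024);
    return coded;
  }
}
{noformat}

Raising the parser limit is one possible direction; keeping individual fsimage messages under the default is another.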



--
This message was sent by Atlassian JIRA
(v6.2#6252)
