hadoop-hdfs-issues mailing list archives

From "Chris Nauroth (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-10312) Large block reports may fail to decode at NameNode due to 64 MB protobuf maximum length restriction.
Date Tue, 19 Apr 2016 21:44:25 GMT

     [ https://issues.apache.org/jira/browse/HDFS-10312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Nauroth updated HDFS-10312:
---------------------------------
    Attachment: HDFS-10312.002.patch

[~liuml07] and [~xyao], thank you for the code reviews.  That's a great catch on the lack
of {{fail}} in the test.  I'm attaching patch v002 with the fix.
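
The review point about the missing {{fail}} can be illustrated with a minimal, hypothetical sketch (class and method names here are illustrative, not taken from the actual patch): a test that expects an exception must fail explicitly when none is thrown, otherwise a broken, non-throwing implementation passes the test silently.

```java
// Hypothetical sketch of the review point, not the actual patch code:
// a test that expects an exception must fail when none is thrown.
public class ExpectedExceptionPattern {

    // Stand-in for the code under test: rejects messages over the limit.
    static void decode(int messageLength, int limit) {
        if (messageLength > limit) {
            throw new IllegalStateException("message too large: " + messageLength);
        }
    }

    // Mirrors the corrected test structure: the AssertionError plays the
    // role of JUnit's fail(), so a silent non-throw becomes a test failure.
    static boolean rejectsOversized() {
        try {
            decode(128 * 1024 * 1024, 64 * 1024 * 1024);
            throw new AssertionError("expected IllegalStateException was not thrown");
        } catch (IllegalStateException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("oversized message rejected: " + rejectsOversized());
    }
}
```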

> Large block reports may fail to decode at NameNode due to 64 MB protobuf maximum length restriction.
> ----------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-10312
>                 URL: https://issues.apache.org/jira/browse/HDFS-10312
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Chris Nauroth
>            Assignee: Chris Nauroth
>         Attachments: HDFS-10312.001.patch, HDFS-10312.002.patch
>
>
> Our RPC server caps the maximum size of incoming messages at 64 MB by default.  For exceptional
> circumstances, this limit can be raised using {{ipc.maximum.data.length}}.  However, for block
> reports, there is still an internal maximum length restriction of 64 MB enforced by protobuf.
> (Sample stack trace to follow in comments.)  This issue proposes to apply the same override
> to our block list decoding, so that large block reports can proceed.
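
For reference, raising the RPC message cap mentioned above is a Hadoop configuration change; a minimal sketch, assuming the property lives in core-site.xml (the 128 MB value is an illustrative choice, not a recommendation):

```xml
<!-- core-site.xml: raise the RPC server's maximum incoming message size.
     The default is 64 MB (67108864 bytes); 134217728 = 128 MB is illustrative. -->
<property>
  <name>ipc.maximum.data.length</name>
  <value>134217728</value>
</property>
```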



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
