hadoop-hdfs-issues mailing list archives

From "Arpit Agarwal (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-10312) Large block reports may fail to decode at NameNode due to 64 MB protobuf maximum length restriction.
Date Tue, 19 Apr 2016 23:46:25 GMT

    [ https://issues.apache.org/jira/browse/HDFS-10312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15248947#comment-15248947 ]

Arpit Agarwal commented on HDFS-10312:
--------------------------------------

Pasting the delta inline to avoid confusing Jenkins. I'll kick off a build manually.
{code}
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestLargeBlockReport.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestLargeBlockReport.java
index bd9c0a2..0dff33f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestLargeBlockReport.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestLargeBlockReport.java
@@ -74,10 +74,10 @@ public void tearDown() {
 
   @Test
   public void testBlockReportExceedsLengthLimit() throws Exception {
-    initCluster();
+    initCluster(1024 * 1024);
     // Create a large enough report that we expect it will go beyond the RPC
     // server's length validation, and also protobuf length validation.
-    StorageBlockReport[] reports = createReports(6000000);
+    StorageBlockReport[] reports = createReports(200000);
     try {
       nnProxy.blockReport(bpRegistration, bpId, reports,
           new BlockReportContext(1, 0, reportId, fullBrLeaseId, sorted));
@@ -91,9 +91,8 @@ public void testBlockReportExceedsLengthLimit() throws Exception {
 
   @Test
   public void testBlockReportSucceedsWithLargerLengthLimit() throws Exception {
-    conf.setInt(IPC_MAXIMUM_DATA_LENGTH, 128 * 1024 * 1024); // 128 MB
-    initCluster();
-    StorageBlockReport[] reports = createReports(6000000);
+    initCluster(2 * 1024 * 1024);
+    StorageBlockReport[] reports = createReports(200000);
     nnProxy.blockReport(bpRegistration, bpId, reports,
         new BlockReportContext(1, 0, reportId, fullBrLeaseId, sorted));
   }
@@ -129,7 +128,8 @@ public void testBlockReportSucceedsWithLargerLengthLimit() throws Exception {
    *
    * @throws Exception if initialization fails
    */
-  private void initCluster() throws Exception {
+  private void initCluster(int ipcMaxDataLength) throws Exception {
+    conf.setInt(IPC_MAXIMUM_DATA_LENGTH, ipcMaxDataLength);
     cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
     cluster.waitActive();
     dn = cluster.getDataNodes().get(0);
{code}
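
For context, the decode-side change this issue calls for amounts to raising protobuf's per-stream size limit to match the RPC server's configured maximum rather than leaving it at the hard-coded 64 MB default. A minimal sketch of that idea, not the actual patch: the {{encoded}} buffer and {{maxDataLength}} parameter are illustrative, with {{maxDataLength}} assumed to carry the value of {{ipc.maximum.data.length}}.
{code}
import com.google.protobuf.CodedInputStream;

public class BlockListDecodeSketch {
  /**
   * Returns a protobuf decoder for an encoded block list whose size may
   * exceed protobuf's default 64 MB limit. The caller passes the same
   * maximum length that the RPC server was configured with.
   */
  public static CodedInputStream newDecoder(byte[] encoded, int maxDataLength) {
    CodedInputStream cis = CodedInputStream.newInstance(encoded);
    // Without this override, CodedInputStream throws
    // InvalidProtocolBufferException after consuming more than 64 MB.
    cis.setSizeLimit(maxDataLength);
    return cis;
  }
}
{code}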

> Large block reports may fail to decode at NameNode due to 64 MB protobuf maximum length restriction.
> ----------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-10312
>                 URL: https://issues.apache.org/jira/browse/HDFS-10312
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Chris Nauroth
>            Assignee: Chris Nauroth
>         Attachments: HDFS-10312.001.patch, HDFS-10312.002.patch, HDFS-10312.003.patch, HDFS-10312.004.patch
>
>
> Our RPC server caps the maximum size of incoming messages at 64 MB by default. For
> exceptional circumstances, this can be increased using {{ipc.maximum.data.length}}.
> However, for block reports, there is still an internal maximum length restriction of
> 64 MB enforced by protobuf. (Sample stack trace to follow in comments.) This issue
> proposes to apply the same override to our block list decoding, so that large block
> reports can proceed.
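
The {{ipc.maximum.data.length}} override mentioned above is an ordinary Hadoop configuration knob. A minimal sketch of raising it to 128 MB programmatically, assuming the constant lives in {{CommonConfigurationKeys}} as in the test above; in a real deployment it would normally be set in core-site.xml on the NameNode.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;

public class IpcLimitExample {
  public static Configuration withLargerIpcLimit() {
    Configuration conf = new Configuration();
    // Accept incoming RPC messages up to 128 MB, double the 64 MB default.
    conf.setInt(CommonConfigurationKeys.IPC_MAXIMUM_DATA_LENGTH,
        128 * 1024 * 1024);
    return conf;
  }
}
{code}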



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
