From "Aaron Fabbri (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-12151) Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes
Date Tue, 25 Jul 2017 21:06:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16100787#comment-16100787 ]

Aaron Fabbri commented on HDFS-12151:
-------------------------------------

Looks straightforward. +1 (non-binding).

Confirmed by inspection that a null passed in here is handled correctly in {{Sender#writeBlock()}}
via {{PBHelperClient.convert()}}, which returns an empty list.
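
A minimal sketch of that null-to-empty-list pattern (hypothetical class and signature for illustration; not the actual {{PBHelperClient}} code):

{code}
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ConvertSketch {
  // Stand-in for the behavior described above: a null array from an
  // older client becomes an empty list rather than an NPE downstream.
  static List<String> convert(String[] targetStorageIds) {
    return targetStorageIds == null
        ? Collections.<String>emptyList()
        : Arrays.asList(targetStorageIds);
  }

  public static void main(String[] args) {
    System.out.println(convert(null));                     // []
    System.out.println(convert(new String[] {"DS-1234"})); // [DS-1234]
  }
}
{code}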

For upstream regression testing, where would be a good place to add a test for this?  I don't
think a test is a must-have for this small improvement, but it does highlight our need for
better compatibility testing here.  Do we have an upstream HDFS protocol compatibility testing
JIRA we could add this case to? 
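
For context on the failure quoted below: the DataNode indexes element 0 of targetStorageIds, which is empty when a Hadoop 2 client sends none. A hedged sketch of the failure mode and the kind of length guard that avoids it (hypothetical names; not the actual {{DataXceiver}} code or the attached patch):

{code}
public class GuardSketch {
  public static void main(String[] args) {
    // A Hadoop 2 client never sends targetStorageIds, so after the
    // writeBlock request is decoded the array is empty.
    String[] targetStorageIds = new String[0];

    // Pre-patch failure mode: unconditional indexing throws
    // ArrayIndexOutOfBoundsException: 0, killing the op before any
    // response is written; that is why the client only sees
    // "Premature EOF: no length prefix available".
    // String storageId = targetStorageIds[0];  // would throw here

    // A length guard with a fallback preserves the Hadoop 2 wire behavior.
    String storageId =
        targetStorageIds.length > 0 ? targetStorageIds[0] : "";
    System.out.println("storage id: '" + storageId + "'");
  }
}
{code}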

> Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes
> --------------------------------------------------------
>
>                 Key: HDFS-12151
>                 URL: https://issues.apache.org/jira/browse/HDFS-12151
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: rolling upgrades
>    Affects Versions: 3.0.0-alpha4
>            Reporter: Sean Mackrory
>            Assignee: Sean Mackrory
>         Attachments: HDFS-12151.001.patch
>
>
> Trying to write to a Hadoop 3 DataNode with a Hadoop 2 client currently fails. On the client side it looks like this:
> {code}
>     17/07/14 13:31:58 INFO hdfs.DFSClient: Exception in createBlockOutputStream
>     java.io.EOFException: Premature EOF: no length prefix available
>             at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
>             at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1318)
>             at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1237)
>             at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449){code}
> But on the DataNode side there's an ArrayIndexOutOfBoundsException because there aren't any targetStorageIds:
> {code}
>     java.lang.ArrayIndexOutOfBoundsException: 0
>             at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:815)
>             at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
>             at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
>             at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
>             at java.lang.Thread.run(Thread.java:745){code}


