hadoop-hdfs-issues mailing list archives

From "Sean Mackrory (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-12151) Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes
Date Fri, 28 Jul 2017 22:00:06 GMT

     [ https://issues.apache.org/jira/browse/HDFS-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Sean Mackrory updated HDFS-12151:
---------------------------------
    Attachment: HDFS-12151.006.patch

So the problem was that the test couldn't connect to a socket on the local port I was pointing it
at. I had originally included a dummy server as part of the NullDataNode class, but later removed
it because everything seemed to work without it; I assumed that was because the test was only
using the OutputStream I was passing in. It actually worked only because I *happen* to have
something listening on port 12345 locally. I've now restored the dummy server, switched to a port
that nothing on my machine listens on, and made the test move both the URLs and the dummy server
to a different port if anything is already listening on the chosen one.
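
For illustration, a minimal sketch of that port-selection idea, assuming a hypothetical helper
(the class and method names below are illustrative, not from the patch): probe the preferred
port and fall back to an OS-assigned ephemeral port if it is already taken.

{code}
import java.io.IOException;
import java.net.ServerSocket;

// Hypothetical helper, not part of the patch: choose a local port for the
// dummy server, preferring a fixed port but falling back to an ephemeral
// one if something is already listening on it.
public class DummyServerPortChooser {
  static int choosePort(int preferred) throws IOException {
    try (ServerSocket probe = new ServerSocket(preferred)) {
      return probe.getLocalPort();        // preferred port was free
    } catch (IOException alreadyInUse) {
      try (ServerSocket probe = new ServerSocket(0)) {
        return probe.getLocalPort();      // let the OS pick a free port
      }
    }
  }
}
{code}

The test would then bind the dummy server to the returned port and rewrite the URLs to match, so
nothing depends on what happens to be listening on the developer's machine.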

Attaching for a more serious test run...

> Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes
> --------------------------------------------------------
>
>                 Key: HDFS-12151
>                 URL: https://issues.apache.org/jira/browse/HDFS-12151
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: rolling upgrades
>    Affects Versions: 3.0.0-alpha4
>            Reporter: Sean Mackrory
>            Assignee: Sean Mackrory
>         Attachments: HDFS-12151.001.patch, HDFS-12151.002.patch, HDFS-12151.003.patch,
HDFS-12151.004.patch, HDFS-12151.005.patch, HDFS-12151.006.patch
>
>
> Trying to write to a Hadoop 3 DataNode with a Hadoop 2 client currently fails. On the
client side it looks like this:
> {code}
>     17/07/14 13:31:58 INFO hdfs.DFSClient: Exception in createBlockOutputStream
>     java.io.EOFException: Premature EOF: no length prefix available
>             at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
>             at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1318)
>             at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1237)
>             at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449){code}
> But on the DataNode side there's an ArrayIndexOutOfBoundsException because there aren't any
targetStorageIds:
> {code}
>     java.lang.ArrayIndexOutOfBoundsException: 0
>             at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:815)
>             at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
>             at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
>             at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
>             at java.lang.Thread.run(Thread.java:745){code}
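
The quoted trace boils down to indexing into an empty array: a Hadoop 2 client sends no
targetStorageIds, so element 0 does not exist. A minimal sketch of the kind of guard the DataNode
side needs (hypothetical helper, not the actual HDFS-12151 change):

{code}
// Illustrative only, not the actual fix: treat a missing or empty
// targetStorageIds array (as sent by pre-3.0 clients) as "no storage ID"
// instead of assuming element 0 exists.
public class TargetStorageIdCompat {
  static String firstStorageIdOrNull(String[] targetStorageIds) {
    return (targetStorageIds != null && targetStorageIds.length > 0)
        ? targetStorageIds[0]
        : null;   // older client: no explicit storage ID was sent
  }
}
{code}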



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

