hadoop-hdfs-issues mailing list archives

From "Sean Mackrory (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-12151) Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes
Date Thu, 27 Jul 2017 19:53:00 GMT

     [ https://issues.apache.org/jira/browse/HDFS-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Sean Mackrory updated HDFS-12151:
---------------------------------
    Attachment: HDFS-12151.003.patch

Attaching a patch with the checkstyle issues fixed, and also logging a stack trace for exceptions
that occur earlier than expected. I tried running the parallel tests locally and didn't hit
a problem, but many other tests are failing because they think LOG fields are missing (they
aren't missing in the code - still investigating). I also had a clean Yetus run locally, so I
may be missing some configuration.

I don't want to handle RuntimeExceptions differently, because it's an NPE that we receive in
the case of the bug I'm fixing, and it's also an NPE that we receive after data has been sent
to the server (because I haven't mocked everything). So if we receive an NPE before data is
sent to the server, I'd like to treat it the same as any other exception and fail.
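The rule described in the paragraph above can be sketched as a small helper. This is a hypothetical illustration (the class and method names are mine, not from the patch): an exception is only tolerated when it is the NPE that the incomplete mock is expected to throw after data has reached the server.

```java
public class WriteBlockCheck {
    // Hypothetical helper illustrating the rule from the comment above:
    // an exception is acceptable only if it is an NPE thrown AFTER data
    // was sent to the (mock) server; anything earlier is a real failure.
    static boolean isAcceptable(RuntimeException e, boolean dataSentToServer) {
        // The incomplete mock is expected to throw an NPE once data has
        // been sent; everything else should fail the test.
        return dataSentToServer && e instanceof NullPointerException;
    }

    public static void main(String[] args) {
        // NPE after data was sent: expected artifact of the mock.
        System.out.println(isAcceptable(new NullPointerException(), true));   // true
        // NPE before data was sent: a genuine bug, so the test must fail.
        System.out.println(isAcceptable(new NullPointerException(), false));  // false
        // Any other runtime exception is never acceptable.
        System.out.println(isAcceptable(new IllegalStateException(), true));  // false
    }
}
```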

> Hadoop 2 clients cannot writeBlock to Hadoop 3 DataNodes
> --------------------------------------------------------
>
>                 Key: HDFS-12151
>                 URL: https://issues.apache.org/jira/browse/HDFS-12151
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: rolling upgrades
>    Affects Versions: 3.0.0-alpha4
>            Reporter: Sean Mackrory
>            Assignee: Sean Mackrory
>         Attachments: HDFS-12151.001.patch, HDFS-12151.002.patch, HDFS-12151.003.patch
>
>
> Trying to write to a Hadoop 3 DataNode with a Hadoop 2 client currently fails. On the client side it looks like this:
> {code}
>     17/07/14 13:31:58 INFO hdfs.DFSClient: Exception in createBlockOutputStream
>     java.io.EOFException: Premature EOF: no length prefix available
>             at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2280)
>             at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1318)
>             at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1237)
>             at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449){code}
> But on the DataNode side there's an ArrayIndexOutOfBoundsException because there aren't any targetStorageIds:
> {code}
>     java.lang.ArrayIndexOutOfBoundsException: 0
>             at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:815)
>             at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
>             at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
>             at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290)
>             at java.lang.Thread.run(Thread.java:745){code}
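The DataNode-side stack trace above comes from indexing a storage-ID array that a Hadoop 2 client never populates. A minimal sketch of the failure mode and a bounds-checked alternative (hypothetical helper, not the actual DataXceiver code):

```java
public class StorageIdCompat {
    // Hypothetical guard illustrating the compatibility issue: a Hadoop 2
    // client sends no targetStorageIds in the writeBlock request, so
    // indexing the array blindly throws ArrayIndexOutOfBoundsException
    // on a Hadoop 3 DataNode.
    static String storageIdForTarget(String[] targetStorageIds, int i) {
        // Treat a missing entry as "no storage id" instead of indexing
        // past the end of an empty array.
        return i < targetStorageIds.length ? targetStorageIds[i] : null;
    }

    public static void main(String[] args) {
        // Empty array, as sent by an old client: returns null instead of
        // throwing ArrayIndexOutOfBoundsException.
        System.out.println(storageIdForTarget(new String[0], 0)); // null
        // Populated array, as sent by a new client: normal lookup.
        System.out.println(storageIdForTarget(new String[]{"DS-1"}, 0)); // DS-1
    }
}
```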



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

