hadoop-hdfs-issues mailing list archives

From "Kai Zheng (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-8347) Using chunkSize to perform erasure decoding in stripping blocks recovering
Date Fri, 08 May 2015 06:22:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-8347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14533949#comment-14533949 ]

Kai Zheng commented on HDFS-8347:

Hi Yi,

Yes, you're right, it was caused by my latest commit. I should have run the tests fully
instead of only picking up a few of them. As you suggested, I will fix the test case failure
mentioned here in HADOOP-11938 and leave this issue open for further discussion about what
buffer size should be used when decoding. Thanks.

> Using chunkSize to perform erasure decoding in stripping blocks recovering
> --------------------------------------------------------------------------
>                 Key: HDFS-8347
>                 URL: https://issues.apache.org/jira/browse/HDFS-8347
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Kai Zheng
> While investigating a test failure in {{TestRecoverStripedFile}}, I found one issue. An
> extra configurable buffer size, instead of the chunkSize defined in the schema, is used to
> perform the decoding, which is incorrect and causes a decoding failure as below. This is
> exposed by the latest change in the erasure coder.
> {noformat}
> 2015-05-08 18:50:06,607 WARN  datanode.DataNode (ErasureCodingWorker.java:run(386)) - Transfer failed for all targets.
> 2015-05-08 18:50:06,608 WARN  datanode.DataNode (ErasureCodingWorker.java:run(399)) - Failed to recover striped block: BP-1597876081-
> 2015-05-08 18:50:06,609 INFO  datanode.DataNode (BlockReceiver.java:receiveBlock(826)) - Exception for BP-1597876081-
> java.io.IOException: Premature EOF from inputStream
> 	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:203)
> 	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
> 	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
> 	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
> 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:472)
> 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:787)
> 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:803)
> 	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
> 	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
> 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:250)
> 	at java.lang.Thread.run(Thread.java:745)
> {noformat}
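The description above boils down to a buffer-size contract: every buffer handed to the erasure decoder must be exactly the chunkSize from the EC schema, and using a separately configured size desynchronizes the decoder from the data actually streamed, which surfaces as the "Premature EOF" above. A minimal self-contained sketch of that contract (this is not Hadoop's actual coder API; the class, the XOR "decode", and the error mapping are illustrative assumptions):

```java
// Hypothetical sketch, NOT Hadoop's RawErasureDecoder API: shows why decode
// buffers must use the schema-defined chunkSize rather than a separate
// configurable buffer size.
import java.util.Arrays;

public class ChunkSizeDemo {
    static final int CHUNK_SIZE = 64 * 1024; // chunkSize from the EC schema

    // Simulated XOR "decode": every input buffer must be exactly one chunk.
    static byte[] decode(byte[][] inputs, int bufferSize) {
        if (bufferSize != CHUNK_SIZE) {
            // A mismatched buffer makes the coder expect more (or less) data
            // than was streamed -- analogous to the "Premature EOF" failure.
            throw new IllegalStateException("Premature EOF from inputStream");
        }
        byte[] out = new byte[CHUNK_SIZE];
        for (byte[] in : inputs)
            for (int i = 0; i < CHUNK_SIZE; i++)
                out[i] ^= in[i];
        return out;
    }

    public static void main(String[] args) {
        byte[][] inputs = new byte[2][CHUNK_SIZE];
        Arrays.fill(inputs[0], (byte) 1);
        Arrays.fill(inputs[1], (byte) 3);

        // Correct: size the decode buffers from the schema's chunkSize.
        byte[] ok = decode(inputs, CHUNK_SIZE);
        System.out.println("first decoded byte: " + ok[0]); // 1 ^ 3 = 2

        // Incorrect: an independently configured buffer size breaks decoding.
        try {
            decode(inputs, 128 * 1024);
        } catch (IllegalStateException e) {
            System.out.println("decode failed: " + e.getMessage());
        }
    }
}
```

The point of the sketch is only that the buffer size is not a free tuning knob: it is fixed by the schema, which is what the fix tracked here enforces.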

This message was sent by Atlassian JIRA
