hadoop-hdfs-dev mailing list archives

From "Kai Zheng (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-8347) Using chunkSize to perform erasure decoding in striped block recovery
Date Fri, 08 May 2015 03:01:59 GMT
Kai Zheng created HDFS-8347:

             Summary: Using chunkSize to perform erasure decoding in striped block recovery
                 Key: HDFS-8347
                 URL: https://issues.apache.org/jira/browse/HDFS-8347
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: Kai Zheng
            Assignee: Kai Zheng

While investigating a test failure in {{TestRecoverStripedFile}}, I found two issues:
* An extra buffer size, instead of the chunkSize defined in the schema, is used to perform the decoding. This is incorrect and causes the decoding failure shown below; it was exposed by the latest change in the erasure coder. (A toy sketch of the size mismatch appears after the code block at the end of this report.)
{noformat}
2015-05-08 18:50:06,607 WARN  datanode.DataNode (ErasureCodingWorker.java:run(386)) - Transfer failed for all targets.
2015-05-08 18:50:06,608 WARN  datanode.DataNode (ErasureCodingWorker.java:run(399)) - Failed to recover striped block: BP-1597876081-
2015-05-08 18:50:06,609 INFO  datanode.DataNode (BlockReceiver.java:receiveBlock(826)) - Exception for BP-1597876081-
java.io.IOException: Premature EOF from inputStream
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:203)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:472)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:787)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:803)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:250)
	at java.lang.Thread.run(Thread.java:745)
{noformat}
* In the raw erasure coder, a bad optimization in the code below: it assumes the byte array backing a heap buffer is always available for reading or writing starting at offset zero and spanning the whole array, which does not hold for sliced or offset buffers. (A short demo follows the code block below.)
{code}
  protected static byte[][] toArrays(ByteBuffer[] buffers) {
    byte[][] bytesArr = new byte[buffers.length][];

    ByteBuffer buffer;
    for (int i = 0; i < buffers.length; i++) {
      buffer = buffers[i];
      if (buffer == null) {
        bytesArr[i] = null;
        continue;
      }

      if (buffer.hasArray()) {
        bytesArr[i] = buffer.array();
      } else {
        throw new IllegalArgumentException("Invalid ByteBuffer passed, " +
            "expecting heap buffer");
      }
    }

    return bytesArr;
  }
{code}
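To make the second issue concrete, here is a minimal, self-contained demo (hypothetical class name, plain JDK only, not Hadoop code) showing that a heap {{ByteBuffer}} produced by {{slice()}} still reports {{hasArray()}} as true, yet its data neither starts at index zero nor spans the whole backing array, so returning {{buffer.array()}} alone drops the {{arrayOffset()}} and {{remaining()}} information:
{code}
import java.nio.ByteBuffer;

public class ArrayOffsetDemo {
  public static void main(String[] args) {
    byte[] backing = new byte[16];
    for (int i = 0; i < backing.length; i++) {
      backing[i] = (byte) i;
    }

    // A heap buffer viewing only bytes 4..11 of the backing array.
    ByteBuffer buffer = ByteBuffer.wrap(backing, 4, 8).slice();

    // This view passes the hasArray() check in toArrays(), but array()
    // returns the entire 16-byte backing array, and the view's data
    // starts at arrayOffset(), not at index 0.
    System.out.println(buffer.hasArray());      // true
    System.out.println(buffer.array().length);  // 16
    System.out.println(buffer.arrayOffset());   // 4
    System.out.println(buffer.remaining());     // 8
  }
}
{code}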
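And for the first issue, a toy sketch of the size mismatch (hypothetical names, for illustration only; not the actual {{ErasureCodingWorker}} or raw coder API): a decoder that works on fixed-size chunks must be fed buffers of exactly the schema's chunkSize, and handing it buffers sized by some unrelated I/O setting makes the decode step fail, which then surfaces as the transfer failure and premature EOF above:
{code}
import java.nio.ByteBuffer;

public class ChunkSizeMismatchDemo {
  // Toy stand-in for a raw erasure decoder working on fixed-size chunks.
  static void decodeChunk(ByteBuffer[] inputs, int chunkSize) {
    for (ByteBuffer in : inputs) {
      if (in != null && in.remaining() != chunkSize) {
        throw new IllegalArgumentException("Buffer size " + in.remaining()
            + " does not match schema chunkSize " + chunkSize);
      }
    }
    // ... a real decoder would run Reed-Solomon over exactly chunkSize bytes
  }

  public static void main(String[] args) {
    int chunkSize = 64 * 1024;      // chunk size defined by the EC schema
    int ioBufferSize = 128 * 1024;  // unrelated, larger I/O buffer size

    ByteBuffer[] inputs = { ByteBuffer.allocate(ioBufferSize) };
    try {
      decodeChunk(inputs, chunkSize);
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
  }
}
{code}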

Will attach a patch soon to fix the two issues.
