hadoop-hdfs-dev mailing list archives

From pt <pink_tom...@mail.ru.INVALID>
Subject Cannot read file: java.io.IOException: Need bytes, but only bytes available
Date Fri, 03 Jun 2016 14:25:05 GMT
We are running the CDH 4.1.3 distro and are trying to read a file. The read
fails with an exception on the datanode:


2016-06-02 10:43:26,354 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(X.X.X.X,
storageID=DS-404876644-X.X.X.X-50010-1462535537579, infoPort=50075, ipcPort=50020, storageInfo=lv=-40;cid=cluster18;nsid=2115086255;c=0):Got
exception while serving BP-2091182050-X.X.X.X-1358362115729:blk_5037101550399368941_420502314
to /X.X.X.X:58614
java.io.IOException: Need 10172416 bytes, but only 10072576 bytes available
at org.apache.hadoop.hdfs.server.datanode.BlockSender.waitForMinLength(BlockSender.java:387)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:189)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:268)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:88)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:63)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:219)
at java.lang.Thread.run(Thread.java:662)
2016-06-02 10:43:26,354 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: app112.rutarget.ru:50010:DataXceiver
error processing READ_BLOCK operation src: /X.X.X.X:58614 dest: /X.X.X.X:50010
java.io.IOException: Need 10172416 bytes, but only 10072576 bytes available
at org.apache.hadoop.hdfs.server.datanode.BlockSender.waitForMinLength(BlockSender.java:387)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:189)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:268)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:88)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:63)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:219)
at java.lang.Thread.run(Thread.java:662)
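
For context, "Need X bytes, but only Y bytes available" is thrown by the
datanode when a reader asks for a range of a replica that is still being
written (RBW) and the on-disk replica never reaches the requested length.
Roughly, based on the Hadoop 2.x BlockSender source (the CDH 4.1.3 code may
differ in details), the check looks like this:

    // Datanode-internal code; ReplicaBeingWritten is the handle for an
    // RBW replica.
    private static void waitForMinLength(ReplicaBeingWritten rbw, long len)
        throws IOException {
      // Poll for up to ~3 seconds for the on-disk replica to reach len.
      for (int i = 0; i < 30 && rbw.getBytesOnDisk() < len; i++) {
        try {
          Thread.sleep(100);
        } catch (InterruptedException ie) {
          throw new IOException(ie);
        }
      }
      long bytesOnDisk = rbw.getBytesOnDisk();
      if (bytesOnDisk < len) {
        // The message seen in the log above.
        throw new IOException(String.format(
            "Need %d bytes, but only %d bytes available", len, bytesOnDisk));
      }
    }

So the datanode still considers the replica to be under construction and is
waiting for bytes that will never arrive, which matches the stuck-open file
described below.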



fsck shows the file as open for write, but the HDFS client that handled
writes to this file closed it long ago -- so the file has been stuck in RBW
(replica being written) for the last few days. How can we get the actual
data block in this case? On the datanode I found only the binary .meta file,
not the block file with the actual data.
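
The usual way to force a file out of this stuck-open state is to trigger
lease recovery from a client: the NameNode then closes the file and
finalizes the last block at whatever length the datanodes can actually
recover. A minimal sketch using the DistributedFileSystem.recoverLease API
available in Hadoop 2.x / CDH4 (untested on 4.1.3 specifically; the path
argument is a placeholder):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class ForceLeaseRecovery {
      public static void main(String[] args) throws Exception {
        // Hypothetical usage: pass the path fsck reports as open for write.
        Path stuckFile = new Path(args[0]);
        FileSystem fs = FileSystem.get(stuckFile.toUri(), new Configuration());
        DistributedFileSystem dfs = (DistributedFileSystem) fs;
        // Asks the NameNode to begin lease recovery. Returns true once the
        // file is closed and its last block finalized; recovery runs
        // asynchronously, so the call may need to be retried until true.
        boolean closed = dfs.recoverLease(stuckFile);
        System.out.println("recoverLease returned " + closed);
      }
    }

Whether this salvages any data here depends on whether some datanode still
holds the block file and not just its .meta; recoverLease can only finalize
what the replicas actually contain.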



-- 
p t