hadoop-hdfs-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-10484) Can not read file from java.io.IOException: Need XXX bytes, but only YYY bytes available
Date Mon, 06 Jun 2016 19:26:20 GMT

    [ https://issues.apache.org/jira/browse/HDFS-10484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15317049#comment-15317049 ]

Steve Loughran commented on HDFS-10484:
---------------------------------------

That is a really old version of Hadoop. It's not something the Hadoop open source team is going to look at directly, sorry.

You're either going to have to take it up with Cloudera, or upgrade to a recent Apache release and see if the problem goes away.

Closing as invalid, again, with apologies.

See: https://wiki.apache.org/hadoop/InvalidJiraIssues

> Can not read file from java.io.IOException: Need XXX bytes, but only YYY  bytes available
> -----------------------------------------------------------------------------------------
>
>                 Key: HDFS-10484
>                 URL: https://issues.apache.org/jira/browse/HDFS-10484
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 2.0.0-alpha
>         Environment: Cloudera 4.1.2,  hadoop-hdfs-2.0.0+552-1.cdh4.1.2.p0.27
>            Reporter: pt
>
> We are running the CDH 4.1.2 distro and trying to read a file from HDFS. The read fails with an exception on the datanode saying:
> 2016-06-02 10:43:26,354 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(X.X.X.X, storageID=DS-404876644-X.X.X.X-50010-1462535537579, infoPort=50075, ipcPort=50020, storageInfo=lv=-40;cid=cluster18;nsid=2115086255;c=0):Got exception while serving BP-2091182050-X.X.X.X-1358362115729:blk_5037101550399368941_420502314 to /X.X.X.X:58614
> java.io.IOException: Need 10172416 bytes, but only 10072576 bytes available
> at org.apache.hadoop.hdfs.server.datanode.BlockSender.waitForMinLength(BlockSender.java:387)
> at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:189)
> at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:268)
> at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:88)
> at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:63)
> at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:219)
> at java.lang.Thread.run(Thread.java:662)
> 2016-06-02 10:43:26,354 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: app112.rutarget.ru:50010:DataXceiver error processing READ_BLOCK operation src: /X.X.X.X:58614 dest: /X.X.X.X:50010
> java.io.IOException: Need 10172416 bytes, but only 10072576 bytes available
> at org.apache.hadoop.hdfs.server.datanode.BlockSender.waitForMinLength(BlockSender.java:387)
> at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:189)
> at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:268)
> at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:88)
> at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:63)
> at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:219)
> at java.lang.Thread.run(Thread.java:662)
> FSCK shows the file as being open for write; however, the HDFS client that handled writes to this file closed it a long time ago, so the file has been stuck in RBW for the last few days. How can we get the actual data block in this case? I found only the binary .meta file on the datanode, but not the actual block with data.
> -- 
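
For context on what the DataNode is checking here: before serving a replica that is still being written (RBW), BlockSender waits a bounded time for the bytes flushed to disk to reach the requested visible length, and throws this IOException if they never do. Below is a minimal sketch of that check, paraphrased from the branch-2-era code; the timeout, type names, and signature are approximations rather than a verbatim copy.

    import java.io.IOException;

    // Minimal stand-in for the DataNode's replica handle; the real type is
    // org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline.
    interface ReplicaOnDisk {
      long getBytesOnDisk();  // bytes the writer has flushed to disk so far
    }

    class BlockSenderSketch {
      // Approximation of BlockSender.waitForMinLength: poll for a bounded
      // time for the on-disk length to reach the requested visible length.
      static void waitForMinLength(ReplicaOnDisk rbw, long len)
          throws IOException {
        for (int i = 0; i < 30 && rbw.getBytesOnDisk() < len; i++) {
          try {
            Thread.sleep(1000);  // the real code waits in a similar loop
          } catch (InterruptedException ie) {
            throw new IOException("Interrupted waiting for replica data");
          }
        }
        long bytesOnDisk = rbw.getBytesOnDisk();
        if (bytesOnDisk < len) {
          // This is the failure seen in the log excerpts above.
          throw new IOException("Need " + len + " bytes, but only "
              + bytesOnDisk + " bytes available");
        }
      }
    }

If the writer died without closing the file, the replica can stay in RBW indefinitely, and any read past the flushed length keeps hitting this check, which matches the stuck-in-RBW symptom described in the report.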



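For anyone hitting the same symptom on a supported release: a file that fsck reports as open for write after the writer has gone usually needs lease recovery before its last block is finalized and becomes readable. The sketch below shows one illustrative way to trigger that from a Java client. DistributedFileSystem.recoverLease(Path) is a real API, but the class name, retry policy, and path handling here are placeholders, not a procedure verified against CDH 4.1.2.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    // Illustrative sketch: ask the NameNode to recover the lease on a file
    // stuck open for write. recoverLease() returns true once the file has
    // been closed; recovery is asynchronous, so poll with bounded retries.
    public class RecoverStuckLease {
      public static void main(String[] args) throws Exception {
        Path path = new Path(args[0]);  // path of the stuck file
        Configuration conf = new Configuration();
        DistributedFileSystem dfs =
            (DistributedFileSystem) FileSystem.get(path.toUri(), conf);
        for (int attempt = 0; attempt < 10; attempt++) {
          if (dfs.recoverLease(path)) {
            System.out.println("Lease recovered; file is now closed.");
            return;
          }
          Thread.sleep(5000);  // give the NameNode time to finish recovery
        }
        System.out.println("Lease recovery still pending; re-check with fsck.");
      }
    }

Recent Apache releases expose the same operation on the command line as "hdfs debug recoverLease -path <path>", which is one more reason to move to a current release as suggested above.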
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
