hadoop-hdfs-issues mailing list archives

From "Guram Savinov (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-2054) BlockSender.sendChunk() prints ERROR for connection closures encountered during transferToFully()
Date Sun, 24 Apr 2016 13:30:13 GMT

    [ https://issues.apache.org/jira/browse/HDFS-2054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15255588#comment-15255588 ]

Guram Savinov commented on HDFS-2054:
-------------------------------------

I'm hitting this IOException in unit tests that use a MiniDFSCluster: a Spark job writes a
file of about 100 MB to HDFS.
Could you help me understand the problem? As I read it, the MiniDFS DataNode closes the
socket as soon as it has received the last bytes of the file, but the block sender still
tries to transfer the full last block, which is larger than the final data chunk.
Am I right?
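
For context, here is a minimal sketch of the kind of test setup described above, writing
roughly 100 MB straight through the FileSystem API rather than via Spark; the path, sizes,
and single-DataNode layout are illustrative assumptions, not the actual test.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class MiniDfsWriteSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Single-DataNode in-process cluster, as commonly used in unit tests.
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      FileSystem fs = cluster.getFileSystem();
      byte[] chunk = new byte[1024 * 1024];                   // 1 MB write buffer
      try (FSDataOutputStream out = fs.create(new Path("/test/large-file"))) {
        for (int i = 0; i < 100; i++) {                       // ~100 MB in total
          out.write(chunk);
        }
      }
    } finally {
      cluster.shutdown();
    }
  }
}
{code}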

> BlockSender.sendChunk() prints ERROR for connection closures encountered  during transferToFully()
> --------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-2054
>                 URL: https://issues.apache.org/jira/browse/HDFS-2054
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>    Affects Versions: 0.22.0, 0.23.0
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
>            Priority: Minor
>             Fix For: 0.22.0, 0.23.0
>
>         Attachments: HDFS-2054-1.patch, HDFS-2054-2.patch, HDFS-2054.patch, HDFS-2054.patch, HDFS-2054.patch
>
>
> The addition of ERROR was part of HDFS-1527. In environments where clients tear down the
> FSInputStream/connection before reaching the end of the stream, this error message often
> pops up. Since these closures are not really errors, and especially not the fault of the
> DataNode, the message should at least be toned down.
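
A hedged illustration of the client behaviour the description refers to: a reader that opens
a file, consumes only its first few bytes, and closes the stream well before end of stream.
The file path, buffer size, and configuration are assumptions for the example; the point is
only that this kind of early teardown is what the DataNode sees as a connection closure
inside transferToFully().

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class EarlyCloseReader {
  public static void main(String[] args) throws Exception {
    // Assumes fs.defaultFS in the loaded configuration points at an HDFS cluster.
    FileSystem fs = FileSystem.get(new Configuration());
    byte[] head = new byte[4096];
    try (FSDataInputStream in = fs.open(new Path("/test/large-file"))) {
      in.readFully(0, head);  // read only the first 4 KB of a much larger file
    }                         // stream closed here; the rest of the block is never requested
  }
}
{code}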



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
