hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-3678) Avoid spurious "DataXceiver: java.io.IOException: Connection reset by peer" errors in DataNode log
Date Tue, 01 Jul 2008 21:18:45 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-3678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi updated HADOOP-3678:
---------------------------------

    Fix Version/s: 0.18.0

> Avoid spurious "DataXceiver: java.io.IOException: Connection reset by peer" errors in DataNode log
> --------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-3678
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3678
>             Project: Hadoop Core
>          Issue Type: Bug
>    Affects Versions: 0.17.0
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>             Fix For: 0.18.0
>
>
> When a client reads data using read(), it closes the socket once it is done, and often it
> does not read to the end of a block. The DataNode on the other side keeps writing data
> until the client connection is closed or the end of the block is reached. If the client
> closes the connection before the end of the block, the DataNode writes an error message and
> stack trace to its log. It should not: this is not an error, and it only pollutes the log
> and confuses the user.
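
A minimal sketch of the kind of change this implies (hypothetical names, not the actual
HADOOP-3678 patch): when the write to the client socket fails because the client simply
closed its end early, recognize that case and log a short INFO line instead of an ERROR
with a full stack trace.

    import java.io.IOException;
    import java.net.SocketException;

    class SpuriousDisconnectExample {

      /**
       * Returns true if the exception looks like the client closed its socket
       * before the DataNode finished sending the block. Hypothetical helper,
       * shown only to illustrate the idea.
       */
      static boolean isClientDisconnect(IOException e) {
        String msg = e.getMessage();
        return (e instanceof SocketException)
            || (msg != null
                && (msg.contains("Connection reset by peer")
                    || msg.contains("Broken pipe")));
      }

      // Illustrative use in a block-sending loop (sendChunksToClient and LOG are
      // placeholders, not real Hadoop identifiers):
      //
      //   try {
      //     sendChunksToClient();   // writes block data to the client socket
      //   } catch (IOException e) {
      //     if (isClientDisconnect(e)) {
      //       LOG.info("Client closed connection before end of block: " + e);
      //     } else {
      //       LOG.error("Error sending block to client", e);  // real errors keep the trace
      //       throw e;
      //     }
      //   }
    }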

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

