hadoop-common-dev mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2018) Broken pipe SocketException in DataNode$DataXceiver
Date Thu, 11 Oct 2007 18:09:50 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12534110 ]

Hadoop QA commented on HADOOP-2018:

-1 overall.  Here are the results of testing the latest attachment 
against trunk revision r583839.

    @author +1.  The patch does not contain any @author tags.

    javadoc +1.  The javadoc tool did not generate any warning messages.

    javac +1.  The applied patch does not generate any new compiler warnings.

    findbugs +1.  The patch does not introduce any new Findbugs warnings.

    core tests +1.  The patch passed core unit tests.

    contrib tests -1.  The patch failed contrib unit tests.

Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/926/testReport/
Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/926/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/926/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/926/console

This message is automatically generated.

> Broken pipe SocketException in DataNode$DataXceiver
> ---------------------------------------------------
>                 Key: HADOOP-2018
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2018
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.15.0
>            Reporter: Konstantin Shvachko
>            Assignee: Hairong Kuang
>            Priority: Blocker
>             Fix For: 0.15.0
>         Attachments: pipe.patch, pipe1.patch
> I have 2 data-nodes, one of which is trying to replicate blocks to the other.
> The second data-node throws the following exception for every replicated block.
> {code}
> 07/10/09 20:36:39 INFO dfs.DataNode: Received block blk_-8942388986043611634 from /a.d.d.r:43159
> 07/10/09 20:36:39 WARN dfs.DataNode: Error writing reply back to /a.d.d.r:43159for writing block blk_-8942388986043611634
> 07/10/09 20:36:39 WARN dfs.DataNode: java.net.SocketException: Broken pipe
>         at java.net.SocketOutputStream.socketWrite0(Native Method)
>         at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
>         at java.net.SocketOutputStream.write(SocketOutputStream.java:115)
>         at java.io.DataOutputStream.writeShort(DataOutputStream.java:151)
>         at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:939)
>         at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:763)
>         at java.lang.Thread.run(Thread.java:619)
> {code}
> # It looks like the first data-node does not expect to receive anything from the second one and closes the connection.
> # There should be a space in front of 
> {code}
>               + "for writing block " + block );
> {code}
> # The port number is misleading in these messages. DataXceivers open sockets on different ports every time, which is different from the data-node's main port. So we should rather print the main port here in order to be able to recognize which data-node the block was sent from.
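
[Editor's note] Item 2 above, and the port suggestion in item 3, amount to a one-line change to the log statement. The sketch below is a hypothetical reconstruction (variable names and the port value are assumed, not the actual DataNode code); it only illustrates the missing leading space and the use of the data-node's main port:

```java
// Hypothetical sketch of the corrected warning message; names and values
// are illustrative, not taken from DataNode.java.
public class Main {
    public static void main(String[] args) {
        String remoteAddr = "/a.d.d.r";                 // peer address as logged
        int mainPort = 50010;                           // assumed main data-node port
        String block = "blk_-8942388986043611634";
        // Broken form: no space before "for", yielding "...:50010for writing block..."
        String broken = "Error writing reply back to " + remoteAddr + ":" + mainPort
            + "for writing block " + block;
        // Fixed form: leading space added, as the review comment suggests
        String fixed = "Error writing reply back to " + remoteAddr + ":" + mainPort
            + " for writing block " + block;
        System.out.println(fixed);
    }
}
```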
> Is this related to HADOOP-1908? 
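
[Editor's note] The failure mode in item 1 (the peer closing its end before the reply is written) can be reproduced with plain sockets. This is a minimal, self-contained sketch, not DataNode code; depending on OS and timing the exception message is typically "Broken pipe" or "Connection reset":

```java
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class Main {
    public static void main(String[] args) throws Exception {
        // Server accepts a connection and immediately closes it,
        // mimicking a data-node that does not expect a reply.
        ServerSocket server = new ServerSocket(0);
        Socket client = new Socket("localhost", server.getLocalPort());
        Socket accepted = server.accept();
        accepted.close();
        server.close();

        DataOutputStream out = new DataOutputStream(client.getOutputStream());
        boolean failed = false;
        try {
            // Repeated small writes to the closed peer; once the RST
            // arrives, a write throws an IOException (e.g. broken pipe).
            for (int i = 0; i < 100; i++) {
                out.writeShort(0);
                out.flush();
                Thread.sleep(10);
            }
        } catch (IOException e) {
            failed = true;
        }
        client.close();
        System.out.println(failed ? "write failed as expected" : "no failure");
    }
}
```

Catching and logging this IOException at the reply-write site, rather than letting it surface as an unexplained warning, is the behavior the attached patch is meant to address.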

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
