hadoop-common-issues mailing list archives

From "Andy Isaacson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-8519) idle client socket triggers DN ERROR log (should be INFO or DEBUG)
Date Fri, 22 Jun 2012 01:54:42 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-8519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13399070#comment-13399070 ]

Andy Isaacson commented on HADOOP-8519:
---------------------------------------

Super not obvious: the ERROR is coming from the following horrifyingness in
hdfs/server/datanode/BlockSender.java:
{code}
    } catch (IOException e) {
      /* Exception while writing to the client. Connection closure from
       * the other end is mostly the case and we do not care much about
       * it. But other things can go wrong, especially in transferTo(),
       * which we do not want to ignore.
       *
       * The message parsing below should not be considered as a good
       * coding example. NEVER do it to drive a program logic. NEVER.
       * It was done here because the NIO throws an IOException for EPIPE.
       */
      String ioem = e.getMessage();
      if (!ioem.startsWith("Broken pipe") && !ioem.startsWith("Connection reset")) {
        LOG.error("BlockSender.sendChunks() exception: ", e);
      }
      throw ioeToSocketException(e);
    }
{code}
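
The SocketTimeoutException from an idle client doesn't start with "Broken pipe" or
"Connection reset", so it falls through to the LOG.error branch. A minimal sketch of one
possible direction (a hypothetical helper, not the actual patch for this issue): classify
by exception type instead of by message prefix, so the idle-client timeout is demoted
below ERROR while genuine failures stay at ERROR.
{code}
import java.io.IOException;
import java.net.SocketTimeoutException;

/**
 * Sketch only: the class, enum, and method names are made up for illustration;
 * a real change would live in the existing catch block in BlockSender.sendChunks().
 */
public class SendChunksLogLevel {

  enum Level { DEBUG, INFO, ERROR }

  static Level levelFor(IOException e) {
    if (e instanceof SocketTimeoutException) {
      // Idle client: the remote side simply stopped reading. Not alarming.
      return Level.INFO;
    }
    String msg = e.getMessage();
    if (msg != null
        && (msg.startsWith("Broken pipe") || msg.startsWith("Connection reset"))) {
      // EPIPE / connection reset from the client; the existing code already ignores these.
      return Level.DEBUG;
    }
    // Anything else (e.g. a local I/O failure in transferTo()) stays at ERROR.
    return Level.ERROR;
  }

  public static void main(String[] args) {
    System.out.println(levelFor(new SocketTimeoutException(
        "480000 millis timeout while waiting for channel to be ready for write"))); // INFO
    System.out.println(levelFor(new IOException("Broken pipe")));                   // DEBUG
    System.out.println(levelFor(new IOException("No space left on device")));       // ERROR
  }
}
{code}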
                
> idle client socket triggers DN ERROR log (should be INFO or DEBUG)
> ------------------------------------------------------------------
>
>                 Key: HADOOP-8519
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8519
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 0.20.2
>         Environment: Red Hat Enterprise Linux Server release 6.2 (Santiago)
>            Reporter: Jeff Lord
>            Assignee: Andy Isaacson
>
> Datanode service is logging java.net.SocketTimeoutException at ERROR level.
> This message indicates that the datanode is not able to send data to the client because the client has stopped reading. This message is not really a cause for alarm and should be INFO level.
> 2012-06-18 17:47:13 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode DatanodeRegistration(x.x.x.x:50010, storageID=DS-196671195-10.10.120.67-50010-1334328338972, infoPort=50075, ipcPort=50020):DataXceiver
> java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/10.10.120.67:50010 remote=/10.10.120.67:59282]
> at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
> at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
> at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
> at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:397)
> at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:493)
> at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:267)
> at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:163)


        
