hadoop-hdfs-dev mailing list archives

From "Kihwal Lee (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-9178) Slow datanode I/O can cause a wrong node to be marked bad
Date Wed, 30 Sep 2015 19:56:04 GMT
Kihwal Lee created HDFS-9178:

             Summary: Slow datanode I/O can cause a wrong node to be marked bad
                 Key: HDFS-9178
                 URL: https://issues.apache.org/jira/browse/HDFS-9178
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Kihwal Lee
            Priority: Critical

When a non-leaf datanode in a pipeline is slow on or stuck at disk I/O, the downstream node
can time out reading packets, since even the heartbeat packets will not be relayed down.
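The failure mode above can be sketched with plain sockets (this is not HDFS code, just an illustration under the assumption that the downstream reader uses an SO_TIMEOUT-style read timeout): when the upstream side is stuck and relays nothing, not even a heartbeat, the downstream read is the first thing to fail.

```java
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Minimal sketch (not HDFS code): a downstream reader with a read
// timeout blocks on a connection whose upstream writer is stuck and
// never relays a packet or heartbeat, so the reader times out first.
public class StuckUpstreamDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0);
        // "Upstream": connects but is stuck on disk I/O, so it never writes.
        Socket upstream = new Socket("localhost", server.getLocalPort());
        Socket downstream = server.accept();
        downstream.setSoTimeout(500); // downstream packet read timeout (ms)
        try (InputStream in = downstream.getInputStream()) {
            in.read(); // blocks: no data and no heartbeat ever arrives
            System.out.println("unexpected data");
        } catch (SocketTimeoutException e) {
            // Downstream gives up and closes; upstream will then see EOF.
            System.out.println("downstream read timed out");
        } finally {
            upstream.close();
            downstream.close();
        }
    }
}
```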

The packet read timeout is set in {{DataXceiver#run()}}.


When the downstream node times out and closes the connection to the upstream node, the upstream
node's {{PacketResponder}} gets an {{EOFException}} and sends an ack upstream with the downstream
node's status set to {{ERROR}}. This causes the client to exclude the downstream node, even
though the upstream node was the one that got stuck.
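The misattribution can be reduced to a one-line decision (the names below are illustrative, not the actual {{PacketResponder}} API): an EOF on the ack stream cannot be distinguished from "the downstream gave up on us", so the downstream is blamed either way.

```java
import java.io.EOFException;

// Hypothetical sketch of the misattribution: the upstream responder
// cannot tell "downstream is broken" from "downstream gave up on a
// stuck upstream", so any EOF while reading acks is reported to the
// client as a downstream ERROR.
public class AckBlameDemo {
    enum Status { SUCCESS, ERROR }

    // Illustrative helper, not the real HDFS PacketResponder logic.
    static Status downstreamStatusAfter(Exception ackReadFailure) {
        if (ackReadFailure instanceof EOFException) {
            return Status.ERROR; // blames downstream regardless of root cause
        }
        return Status.SUCCESS;
    }

    public static void main(String[] args) {
        // The downstream closed because *this* node was stuck on disk I/O,
        // yet the ack sent upstream still marks the downstream as ERROR.
        System.out.println(downstreamStatusAfter(new EOFException()));
    }
}
```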

The upstream node's connection to the downstream node has a longer timeout, so the downstream
node will always time out first. The downstream timeout is set in {{writeBlock()}}:
          int timeoutValue = dnConf.socketTimeout +
              (HdfsConstants.READ_TIMEOUT_EXTENSION * targets.length);
          int writeTimeout = dnConf.socketWriteTimeout +
              (HdfsConstants.WRITE_TIMEOUT_EXTENSION * targets.length);
          NetUtils.connect(mirrorSock, mirrorTarget, timeoutValue);
          OutputStream unbufMirrorOut = NetUtils.getOutputStream(mirrorSock,
              writeTimeout);
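The arithmetic behind "the downstream always times out first" can be sketched as follows. The defaults used here are assumptions (a 60-second {{dfs.client.socket-timeout}} and a 5-second {{HdfsConstants.READ_TIMEOUT_EXTENSION}} per remaining target); the point is only that the per-target extension makes the upstream's timeout toward the downstream strictly longer than the downstream's own packet read timeout.

```java
// Back-of-envelope sketch of the timeout asymmetry. The constants are
// assumed defaults, not read from a live cluster configuration.
public class TimeoutMath {
    public static void main(String[] args) {
        int socketTimeout = 60_000;       // assumed dfs.client.socket-timeout, ms
        int readTimeoutExtension = 5_000; // assumed READ_TIMEOUT_EXTENSION, ms
        int targets = 1;                  // downstream nodes left in the pipeline

        // Downstream node's packet read timeout (set in DataXceiver#run()).
        int downstreamRead = socketTimeout;                          // 60s

        // Upstream node's read timeout toward the downstream (writeBlock()).
        int upstreamToDownstream = socketTimeout
            + readTimeoutExtension * targets;                        // 65s

        // The downstream read timeout fires first, so the downstream node
        // closes the connection and gets blamed.
        System.out.println(downstreamRead < upstreamToDownstream);
    }
}
```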
