hadoop-hdfs-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-9178) Slow datanode I/O can cause a wrong node to be marked bad
Date Wed, 30 Sep 2015 21:59:05 GMT

    [ https://issues.apache.org/jira/browse/HDFS-9178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14938946#comment-14938946 ]

Hadoop QA commented on HDFS-9178:
---------------------------------

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 37s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author |   0m  1s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to include 1 new or modified test file. |
| {color:green}+1{color} | javac |   7m 59s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc |  10m 10s | There were no new javadoc warning messages. |
| {color:red}-1{color} | release audit |   0m 15s | The applied patch generated 1 release audit warning. |
| {color:red}-1{color} | checkstyle |   1m 24s | The applied patch generated 1 new checkstyle issue (total was 61, now 61). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 31s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 11s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  64m 32s | Tests failed in hadoop-hdfs. |
| | | 109m 51s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestWriteReadStripedFile |
| Timed out tests | org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter |
|   | org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12764472/HDFS-9178.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 6c17d31 |
| Release Audit | https://builds.apache.org/job/PreCommit-HDFS-Build/12753/artifact/patchprocess/patchReleaseAuditProblems.txt |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/12753/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/12753/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/12753/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12753/console |


This message was automatically generated.

> Slow datanode I/O can cause a wrong node to be marked bad
> ---------------------------------------------------------
>
>                 Key: HDFS-9178
>                 URL: https://issues.apache.org/jira/browse/HDFS-9178
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
>            Priority: Critical
>         Attachments: HDFS-9178.patch
>
>
> When a non-leaf datanode in a pipeline is slow or stuck on disk I/O, the downstream node can time out on reading a packet, since even the heartbeat packets will not be relayed downstream.
> The packet read timeout is set in {{DataXceiver#run()}}:
> {code}
>   peer.setReadTimeout(dnConf.socketTimeout);
> {code}
> When the downstream node times out and closes the connection to the upstream node, the upstream node's {{PacketResponder}} gets an {{EOFException}} and sends an ack upstream with the downstream node's status set to {{ERROR}}. This causes the client to exclude the downstream node, even though the upstream node was the one that got stuck.
> The connection to the downstream node has a longer timeout, so the downstream node will always time out first. The downstream timeout is set in {{writeBlock()}}:
> {code}
>           int timeoutValue = dnConf.socketTimeout +
>               (HdfsConstants.READ_TIMEOUT_EXTENSION * targets.length);
>           int writeTimeout = dnConf.socketWriteTimeout +
>               (HdfsConstants.WRITE_TIMEOUT_EXTENSION * targets.length);
>           NetUtils.connect(mirrorSock, mirrorTarget, timeoutValue);
>           OutputStream unbufMirrorOut = NetUtils.getOutputStream(mirrorSock,
>               writeTimeout);
> {code}
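The timeout asymmetry the description relies on can be illustrated with a small standalone sketch. The constants below are assumptions for illustration (a 60-second base socket read timeout and a 5-second read-timeout extension per downstream target, i.e. common defaults), not code taken from the patch:

```java
// Sketch of the timeout asymmetry described above. The constant values are
// assumed defaults, not read from any HDFS configuration.
public class PipelineTimeoutSketch {
    static final int SOCKET_TIMEOUT_MS = 60_000;        // assumed dnConf.socketTimeout default
    static final int READ_TIMEOUT_EXTENSION_MS = 5_000; // assumed per-target read extension

    // Read timeout an upstream node applies on its connection to the mirror,
    // extended by the number of remaining downstream targets.
    static int upstreamReadTimeout(int remainingTargets) {
        return SOCKET_TIMEOUT_MS + READ_TIMEOUT_EXTENSION_MS * remainingTargets;
    }

    public static void main(String[] args) {
        // 3-node pipeline: DN0 -> DN1 -> DN2. If DN1 stalls on disk I/O,
        // DN2's plain 60s read timeout fires before DN0's extended 70s
        // timeout; DN2 closes the connection first, DN1's PacketResponder
        // sees EOFException and reports DN2 as ERROR -- so the healthy
        // downstream node, not the stuck node, gets marked bad.
        System.out.println("upstream timeout (2 targets left): "
                + upstreamReadTimeout(2) + " ms");
        System.out.println("plain downstream read timeout:     "
                + SOCKET_TIMEOUT_MS + " ms");
    }
}
```

Because the extended upstream timeout is always strictly larger than the plain downstream one, the last healthy node in the pipeline loses this race whenever a middle node stalls.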



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
