hadoop-hdfs-issues mailing list archives

From "Hudson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-10333) Intermittent org.apache.hadoop.hdfs.TestFileAppend failure in trunk
Date Mon, 16 May 2016 05:42:12 GMT

    [ https://issues.apache.org/jira/browse/HDFS-10333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284172#comment-15284172 ]

Hudson commented on HDFS-10333:
-------------------------------

FAILURE: Integrated in Hadoop-trunk-Commit #9764 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9764/])
HDFS-10333. Intermittent org.apache.hadoop.hdfs.TestFileAppend failure (wang: rev 45788204ae2ac82ccb3b4fe2fd22aead1dd79f0d)
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java
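
The diff body isn't inlined in this archive. As a rough sketch of the kind of test-side hardening such a fix usually involves (hypothetical, not the actual patch), one option is to start more datanodes than the replication factor so that DataStreamer can always find a replacement:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

// Hypothetical sketch only; see the commit above for the real HDFS-10333 change.
public class AppendTestSetupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    // With 4 datanodes and a default replication factor of 3, a bad datanode
    // in the write pipeline can always be swapped for the spare one.
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(4)
        .build();
    try {
      cluster.waitActive();
      // ... perform appends as TestFileAppend#testMultipleAppends does ...
    } finally {
      cluster.shutdown();
    }
  }
}
{code}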


> Intermittent org.apache.hadoop.hdfs.TestFileAppend failure in trunk
> -------------------------------------------------------------------
>
>                 Key: HDFS-10333
>                 URL: https://issues.apache.org/jira/browse/HDFS-10333
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>            Reporter: Yongjun Zhang
>            Assignee: Yiqun Lin
>             Fix For: 2.8.0
>
>         Attachments: HDFS-10333.001.patch
>
>
> Java 8 (I used JAVA_HOME=/opt/toolchain/jdk1.8.0_25):
> {code}
> -------------------------------------------------------
>  T E S T S
> -------------------------------------------------------
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
> Running org.apache.hadoop.hdfs.TestFileAppend
> Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 27.75 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestFileAppend
> testMultipleAppends(org.apache.hadoop.hdfs.TestFileAppend)  Time elapsed: 3.674 sec  <<< ERROR!
> java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:43067,DS-cf80da41-3697-4afa-8f89-93693cd5035d,DISK], DatanodeInfoWithStorage[127.0.0.1:32946,DS-3b08422c-959e-42f0-a624-91b2524c4371,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:43067,DS-cf80da41-3697-4afa-8f89-93693cd5035d,DISK], DatanodeInfoWithStorage[127.0.0.1:32946,DS-3b08422c-959e-42f0-a624-91b2524c4371,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
>         at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1166)
>         at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1232)
>         at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1423)
>         at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
>         at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1321)
>         at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:599)
> {code}
> However, when I run with Java 1.7, the test sometimes passes and sometimes fails with:
> {code}
> Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 41.32 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestFileAppend
> testMultipleAppends(org.apache.hadoop.hdfs.TestFileAppend)  Time elapsed: 9.099 sec  <<< ERROR!
> java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:49006,DS-498240fa-d1c7-4ba1-b97e-a1761cbbefa5,DISK], DatanodeInfoWithStorage[127.0.0.1:43097,DS-b83b49ce-fc14-4b9e-a3fc-7df2cd9fc753,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:49006,DS-498240fa-d1c7-4ba1-b97e-a1761cbbefa5,DISK], DatanodeInfoWithStorage[127.0.0.1:43097,DS-b83b49ce-fc14-4b9e-a3fc-7df2cd9fc753,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
> 	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1162)
> 	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1232)
> 	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1423)
> 	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
> 	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1321)
> 	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:599)
> {code}
> The failure of this test is intermittent, but it fails pretty often.
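
For reference, the exception above names the relevant client-side knob directly. A minimal sketch of setting it (the policy key is quoted from the error message; the NEVER value and the best-effort alternative come from hdfs-default.xml and are illustrative choices, not the fix this patch applied):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Illustrative client-side settings for small clusters where no spare
// datanode exists to replace a failed one in the write pipeline.
public class ReplaceDatanodePolicyExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // NEVER: keep writing on the remaining datanodes instead of failing
    // when no replacement can be found. Fine for 2-3 node test clusters,
    // but it weakens durability guarantees in production.
    conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
    // Alternative: keep the DEFAULT policy but continue best-effort when
    // a replacement cannot be found.
    // conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.best-effort", true);
    FileSystem fs = FileSystem.get(conf);
    System.out.println("Connected to " + fs.getUri());
  }
}
{code}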


