hadoop-hdfs-dev mailing list archives

From: Apache Jenkins Server <jenk...@builds.apache.org>
Subject: Hadoop-Hdfs-trunk-Java8 - Build # 856 - Still Failing
Date: Tue, 02 Feb 2016 04:01:29 GMT
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/856/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6022 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:06 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  03:22 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.081 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:26 h
[INFO] Finished at: 2016-02-02T04:01:13+00:00
[INFO] Final Memory: 56M/456M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###################################################################################
############################## FAILED TESTS (if any) ##############################
4 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestFileAppend.testMultipleAppends

Error Message:
Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:40389,DS-19339a38-a898-4c38-94a2-25520d0ca637,DISK], DatanodeInfoWithStorage[127.0.0.1:51435,DS-fa2ccc4d-1404-4230-bea8-e3e53ee6324f,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:40389,DS-19339a38-a898-4c38-94a2-25520d0ca637,DISK], DatanodeInfoWithStorage[127.0.0.1:51435,DS-fa2ccc4d-1404-4230-bea8-e3e53ee6324f,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

Stack Trace:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:40389,DS-19339a38-a898-4c38-94a2-25520d0ca637,DISK], DatanodeInfoWithStorage[127.0.0.1:51435,DS-fa2ccc4d-1404-4230-bea8-e3e53ee6324f,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:40389,DS-19339a38-a898-4c38-94a2-25520d0ca637,DISK], DatanodeInfoWithStorage[127.0.0.1:51435,DS-fa2ccc4d-1404-4230-bea8-e3e53ee6324f,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1169)
	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1235)
	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1426)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1341)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1324)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:598)
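
A note on the failure above: the message points at the client-side setting
'dfs.client.block.write.replace-datanode-on-failure.policy'. With only two
datanodes in the test pipeline there is no spare node to swap in, so the
append fails this way. Below is a minimal sketch of relaxing that policy
through the standard Hadoop Configuration API; the hdfs:// URI and file path
are hypothetical, and the class name is illustrative:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class RelaxedAppendClient {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The log shows policy DEFAULT in effect; NEVER skips the attempt to
        // replace a failed datanode in the write pipeline, trading durability
        // for progress on clusters too small to offer a replacement node.
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy",
            "NEVER");
        // Hypothetical cluster address and file, for illustration only.
        try (FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:8020"),
            conf)) {
          fs.append(new Path("/test/append-target")).close();
        }
      }
    }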


FAILED:  org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverThreeDataBlocks1

Error Message:
expected:<85196> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<85196> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.TestRecoverStripedFile.assertFileBlocksRecovery(TestRecoverStripedFile.java:345)
	at org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverThreeDataBlocks1(TestRecoverStripedFile.java:159)


FAILED:  org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testReplicatingAfterRemoveVolume

Error Message:
Timed out waiting for /test to reach 2 replicas

Stack Trace:
java.util.concurrent.TimeoutException: Timed out waiting for /test to reach 2 replicas
	at org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:768)
	at org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testReplicatingAfterRemoveVolume(TestDataNodeHotSwapVolumes.java:515)


FAILED:  org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWritten

Error Message:
Timed out waiting for /test to reach 3 replicas

Stack Trace:
java.util.concurrent.TimeoutException: Timed out waiting for /test to reach 3 replicas
	at org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:768)
	at org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWrittenForDatanode(TestDataNodeHotSwapVolumes.java:674)
	at org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWritten(TestDataNodeHotSwapVolumes.java:599)
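
Both hot-swap failures above time out in DFSTestUtil.waitReplication, which
polls until every block of the file reports the expected number of replicas.
Below is a minimal sketch of that polling pattern using only the public
FileSystem/BlockLocation API; the helper name waitForReplicas and the 500 ms
poll interval are illustrative assumptions, not the real test utility:

    import java.io.IOException;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public final class ReplicationWait {
      // Poll until every block of the file reports at least `replicas` hosts,
      // or throw TimeoutException after `timeoutMs` milliseconds -- the same
      // failure mode seen in the two tests above.
      public static void waitForReplicas(FileSystem fs, Path file,
          int replicas, long timeoutMs)
          throws IOException, InterruptedException, TimeoutException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (true) {
          FileStatus status = fs.getFileStatus(file);
          BlockLocation[] blocks =
              fs.getFileBlockLocations(status, 0, status.getLen());
          boolean satisfied = true;
          for (BlockLocation block : blocks) {
            if (block.getHosts().length < replicas) {
              satisfied = false;
              break;
            }
          }
          if (satisfied) {
            return;
          }
          if (System.currentTimeMillis() > deadline) {
            throw new TimeoutException("Timed out waiting for " + file
                + " to reach " + replicas + " replicas");
          }
          TimeUnit.MILLISECONDS.sleep(500); // back off between queries
        }
      }
    }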


