hadoop-hdfs-dev mailing list archives

From: Apache Jenkins Server <jenk...@builds.apache.org>
Subject: Hadoop-Hdfs-trunk-Java8 - Build # 866 - Still Failing
Date: Thu, 04 Feb 2016 01:22:38 GMT
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/866/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6009 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:48 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  03:27 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.087 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:32 h
[INFO] Finished at: 2016-02-04T01:22:27+00:00
[INFO] Final Memory: 56M/608M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
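
Maven's hints above can be combined into one local reproduction step. A sketch, assuming a checked-out trunk workspace; the -rf, -e, and -X switches are exactly the ones named in the error output, nothing else is added:

    # Resume the reactor at the failing module and re-run its tests,
    # printing full stack traces (-e); append -X for debug-level logging.
    mvn test -rf :hadoop-hdfs -e

The per-test results then land under hadoop-hdfs/target/surefire-reports, as the error text notes.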



###################################################################################
############################## FAILED TESTS (if any) ##############################
11 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClientPeerWriteTimeout

Error Message:
write timedout too late in 1284 ms.

Stack Trace:
java.io.IOException: write timedout too late in 1284 ms.
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
	at java.io.OutputStream.write(OutputStream.java:75)
	at org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClientPeerWriteTimeout(TestDistributedFileSystem.java:1040)


FAILED:  org.apache.hadoop.hdfs.TestFileAppend.testMultipleAppends

Error Message:
Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:47948,DS-1dcdd268-d38f-43d2-a193-ee8b98b54fef,DISK], DatanodeInfoWithStorage[127.0.0.1:49677,DS-d8174ecb-140b-4ec1-bfb9-d2ce833db9cb,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:49677,DS-d8174ecb-140b-4ec1-bfb9-d2ce833db9cb,DISK], DatanodeInfoWithStorage[127.0.0.1:47948,DS-1dcdd268-d38f-43d2-a193-ee8b98b54fef,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

Stack Trace:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:47948,DS-1dcdd268-d38f-43d2-a193-ee8b98b54fef,DISK], DatanodeInfoWithStorage[127.0.0.1:49677,DS-d8174ecb-140b-4ec1-bfb9-d2ce833db9cb,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:49677,DS-d8174ecb-140b-4ec1-bfb9-d2ce833db9cb,DISK], DatanodeInfoWithStorage[127.0.0.1:47948,DS-1dcdd268-d38f-43d2-a193-ee8b98b54fef,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
	at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1169)
	at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1235)
	at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1426)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1341)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1324)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:598)
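
The message names the client-side key 'dfs.client.block.write.replace-datanode-on-failure.policy'. A minimal sketch of how a client might adjust it follows; the NEVER value and the companion ...enable key are assumptions drawn from hdfs-default.xml, not from this log, and should be double-checked before use:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class ReplacePolicyExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Override the client-side replacement policy named in the
            // error above; NEVER skips datanode replacement entirely.
            conf.set("dfs.client.block.write.replace-datanode-on-failure.policy",
                     "NEVER");
            // Companion switch (assumed from hdfs-default.xml): setting it
            // to false disables the replacement feature altogether.
            conf.setBoolean(
                "dfs.client.block.write.replace-datanode-on-failure.enable",
                true);
            FileSystem fs = FileSystem.get(conf);
            System.out.println("Replacement policy: " + conf.get(
                "dfs.client.block.write.replace-datanode-on-failure.policy"));
            fs.close();
        }
    }

NEVER avoids the replacement attempt that failed here, which can make sense on tiny test pipelines with too few datanodes to replace into, but it weakens write durability on real deployments.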


FAILED:  org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics.testDataNodeTimeSpend

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
	at org.junit.Assert.fail(Assert.java:86)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.junit.Assert.assertTrue(Assert.java:52)
	at org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics.testDataNodeTimeSpend(TestDataNodeMetrics.java:289)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints.testCheckpointCancellationDuringUpload

Error Message:
org/apache/hadoop/fs/FileSystemLinkResolver

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FileSystemLinkResolver
	at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.ClassLoader.defineClass1(Native Method)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:760)
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:455)
	at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:367)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1051)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1044)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1907)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints.doEdits(TestStandbyCheckpoints.java:464)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints.testCheckpointCancellationDuringUpload(TestStandbyCheckpoints.java:314)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints.testSBNCheckpoints

Error Message:
org/apache/hadoop/hdfs/DistributedFileSystem$23

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/DistributedFileSystem$23
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1051)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1044)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1907)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints.doEdits(TestStandbyCheckpoints.java:464)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints.testSBNCheckpoints(TestStandbyCheckpoints.java:150)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints.testStandbyExceptionThrownDuringCheckpoint

Error Message:
org/apache/hadoop/test/GenericTestUtils$DelayAnswer

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/test/GenericTestUtils$DelayAnswer
	at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints.testStandbyExceptionThrownDuringCheckpoint(TestStandbyCheckpoints.java:360)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints.testCheckpointCancellation

Error Message:
Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 200000

Stack Trace:
org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals to persistent storage due to No journals available to flush. Unsynced transactions: 200000
	at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:637)
	at org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.createAbortedLogWithMkdirs(FSImageTestUtil.java:228)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints.testCheckpointCancellation(TestStandbyCheckpoints.java:261)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints.testCheckpointCancellation

Error Message:
Test resulted in an unexpected exit

Stack Trace:
java.lang.AssertionError: Test resulted in an unexpected exit
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1895)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1882)
	at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1875)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints.shutdownCluster(TestStandbyCheckpoints.java:141)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints.testBothNodesInStandbyState

Error Message:
org/apache/hadoop/hdfs/DistributedFileSystem$23

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/DistributedFileSystem$23
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1051)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1044)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1907)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints.doEdits(TestStandbyCheckpoints.java:464)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints.testBothNodesInStandbyState(TestStandbyCheckpoints.java:189)


FAILED:  org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints.testReadsAllowedDuringCheckpoint

Error Message:
org/apache/hadoop/test/GenericTestUtils$DelayAnswer

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/test/GenericTestUtils$DelayAnswer
	at org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints.testReadsAllowedDuringCheckpoint(TestStandbyCheckpoints.java:405)


FAILED:  org.apache.hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs.testUpgradeFrom22FixesStorageIDs

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
	at org.junit.Assert.fail(Assert.java:86)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at org.junit.Assert.assertTrue(Assert.java:52)
	at org.apache.hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs$1.verifyClusterPostUpgrade(TestDatanodeStartupFixesLegacyStorageIDs.java:79)
	at org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.upgradeAndVerify(TestDFSUpgradeFromImage.java:609)
	at org.apache.hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs.upgradeAndVerify(TestDatanodeStartupFixesLegacyStorageIDs.java:103)
	at org.apache.hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs.runLayoutUpgradeTest(TestDatanodeStartupFixesLegacyStorageIDs.java:70)
	at org.apache.hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs.testUpgradeFrom22FixesStorageIDs(TestDatanodeStartupFixesLegacyStorageIDs.java:115)


