hadoop-hdfs-dev mailing list archives

From Apache Jenkins Server <jenk...@builds.apache.org>
Subject Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #557
Date Fri, 30 Oct 2015 21:39:10 GMT
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/557/changes>

Changes:

[aw] HADOOP-12133 Add schemas to Maven Assembly XMLs

[kihwal] MAPREDUCE-6451. DistCp has incorrect chunkFilePath for multiple jobs

------------------------------------------
[...truncated 4739 lines...]
Running org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 52.897 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount
testNodeCount(org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount)  Time elapsed: 52.741 sec  <<< ERROR!
java.util.concurrent.TimeoutException: Timeout: excess replica count not equal to 2 for block blk_1073741825_1001 after 20000 msec.  Last counts: live = 2, excess = 0, corrupt = 0
	at org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.checkTimeout(TestNodeCount.java:156)
	at org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.checkTimeout(TestNodeCount.java:146)
	at org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.testNodeCount(TestNodeCount.java:130)
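
This and the similar TimeoutExceptions later in the run all come from test helpers that poll cluster state until a replica count is reached or a deadline expires. A minimal sketch of that polling pattern, assuming a generic BooleanSupplier check rather than Hadoop's actual DFSTestUtil API (the helper name, parameters, and message below are illustrative):

    import java.util.concurrent.TimeoutException;
    import java.util.function.BooleanSupplier;

    public class WaitFor {
        // Re-evaluate the condition every intervalMs; once timeoutMs has elapsed
        // without it holding, fail with the same kind of TimeoutException as the
        // "after 20000 msec" error above. Illustrative sketch, not Hadoop code.
        public static void waitFor(BooleanSupplier check, long intervalMs, long timeoutMs)
                throws TimeoutException, InterruptedException {
            long deadline = System.currentTimeMillis() + timeoutMs;
            while (!check.getAsBoolean()) {
                if (System.currentTimeMillis() > deadline) {
                    throw new TimeoutException("condition not met after " + timeoutMs + " msec");
                }
                Thread.sleep(intervalMs);
            }
        }
    }

On a loaded build host the cluster can simply be slower than the deadline, which is why these failures often indicate flakiness rather than a functional regression.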

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
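(This HotSpot warning repeats before every forked test JVM below. -XX:MaxPermSize is a pre-Java-8 flag: JDK 8 removed the permanent generation, so the option is ignored rather than applied. It is harmless noise, not the cause of the failures; if a cap is wanted at all on Java 8, the Metaspace-era equivalent would be -XX:MaxMetaspaceSize.)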
Running org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.505 sec - in org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.blockmanagement.TestBlockInfoStriped
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.48 sec - in org.apache.hadoop.hdfs.server.blockmanagement.TestBlockInfoStriped
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
Tests run: 4, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 16.04 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
testEnd2End(org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped)  Time elapsed: 9.12 sec  <<< ERROR!
java.io.IOException: Failed to get 6 nodes from namenode: blockGroupSize= 9, blocks.length= 5
	at org.apache.hadoop.hdfs.DFSStripedOutputStream.allocateNewBlock(DFSStripedOutputStream.java:443)
	at org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:482)
	at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
	at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:164)
	at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:145)
	at org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:922)
	at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:753)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:427)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:376)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:369)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:362)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.doTestBalancerWithStripedFile(TestBalancer.java:1746)
	at org.apache.hadoop.hdfs.server.balancer.TestBalancer.integrationTestWithStripedFile(TestBalancer.java:1706)
	at org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped.testEnd2End(TestBlockTokenWithDFSStriped.java:89)

testRead(org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped)  Time elapsed: 6.581 sec  <<< ERROR!
java.lang.NullPointerException: null
	at org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS.isBlockTokenExpired(TestBlockTokenWithDFS.java:628)
	at org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped.isBlockTokenExpired(TestBlockTokenWithDFSStriped.java:109)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 65.101 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
testNumberOfBlocksToBeReplicated(org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks)  Time elapsed: 47.513 sec  <<< ERROR!
java.util.concurrent.TimeoutException: Timed out waiting for /testFile to reach 2 replicas
	at org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:768)
	at org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks.testNumberOfBlocksToBeReplicated(TestUnderReplicatedBlocks.java:122)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 66.916 sec - in org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.blockmanagement.TestBlockUnderConstructionFeature
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.539 sec - in org.apache.hadoop.hdfs.server.blockmanagement.TestBlockUnderConstructionFeature
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.blockmanagement.TestAvailableSpaceBlockPlacementPolicy
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.432 sec - in org.apache.hadoop.hdfs.server.blockmanagement.TestAvailableSpaceBlockPlacementPolicy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeDescriptor
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.418 sec - in org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeDescriptor
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks
Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 63.172 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks
testChooseReplicaToDelete(org.apache.hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks)  Time elapsed: 43.172 sec  <<< ERROR!
java.util.concurrent.TimeoutException: Timed out waiting for /foo2 to reach 4 replicas
	at org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:768)
	at org.apache.hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks.testChooseReplicaToDelete(TestOverReplicatedBlocks.java:166)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.blockmanagement.TestBlockInfo
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.698 sec - in org.apache.hadoop.hdfs.server.blockmanagement.TestBlockInfo
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.blockmanagement.TestHeartbeatHandling
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.795 sec - in org.apache.hadoop.hdfs.server.blockmanagement.TestHeartbeatHandling
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.blockmanagement.TestCachedBlocksList
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.532 sec - in org.apache.hadoop.hdfs.server.blockmanagement.TestCachedBlocksList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.blockmanagement.TestHostFileManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.499 sec - in org.apache.hadoop.hdfs.server.blockmanagement.TestHostFileManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation
Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 12.78 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation
testRWRInvalidation(org.apache.hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation)  Time elapsed: 2.025 sec  <<< ERROR!
java.io.IOException: All datanodes [DatanodeInfoWithStorage[127.0.0.1:36805,DS-a5f1bfa3-1c59-4632-9417-3be80df500af,DISK]] are bad. Aborting...
	at org.apache.hadoop.hdfs.DataStreamer.handleBadDatanode(DataStreamer.java:1394)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1337)
	at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1324)
	at org.apache.hadoop.hdfs.DataStreamer.processDatanodeOrExternalError(DataStreamer.java:1122)
	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:544)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlockQueues
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.417 sec - in org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlockQueues
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.blockmanagement.TestSequentialBlockGroupId

Results :

Failed tests: 
  TestDFSStripedOutputStreamWithFailure.testAddBlockWhenNoSufficientDataBlockNumOfNodes:238 Expected to find 'Failed to get 6 nodes from namenode: blockGroupSize= 9, blocks.length= 5' but got unexpected exception:java.io.IOException: Failed to get 6 nodes from namenode: blockGroupSize= 9, blocks.length= 4
	at org.apache.hadoop.hdfs.DFSStripedOutputStream.allocateNewBlock(DFSStripedOutputStream.java:443)
	at org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:482)
	at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
	at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:164)
	at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:145)
	at org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:922)
	at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:753)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
	at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testAddBlockWhenNoSufficientDataBlockNumOfNodes(TestDFSStripedOutputStreamWithFailure.java:234)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

  TestRecoverStripedFile.testRecoverThreeParityBlocks:131->assertFileBlocksRecovery:290->sortTargetsByReplicas:345 Failed to recover striped block: -9223372036854775768
  TestSpaceReservation.testTmpSpaceReserve:454 Wrong reserve space for Tmp  expected:<200> but was:<1000>
  TestNNHandlesCombinedBlockReport>BlockReportTestBase.blockReport_09:556 Wrong number of PendingReplication blocks expected:<2> but was:<1>
  TestHSync.testHSyncWithReplication:202->checkSyncMetric:48 Bad value for metric FsyncCount expected:<1> but was:<0>
  TestNNHandlesBlockReportPerStorage>BlockReportTestBase.blockReport_09:556 Wrong number of PendingReplication blocks expected:<2> but was:<1>
  TestDataNodeVolumeFailure.testUnderReplicationAfterVolFailure:398 There is no under replicated block after volume failure
  TestBlockHasMultipleReplicasOnSameDN.testBlockHasMultipleReplicasOnSameDN:137 
Expected: is <2>
     but: was <1>
  TestBlockManager.testTwoOfThreeNodesDecommissioned:180->doTestTwoOfThreeNodesDecommissioned:197 Should have three targets expected:<3> but was:<2>
  TestBlockManager.testAllNodesHoldingReplicasDecommissioned:224->doTestAllNodesHoldingReplicasDecommissioned:241 Should have three targets expected:<4> but was:<3>
  TestBlockManager.testOneOfTwoRacksDecommissioned:277->doTestOneOfTwoRacksDecommissioned:321->scheduleSingleReplication:464 computeBlockRecoveryWork should indicate replication is needed expected:<1> but was:<0>
  TestBlockManager.testSufficientlyReplBlocksUsesNewRack:337->doTestSufficientlyReplBlocksUsesNewRack:345->scheduleSingleReplication:464 computeBlockRecoveryWork should indicate replication is needed expected:<1> but was:<0>
  TestComputeInvalidateWork.testDatanodeReRegistration:159 Expected invalidate blocks to be the number of DNs expected:<3> but was:<2>
  TestReplicationPolicyWithNodeGroup.testChooseTargetForLocalStorage:415 null
  TestReplicationPolicyWithNodeGroup.testChooseTarget1:210 null
  TestReplicationPolicyWithNodeGroup.testChooseTarget3:312 expected:<[DISK]s2:NORMAL:2.2.2.2:50010> but was:<[DISK]s3:NORMAL:3.3.3.3:50010>
  TestReplicationPolicyWithNodeGroup.testChooseTarget5:394 null
  TestReplicationPolicyWithNodeGroup.testRereplicate3:499 null
  TestReplicationPolicyWithNodeGroup.testChooseMoreTargetsThanNodeGroups:675 expected:<5> but was:<6>

Tests in error: 
  TestDFSStripedOutputStreamWithFailure.testAddBlockWhenNoSufficientParityNumOfNodes:267 » IO
  TestDFSStripedOutputStreamWithFailure.testBlockTokenExpired:199->runTest:378->write:402 IO
  TestDFSStripedOutputStreamWithFailure.testMultipleDatanodeFailure56:172->runTestWithMultipleFailure:301->runTest:378->write:402 IO
  TestReadStripedFileWithMissingBlocks.testReadFileWithMissingBlocks1:69->readFileWithMissingBlocks:104 » Timeout
  TestReadStripedFileWithMissingBlocks.testReadFileWithMissingBlocks2:74->readFileWithMissingBlocks:104 » Timeout
  TestEncryptedTransfer.testEncryptedAppend[0] » IO Failed to replace a bad data...
  TestDataNodeHotSwapVolumes.testDirectlyReloadAfterCheckDiskError:721->createFile:140->createFile:156 » Timeout
  TestDataNodeHotSwapVolumes.testReplicatingAfterRemoveVolume:484->waitReplication:192 Timeout
  TestDataNodeVolumeFailureReporting.testSuccessiveVolumeFailures:139 » Timeout ...
  TestDiskError.testShutdown:110 » Timeout Timed out waiting for /test.txt0 to r...
  TestReplicationPolicyWithNodeGroup.testRereplicateOnBoundaryTopology:631 ArrayIndexOutOfBounds
  TestNodeCount.testNodeCount:130->checkTimeout:146->checkTimeout:156 Timeout Ti...
  TestBlockTokenWithDFSStriped.testEnd2End:89 » IO Failed to get 6 nodes from na...
  TestBlockTokenWithDFSStriped.testRead:62->TestBlockTokenWithDFS.doTestRead:407->isBlockTokenExpired:109->TestBlockTokenWithDFS.isBlockTokenExpired:628 » NullPointer
  TestUnderReplicatedBlocks.testNumberOfBlocksToBeReplicated:122 » Timeout Timed...
  TestOverReplicatedBlocks.testChooseReplicaToDelete:166 » Timeout Timed out wai...
  TestRBWBlockInvalidation.testRWRInvalidation » IO All datanodes [DatanodeInfoW...

Tests run: 644, Failures: 19, Errors: 15, Skipped: 2

[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS Native Client
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [06:04 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:08 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.144 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:14 h
[INFO] Finished at: 2015-10-30T21:39:01+00:00
[INFO] Final Memory: 79M/1183M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs> && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx2048m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter8129047128349985633.jar> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire5930738027668007779tmp> <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_329116990136437427800tmp>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
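(The "forked VM terminated without properly saying goodbye" message means the test JVM that Surefire launched exited before reporting its results back, e.g. a crash, an OOM kill, or a test calling System.exit, so the failing tests above are the place to look. When resuming, substitute the goals from the original invocation, for example mvn test -rf :hadoop-hdfs, where "test" stands in for whatever goals the job actually ran.)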
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HADOOP-12133
Updating MAPREDUCE-6451
