Date: Sat, 31 Oct 2015 02:48:24 +0000 (UTC)
From: Apache Jenkins Server
To: hdfs-dev@hadoop.apache.org
Reply-To: hdfs-dev@hadoop.apache.org
Subject: Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #558
X-Jenkins-Job: Hadoop-Hdfs-trunk-Java8
X-Jenkins-Result: FAILURE

See Changes:

[kihwal] Addendum to MAPREDUCE-6451

------------------------------------------
[...truncated 11499 lines...]
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

  TestReplication.testNoExtraReplicationWhenBlockReceivedIsLate:670->assertNoReplicationWasPerformed:750 Bad value for metric BlocksReplicated expected:<0> but was:<1>
  TestSaslDataTransfer.testAuthentication:80->doTest:185 expected:<3> but was:<2>
  TestSaslDataTransfer.testPrivacy:100->doTest:185 expected:<3> but was:<2>
  TestSaslDataTransfer.testNoSaslAndSecurePortsIgnored:165->doTest:185 expected:<3> but was:<2>
  TestSaslDataTransfer.testIntegrity:90->doTest:185 expected:<3> but was:<2>
  TestPread.testHedgedReadLoopTooManyTimes:346 null
  TestDFSStripedOutputStreamWithFailure190>TestDFSStripedOutputStreamWithFailure$TestBase.test3:492->TestDFSStripedOutputStreamWithFailure$TestBase.run:486 failed, dn=0, length=4194303
java.io.IOException: Failed at i=4607
	at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.write(TestDFSStripedOutputStreamWithFailure.java:402)
	at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:378)
	at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:281)
	at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:486)
	at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test3(TestDFSStripedOutputStreamWithFailure.java:492)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: java.io.IOException: Failed to get 6 nodes from namenode: blockGroupSize= 9, blocks.length= 5
	at org.apache.hadoop.hdfs.DFSStripedOutputStream.allocateNewBlock(DFSStripedOutputStream.java:443)
	at org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:482)
	at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
	at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:164)
	at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:145)
	at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:79)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:48)
	at java.io.DataOutputStream.write(DataOutputStream.java:88)
	at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.write(TestDFSStripedOutputStreamWithFailure.java:400)
	... 13 more

  TestDFSStripedOutputStreamWithFailure190>TestDFSStripedOutputStreamWithFailure$TestBase.test9:498->TestDFSStripedOutputStreamWithFailure$TestBase.run:486 failed, dn=0, length=4325375
java.lang.AssertionError: expected:<1003> but was:<1004>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:362)
	at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:281)
	at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:486)
	at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test9(TestDFSStripedOutputStreamWithFailure.java:498)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
  TestFileCreation.testLeaseExpireHardLimit:984 /TestFileCreation/foo should be replicated to 3 datanodes.
  TestDFSStripedOutputStreamWithFailure.testAddBlockWhenNoSufficientDataBlockNumOfNodes:238 Expected to find 'Failed to get 6 nodes from namenode: blockGroupSize= 9, blocks.length= 5' but got unexpected exception:
java.io.IOException: Failed to get 6 nodes from namenode: blockGroupSize= 9, blocks.length= 4
	at org.apache.hadoop.hdfs.DFSStripedOutputStream.allocateNewBlock(DFSStripedOutputStream.java:443)
	at org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:482)
	at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
	at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:164)
	at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:145)
	at org.apache.hadoop.hdfs.DFSStripedOutputStream.closeImpl(DFSStripedOutputStream.java:922)
	at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:753)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
	at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.testAddBlockWhenNoSufficientDataBlockNumOfNodes(TestDFSStripedOutputStreamWithFailure.java:234)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
  TestFileAppend2.testComplexAppend:546->testComplexAppend:541 testComplexAppend Worker encountered exceptions.
  TestFileAppend2.testComplexAppend2:551->testComplexAppend:541 testComplexAppend Worker encountered exceptions.
  TestDecommission.testRecommission:645 Unexpected number of replicas from getFileBlockLocations expected:<5> but was:<3>
  TestDecommission.testDecommission2:420 expected null, but was:
  TestDecommission.testBlocksPerInterval:1038->doDecomCheck:1060 Unexpected # of nodes checked expected:<2> but was:<3>
  TestBlockTokenWithDFS.testWrite:337 null
  TestReplicationPolicyWithNodeGroup.testChooseTarget1:218 null
  TestReplicationPolicyWithNodeGroup.testChooseTarget3:321 null
  TestReplicationPolicyWithNodeGroup.testChooseTarget5:394 null
  TestReplicationPolicyWithNodeGroup.testRereplicate3:499 null
  TestReplicationPolicy.testChooseTargetWithMoreThanAvailableNodesWithStaleness:478->testChooseTargetWithMoreThanAvailableNodes:506 expected:<1> but was:<4>
  TestReplicationPolicy.testChooseTarget1:218 null
  TestReplicationPolicy.testChooseTarget2:268 null
  TestReplicationPolicy.testChooseTarget3:324 expected:<[DISK]s2:NORMAL:2.2.2.2:50010> but was:<[DISK]s4:NORMAL:4.4.4.4:50010>
  TestReplicationPolicy.testChooseTargetWithMoreThanAvailableNodes:506 expected:<3> but was:<4>
  TestReplicationPolicy.testRereplicate1:745 null
  TestReplicationPolicy.testRereplicate3:814 null
  TestReplicationPolicy.testChooseTargetWithHalfStaleNodes:607 null
  TestReplicationPolicy.testChooseTargetWithMoreThanAvailableNodesWithStaleness:478->testChooseTargetWithMoreThanAvailableNodes:506 expected:<3> but was:<4>
  TestReplicationPolicy.testChooseTarget3:320 expected:<[DISK]s2:NORMAL:2.2.2.2:50010> but was:<[DISK]s5:NORMAL:5.5.5.5:50010>
  TestReplicationPolicy.testRereplicate1:745 null
  TestReplicationPolicy.testRereplicate3:805 null
  TestReplicationPolicy.testChooseNodeWithMultipleStorages1:151 expected:<[DISK]s5:NORMAL:5.5.5.5:50010> but was:<[DISK]s4:NORMAL:4.4.4.4:50010>
  TestReplicationPolicy.testChooseTargetWithHalfStaleNodes:602 null
  TestBlockManager.testTwoOfThreeNodesDecommissioned:180->doTestTwoOfThreeNodesDecommissioned:193->scheduleSingleReplication:464 computeBlockRecoveryWork should indicate replication is needed expected:<1> but was:<0>
  TestBlockManager.testAllNodesHoldingReplicasDecommissioned:224->doTestAllNodesHoldingReplicasDecommissioned:241 Should have three targets expected:<4> but was:<3>
  TestBlockManager.testOneOfTwoRacksDecommissioned:277->doTestOneOfTwoRacksDecommissioned:321->scheduleSingleReplication:464 computeBlockRecoveryWork should indicate replication is needed expected:<1> but was:<0>
  TestBlockManager.testSufficientlyReplBlocksUsesNewRack:337->doTestSufficientlyReplBlocksUsesNewRack:345->scheduleSingleReplication:464 computeBlockRecoveryWork should indicate replication is needed expected:<1> but was:<0>
  TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks:150 expected:<4> but was:<3>
  TestComputeInvalidateWork.testDatanodeReRegistration:159 Expected invalidate blocks to be the number of DNs expected:<3> but was:<2>
  TestPendingReplication.testBlockReceived:285 expected:<4> but was:<3>
  TestPendingReplication.testPendingAndInvalidate:391 expected:<1> but was:<2>
  TestNNHandlesCombinedBlockReport>BlockReportTestBase.blockReport_08:513 Wrong number of PendingReplication blocks expected:<2> but was:<1>
  TestNNHandlesCombinedBlockReport>BlockReportTestBase.blockReport_09:556 Wrong number of PendingReplication blocks expected:<2> but was:<1>
  TestSpaceReservation.testTmpSpaceReserve:454 Wrong reserve space for Tmp expected:<200> but was:<1000>
  TestDataNodeMetrics.testRoundTripAckMetric:195 Expected non-zero number of acks
  TestHSync.testHSyncWithReplication:201->checkSyncMetric:48 Bad value for metric FsyncCount expected:<1> but was:<0>
  TestAddOverReplicatedStripedBlocks.testProcessOverReplicatedSBSmallerThanFullBlocks:164 expected:<8> but was:<7>
  TestRecoverStripedBlocks.testMissingStripedBlockWithBusyNode1:71->doTestMissingStripedBlock:145 Counting the number of outstanding EC tasks expected:<4> but was:<0>
  TestRecoverStripedBlocks.testMissingStripedBlockWithBusyNode2:76->doTestMissingStripedBlock:145 Counting the number of outstanding EC tasks expected:<4> but was:<0>
  TestRecoverStripedBlocks.testMissingStripedBlock:66->doTestMissingStripedBlock:145 Counting the number of outstanding EC tasks expected:<4> but was:<2>

Tests in error:
  TestRecoverStripedFile.testRecoverThreeDataBlocks1:143->assertFileBlocksRecovery:288->waitForRecoveryFinished:388 IO
  TestRecoverStripedFile.testRecoverAnyBlocks:167->assertFileBlocksRecovery:288->waitForRecoveryFinished:388 IO
  TestCrcCorruption.testCrcCorruption:306->thistest:163 » Timeout Timed out wait...
  TestGetFileChecksum.testGetFileChecksum » IO Failed to replace a bad datanode ...
  TestSafeModeWithStripedFile.testStripedFile0:72->doTest:102 » IO Failed to get...
  TestSafeModeWithStripedFile.testStripedFile1:77->doTest:102 » IO Failed to get...
  TestReadStripedFileWithDecoding.testReadCorruptedDataByDeleting:136->testReadWithBlockCorrupted:320->verifyRead:161 » IO
  TestReadStripedFileWithDecoding.testReportBadBlock:236 » IO 4 missing blocks, ...
  TestReadStripedFileWithDecoding.testReadCorruptedData:119->testReadWithBlockCorrupted:320->verifyRead:161 » IO
  TestWriteReadStripedFile.testFileMoreThanOneStripe2:143->testOneFileUsingDFSStripedInputStream:199 » IO
  TestWriteReadStripedFile.testFileMoreThanABlockGroup3:185->testOneFileUsingDFSStripedInputStream:199 » IO
  TestDFSStripedOutputStreamWithFailure150>TestDFSStripedOutputStreamWithFailure$TestBase.test9:498->TestDFSStripedOutputStreamWithFailure$TestBase.run:486 »
  TestReadStripedFileWithMissingBlocks.testReadFileWithMissingBlocks1:69->readFileWithMissingBlocks:104 » Timeout
  TestReadStripedFileWithMissingBlocks.testReadFileWithMissingBlocks2:74->readFileWithMissingBlocks:104 » Timeout
  TestReadStripedFileWithMissingBlocks.testReadFileWithMissingBlocks3:79->readFileWithMissingBlocks:104 » Timeout
  TestReadStripedFileWithMissingBlocks.testReadFileWithMissingBlocks4:84->readFileWithMissingBlocks:104 » Timeout
  TestReadStripedFileWithMissingBlocks.testReadFileWithMissingBlocks5:89->readFileWithMissingBlocks:104 » Timeout
  TestReadStripedFileWithMissingBlocks.testReadFileWithMissingBlocks6:94->readFileWithMissingBlocks:104 » Timeout
  TestOfflineImageViewerWithStripedBlocks.testFileHavingMultipleBlocks:91->testFileSize:123 » IO
  TestDFSStripedOutputStreamWithFailure.testAddBlockWhenNoSufficientParityNumOfNodes:267 » IO
  TestDFSStripedOutputStreamWithFailure.testBlockTokenExpired:199->runTest:378->write:402 IO
  TestDFSStripedOutputStreamWithFailure.testMultipleDatanodeFailure56:172->runTestWithMultipleFailure:301->runTest:378->write:402 IO
  TestDFSStripedOutputStream.testFileSmallerThanOneCell1:80->testOneFile:155 » Timeout
  TestDFSStripedOutputStream.testFileSmallerThanOneCell2:85->testOneFile:155 » Timeout
  TestDFSStripedOutputStream.testFileMoreThanOneStripe1:110->testOneFile:155 » Timeout
  TestDFSStripedOutputStream.testFileMoreThanOneStripe2:115->testOneFile:155 » Timeout
  TestAbandonBlock.testQuotaUpdatedWhenBlockAbandoned » Remote File /TestAbandon...
  TestBlocksWithNotEnoughRacks.testNodeDecomissionWithOverreplicationRespectsRackPolicy:463 » Timeout
  TestNodeCount.testNodeCount:130->checkTimeout:146->checkTimeout:156 Timeout Ti...
  TestBlockTokenWithDFSStriped.testEnd2End:89 » IO Failed to get 6 nodes from na...
  TestBlockTokenWithDFSStriped.testRead:62->TestBlockTokenWithDFS.doTestRead:407->isBlockTokenExpired:109->TestBlockTokenWithDFS.isBlockTokenExpired:628 » NullPointer
  TestRBWBlockInvalidation.testBlockInvalidationWhenRBWReplicaMissedInDN:95 » ReplicaNotFound
  TestRBWBlockInvalidation.testRWRInvalidation » IO All datanodes [DatanodeInfoW...
  TestOverReplicatedBlocks.testChooseReplicaToDelete:166 » Timeout Timed out wai...
  TestNNHandlesCombinedBlockReport>BlockReportTestBase.testOneReplicaRbwReportArrivesAfterBlockCompleted:643 » BlockMissing
  TestDataNodeHotSwapVolumes.testDirectlyReloadAfterCheckDiskError:740->createFile:140->createFile:156 »
  TestDataNodeVolumeFailure.testUnderReplicationAfterVolFailure » Remote File /t...
  TestDataNodeVolumeFailureReporting.testSuccessiveVolumeFailures » Remote File ...
  TestDataNodeVolumeFailureReporting.testDataNodeReconfigureWithVolumeFailures » Remote

Tests run: 2192, Failures: 63, Errors: 35, Skipped: 7

[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS Native Client
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir:
[INFO] Executed tasks
[INFO]
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:55 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [02:17 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.089 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:21 h
[INFO] Finished at: 2015-10-31T02:48:14+00:00
[INFO] Final Memory: 78M/776M
[INFO] ------------------------------------------------------------------------

Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.011 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx2048m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating MAPREDUCE-6451
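
For anyone triaging this locally, the recovery hints Maven prints above translate to roughly the following shell commands. This is a minimal sketch, not taken from the build itself: the -e, -X, and -rf flags and the :hadoop-hdfs module are quoted from the error text, while the `test` goal (the goals in the truncated command are unknown) and the single-test re-run via Surefire's -Dtest parameter are illustrative assumptions.

    # Resume the reactor from the failed module, as the error text suggests
    # (the goal "test" is assumed; the original goals were truncated from the log):
    mvn test -rf :hadoop-hdfs

    # Re-run with full stack traces (-e) and full debug logging (-X), per the hints above:
    mvn -e -X test -rf :hadoop-hdfs

    # Assumed example: re-run one failing test class in isolation via Surefire's
    # standard -Dtest parameter, from within hadoop-hdfs-project/hadoop-hdfs:
    mvn test -Dtest=TestReplication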