From: Apache Jenkins Server
To: hdfs-dev@hadoop.apache.org
Date: Fri, 4 Sep 2015 21:24:25 +0000 (UTC)
Subject: Build failed in Jenkins: Hadoop-Hdfs-trunk #2274
X-Jenkins-Job: Hadoop-Hdfs-trunk
X-Jenkins-Result: FAILURE

See Changes:

[wheat9] HDFS-9012. Move o.a.h.hdfs.protocol.datatransfer.PipelineAck class to hadoop-hdfs-client module. Contributed by Mingliang Liu.

[jing9] HDFS-8384. Allow NN to startup if there are files having a lease but are not under construction. Contributed by Jing Zhao.

[wheat9] HDFS-8984. Move replication queues related methods in FSNamesystem to BlockManager. Contributed by Haohui Mai.

[mingma] HDFS-8981. Adding revision to data node jmx getVersion() method. (Siqi Li via mingma)

------------------------------------------
[...truncated 6773 lines...]
    at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
    at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
    at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
    at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
    at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
    at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
    at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
    at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2546)
    at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2534)
    at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2605)
    at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
    at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
    at org.apache.hadoop.conf.Configuration.set(Configuration.java:1205)
    at org.apache.hadoop.conf.Configuration.set(Configuration.java:1177)
    at org.apache.hadoop.conf.Configuration.setLong(Configuration.java:1422)
    at org.apache.hadoop.hdfs.server.namenode.TestFsck.testBlockIdCK(TestFsck.java:1305)

testFsckPermission(org.apache.hadoop.hdfs.server.namenode.TestFsck)  Time elapsed: 0 sec  <<< ERROR!
java.lang.RuntimeException: java.util.zip.ZipException: invalid block type
    at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
    at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:122)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source)
    at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
    at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
    at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
    at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
    at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
    at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
    at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
    at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
    at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2546)
    at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2534)
    at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2605)
    at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
    at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
    at org.apache.hadoop.conf.Configuration.set(Configuration.java:1205)
    at org.apache.hadoop.conf.Configuration.set(Configuration.java:1177)
    at org.apache.hadoop.conf.Configuration.setLong(Configuration.java:1422)
    at org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckPermission(TestFsck.java:277)

testFsckOpenFiles(org.apache.hadoop.hdfs.server.namenode.TestFsck)  Time elapsed: 0.002 sec  <<< ERROR!
java.lang.RuntimeException: java.util.zip.ZipException: invalid block type
    at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
    at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:122)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown Source)
    at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
    at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
    at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
    at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
    at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
    at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
    at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
    at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
    at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2546)
    at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2534)
    at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2605)
    at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2558)
    at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2469)
    at org.apache.hadoop.conf.Configuration.set(Configuration.java:1205)
    at org.apache.hadoop.conf.Configuration.set(Configuration.java:1177)
    at org.apache.hadoop.conf.Configuration.setLong(Configuration.java:1422)
    at org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckOpenFiles(TestFsck.java:599)
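A note on the three TestFsck errors: java.util.zip.ZipException: invalid block type is thrown by the inflater when deflate-compressed data is corrupt, and here it surfaces out of Configuration.loadResource, i.e. while parsing an XML resource pulled off the test classpath. That usually points at a truncated or corrupted archive (often a jar in the local Maven repository on the build slave) rather than at TestFsck itself. A minimal diagnostic sketch, assuming nothing beyond java.util.zip; the class name, default path, and CLI argument are illustrative, not part of the build:

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Enumeration;
import java.util.stream.Stream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

// Hypothetical diagnostic (not part of the Hadoop build): fully inflate
// every entry of every jar under a directory such as ~/.m2/repository.
// A truncated or corrupted archive surfaces as an IOException, typically
// java.util.zip.ZipException: invalid block type.
public class JarScan {
    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0]
                : System.getProperty("user.home") + "/.m2/repository");
        byte[] buf = new byte[8192];
        try (Stream<Path> jars = Files.walk(root)) {
            jars.filter(p -> p.toString().endsWith(".jar")).forEach(jar -> {
                try (ZipFile zf = new ZipFile(jar.toFile())) {
                    for (Enumeration<? extends ZipEntry> e = zf.entries(); e.hasMoreElements(); ) {
                        try (InputStream in = zf.getInputStream(e.nextElement())) {
                            while (in.read(buf) != -1) { /* drain to force inflation */ }
                        }
                    }
                } catch (IOException ex) {
                    System.out.println("CORRUPT: " + jar + " (" + ex + ")");
                }
            });
        }
    }
}

That the identical exception hits three unrelated TestFsck cases (lines 1305, 277, and 599) supports a shared-classpath cause rather than a bug in any one test.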
Running org.apache.hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.404 sec - in org.apache.hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate
Running org.apache.hadoop.hdfs.server.namenode.TestHostsFiles
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.225 sec - in org.apache.hadoop.hdfs.server.namenode.TestHostsFiles
Running org.apache.hadoop.hdfs.server.namenode.TestStartupProgressServlet
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.619 sec - in org.apache.hadoop.hdfs.server.namenode.TestStartupProgressServlet
Running org.apache.hadoop.hdfs.server.namenode.TestFileContextXAttr
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.45 sec - in org.apache.hadoop.hdfs.server.namenode.TestFileContextXAttr
Running org.apache.hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.537 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
Running org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 184.253 sec - in org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
Running org.apache.hadoop.hdfs.server.namenode.TestFSDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.059 sec - in org.apache.hadoop.hdfs.server.namenode.TestFSDirectory
Running org.apache.hadoop.hdfs.server.namenode.TestSecureNameNodeWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.104 sec - in org.apache.hadoop.hdfs.server.namenode.TestSecureNameNodeWithExternalKdc
Running org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindowManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.538 sec - in org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindowManager
Running org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindow
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.189 sec - in org.apache.hadoop.hdfs.server.namenode.top.window.TestRollingWindow
Running org.apache.hadoop.hdfs.server.namenode.TestAuditLogAtDebug
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.151 sec - in org.apache.hadoop.hdfs.server.namenode.TestAuditLogAtDebug
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeMXBean
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.862 sec - in org.apache.hadoop.hdfs.server.namenode.TestNameNodeMXBean
Running org.apache.hadoop.hdfs.server.namenode.TestFsckWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.121 sec - in org.apache.hadoop.hdfs.server.namenode.TestFsckWithMultipleNameNodes
Running org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.584 sec - in org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
Running org.apache.hadoop.hdfs.server.namenode.TestDeduplicationMap
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.097 sec - in org.apache.hadoop.hdfs.server.namenode.TestDeduplicationMap
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 74.693 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 69.532 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.326 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogsDuringFailover
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.11 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogsDuringFailover
Running org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.894 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer
Running org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.633 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages
Running org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 95.274 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
Running org.apache.hadoop.hdfs.server.namenode.ha.TestQuotasWithHA
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.838 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestQuotasWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestGetGroupsWithHA
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.385 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestGetGroupsWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestNNHealthCheck
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.91 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestNNHealthCheck
Running org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 85.289 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits
Running org.apache.hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.547 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir
Running org.apache.hadoop.hdfs.server.namenode.ha.TestLossyRetryInvocationHandler
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.15 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestLossyRetryInvocationHandler
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAFsck
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.713 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestHAFsck
Running org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.292 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing
Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 40.275 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing
testQueueingWithAppend(org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing)  Time elapsed: 3.783 sec  <<< FAILURE!
java.lang.AssertionError: expected:<21> but was:<19>
    at org.junit.Assert.fail(Assert.java:88)
    at org.junit.Assert.failNotEquals(Assert.java:743)
    at org.junit.Assert.assertEquals(Assert.java:118)
    at org.junit.Assert.assertEquals(Assert.java:555)
    at org.junit.Assert.assertEquals(Assert.java:542)
    at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend(TestDNFencing.java:463)
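The TestDNFencing failure has a different shape: "expected:<21> but was:<19>" is JUnit 4's assertEquals message, so a predicted count at TestDNFencing.java:463 came up two short. A hedged reduction of the failure pattern, where every name and value is illustrative rather than taken from the real test:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical reduction of the failure shape seen above. JUnit 4's
// assertEquals(expected, actual) on longs reports a mismatch exactly as
// "java.lang.AssertionError: expected:<21> but was:<19>".
public class CountAssertionExample {
    // Stand-in for whatever count the real test derives from the cluster.
    private long observedQueuedMessages() {
        return 19;
    }

    @Test
    public void queueingCountMatchesPrediction() {
        long predicted = 21;
        // Fails with: java.lang.AssertionError: expected:<21> but was:<19>
        assertEquals(predicted, observedQueuedMessages());
    }
}

An off-by-N on a count assertion in an HA fencing test is more often a timing or lost-event race than a logic change, which is worth ruling out before bisecting the four commits listed at the top.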
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.342 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestHAAppend
Running org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.488 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot
Running org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 90.792 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.052 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
Running org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.95 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM
Running org.apache.hadoop.hdfs.server.namenode.ha.TestRemoteNameNodeInfo
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.789 sec - in org.apache.hadoop.hdfs.server.namenode.ha.TestRemoteNameNodeInfo
Running org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints

Results :

Failed tests: 
  TestDNFencing.testQueueingWithAppend:463 expected:<21> but was:<19>

Tests in error: 
  TestFsck.testBlockIdCK:1305 » Runtime java.util.zip.ZipException: invalid bloc...
  TestFsck.testFsckPermission:277 » Runtime java.util.zip.ZipException: invalid ...
  TestFsck.testFsckOpenFiles:599 » Runtime java.util.zip.ZipException: invalid b...

Tests run: 2403, Failures: 1, Errors: 3, Skipped: 12

[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:12 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [ 01:25 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [ 0.063 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:28 h
[INFO] Finished at: 2015-09-04T21:25:34+00:00
[INFO] Final Memory: 72M/1106M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd && /home/jenkins/tools/java/jdk1.7.0_55/jre/bin/java -Xmx4096m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn -rf :hadoop-hdfs
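"The forked VM terminated without properly saying goodbye" means the surefire fork (the /bin/sh -c ... java -Xmx4096m ... command above) died before reporting back: a JVM crash, an external kill, or something calling System.exit. Consistent with that, the last class started above, TestStandbyCheckpoints, has a "Running" line but no matching "Tests run:" summary. A minimal sketch for locating the artifacts such a fork typically leaves behind, assuming surefire's usual module layout; the class name and paths are illustrative:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

// Hypothetical helper: after a "forked VM terminated" failure, list what a
// dying fork commonly leaves behind -- surefire *.dumpstream files, JVM
// hs_err_pid*.log crash logs, and .hprof heap dumps (this fork ran with
// -XX:+HeapDumpOnOutOfMemoryError).
public class CrashArtifacts {
    public static void main(String[] args) throws IOException {
        Path moduleDir = Paths.get(args.length > 0 ? args[0] : ".");
        try (Stream<Path> files = Files.walk(moduleDir)) {
            files.filter(p -> {
                String name = p.getFileName() == null ? "" : p.getFileName().toString();
                return name.endsWith(".dumpstream")
                        || name.startsWith("hs_err_pid")
                        || name.endsWith(".hprof");
            }).forEach(p -> System.out.println("crash artifact: " + p));
        }
    }
}

Run from the hadoop-hdfs module directory; since the fork carries -XX:+HeapDumpOnOutOfMemoryError, finding an .hprof near the surefire reports usually points to an OOM, whereas an hs_err_pid*.log points to a hard JVM crash.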
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2269
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 4550427 bytes
Compression is 0.0%
Took 2.1 sec
Recording test results
Updating HDFS-8384
Updating HDFS-8981
Updating HDFS-8984
Updating HDFS-9012