Return-Path: 
X-Original-To: apmail-hadoop-hdfs-dev-archive@minotaur.apache.org
Delivered-To: apmail-hadoop-hdfs-dev-archive@minotaur.apache.org
Received: from mail.apache.org (hermes.apache.org [140.211.11.3])
    by minotaur.apache.org (Postfix) with SMTP id AB74311B6B
    for ; Mon, 14 Apr 2014 13:44:50 +0000 (UTC)
Received: (qmail 41944 invoked by uid 500); 14 Apr 2014 13:44:42 -0000
Delivered-To: apmail-hadoop-hdfs-dev-archive@hadoop.apache.org
Received: (qmail 40783 invoked by uid 500); 14 Apr 2014 13:44:40 -0000
Mailing-List: contact hdfs-dev-help@hadoop.apache.org; run by ezmlm
Precedence: bulk
List-Help: 
List-Unsubscribe: 
List-Post: 
List-Id: 
Reply-To: hdfs-dev@hadoop.apache.org
Delivered-To: mailing list hdfs-dev@hadoop.apache.org
Received: (qmail 40351 invoked by uid 99); 14 Apr 2014 13:44:37 -0000
Received: from crius.apache.org (HELO crius) (140.211.11.14)
    by apache.org (qpsmtpd/0.29) with ESMTP; Mon, 14 Apr 2014 13:44:37 +0000
Received: from crius.apache.org (localhost [127.0.0.1])
    by crius (Postfix) with ESMTP id F3C98E0000B
    for ; Mon, 14 Apr 2014 13:44:36 +0000 (UTC)
Date: Mon, 14 Apr 2014 13:44:35 +0000 (UTC)
From: Apache Jenkins Server
To: hdfs-dev@hadoop.apache.org
Message-ID: <114308653.638.1397483076306.JavaMail.jenkins@crius>
In-Reply-To: <574917883.475.1397396748101.JavaMail.jenkins@crius>
References: <574917883.475.1397396748101.JavaMail.jenkins@crius>
Subject: Build failed in Jenkins: Hadoop-Hdfs-trunk #1732
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-Jenkins-Job: Hadoop-Hdfs-trunk
X-Jenkins-Result: FAILURE

See Changes:

[cnauroth] HDFS-6238. TestDirectoryScanner leaks file descriptors. Contributed by Chris Nauroth.

[cnauroth] HDFS-6237. TestDFSShell#testGet fails on Windows due to invalid file system path. Contributed by Chris Nauroth.

[cnauroth] HADOOP-10496. Metrics system FileSink can leak file descriptor. Contributed by Chris Nauroth.

[cnauroth] HADOOP-10495.
TestFileUtil fails on Windows due to bad permission assertions. Contributed by Chris Nauroth.

[vinodkv] YARN-1928. Fixed a race condition in TestAMRMRPCNodeUpdates which caused it to fail occasionally. Contributed by Zhijie Shen.

[vinodkv] YARN-1933. Fixed test issues with TestAMRestart and TestNodeHealthService. Contributed by Jian He.

[vinodkv] MAPREDUCE-5828. Fixed a test issue with TestMapReduceJobControl that was causing it to fail on Windows. Contributed by Vinod Kumar Vavilapalli.

------------------------------------------
[...truncated 12870 lines...]
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:614)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:510)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:671)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:656)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1309)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:973)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:854)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:700)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:373)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:354)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits.setUpCluster(TestFailureToReadEdits.java:108)

testFailuretoReadEdits[1](org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits)  Time elapsed: 0.839 sec  <<< ERROR!
java.net.BindException: Port in use: localhost:10001
	at sun.nio.ch.Net.bind(Native Method)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
	at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
	at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:853)
	at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:794)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:132)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:614)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:510)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:671)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:656)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1309)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:973)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:854)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:700)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:373)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:354)
	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:95)
	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:36)
	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster$Builder.build(MiniQJMHACluster.java:64)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits.setUpCluster(TestFailureToReadEdits.java:116)

testCheckpointStartingMidEditsFile[1](org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits)  Time elapsed: 0.296 sec  <<< ERROR!
java.net.BindException: Port in use: localhost:10001
	at sun.nio.ch.Net.bind(Native Method)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
	at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
	at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:853)
	at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:794)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:132)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:614)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:510)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:671)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:656)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1309)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:973)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:854)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:700)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:373)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:354)
	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:95)
	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:36)
	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster$Builder.build(MiniQJMHACluster.java:64)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits.setUpCluster(TestFailureToReadEdits.java:116)

testFailureToReadEditsOnTransitionToActive[1](org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits)  Time elapsed: 0.298 sec  <<< ERROR!
java.net.BindException: Port in use: localhost:10001
	at sun.nio.ch.Net.bind(Native Method)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
	at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
	at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:853)
	at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:794)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:132)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:614)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:510)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:671)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:656)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1309)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:973)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:854)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:700)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:373)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:354)
	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:95)
	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:36)
	at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster$Builder.build(MiniQJMHACluster.java:64)
	at org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits.setUpCluster(TestFailureToReadEdits.java:116)

Running org.apache.hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.725 sec - in
org.apache.hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits
Running org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.919 sec - in org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManagerUnit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.182 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManagerUnit
Running org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 184.606 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.271 sec - in org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Running org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.501 sec - in org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.233 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.888 sec - in org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
Running org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.433 sec - in org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Running org.apache.hadoop.hdfs.qjournal.server.TestJournal
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.311 sec - in org.apache.hadoop.hdfs.qjournal.server.TestJournal
Running org.apache.hadoop.hdfs.qjournal.server.TestJournalNode
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.522 sec - in org.apache.hadoop.hdfs.qjournal.server.TestJournalNode
Running org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeMXBean
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.402 sec - in org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeMXBean
Running org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.642 sec - in org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.858 sec - in org.apache.hadoop.hdfs.TestSetTimes
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 104.314 sec - in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestIsMethodSupported
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.09 sec - in org.apache.hadoop.hdfs.TestIsMethodSupported
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.88 sec - in org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.405 sec - in org.apache.hadoop.hdfs.TestDFSRollback
Running org.apache.hadoop.hdfs.TestPread
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 100.621 sec - in org.apache.hadoop.hdfs.TestPread
Running org.apache.hadoop.hdfs.TestModTime
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.89 sec - in org.apache.hadoop.hdfs.TestModTime
Running org.apache.hadoop.hdfs.TestDFSShellGenericOptions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.952 sec - in org.apache.hadoop.hdfs.TestDFSShellGenericOptions
Running org.apache.hadoop.hdfs.TestSmallBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.456 sec - in org.apache.hadoop.hdfs.TestSmallBlock
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.366 sec - in org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.87 sec - in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.708 sec - in org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.777 sec - in org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.133 sec - in org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.179 sec - in org.apache.hadoop.cli.TestHDFSCLI
Running org.apache.hadoop.cli.TestAclCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.221 sec - in org.apache.hadoop.cli.TestAclCLI
Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.582 sec - in org.apache.hadoop.cli.TestCacheAdminCLI
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.788 sec - in org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.113 sec - in org.apache.hadoop.tools.TestTools

Results :

Failed tests: 
  TestFailureToReadEdits.testFailuretoReadEdits:169 Standby fully caught up, but should not have been able to

Tests in error: 
  TestFailureToReadEdits.tearDownCluster:136 » ClassCast java.lang.Long cannot b...
  TestFailureToReadEdits.setUpCluster:108 » Bind Port in use: localhost:10003
  TestFailureToReadEdits.setUpCluster:108 » Bind Port in use: localhost:10001
  TestFailureToReadEdits.setUpCluster:116 » Bind Port in use: localhost:10001
  TestFailureToReadEdits.setUpCluster:116 » Bind Port in use: localhost:10001
  TestFailureToReadEdits.setUpCluster:116 » Bind Port in use: localhost:10001

Tests run: 2657, Failures: 1, Errors: 6, Skipped: 17

[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, no dependency information available
[WARNING] Failed to retrieve plugin descriptor for org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be resolved: Failed to read artifact descriptor for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting 
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: 
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ****** FindBugsMojo execute *******
[INFO] canGenerate is false
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [2:07:49.455s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [2.436s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2:07:53.441s
[INFO] Finished at: Mon Apr 14 13:43:49 UTC 2014
[INFO] Final Memory: 32M/315M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HADOOP-10496
Updating YARN-1933
Updating HADOOP-10495
Updating YARN-1928
Updating MAPREDUCE-5828
Updating HDFS-6238
Updating HDFS-6237
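
Editor's note: the repeated `java.net.BindException: Port in use: localhost:10001` errors in this log all come from test cluster setup binding fixed localhost ports (10001, 10003) that an earlier listener still holds. The sketch below is a standalone JDK illustration of that failure mode, not Hadoop code: a second listener on an already-bound port is rejected, while binding port 0 asks the OS for any free ephemeral port and cannot collide. The class name `PortInUseDemo` and its helper are hypothetical.

```java
import java.io.IOException;
import java.net.BindException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class PortInUseDemo {
    /** Returns true when a second listener on an in-use port is rejected. */
    static boolean secondBindFails() throws IOException {
        InetAddress loopback = InetAddress.getLoopbackAddress();
        // Port 0: the OS assigns a free ephemeral port, so this bind cannot collide.
        try (ServerSocket first = new ServerSocket(0, 50, loopback)) {
            try (ServerSocket second = new ServerSocket()) {
                second.setReuseAddress(false);
                // Binding the port the first socket already holds fails,
                // which is the same symptom the test log reports.
                second.bind(new InetSocketAddress(loopback, first.getLocalPort()));
                return false; // bind unexpectedly succeeded
            } catch (BindException e) {
                return true;  // "Port in use", as in the stack traces above
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(secondBindFails());
    }
}
```

This is why test harnesses often prefer OS-assigned ephemeral ports over hard-coded ones: a fixed port leaks across test runs on a shared build machine, an ephemeral one cannot.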