From: Apache Jenkins Server
To: hdfs-dev@hadoop.apache.org
Reply-To: hdfs-dev@hadoop.apache.org
Date: Wed, 25 Mar 2015 14:39:26 +0000 (UTC)
Message-ID: <1809176373.3657.1427294366779.JavaMail.jenkins@crius>
In-Reply-To: <1573501777.3239.1427207675451.JavaMail.jenkins@crius>
References: <1573501777.3239.1427207675451.JavaMail.jenkins@crius>
Subject: Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #134
X-Jenkins-Job: Hadoop-Hdfs-trunk-Java8
X-Jenkins-Result: FAILURE

See Changes:

[ozawa] HADOOP-11609. Correct credential commands info in CommandsManual.html#credential. Contributed by Varun Saxena.
[ozawa] Fix CHANGES.txt for HADOOP-11602.
[ozawa] MAPREDUCE-6285. ClientServiceDelegate should not retry upon AuthenticationException. Contributed by Jonathan Eagles.
[wang] HDFS-7961. Trigger full block report after hot swapping disk. Contributed by Eddy Xu.
[brandonli] HDFS-7976. Update NFS user guide for mount option 'sync' to minimize or avoid reordered writes. Contributed by Brandon Li
[harsh] HDFS-7875. Improve log message when wrong value configured for dfs.datanode.failed.volumes.tolerated. Contributed by Nijel.
[wangda] YARN-3383. AdminService should use warn instead of info to log exception when operation fails. (Li Lu via wangda)
[brandonli] HDFS-7977. NFS couldn't take percentile intervals. Contributed by Brandon Li
[jing9] HDFS-7854. Separate class DataStreamer out of DFSOutputStream. Contributed by Li Bo.
[wheat9] HDFS-7713. Implement mkdirs in the HDFS Web UI. Contributed by Ravi Prakash.
[jitendra] HDFS-6826. Plugin interface to enable delegation of HDFS authorization assertions. Contributed by Arun Suresh.
[wheat9] HDFS-7985. WebHDFS should be always enabled. Contributed by Li Lu.
[ozawa] HADOOP-11741. Add LOG.isDebugEnabled() guard for some LOG.debug(). Contributed by Walter Su. (A sketch of this guard pattern follows the change list.)
[ozawa] HADOOP-11014. Potential resource leak in JavaKeyStoreProvider due to unclosed stream. (ozawa)
[ozawa] HADOOP-11738. Fix a link of Protocol Buffers 2.5 for download in BUILDING.txt. (ozawa)
[harsh] MAPREDUCE-579. Streaming slowmatch documentation.
[aajisaka] MAPREDUCE-6292. Use org.junit package instead of junit.framework in TestCombineFileInputFormat. (aajisaka)
------------------------------------------
[...truncated 8719 lines...]
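A minimal sketch of the LOG.isDebugEnabled() guard pattern that HADOOP-11741 (listed above) adds around some LOG.debug() calls. The class and log message below are hypothetical, not Hadoop's actual call sites; only the guard itself is the point.

    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;

    public class DebugGuardExample {
      private static final Log LOG = LogFactory.getLog(DebugGuardExample.class);

      void reportBlock(String blockId, long length) {
        // Without the guard, the string concatenation below is evaluated even
        // when debug logging is disabled; the guard skips that work entirely.
        if (LOG.isDebugEnabled()) {
          LOG.debug("Received block " + blockId + " of length " + length);
        }
      }
    }

The guard matters with commons-logging because LOG.debug(Object) forces the message to be built before the call; parameterized SLF4J-style logging defers that work instead.
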
   java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3602)
        at java.lang.Thread.run(Thread.java:744)

"IPC Server handler 7 on 41332" daemon prio=5 tid=232 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2110)

"1068945248@qtp-455888635-0" daemon prio=5 tid=151 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)

"IPC Server handler 2 on 56584" daemon prio=5 tid=139 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2110)

"IPC Server idle connection scanner for port 56584" daemon prio=5 tid=134 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)

"refreshUsed- daemon prio=5 tid=353 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.fs.DU$DURefreshThread.run(DU.java:115)
        at java.lang.Thread.run(Thread.java:744)

"IPC Server handler 5 on 40071" daemon prio=5 tid=49 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2110)

"java.util.concurrent.ThreadPoolExecutor$Worker@67052fdb[State = -1, empty queue]" daemon prio=5 tid=332 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:744)

"org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@265adfad" daemon prio=5 tid=42 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:221)
        at java.lang.Thread.run(Thread.java:744)

"IPC Server handler 6 on 56584" daemon prio=5 tid=143 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2110)

"IPC Server idle connection scanner for port 40071" daemon prio=5 tid=36 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
        at java.lang.Object.wait(Native Method)
        at java.util.TimerThread.mainLoop(Timer.java:552)
        at java.util.TimerThread.run(Timer.java:505)

"nioEventLoopGroup-4-1" prio=10 tid=154 runnable
   java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
        at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
        at java.lang.Thread.run(Thread.java:744)

"IPC Server handler 3 on 41332" daemon prio=5 tid=228 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2110)

"pool-2-thread-1" prio=5 tid=30 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:744)

"IPC Server listener on 40071" daemon prio=5 tid=34 runnable
   java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
        at org.apache.hadoop.ipc.Server$Listener.run(Server.java:687)

"IPC Server handler 9 on 33960" daemon prio=5 tid=322 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
        at org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:109)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2110)

"org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@36328d33" daemon prio=5 tid=56 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:4664)
        at java.lang.Thread.run(Thread.java:744)

"process reaper" daemon prio=10 tid=341 timed_waiting
   java.lang.Thread.State: TIMED_WAITING
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:744)

"nioEventLoopGroup-2-1" prio=10 tid=66 runnable
   java.lang.Thread.State: RUNNABLE
        at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
        at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
        at java.lang.Thread.run(Thread.java:744)

  TestDatanodeManager.testNumVersionsReportedCorrect:157 The map of version counts returned by DatanodeManager was not what it was expected to be on iteration 496 expected:<0> but was:<1>

Tests in error: 
  TestDistributedFileSystem.testAllWithNoXmlDefaults:655->testFileChecksum:571 » SocketTimeout

Tests run: 3322, Failures: 2, Errors: 1, Skipped: 18

[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, no dependency information available
[WARNING] Failed to retrieve plugin descriptor for org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be resolved: Failure to find org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 in http://repo.maven.apache.org/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of central has elapsed or updates are forced
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project ---
[INFO] Deleting
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir:
[INFO] Executed tasks
[INFO]
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-checkstyle-plugin:2.12.1:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop HDFS ................................ FAILURE [ 03:04 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  1.829 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:04 h
[INFO] Finished at: 2015-03-25T14:38:58+00:00
[INFO] Final Memory: 52M/230M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR]
[ERROR] Please refer to for the individual test results.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-7875
Updating HDFS-7985
Updating HDFS-7961
Updating HDFS-7977
Updating HDFS-7976
Updating MAPREDUCE-6285
Updating YARN-3383
Updating HADOOP-11738
Updating MAPREDUCE-6292
Updating HDFS-7713
Updating MAPREDUCE-579
Updating HADOOP-11741
Updating HADOOP-11609
Updating HDFS-6826
Updating HADOOP-11014
Updating HDFS-7854
Updating HADOOP-11602
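
For anyone triaging the TestDatanodeManager failure reported above, a hedged sketch of the kind of JUnit 4 assertion that produces an "expected:<0> but was:<1>" message in that format. The class, variable names, and map lookup below are assumptions for illustration only, not the actual test code.

    import static org.junit.Assert.assertEquals;

    import java.util.HashMap;
    import java.util.Map;
    import org.junit.Test;

    public class VersionCountAssertionSketch {
      @Test
      public void versionCountDropsToZeroAfterRemoval() {
        // Hypothetical stand-in for the version-count map the test checks.
        Map<String, Integer> versionCount = new HashMap<>();
        versionCount.put("3.0.0-SNAPSHOT", 1); // one datanode still mapped
        int iteration = 496;

        // A messaged assertEquals fails with exactly the
        // "<message> expected:<0> but was:<1>" shape seen in the build output.
        assertEquals("The map of version counts returned by DatanodeManager was not"
            + " what it was expected to be on iteration " + iteration,
            0, (int) versionCount.getOrDefault("3.0.0-SNAPSHOT", 0));
      }
    }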