From: Apache Jenkins Server
To: mapreduce-dev@hadoop.apache.org
Reply-To: mapreduce-dev@hadoop.apache.org
Date: Thu, 3 Nov 2011 00:36:47 +0000 (UTC)
Message-ID: <1273125566.3641320280608607.JavaMail.hudson@aegis>
Subject: Hadoop-Mapreduce-22-branch - Build # 87 - Failure

See https://builds.apache.org/job/Hadoop-Mapreduce-22-branch/87/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ##########################
[...truncated 500297 lines...]
    [junit] 11/11/03 00:32:26 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/11/03 00:32:26 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:48938, storageID=DS-475485639-67.195.138.25-48938-1320280345227, infoPort=36358, ipcPort=59430):Finishing DataNode in: FSDataset{dirpath='/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data3/current/finalized,/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data4/current/finalized'}
    [junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping server on 59430
    [junit] 11/11/03 00:32:26 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/11/03 00:32:26 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
    [junit] 11/11/03 00:32:26 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    [junit] 11/11/03 00:32:26 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/11/03 00:32:26 INFO hdfs.MiniDFSCluster: Shutting down DataNode 0
    [junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping server on 42200
    [junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 0 on 42200: exiting
    [junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 2 on 42200: exiting
    [junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 1 on 42200: exiting
    [junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping IPC Server listener on 42200
    [junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/11/03 00:32:26 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 25
    [junit] 11/11/03 00:32:26 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/11/03 00:32:26 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/11/03 00:32:26 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:44434, storageID=DS-908436179-67.195.138.25-44434-1320280345099, infoPort=55557, ipcPort=42200):Finishing DataNode in: FSDataset{dirpath='/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data1/current/finalized,/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/contrib/raid/test/data/dfs/data/data2/current/finalized'}
    [junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping server on 42200
    [junit] 11/11/03 00:32:26 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/11/03 00:32:26 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
    [junit] 11/11/03 00:32:26 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
    [junit] 11/11/03 00:32:26 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
    [junit] 11/11/03 00:32:26 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 11/11/03 00:32:26 WARN namenode.DecommissionManager: Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 11/11/03 00:32:26 INFO namenode.FSEditLog: Number of transactions: 14 Total time for transactions(ms): 2 Number of transactions batched in Syncs: 0 Number of syncs: 7 SyncTimes(ms): 5 2
    [junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping server on 58221
    [junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 0 on 58221: exiting
    [junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 2 on 58221: exiting
    [junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 5 on 58221: exiting
    [junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 8 on 58221: exiting
    [junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 9 on 58221: exiting
    [junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 1 on 58221: exiting
    [junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping IPC Server listener on 58221
    [junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 4 on 58221: exiting
    [junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 7 on 58221: exiting
    [junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 6 on 58221: exiting
    [junit] 11/11/03 00:32:26 INFO ipc.Server: IPC Server handler 3 on 58221: exiting
    [junit] 11/11/03 00:32:26 INFO ipc.Server: Stopping IPC Server Responder
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.89 sec

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:817: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:796: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/src/contrib/build.xml:87: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/src/contrib/raid/build.xml:60: Tests failed!

Total time: 193 minutes 53 seconds
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Updating MAPREDUCE-3139
Email was triggered for: Failure
Sending email for trigger: Failure

###################################################################################
############################## FAILED TESTS (if any) ##############################
4 tests failed.

FAILED: junit.framework.TestSuite.org.apache.hadoop.mapred.TestFairSchedulerSystem

Error Message:
java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused

Stack Trace:
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
	at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:336)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:546)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:483)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:475)
	at org.apache.hadoop.mapred.MiniMRCluster.<init>(MiniMRCluster.java:418)
	at org.apache.hadoop.mapred.TestFairSchedulerSystem.setUp(TestFairSchedulerSystem.java:74)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:0 failed on connection exception: java.net.ConnectException: Connection refused
	at org.apache.hadoop.ipc.Client.wrapException(Client.java:1055)
	at org.apache.hadoop.ipc.Client.call(Client.java:1031)
	at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
	at $Proxy6.getProtocolVersion(Unknown Source)
	at org.apache.hadoop.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:235)
	at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:275)
	at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:249)
	at org.apache.hadoop.mapreduce.Cluster.createRPCProxy(Cluster.java:86)
	at org.apache.hadoop.mapreduce.Cluster.createClient(Cluster.java:98)
	at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:74)
	at org.apache.hadoop.mapred.JobClient.init(JobClient.java:456)
	at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:435)
	at org.apache.hadoop.mapred.MiniMRCluster.waitUntilIdle(MiniMRCluster.java:322)
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:416)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:504)
	at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:206)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1164)
	at org.apache.hadoop.ipc.Client.call(Client.java:1008)


FAILED: org.apache.hadoop.raid.TestRaidNode.testPathFilter

Error Message:
Too many open files

Stack Trace:
java.io.IOException: Too many open files
	at sun.nio.ch.IOUtil.initPipe(Native Method)
	at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:49)
	at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:18)
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.get(SocketIOWithTimeout.java:407)
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:322)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:159)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:132)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
	at java.io.DataInputStream.read(DataInputStream.java:132)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:122)
	at org.apache.hadoop.hdfs.BlockReader.readChunk(BlockReader.java:297)
	at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:273)
	at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:225)
	at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:193)
	at org.apache.hadoop.hdfs.BlockReader.read(BlockReader.java:136)
	at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:466)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:517)
	at java.io.DataInputStream.read(DataInputStream.java:132)
	at org.apache.hadoop.raid.ParityInputStream.readExact(ParityInputStream.java:138)
	at org.apache.hadoop.raid.ParityInputStream.makeAvailable(ParityInputStream.java:117)
	at org.apache.hadoop.raid.ParityInputStream.drain(ParityInputStream.java:95)
	at org.apache.hadoop.raid.XORDecoder.fixErasedBlock(XORDecoder.java:74)
	at org.apache.hadoop.raid.Decoder.decodeFile(Decoder.java:147)
	at org.apache.hadoop.raid.RaidNode.unRaid(RaidNode.java:867)
	at org.apache.hadoop.raid.RaidNode.recoverFile(RaidNode.java:333)
	at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:349)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1482)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1478)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1476)
	at org.apache.hadoop.ipc.Client.call(Client.java:1028)
	at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
	at $Proxy11.recoverFile(Unknown Source)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:84)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
	at $Proxy11.recoverFile(Unknown Source)
	at org.apache.hadoop.raid.RaidShell.recover(RaidShell.java:272)
	at org.apache.hadoop.raid.TestRaidNode.simulateError(TestRaidNode.java:576)
	at org.apache.hadoop.raid.TestRaidNode.doTestPathFilter(TestRaidNode.java:331)
	at org.apache.hadoop.raid.TestRaidNode.testPathFilter(TestRaidNode.java:257)


FAILED: org.apache.hadoop.streaming.TestDumpTypedBytes.testDumping

Error Message:
port out of range:-1

Stack Trace:
java.lang.IllegalArgumentException: port out of range:-1
	at java.net.InetSocketAddress.<init>(InetSocketAddress.java:118)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:519)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:459)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:459)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.activate(NameNode.java:403)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:387)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:576)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1538)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:445)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:378)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:259)
	at org.apache.hadoop.streaming.TestDumpTypedBytes.testDumping(TestDumpTypedBytes.java:42)


FAILED: org.apache.hadoop.streaming.TestLoadTypedBytes.testLoading

Error Message:
port out of range:-1

Stack Trace:
java.lang.IllegalArgumentException: port out of range:-1
	at java.net.InetSocketAddress.<init>(InetSocketAddress.java:118)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:519)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:459)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:459)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.activate(NameNode.java:403)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:387)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:576)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1538)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:445)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:378)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:259)
	at org.apache.hadoop.streaming.TestLoadTypedBytes.testLoading(TestLoadTypedBytes.java:42)
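For the TestRaidNode "Too many open files" failure, a first thing to rule out on the build slave is the per-process file-descriptor limit. This is only a diagnostic sketch, not part of the build: it reads the JVM's view of the fd limit via the non-standard but long-available com.sun.management MXBean, and the 4096 threshold below is an assumed ballpark for MiniDFSCluster-based tests, not a value taken from this build.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class FdLimitCheck {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        // UnixOperatingSystemMXBean is a com.sun.* extension; only present
        // on Unix-like JVMs (which the Jenkins slaves are).
        if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
            com.sun.management.UnixOperatingSystemMXBean unix =
                    (com.sun.management.UnixOperatingSystemMXBean) os;
            long open = unix.getOpenFileDescriptorCount();
            long max = unix.getMaxFileDescriptorCount();
            System.out.println("open fds: " + open + " / max fds: " + max);
            if (max < 4096) {  // assumed ballpark, not a measured requirement
                System.out.println("fd limit may be too low for contrib/raid tests");
            }
        } else {
            System.out.println("fd counts not available on this platform");
        }
    }
}
```

If the limit turns out to be low, raising it in the slave environment (e.g. ulimit -n before the build) would tell us whether this failure is environmental or a descriptor leak in the raid tests.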
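The two streaming failures are the same thing: NameNode.startHttpServer ends up constructing an InetSocketAddress with port -1, and InetSocketAddress rejects anything outside 0-65535 with exactly this message. A minimal standalone reproduction (nothing Hadoop-specific, just the JDK behaviour the tests are tripping over):

```java
public class PortRangeDemo {
    public static void main(String[] args) {
        try {
            // Any port outside 0-65535 is rejected at construction time;
            // -1 is the typical "port never got resolved/assigned" sentinel.
            new java.net.InetSocketAddress("localhost", -1);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());  // "port out of range:-1"
        }
    }
}
```

So the interesting question for MAPREDUCE-3139 is not the exception itself but why the MiniDFSCluster NameNode's HTTP port is still -1 when startHttpServer runs.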