Return-Path:
X-Original-To: apmail-hadoop-hdfs-dev-archive@minotaur.apache.org
Delivered-To: apmail-hadoop-hdfs-dev-archive@minotaur.apache.org
Received: from mail.apache.org (hermes.apache.org [140.211.11.3]) by minotaur.apache.org (Postfix) with SMTP id 18BA13A69 for ; Sat, 30 Apr 2011 12:43:17 +0000 (UTC)
Received: (qmail 15328 invoked by uid 500); 30 Apr 2011 12:43:16 -0000
Delivered-To: apmail-hadoop-hdfs-dev-archive@hadoop.apache.org
Received: (qmail 15225 invoked by uid 500); 30 Apr 2011 12:43:15 -0000
Mailing-List: contact hdfs-dev-help@hadoop.apache.org; run by ezmlm
Precedence: bulk
List-Help:
List-Unsubscribe:
List-Post:
List-Id:
Reply-To: hdfs-dev@hadoop.apache.org
Delivered-To: mailing list hdfs-dev@hadoop.apache.org
Received: (qmail 15217 invoked by uid 99); 30 Apr 2011 12:43:15 -0000
Received: from nike.apache.org (HELO nike.apache.org) (192.87.106.230) by apache.org (qpsmtpd/0.29) with ESMTP; Sat, 30 Apr 2011 12:43:15 +0000
X-ASF-Spam-Status: No, hits=-2000.0 required=5.0 tests=ALL_TRUSTED
X-Spam-Check-By: apache.org
Received: from [140.211.11.8] (HELO aegis.apache.org) (140.211.11.8) by apache.org (qpsmtpd/0.29) with ESMTP; Sat, 30 Apr 2011 12:43:12 +0000
Received: from aegis (localhost [127.0.0.1]) by aegis.apache.org (Postfix) with ESMTP id D74F7C00F6 for ; Sat, 30 Apr 2011 12:42:50 +0000 (UTC)
Date: Sat, 30 Apr 2011 12:42:49 +0000 (UTC)
From: Apache Jenkins Server
To: hdfs-dev@hadoop.apache.org
Message-ID: <1223232637.7681304167370865.JavaMail.hudson@aegis>
Subject: Hadoop-Hdfs-trunk - Build # 652 - Failure
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
X-Virus-Checked: Checked by ClamAV on apache.org

See https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk/652/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 794185 lines...]
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:136)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-04-30 12:43:08,285 INFO  datanode.DataNode (DataNode.java:shutdown(1638)) - Waiting for threadgroup to exit, active threads is 0
    [junit] 2011-04-30 12:43:08,285 WARN  datanode.DataNode (DataNode.java:offerService(1065)) - BPOfferService for block pool=BP-512121671-127.0.1.1-1304167386975 received exception:java.lang.InterruptedException
    [junit] 2011-04-30 12:43:08,285 WARN  datanode.DataNode (DataNode.java:run(1218)) - DatanodeRegistration(127.0.0.1:38200, storageID=DS-199548474-127.0.1.1-38200-1304167387552, infoPort=33000, ipcPort=44921, storageInfo=lv=-35;cid=testClusterID;nsid=1503134809;c=0) ending block pool service for: BP-512121671-127.0.1.1-1304167386975
    [junit] 2011-04-30 12:43:08,286 INFO  datanode.DataBlockScanner (DataBlockScanner.java:removeBlockPool(277)) - Removed bpid=BP-512121671-127.0.1.1-1304167386975 from blockPoolScannerMap
    [junit] 2011-04-30 12:43:08,286 INFO  datanode.DataNode (FSDataset.java:shutdownBlockPool(2547)) - Removing block pool BP-512121671-127.0.1.1-1304167386975
    [junit] 2011-04-30 12:43:08,286 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-30 12:43:08,286 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-30 12:43:08,286 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(1041)) - Shutting down DataNode 0
    [junit] 2011-04-30 12:43:08,287 WARN  datanode.DirectoryScanner (DirectoryScanner.java:shutdown(297)) - DirectoryScanner: shutdown has been called
    [junit] 2011-04-30 12:43:08,287 INFO  datanode.BlockPoolSliceScanner (BlockPoolSliceScanner.java:startNewPeriod(591)) - Starting a new period : work left in prev period : 100.00%
    [junit] 2011-04-30 12:43:08,387 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 56243
    [junit] 2011-04-30 12:43:08,388 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 56243: exiting
    [junit] 2011-04-30 12:43:08,388 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] 2011-04-30 12:43:08,388 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 56243
    [junit] 2011-04-30 12:43:08,388 INFO  datanode.DataNode (DataNode.java:shutdown(1638)) - Waiting for threadgroup to exit, active threads is 1
    [junit] 2011-04-30 12:43:08,388 WARN  datanode.DataNode (DataXceiverServer.java:run(143)) - 127.0.0.1:42990:DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:136)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 2011-04-30 12:43:08,389 WARN  datanode.DataNode (DataNode.java:offerService(1065)) - BPOfferService for block pool=BP-512121671-127.0.1.1-1304167386975 received exception:java.lang.InterruptedException
    [junit] 2011-04-30 12:43:08,389 WARN  datanode.DataNode (DataNode.java:run(1218)) - DatanodeRegistration(127.0.0.1:42990, storageID=DS-608167230-127.0.1.1-42990-1304167387444, infoPort=45180, ipcPort=56243, storageInfo=lv=-35;cid=testClusterID;nsid=1503134809;c=0) ending block pool service for: BP-512121671-127.0.1.1-1304167386975
    [junit] 2011-04-30 12:43:08,489 INFO  datanode.DataBlockScanner (DataBlockScanner.java:removeBlockPool(277)) - Removed bpid=BP-512121671-127.0.1.1-1304167386975 from blockPoolScannerMap
    [junit] 2011-04-30 12:43:08,489 INFO  datanode.DataNode (FSDataset.java:shutdownBlockPool(2547)) - Removing block pool BP-512121671-127.0.1.1-1304167386975
    [junit] 2011-04-30 12:43:08,489 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(133)) - Shutting down all async disk service threads...
    [junit] 2011-04-30 12:43:08,490 INFO  datanode.FSDatasetAsyncDiskService (FSDatasetAsyncDiskService.java:shutdown(142)) - All async disk service threads have been shut down.
    [junit] 2011-04-30 12:43:08,590 WARN  namenode.FSNamesystem (FSNamesystem.java:run(3009)) - ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-30 12:43:08,591 INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(573)) - Number of transactions: 6 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 11 12 
    [junit] 2011-04-30 12:43:08,590 WARN  namenode.DecommissionManager (DecommissionManager.java:run(70)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2011-04-30 12:43:08,593 INFO  ipc.Server (Server.java:stop(1626)) - Stopping server on 57997
    [junit] 2011-04-30 12:43:08,593 INFO  ipc.Server (Server.java:run(1459)) - IPC Server handler 0 on 57997: exiting
    [junit] 2011-04-30 12:43:08,594 INFO  ipc.Server (Server.java:run(487)) - Stopping IPC Server listener on 57997
    [junit] 2011-04-30 12:43:08,594 INFO  ipc.Server (Server.java:run(691)) - Stopping IPC Server Responder
    [junit] Tests run: 16, Failures: 0, Errors: 0, Time elapsed: 101.857 sec

checkfailure:

-run-test-hdfs-fault-inject-withtestcaseonly:

run-test-hdfs-fault-inject:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:749: Tests failed!

Total time: 70 minutes 4 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure

###################################################################################
############################## FAILED TESTS (if any) ##############################

3 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint(TestBackupNode.java:152)
	at org.apache.hadoop.hdfs.server.namenode.TestBackupNode.__CLR3_0_2xuql33qjy(TestBackupNode.java:103)
	at org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testCheckpoint(TestBackupNode.java:101)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testBackupRegistration

Error Message:
Only one backup node should be able to start

Stack Trace:
junit.framework.AssertionFailedError: Only one backup node should be able to start
	at org.apache.hadoop.hdfs.server.namenode.TestBackupNode.__CLR3_0_2ygtwtwqm3(TestBackupNode.java:231)
	at org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testBackupRegistration(TestBackupNode.java:211)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckListCorruptFilesBlocks

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.__CLR3_0_257ts7815sh(TestFsck.java:497)
	at org.apache.hadoop.hdfs.server.namenode.TestFsck.testFsckListCorruptFilesBlocks(TestFsck.java:446)