Date: Thu, 30 Aug 2007 17:03:42 +0000 (GMT+00:00)
From: hudson@lucene.zones.apache.org
To: hadoop-dev@lucene.apache.org
Reply-To: hadoop-dev@lucene.apache.org
Message-ID: <28664985.611188493422987.JavaMail.hudson@lucene.zones.apache.org>
Subject: Build failed in Hudson: Hadoop-Nightly #219

See http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/219/changes

Changes:

[cutting] HADOOP-1749. Change TestDFSUpgrade to sort files, fixing sporadic test failures. Contributed by Enis.

[cutting] HADOOP-1601. Change GenericWritable to use ReflectionUtils for instance creation. Contributed by Enis.

[cutting] HADOOP-1767. Add 'bin/hadoop job -list' sub-command. Contributed by Christophe Taton.

[cutting] HADOOP-1748. Fix tasktracker to be able to launch tasks when the log directory is relative. Contributed by Owen.

[cutting] HADOOP-1775. Fix a NullPointerException and an IllegalArgumentException in MapWritable. Contributed by Jim Kellerman.

[cutting] HADOOP-1750. Log standard output and error when forking task processes. Contributed by Owen.

[jimk] HADOOP-1805 Region server hang on exit. Catch runtime exceptions in the HMemcacheScanner constructor to ensure that the read lock is released.

[cutting] HADOOP-1803. Generalize build.xml to make files in all src/contrib/*/bin directories executable. Contributed by stack.

[cutting] HADOOP-1798. Fix jobtracker to correctly account for failed tasks. Contributed by Owen.

[stack] HADOOP-1785 TableInputFormat.TableRecordReader.next has a bug
  M src/contrib/hbase/src/test/org/apache/hadoop/hbase/TestTableMapReduce.java
    (localTestSingleRegionTable, localTestMultiRegionTable, verify): Added.
  M src/contrib/hbase/src/test/org/apache/hadoop/hbase/HBaseTestCase.java
    Javadoc for addContents and for the Loader interface and its implementations. Methods have been made static so they are accessible without subclassing.
  M src/contrib/hbase/src/test/org/apache/hadoop/hbase/MultiRegionTable.java
    The guts of TestSplit have been moved here so other tests can have access to a multi-region table.
  M src/contrib/hbase/src/test/org/apache/hadoop/hbase/TestSplit.java
    Bulk of the code moved to the MultiRegionTable utility class. Use this new class instead.
  M src/contrib/hbase/src/java/org/apache/hadoop/hbase/HTable.java
    Added '@deprecated' javadoc.
  M src/contrib/hbase/src/java/org/apache/hadoop/hbase/HMaster.java
    Was throwing a RuntimeException when msgQueue.put was interrupted, but this is a likely event on shutdown; log a message instead.
  M src/contrib/hbase/src/java/org/apache/hadoop/hbase/mapred/TableInputFormat.java
    The actual fix for HADOOP-1785: reverse the test of the row comparison.

------------------------------------------
[...truncated 41867 lines...]
[junit] at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:822)
[junit] at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:727)
[junit] at java.lang.Thread.run(Thread.java:595)
[junit] 07/08/30 17:02:30 INFO hbase.HMaster: 140.211.11.75:39142 serving -ROOT-,,0
[junit] 07/08/30 17:02:30 INFO hbase.HMaster: HMaster.rootScanner scanning meta region -ROOT-,,0 on 140.211.11.75:39142
[junit] 07/08/30 17:02:30 INFO dfs.DataNode: Served block blk_5063435198599963256 to /127.0.0.1
[junit] 07/08/30 17:02:30 INFO dfs.DataNode: Served block blk_8791160491941835459 to /127.0.0.1
[junit] 07/08/30 17:02:30 INFO hbase.HMaster: HMaster.rootScanner scan of meta region -ROOT-,,0 complete
[junit] 07/08/30 17:02:31 INFO dfs.StateChange: BLOCK* NameSystem.blockToInvalidate: ask 127.0.0.1:50010 to delete blk_-4009091100517168867 blk_680008136866675248
[junit] 07/08/30 17:02:31 INFO dfs.StateChange: BLOCK* NameSystem.blockToInvalidate: ask 127.0.0.1:50011 to delete blk_-4009091100517168867 blk_680008136866675248
[junit] 07/08/30 17:02:31 INFO hbase.HMaster: assigning region .META.,,1 to the only server 140.211.11.75:39142
[junit] 07/08/30 17:02:31 INFO hbase.HRegionServer: MSG_REGION_OPEN : regionname: .META.,,1, startKey: <>, tableDesc: {name: .META., families: {info:=(info:, max versions: 1, compression: NONE, in memory: false, max value length: 2147483647, bloom filter: none)}}
[junit] 07/08/30 17:02:31 INFO hbase.HRegion: region .META.,,1 available
[junit] 07/08/30 17:02:32 INFO hbase.HMaster: 140.211.11.75:39142 serving .META.,,1
[junit] 07/08/30 17:02:32 INFO hbase.HMaster: .META.,,1 open on 140.211.11.75:39142
[junit] 07/08/30 17:02:32 INFO hbase.HMaster: updating row .META.,,1 in table -ROOT-,,0 with startcode 7030247354054854522 and server 140.211.11.75:39142
[junit] 07/08/30 17:02:32 INFO hbase.HMaster: HMaster.metaScanner scanning meta region .META.,,1 on 140.211.11.75:39142
[junit] 07/08/30 17:02:32 INFO hbase.HMaster: HMaster.metaScanner scan of meta region .META.,,1 complete
[junit] 07/08/30 17:02:33 INFO dfs.DataNode: Deleting block blk_-4009091100517168867 file http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/contrib/hbase/test/data/dfs/data/data2/current/blk_-4009091100517168867
[junit] 07/08/30 17:02:33 INFO dfs.DataNode: Deleting block blk_680008136866675248 file http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/contrib/hbase/test/data/dfs/data/data1/current/blk_680008136866675248
[junit] 07/08/30 17:02:33 INFO hbase.HRegion: region test,,-7595141907404260838 available
[junit] 07/08/30 17:02:33 INFO hbase.HRegion: closed test,,-7595141907404260838
[junit] 07/08/30 17:02:33 INFO dfs.StateChange: BLOCK* NameSystem.allocateBlock: /hbase/hregion_test,,-7595141907404260838/log/hlog.dat.000.
blk_-3926922525346215770 is created and added to pendingCreates and pendingCreateBlocks
[junit] 07/08/30 17:02:33 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_-3926922525346215770
[junit] 07/08/30 17:02:33 INFO dfs.DataNode: Received block blk_-3926922525346215770 from /127.0.0.1
[junit] 07/08/30 17:02:33 INFO dfs.DataNode: Received block blk_-3926922525346215770 from /127.0.0.1 and mirrored to /127.0.0.1:50011
[junit] 07/08/30 17:02:33 INFO dfs.StateChange: BLOCK* NameSystem.delete: blk_-3926922525346215770 is added to invalidSet of 127.0.0.1:50010
[junit] 07/08/30 17:02:33 INFO hbase.HMaster: created table test
[junit] 07/08/30 17:02:34 INFO dfs.DataNode: Served block blk_5063435198599963256 to /127.0.0.1
[junit] 07/08/30 17:02:34 INFO dfs.DataNode: Served block blk_8791160491941835459 to /127.0.0.1
[junit] 07/08/30 17:02:34 INFO dfs.DataNode: Deleting block blk_-4009091100517168867 file http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/contrib/hbase/test/data/dfs/data/data4/current/blk_-4009091100517168867
[junit] 07/08/30 17:02:34 INFO dfs.DataNode: Deleting block blk_680008136866675248 file http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/contrib/hbase/test/data/dfs/data/data3/current/blk_680008136866675248
[junit] 07/08/30 17:02:34 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50011 is added to blk_-3926922525346215770
[junit] 07/08/30 17:02:34 INFO dfs.StateChange: BLOCK* NameSystem.blockToInvalidate: ask 127.0.0.1:50010 to delete blk_-3926922525346215770
[junit] 07/08/30 17:02:34 INFO hbase.HMaster: assigning region test,,-7595141907404260838 to the only server 140.211.11.75:39142
[junit] 07/08/30 17:02:34 INFO hbase.HRegionServer: MSG_REGION_OPEN : regionname: test,,-7595141907404260838, startKey: <>, tableDesc: {name: test, families: {contents:=(contents:, max versions: 3, compression: NONE, in memory: false, max value length: 2147483647, bloom filter: none)}}
[junit] 07/08/30 17:02:34 INFO hbase.HRegion: region test,,-7595141907404260838 available
[junit] 07/08/30 17:02:35 INFO hbase.HMaster: 140.211.11.75:39142 serving test,,-7595141907404260838
[junit] 07/08/30 17:02:35 INFO hbase.HMaster: test,,-7595141907404260838 open on 140.211.11.75:39142
[junit] 07/08/30 17:02:35 INFO hbase.HMaster: updating row test,,-7595141907404260838 in table .META.,,1 with startcode 7030247354054854522 and server 140.211.11.75:39142
[junit] 07/08/30 17:02:36 INFO dfs.DataNode: Deleting block blk_-3926922525346215770 file http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/contrib/hbase/test/data/dfs/data/data2/current/blk_-3926922525346215770
[junit] 07/08/30 17:02:39 INFO dfs.StateChange: BLOCK* NameSystem.allocateBlock: /hbase/hregion_-ROOT-,,0/info/mapfiles/7830686212181393652/data. blk_2673250400400565082 is created and added to pendingCreates and pendingCreateBlocks
[junit] 07/08/30 17:02:39 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_2673250400400565082
[junit] 07/08/30 17:02:39 INFO dfs.DataNode: Received block blk_2673250400400565082 from /127.0.0.1
[junit] 07/08/30 17:02:39 INFO dfs.DataNode: Received block blk_2673250400400565082 from /127.0.0.1 and mirrored to /127.0.0.1:50011
[junit] 07/08/30 17:02:39 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50011 is added to blk_2673250400400565082
[junit] 07/08/30 17:02:39 INFO dfs.StateChange: BLOCK* NameSystem.allocateBlock: /hbase/hregion_-ROOT-,,0/info/mapfiles/7830686212181393652/index. blk_-7235044915064163197 is created and added to pendingCreates and pendingCreateBlocks
[junit] 07/08/30 17:02:39 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50011 is added to blk_-7235044915064163197
[junit] 07/08/30 17:02:39 INFO dfs.DataNode: Received block blk_-7235044915064163197 from /127.0.0.1
[junit] 07/08/30 17:02:39 INFO dfs.DataNode: Received block blk_-7235044915064163197 from /127.0.0.1 and mirrored to /127.0.0.1:50010
[junit] 07/08/30 17:02:39 INFO dfs.StateChange: BLOCK* NameSystem.allocateBlock: /hbase/hregion_-ROOT-,,0/info/info/7830686212181393652. blk_-6585655778395807321 is created and added to pendingCreates and pendingCreateBlocks
[junit] 07/08/30 17:02:39 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_-7235044915064163197
[junit] 07/08/30 17:02:39 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50011 is added to blk_-6585655778395807321
[junit] 07/08/30 17:02:39 INFO dfs.DataNode: Received block blk_-6585655778395807321 from /127.0.0.1
[junit] 07/08/30 17:02:39 INFO dfs.DataNode: Received block blk_-6585655778395807321 from /127.0.0.1 and mirrored to /127.0.0.1:50010
[junit] 07/08/30 17:02:39 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_-6585655778395807321
[junit] 07/08/30 17:02:39 INFO dfs.DataNode: Served block blk_2673250400400565082 to /127.0.0.1
[junit] 07/08/30 17:02:39 INFO dfs.DataNode: Served block blk_-7235044915064163197 to /127.0.0.1
[junit] 07/08/30 17:02:39 INFO dfs.StateChange: BLOCK* NameSystem.allocateBlock: /hbase/hregion_.META.,,1/info/mapfiles/585115468444810564/data. blk_4706109107896841716 is created and added to pendingCreates and pendingCreateBlocks
[junit] 07/08/30 17:02:39 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_4706109107896841716
[junit] 07/08/30 17:02:39 INFO dfs.DataNode: Received block blk_4706109107896841716 from /127.0.0.1
[junit] 07/08/30 17:02:39 INFO dfs.DataNode: Received block blk_4706109107896841716 from /127.0.0.1 and mirrored to /127.0.0.1:50011
[junit] 07/08/30 17:02:39 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50011 is added to blk_4706109107896841716
[junit] 07/08/30 17:02:39 INFO dfs.StateChange: BLOCK* NameSystem.allocateBlock: /hbase/hregion_.META.,,1/info/mapfiles/585115468444810564/index. blk_-3401312334531540051 is created and added to pendingCreates and pendingCreateBlocks
[junit] 07/08/30 17:02:39 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_-3401312334531540051
[junit] 07/08/30 17:02:39 INFO dfs.DataNode: Received block blk_-3401312334531540051 from /127.0.0.1
[junit] 07/08/30 17:02:39 INFO dfs.DataNode: Received block blk_-3401312334531540051 from /127.0.0.1 and mirrored to /127.0.0.1:50011
[junit] 07/08/30 17:02:39 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50011 is added to blk_-3401312334531540051
[junit] 07/08/30 17:02:39 INFO dfs.StateChange: BLOCK* NameSystem.allocateBlock: /hbase/hregion_.META.,,1/info/info/585115468444810564. blk_-1518819601282007663 is created and added to pendingCreates and pendingCreateBlocks
[junit] 07/08/30 17:02:40 INFO dfs.DataNode: Received block blk_-1518819601282007663 from /127.0.0.1
[junit] 07/08/30 17:02:40 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_-1518819601282007663
[junit] 07/08/30 17:02:40 INFO dfs.DataNode: Received block blk_-1518819601282007663 from /127.0.0.1 and mirrored to /127.0.0.1:50010
[junit] 07/08/30 17:02:40 INFO dfs.DataNode: Served block blk_4706109107896841716 to /127.0.0.1
[junit] 07/08/30 17:02:40 INFO dfs.DataNode: Served block blk_-3401312334531540051 to /127.0.0.1
[junit] 07/08/30 17:02:40 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50011 is added to blk_-1518819601282007663
[junit] 07/08/30 17:02:40 INFO dfs.StateChange: BLOCK* NameSystem.allocateBlock: /hbase/hregion_test,,-7595141907404260838/contents/mapfiles/1705177988606680603/data. blk_-3298125500550375589 is created and added to pendingCreates and pendingCreateBlocks
[junit] 07/08/30 17:02:40 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50011 is added to blk_-3298125500550375589
[junit] 07/08/30 17:02:40 INFO dfs.DataNode: Received block blk_-3298125500550375589 from /127.0.0.1
[junit] 07/08/30 17:02:40 INFO dfs.DataNode: Received block blk_-3298125500550375589 from /127.0.0.1 and mirrored to /127.0.0.1:50010
[junit] 07/08/30 17:02:40 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_-3298125500550375589
[junit] 07/08/30 17:02:40 INFO dfs.StateChange: BLOCK* NameSystem.allocateBlock: /hbase/hregion_test,,-7595141907404260838/contents/mapfiles/1705177988606680603/index. blk_3979822069249389781 is created and added to pendingCreates and pendingCreateBlocks
[junit] 07/08/30 17:02:40 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50011 is added to blk_3979822069249389781
[junit] 07/08/30 17:02:40 INFO dfs.DataNode: Received block blk_3979822069249389781 from /127.0.0.1
[junit] 07/08/30 17:02:40 INFO dfs.DataNode: Received block blk_3979822069249389781 from /127.0.0.1 and mirrored to /127.0.0.1:50010
[junit] 07/08/30 17:02:40 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_3979822069249389781
[junit] 07/08/30 17:02:40 INFO dfs.StateChange: BLOCK* NameSystem.allocateBlock: /hbase/hregion_test,,-7595141907404260838/contents/info/1705177988606680603. blk_-5492561577934974696 is created and added to pendingCreates and pendingCreateBlocks
[junit] 07/08/30 17:02:40 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_-5492561577934974696
[junit] 07/08/30 17:02:40 INFO dfs.DataNode: Received block blk_-5492561577934974696 from /127.0.0.1
[junit] 07/08/30 17:02:40 INFO dfs.DataNode: Received block blk_-5492561577934974696 from /127.0.0.1 and mirrored to /127.0.0.1:50011
[junit] 07/08/30 17:02:40 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50011 is added to blk_-5492561577934974696
[junit] 07/08/30 17:02:40 INFO dfs.DataNode: Served block blk_-3298125500550375589 to /127.0.0.1
[junit] 07/08/30 17:02:40 INFO dfs.DataNode: Served block blk_3979822069249389781 to /127.0.0.1
[junit] 07/08/30 17:02:40 INFO dfs.DataNode: Served block blk_-3298125500550375589 to /127.0.0.1
[junit] 07/08/30 17:02:40 INFO dfs.DataNode: Served block blk_-3298125500550375589 to /127.0.0.1
[junit] 07/08/30 17:02:40 INFO dfs.DataNode: Served block blk_-3298125500550375589 to /127.0.0.1
[junit] 07/08/30 17:02:40 INFO dfs.DataNode: Served block blk_-3298125500550375589 to /127.0.0.1
[junit] 07/08/30 17:02:40 INFO dfs.DataNode: Served block blk_3979822069249389781 to /127.0.0.1
[junit] 07/08/30 17:02:40 INFO dfs.DataNode: Served block blk_-3298125500550375589 to /127.0.0.1
[junit] 07/08/30 17:02:40 INFO dfs.DataNode: Served block blk_3979822069249389781 to /127.0.0.1
[junit] 07/08/30 17:02:40 INFO dfs.DataNode: Served block blk_-3298125500550375589 to /127.0.0.1
[junit] 07/08/30 17:02:40 INFO dfs.DataNode: Served block blk_3979822069249389781 to /127.0.0.1
[junit] 07/08/30 17:02:40 INFO hbase.MiniHBaseCluster: Shutting down HBase Cluster
[junit] 07/08/30 17:02:40 INFO hbase.HMaster: RootScanner exiting
[junit] 07/08/30 17:02:40 INFO hbase.HMaster: Waiting on following regionserver(s) to go down (or region server lease expiration, whichever happens first): [address: 140.211.11.75:39142, startcode: 7030247354054854522, load: (requests: 19 regions: 3)]
[junit] 07/08/30 17:02:40 INFO hbase.Leases: closing leases
[junit] 07/08/30 17:02:40 INFO hbase.Leases: leases closed
[junit] 07/08/30 17:02:40 INFO ipc.Server: Stopping server on 39142
[junit] 07/08/30 17:02:40 INFO ipc.Server: IPC Server handler 2 on 39142: exiting
[junit] 07/08/30 17:02:40 INFO ipc.Server: IPC Server handler 1 on 39142: exiting
[junit] 07/08/30 17:02:40 INFO hbase.HRegionServer: cacheFlusher exiting
[junit] 07/08/30 17:02:40 INFO ipc.Server: IPC Server handler 0 on 39142: exiting
[junit] 07/08/30 17:02:40 INFO hbase.HRegionServer: logRoller exiting
[junit] 07/08/30 17:02:40 INFO ipc.Server: IPC Server handler 3 on 39142: exiting
[junit] 07/08/30 17:02:40 INFO ipc.Server: Stopping IPC Server listener on 39142
[junit] 07/08/30 17:02:40 INFO ipc.Server: IPC Server handler 4 on 39142: exiting
[junit] 07/08/30 17:02:40 INFO hbase.HRegion: closed -ROOT-,,0
[junit] 07/08/30 17:02:40 INFO hbase.HRegionServer: splitOrCompactChecker exiting
[junit] 07/08/30 17:02:40 INFO hbase.HRegion: closed .META.,,1
[junit] 07/08/30 17:02:40 INFO hbase.HRegion: closed test,,-7595141907404260838
[junit] 07/08/30 17:02:40 INFO dfs.StateChange: BLOCK* NameSystem.allocateBlock: /hbase/log_140.211.11.75_39142/hlog.dat.000. blk_-3597449902409858080 is created and added to pendingCreates and pendingCreateBlocks
[junit] 07/08/30 17:02:40 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50010 is added to blk_-3597449902409858080
[junit] 07/08/30 17:02:40 INFO dfs.DataNode: Received block blk_-3597449902409858080 from /127.0.0.1
[junit] 07/08/30 17:02:40 INFO dfs.DataNode: Received block blk_-3597449902409858080 from /127.0.0.1 and mirrored to /127.0.0.1:50011
[junit] 07/08/30 17:02:40 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50011 is added to blk_-3597449902409858080
[junit] 07/08/30 17:02:40 INFO dfs.StateChange: BLOCK* NameSystem.delete: blk_-3597449902409858080 is added to invalidSet of 127.0.0.1:50010
[junit] 07/08/30 17:02:40 INFO dfs.StateChange: BLOCK* NameSystem.delete: blk_-3597449902409858080 is added to invalidSet of 127.0.0.1:50011
[junit] 07/08/30 17:02:40 INFO hbase.HRegionServer: stopping server at: 140.211.11.75:39142
[junit] 07/08/30 17:02:41 INFO hbase.HMaster: Waiting on following regionserver(s) to go down (or region server lease expiration, whichever happens first): [address: 140.211.11.75:39142, startcode: 7030247354054854522, load: (requests: 19 regions: 3)]
[junit] 07/08/30 17:02:41 INFO hbase.HRegionServer: worker thread exiting
[junit] 07/08/30 17:02:41 INFO hbase.HRegionServer: HRegionServer stopped at: 140.211.11.75:39142
[junit] 07/08/30 17:02:41 INFO hbase.HRegionServer: main thread exiting
[junit] 07/08/30 17:02:42 INFO hbase.HMaster: all meta regions scanned
[junit] 07/08/30 17:02:42 INFO hbase.HMaster: MetaScanner exiting
[junit] 07/08/30 17:02:42 INFO hbase.HMaster: Waiting on following regionserver(s) to go down (or region server lease expiration, whichever happens first): [address: 140.211.11.75:39142, startcode: 7030247354054854522, load: (requests: 19 regions: 3)]
[junit] 07/08/30 17:02:43 INFO dfs.StateChange: BLOCK* NameSystem.blockToInvalidate: ask 127.0.0.1:50010 to delete blk_-3597449902409858080
[junit] 07/08/30 17:02:43 INFO dfs.StateChange: BLOCK* NameSystem.blockToInvalidate: ask 127.0.0.1:50011 to delete blk_-3597449902409858080
[junit] 07/08/30 17:02:43 INFO hbase.HMaster: Waiting on following regionserver(s) to go down (or region server lease expiration, whichever happens first): [address: 140.211.11.75:39142, startcode: 7030247354054854522, load: (requests: 19 regions: 3)]
[junit] 07/08/30 17:02:44 INFO hbase.HMaster: Waiting on following regionserver(s) to go down (or region server lease expiration, whichever happens first): [address: 140.211.11.75:39142, startcode: 7030247354054854522, load: (requests: 19 regions: 3)]
[junit] 07/08/30 17:02:45 INFO dfs.DataNode: Deleting block blk_-3597449902409858080 file http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/contrib/hbase/test/data/dfs/data/data2/current/blk_-3597449902409858080
[junit] 07/08/30 17:02:45 INFO hbase.HMaster: Waiting on following regionserver(s) to go down (or region server lease expiration, whichever happens first): [address: 140.211.11.75:39142, startcode: 7030247354054854522, load: (requests: 19 regions: 3)]
[junit] 07/08/30 17:02:46 INFO dfs.DataNode: Deleting block blk_-3597449902409858080 file http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/contrib/hbase/test/data/dfs/data/data4/current/blk_-3597449902409858080
[junit] 07/08/30 17:02:47 INFO hbase.HMaster: Waiting on following regionserver(s) to go down (or region server lease expiration, whichever happens first): [address: 140.211.11.75:39142, startcode: 7030247354054854522, load: (requests: 19 regions: 3)]
[junit] 07/08/30 17:02:47 INFO hbase.Leases: Lease expired 1437242142/1437242142
[junit] 07/08/30 17:02:47 INFO hbase.HMaster: 140.211.11.75:39142 lease expired
[junit] 07/08/30 17:02:47 INFO ipc.Server: Stopping server on 39141
[junit] 07/08/30 17:02:47 INFO ipc.Server: IPC Server handler 3 on 39141: exiting
[junit] 07/08/30 17:02:47 INFO ipc.Server: Stopping IPC Server listener on 39141
[junit] 07/08/30 17:02:47 INFO ipc.Server: IPC Server handler 2 on 39141: exiting
[junit] 07/08/30 17:02:47 INFO hbase.Leases: closing leases
[junit] 07/08/30 17:02:47 INFO ipc.Server: IPC Server handler 1 on 39141: exiting
[junit] 07/08/30 17:02:47 INFO ipc.Server: IPC Server handler 4 on 39141: exiting
[junit] 07/08/30 17:02:47 INFO ipc.Server: IPC Server handler 0 on 39141: exiting
[junit] 07/08/30 17:02:47 WARN hbase.HMaster: MsgQueue.put was interrupted (If we are exiting, this msg can be ignored
[junit] 07/08/30 17:02:50 INFO hbase.Leases: leases closed
[junit] 07/08/30 17:02:50 INFO hbase.HMaster: HMaster main thread exiting
[junit] 07/08/30 17:02:50 INFO hbase.MiniHBaseCluster: Shutdown HMaster 1 region server(s)
[junit] 07/08/30 17:02:50 INFO hbase.MiniHBaseCluster: Shutting down Mini DFS cluster
[junit] Shutting down the Mini HDFS Cluster
[junit] Shutting down DataNode 1
[junit] 07/08/30 17:02:50 INFO util.ThreadedServer: Stopping Acceptor ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=39117]
[junit] 07/08/30 17:02:50 INFO http.SocketListener: Stopped SocketListener on 0.0.0.0:39117
[junit] 07/08/30 17:02:50 INFO util.Container: Stopped org.mortbay.jetty.servlet.WebApplicationHandler@1fcc0a2
[junit] 07/08/30 17:02:50 INFO util.Container: Stopped WebApplicationContext[/,/]
[junit] 07/08/30 17:02:51 INFO util.Container: Stopped HttpContext[/logs,/logs]
[junit] 07/08/30 17:02:51 INFO util.Container: Stopped HttpContext[/static,/static]
[junit] 07/08/30 17:02:51 INFO util.Container: Stopped org.mortbay.jetty.Server@d0220c
[junit] 07/08/30 17:02:51 INFO dfs.DataNode: Exiting DataXceiveServer due to java.net.SocketException: Socket closed
[junit] 07/08/30 17:02:51 INFO dfs.DataNode: Finishing DataNode in: FSDataset{dirpath='http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/contrib/hbase/test/data/dfs/data/data3/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data4/current'}
[junit] Shutting down DataNode 0
[junit] 07/08/30 17:02:51 INFO util.ThreadedServer: Stopping Acceptor ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=39115]
[junit] 07/08/30 17:02:51 INFO http.SocketListener: Stopped SocketListener on 0.0.0.0:39115
[junit] 07/08/30 17:02:51 INFO util.Container: Stopped org.mortbay.jetty.servlet.WebApplicationHandler@131c89c
[junit] 07/08/30 17:02:51 INFO util.Container: Stopped WebApplicationContext[/,/]
[junit] 07/08/30 17:02:51 INFO util.Container: Stopped HttpContext[/logs,/logs]
[junit] 07/08/30 17:02:51 INFO util.Container: Stopped HttpContext[/static,/static]
[junit] 07/08/30 17:02:51 INFO util.Container: Stopped org.mortbay.jetty.Server@1a8773c
[junit] 07/08/30 17:02:51 INFO dfs.DataNode: Exiting DataXceiveServer due to java.net.SocketException: Socket closed
[junit] 07/08/30 17:02:51 WARN fs.FSNamesystem: PendingReplicationMonitor thread received exception. java.lang.InterruptedException: sleep interrupted
[junit] 07/08/30 17:02:51 INFO util.ThreadedServer: Stopping Acceptor ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=39110]
[junit] 07/08/30 17:02:51 INFO http.SocketListener: Stopped SocketListener on 0.0.0.0:39110
[junit] 07/08/30 17:02:51 INFO util.Container: Stopped org.mortbay.jetty.servlet.WebApplicationHandler@1bdc9d8
[junit] 07/08/30 17:02:51 INFO util.Container: Stopped WebApplicationContext[/,/]
[junit] 07/08/30 17:02:51 INFO util.Container: Stopped HttpContext[/logs,/logs]
[junit] 07/08/30 17:02:51 INFO util.Container: Stopped HttpContext[/static,/static]
[junit] 07/08/30 17:02:51 INFO util.Container: Stopped org.mortbay.jetty.Server@1c80b01
[junit] 07/08/30 17:02:51 INFO ipc.Server: Stopping server on 39106
[junit] 07/08/30 17:02:51 INFO ipc.Server: IPC Server handler 2 on 39106: exiting
[junit] 07/08/30 17:02:51 INFO ipc.Server: IPC Server handler 3 on 39106: exiting
[junit] 07/08/30 17:02:51 INFO ipc.Server: IPC Server handler 4 on 39106: exiting
[junit] 07/08/30 17:02:51 INFO ipc.Server: IPC Server handler 8 on 39106: exiting
[junit] 07/08/30 17:02:51 INFO ipc.Server: IPC Server handler 5 on 39106: exiting
[junit] 07/08/30 17:02:51 INFO ipc.Server: IPC Server handler 0 on 39106: exiting
[junit] 07/08/30 17:02:51 INFO ipc.Server: IPC Server handler 1 on 39106: exiting
[junit] 07/08/30 17:02:51 INFO ipc.Server: IPC Server handler 6 on 39106: exiting
[junit] 07/08/30 17:02:51 INFO ipc.Server: Stopping IPC Server listener on 39106
[junit] 07/08/30 17:02:51 INFO ipc.Server: IPC Server handler 7 on 39106: exiting
[junit] 07/08/30 17:02:51 INFO ipc.Server: IPC Server handler 9 on 39106: exiting
[junit] 07/08/30 17:02:51 WARN dfs.DataNode: java.io.IOException: du: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/contrib/hbase/test/data/dfs/data/data1 : No such file or directory
[junit] at org.apache.hadoop.fs.Command.run(Command.java:33)
[junit] at org.apache.hadoop.fs.DU.getUsed(DU.java:56)
[junit] at org.apache.hadoop.dfs.FSDataset$FSVolume.getDfsUsed(FSDataset.java:299)
[junit] at org.apache.hadoop.dfs.FSDataset$FSVolume.getAvailable(FSDataset.java:307)
[junit] at org.apache.hadoop.dfs.FSDataset$FSVolumeSet.getRemaining(FSDataset.java:412)
[junit] at org.apache.hadoop.dfs.FSDataset.getRemaining(FSDataset.java:505)
[junit] at org.apache.hadoop.dfs.DataNode.offerService(DataNode.java:485)
[junit] at org.apache.hadoop.dfs.DataNode.run(DataNode.java:1310)
[junit] at java.lang.Thread.run(Thread.java:595)
[junit] 07/08/30 17:02:51 INFO dfs.DataNode: Finishing DataNode in: FSDataset{dirpath='http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build/contrib/hbase/test/data/dfs/data/data1/current,/export/home/hudson/hudson/jobs/Hadoop-Nightly/workspace/trunk/build/contrib/hbase/test/data/dfs/data/data2/current'}
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 30.391 sec
[junit] Running org.apache.hadoop.hbase.TestToString
[junit] name: hank, families: {hankfamily:=(hankfamily:, max versions: 3, compression: NONE, in memory: false, max value length: 2147483647, bloom filter: none), hankotherfamily:=(hankotherfamily:, max versions: 10, compression: BLOCK, in memory: true, max value length: 1000, bloom filter: none)}
[junit] regionname: hank,,-1, startKey: <>, tableDesc: {name: hank, families: {hankfamily:=(hankfamily:, max versions: 3, compression: NONE, in memory: false, max value length: 2147483647, bloom filter: none), hankotherfamily:=(hankotherfamily:, max versions: 10, compression: BLOCK, in memory: true, max value length: 1000, bloom filter: none)}}
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.402 sec
[junit] Running org.apache.hadoop.hbase.filter.TestPageRowFilter
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.277 sec
[junit] Running org.apache.hadoop.hbase.filter.TestRegExpRowFilter
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.467 sec
[junit] Running org.apache.hadoop.hbase.filter.TestRowFilterSet
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.412 sec
[junit] Running org.apache.hadoop.hbase.filter.TestStopRowFilter
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.044 sec
[junit] Running org.apache.hadoop.hbase.filter.TestWhileMatchRowFilter
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.638 sec
[junit] Running org.apache.hadoop.hbase.shell.TestHBaseShell
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.366 sec
[junit] Running org.apache.hadoop.hbase.util.TestKeying
[junit] Original url http://abc:bcd@www.example.com/index.html?query=something#middle, Transformed url r:http://abc:bcd@com.example.www/index.html?query=something#middle
[junit] Original url file:///usr/bin/java, Transformed url file:///usr/bin/java
[junit] Original url dns:www.powerset.com, Transformed url dns:www.powerset.com
[junit] Original url dns://dns.powerset.com/www.powerset.com, Transformed url r:dns://com.powerset.dns/www.powerset.com
[junit] Original url http://one.two.three/index.html, Transformed url r:http://three.two.one/index.html
[junit] Original url https://one.two.three:9443/index.html, Transformed url r:https://three.two.one:9443/index.html
[junit] Original url ftp://one.two.three/index.html, Transformed url r:ftp://three.two.one/index.html
[junit] Original url filename, Transformed url filename
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.07 sec
[junit] Running org.onelab.test.TestFilter
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.362 sec

BUILD FAILED
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/build.xml:506: The following error occurred while executing this line:
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/src/contrib/build.xml:23: The following error occurred while executing this line:
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/src/contrib/hbase/build.xml:101: The following error occurred while executing this line:
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/ws/trunk/src/contrib/build-contrib.xml:205: Tests failed!

Total time: 299 minutes 43 seconds
Recording fingerprints
Publishing Javadoc
Recording test results