Date: Tue, 16 Jun 2009 08:56:10 +0000 (UTC)
From: Apache Hudson Server
To: hbase-dev@hadoop.apache.org
Subject: Build failed in Hudson: HBase-Patch #643

See http://hudson.zones.apache.org/hudson/job/HBase-Patch/643/changes
Changes:

[rawson] fix build from 1528

[stack] HBASE-1447 Take last version of the hbase-1249 design doc. and make documentation out of it

[stack] HBASE-1528 Ensure scanners work across memcache snapshot

[apurtell] HBASE-1529 familyMap not invalidated when Result is (re)read as a Writable

------------------------------------------
[...truncated 23358 lines...]
    [junit] 
    [junit] 2009-06-16 09:17:51,624 WARN  [org.apache.hadoop.hdfs.server.namenode.FSNamesystem$ReplicationMonitor@e4e358] namenode.FSNamesystem$ReplicationMonitor(2306): ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2009-06-16 09:17:52.590::INFO:  jetty-6.1.14
    [junit] 2009-06-16 09:17:52.597::INFO:  Extract jar:http://hudson.zones.apache.org/hudson/job/HBase-Patch/ws/trunk/lib/hadoop-0.20.0-plus4681-core.jar!/webapps/hdfs to /tmp/Jetty_localhost_37254_hdfs____b5oavr/webapp
    [junit] 2009-06-16 09:17:52.769::INFO:  Started SelectChannelConnector@localhost:37254
    [junit] Starting DataNode 0 with dfs.data.dir: http://hudson.zones.apache.org/hudson/job/HBase-Patch/ws/trunk/build/test/data/dfs/data/data1,/home/hudson/hudson-slave/workspace/HBase-Patch/trunk/build/test/data/dfs/data/data2
    [junit] 2009-06-16 09:17:53.273::INFO:  jetty-6.1.14
    [junit] 2009-06-16 09:17:53.280::INFO:  Extract jar:http://hudson.zones.apache.org/hudson/job/HBase-Patch/ws/trunk/lib/hadoop-0.20.0-plus4681-core.jar!/webapps/datanode to /tmp/Jetty_localhost_40513_datanode____.r6gmm6/webapp
    [junit] 2009-06-16 09:17:53.449::INFO:  Started SelectChannelConnector@localhost:40513
    [junit] Starting DataNode 1 with dfs.data.dir: http://hudson.zones.apache.org/hudson/job/HBase-Patch/ws/trunk/build/test/data/dfs/data/data3,/home/hudson/hudson-slave/workspace/HBase-Patch/trunk/build/test/data/dfs/data/data4
    [junit] 2009-06-16 09:17:53.968::INFO:  jetty-6.1.14
    [junit] 2009-06-16 09:17:53.975::INFO:  Extract jar:http://hudson.zones.apache.org/hudson/job/HBase-Patch/ws/trunk/lib/hadoop-0.20.0-plus4681-core.jar!/webapps/datanode to /tmp/Jetty_localhost_47721_datanode____.z2opqe/webapp
    [junit] 2009-06-16 09:17:54.146::INFO:  Started SelectChannelConnector@localhost:47721
    [junit] 2009-06-16 09:17:54,305 INFO  [main] regionserver.HLog(209): HLog configuration: blocksize=67108864, rollsize=63753420, enabled=true, flushlogentries=100, optionallogflushinternal=10000ms
    [junit] 2009-06-16 09:17:54,314 INFO  [main] regionserver.HLog(299): New hlog /user/hudson/testscanner/116352605/.logs/hlog.dat.1245143874306
    [junit] 2009-06-16 09:17:54,315 DEBUG [main] regionserver.HRegion(264): Opening region testscanner,,1245143874287, encoded=116352605
    [junit] 2009-06-16 09:17:54,386 INFO  [main] regionserver.HRegion(339): region testscanner,,1245143874287/116352605 available; sequence id is 0
    [junit] 2009-06-16 09:17:54,870 INFO  [main] regionserver.TestScanner(370): Added: 17576
    [junit] 2009-06-16 09:17:54,870 INFO  [main] regionserver.TestScanner(419): Taking out counting scan
    [junit] 2009-06-16 09:17:55,769 INFO  [main] regionserver.TestScanner(453): Found 17575 items
    [junit] 2009-06-16 09:17:55,770 INFO  [main] regionserver.TestScanner(419): Taking out counting scan
    [junit] 2009-06-16 09:17:55,773 INFO  [main] regionserver.TestScanner(432): Starting flush at flush index 100
    [junit] 2009-06-16 09:17:55,774 DEBUG [main] regionserver.HRegion(884): Started memcache flush for region testscanner,,1245143874287. Current region memcache size 2.6m
    [junit] 2009-06-16 09:17:55,906 DEBUG [main] regionserver.Store(526): Added hdfs://localhost:55943/user/hudson/testscanner/116352605/info/8996169548985904470, entries=17576, sequenceid=17577, memsize=2.6m, filesize=691.0k to testscanner,,1245143874287
    [junit] 2009-06-16 09:17:55,907 DEBUG [main] regionserver.HRegion(961): Finished memcache flush of ~2.6m for region testscanner,,1245143874287 in 133ms, sequence id=17577, compaction requested=false
    [junit] 2009-06-16 09:17:55,908 INFO  [main] regionserver.TestScanner$1(437): Finishing flush
    [junit] 2009-06-16 09:17:55,908 INFO  [main] regionserver.TestScanner(448): Continuing on after kicking off background flush
    [junit] 2009-06-16 09:17:55,908 INFO  [main] regionserver.TestScanner(427): after next() just after next flush
    [junit] 2009-06-16 09:17:56,601 INFO  [main] regionserver.TestScanner(453): Found 17575 items
    [junit] 2009-06-16 09:17:56,602 DEBUG [main] regionserver.HRegion(436): Closing testscanner,,1245143874287: compactions & flushes disabled
    [junit] 2009-06-16 09:17:56,602 DEBUG [main] regionserver.HRegion(466): Updates disabled for region, no outstanding scanners on testscanner,,1245143874287
    [junit] 2009-06-16 09:17:56,602 DEBUG [main] regionserver.HRegion(473): No more row locks outstanding on region testscanner,,1245143874287
    [junit] 2009-06-16 09:17:56,602 DEBUG [main] regionserver.Store(445): closed info
    [junit] 2009-06-16 09:17:56,603 INFO  [main] regionserver.HRegion(485): Closed testscanner,,1245143874287
    [junit] 2009-06-16 09:17:56,603 DEBUG [main] regionserver.HLog(456): closing hlog writer in hdfs://localhost:55943/user/hudson/testscanner/116352605/.logs
    [junit] 2009-06-16 09:17:56,633 INFO  [main] hbase.HBaseTestCase(612): Shutting down FileSystem
    [junit] 2009-06-16 09:17:56,634 INFO  [main] hbase.HBaseTestCase(619): Shutting down Mini DFS
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 1
    [junit] 2009-06-16 09:17:56,741 WARN  [org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@1371566] datanode.DataXceiverServer(137): DatanodeRegistration(127.0.0.1:39764, storageID=DS-658804951-67.195.138.9-39764-1245143874152, infoPort=47721, ipcPort=36640):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] Shutting down DataNode 0
    [junit] 2009-06-16 09:17:57,846 WARN  [org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@fe135d] datanode.DataXceiverServer(137): DatanodeRegistration(127.0.0.1:40085, storageID=DS-217862166-67.195.138.9-40085-1245143873453, infoPort=40513, ipcPort=49222):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-06-16 09:17:58,049 WARN  [org.apache.hadoop.hdfs.server.namenode.FSNamesystem$ReplicationMonitor@1629e96] namenode.FSNamesystem$ReplicationMonitor(2306): ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2009-06-16 09:17:58.859::INFO:  jetty-6.1.14
    [junit] 2009-06-16 09:17:58.864::INFO:  Extract jar:http://hudson.zones.apache.org/hudson/job/HBase-Patch/ws/trunk/lib/hadoop-0.20.0-plus4681-core.jar!/webapps/hdfs to /tmp/Jetty_localhost_43597_hdfs____j10ezi/webapp
    [junit] 2009-06-16 09:17:59.033::INFO:  Started SelectChannelConnector@localhost:43597
    [junit] Starting DataNode 0 with dfs.data.dir: http://hudson.zones.apache.org/hudson/job/HBase-Patch/ws/trunk/build/test/data/dfs/data/data1,/home/hudson/hudson-slave/workspace/HBase-Patch/trunk/build/test/data/dfs/data/data2
    [junit] 2009-06-16 09:17:59.532::INFO:  jetty-6.1.14
    [junit] 2009-06-16 09:17:59.538::INFO:  Extract jar:http://hudson.zones.apache.org/hudson/job/HBase-Patch/ws/trunk/lib/hadoop-0.20.0-plus4681-core.jar!/webapps/datanode to /tmp/Jetty_localhost_47063_datanode____vdi8oh/webapp
    [junit] 2009-06-16 09:17:59.699::INFO:  Started SelectChannelConnector@localhost:47063
    [junit] Starting DataNode 1 with dfs.data.dir: http://hudson.zones.apache.org/hudson/job/HBase-Patch/ws/trunk/build/test/data/dfs/data/data3,/home/hudson/hudson-slave/workspace/HBase-Patch/trunk/build/test/data/dfs/data/data4
    [junit] 2009-06-16 09:18:00.250::INFO:  jetty-6.1.14
    [junit] 2009-06-16 09:18:00.256::INFO:  Extract jar:http://hudson.zones.apache.org/hudson/job/HBase-Patch/ws/trunk/lib/hadoop-0.20.0-plus4681-core.jar!/webapps/datanode to /tmp/Jetty_localhost_46165_datanode____.fzgiln/webapp
    [junit] 2009-06-16 09:18:00.419::INFO:  Started SelectChannelConnector@localhost:46165
    [junit] 2009-06-16 09:18:00,628 INFO  [main] regionserver.HLog(209): HLog configuration: blocksize=67108864, rollsize=63753420, enabled=true, flushlogentries=100, optionallogflushinternal=10000ms
    [junit] 2009-06-16 09:18:00,644 INFO  [main] regionserver.HLog(299): New hlog /user/hudson/testscanner/945683522/.logs/hlog.dat.1245143880628
    [junit] 2009-06-16 09:18:00,645 DEBUG [main] regionserver.HRegion(264): Opening region testscanner,,1245143880596, encoded=945683522
    [junit] 2009-06-16 09:18:00,688 INFO  [main] regionserver.HRegion(339): region testscanner,,1245143880596/945683522 available; sequence id is 0
    [junit] 2009-06-16 09:18:00,917 INFO  [main] regionserver.TestScanner(394): Added: 17576
    [junit] 2009-06-16 09:18:00,917 INFO  [main] regionserver.TestScanner(419): Taking out counting scan
    [junit] 2009-06-16 09:18:01,370 INFO  [main] regionserver.TestScanner(453): Found 17575 items
    [junit] 2009-06-16 09:18:01,371 INFO  [main] regionserver.TestScanner(419): Taking out counting scan
    [junit] 2009-06-16 09:18:01,374 INFO  [main] regionserver.TestScanner(432): Starting flush at flush index 100
    [junit] 2009-06-16 09:18:01,375 INFO  [main] regionserver.TestScanner(448): Continuing on after kicking off background flush
    [junit] 2009-06-16 09:18:01,375 DEBUG [Thread-334] regionserver.HRegion(884): Started memcache flush for region testscanner,,1245143880596. Current region memcache size 2.6m
    [junit] 2009-06-16 09:18:01,375 INFO  [main] regionserver.TestScanner(427): after next() just after next flush
    [junit] 2009-06-16 09:18:01,482 DEBUG [Thread-334] regionserver.Store(526): Added hdfs://localhost:58975/user/hudson/testscanner/945683522/info/7281789039702252371, entries=17576, sequenceid=17577, memsize=2.6m, filesize=691.0k to testscanner,,1245143880596
    [junit] 2009-06-16 09:18:01,485 DEBUG [Thread-334] regionserver.HRegion(961): Finished memcache flush of ~2.6m for region testscanner,,1245143880596 in 110ms, sequence id=17577, compaction requested=false
    [junit] 2009-06-16 09:18:01,486 INFO  [Thread-334] regionserver.TestScanner$1(437): Finishing flush
    [junit] 2009-06-16 09:18:01,971 INFO  [main] regionserver.TestScanner(453): Found 17575 items
    [junit] 2009-06-16 09:18:01,972 DEBUG [main] regionserver.HRegion(436): Closing testscanner,,1245143880596: compactions & flushes disabled
    [junit] 2009-06-16 09:18:01,972 DEBUG [main] regionserver.HRegion(466): Updates disabled for region, no outstanding scanners on testscanner,,1245143880596
    [junit] 2009-06-16 09:18:01,972 DEBUG [main] regionserver.HRegion(473): No more row locks outstanding on region testscanner,,1245143880596
    [junit] 2009-06-16 09:18:01,973 DEBUG [main] regionserver.Store(445): closed info
    [junit] 2009-06-16 09:18:01,973 INFO  [main] regionserver.HRegion(485): Closed testscanner,,1245143880596
    [junit] 2009-06-16 09:18:01,974 DEBUG [main] regionserver.HLog(456): closing hlog writer in hdfs://localhost:58975/user/hudson/testscanner/945683522/.logs
    [junit] 2009-06-16 09:18:02,007 INFO  [main] hbase.HBaseTestCase(612): Shutting down FileSystem
    [junit] 2009-06-16 09:18:02,008 INFO  [main] hbase.HBaseTestCase(619): Shutting down Mini DFS
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 1
    [junit] 2009-06-16 09:18:02,114 WARN  [org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@1c2e163] datanode.DataXceiverServer(137): DatanodeRegistration(127.0.0.1:58374, storageID=DS-1823329271-67.195.138.9-58374-1245143880422, infoPort=46165, ipcPort=34929):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] Shutting down DataNode 0
    [junit] 2009-06-16 09:18:03,225 WARN  [org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@1461b5b] datanode.DataXceiverServer(137): DatanodeRegistration(127.0.0.1:39234, storageID=DS-1455394431-67.195.138.9-39234-1245143879701, infoPort=47063, ipcPort=45611):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-06-16 09:18:04,452 WARN  [org.apache.hadoop.hdfs.server.namenode.FSNamesystem$ReplicationMonitor@b98a06] namenode.FSNamesystem$ReplicationMonitor(2306): ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 24.181 sec
    [junit] Running org.apache.hadoop.hbase.regionserver.TestStore
    [junit] 2009-06-16 09:18:05,248 DEBUG [main] regionserver.Store(526): Added test/build/data/TestStore/testGet_FromFilesOnly/907321421/family/3449560078394243732, entries=2, sequenceid=1245143884809, memsize=298.0, filesize=385.0 to table,,1245143885189
    [junit] 2009-06-16 09:18:05,273 DEBUG [main] regionserver.Store(526): Added test/build/data/TestStore/testGet_FromFilesOnly/907321421/family/2013856511298433683, entries=2, sequenceid=1245143884810, memsize=298.0, filesize=385.0 to table,,1245143885189
    [junit] 2009-06-16 09:18:05,287 DEBUG [main] regionserver.Store(526): Added test/build/data/TestStore/testGet_FromFilesOnly/907321421/family/3734350420027666692, entries=2, sequenceid=1245143884811, memsize=298.0, filesize=385.0 to table,,1245143885189
    [junit] 2009-06-16 09:18:05,359 DEBUG [main] regionserver.Store(526): Added test/build/data/TestStore/testGet_FromMemCacheAndFiles/1490861299/family/8503563494733908307, entries=2, sequenceid=1245143884809, memsize=298.0, filesize=385.0 to table,,1245143885346
    [junit] 2009-06-16 09:18:05,372 DEBUG [main] regionserver.Store(526): Added test/build/data/TestStore/testGet_FromMemCacheAndFiles/1490861299/family/2273667784886685843, entries=2, sequenceid=1245143884810, memsize=298.0, filesize=385.0 to table,,1245143885346
    [junit] 2009-06-16 09:18:05,537 DEBUG [main] regionserver.Store(526): Added test/build/data/TestStore/testIncrementColumnValue_UpdatingFromSF/278251907/family/5831305377021493877, entries=2, sequenceid=1245143884809, memsize=314.0, filesize=401.0 to table,,1245143885511
    [junit] 2009-06-16 09:18:05,586 DEBUG [main] regionserver.Store(526): Added test/build/data/TestStore/testIncrementColumnValue_AddingNewAfterSFCheck/1853976967/family/9082215623633883202, entries=2, sequenceid=1245143884809, memsize=314.0, filesize=401.0 to table,,1245143885571
    [junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.778 sec
    [junit] Running org.apache.hadoop.hbase.regionserver.TestStoreFile
    [junit] 2009-06-16 09:18:07.380::INFO:  Logging to STDERR via org.mortbay.log.StdErrLog
    [junit] 2009-06-16 09:18:07.428::INFO:  jetty-6.1.14
    [junit] 2009-06-16 09:18:07.459::INFO:  Extract jar:http://hudson.zones.apache.org/hudson/job/HBase-Patch/ws/trunk/lib/hadoop-0.20.0-plus4681-core.jar!/webapps/hdfs to /tmp/Jetty_localhost_56849_hdfs____tyb12e/webapp
    [junit] 2009-06-16 09:18:07.902::INFO:  Started SelectChannelConnector@localhost:56849
    [junit] Starting DataNode 0 with dfs.data.dir: http://hudson.zones.apache.org/hudson/job/HBase-Patch/ws/trunk/build/test/data/dfs/data/data1,/home/hudson/hudson-slave/workspace/HBase-Patch/trunk/build/test/data/dfs/data/data2
    [junit] 2009-06-16 09:18:08.463::INFO:  jetty-6.1.14
    [junit] 2009-06-16 09:18:08.469::INFO:  Extract jar:http://hudson.zones.apache.org/hudson/job/HBase-Patch/ws/trunk/lib/hadoop-0.20.0-plus4681-core.jar!/webapps/datanode to /tmp/Jetty_localhost_40901_datanode____jl0r1h/webapp
    [junit] 2009-06-16 09:18:08.697::INFO:  Started SelectChannelConnector@localhost:40901
    [junit] Starting DataNode 1 with dfs.data.dir: http://hudson.zones.apache.org/hudson/job/HBase-Patch/ws/trunk/build/test/data/dfs/data/data3,/home/hudson/hudson-slave/workspace/HBase-Patch/trunk/build/test/data/dfs/data/data4
    [junit] 2009-06-16 09:18:09.262::INFO:  jetty-6.1.14
    [junit] 2009-06-16 09:18:09.269::INFO:  Extract jar:http://hudson.zones.apache.org/hudson/job/HBase-Patch/ws/trunk/lib/hadoop-0.20.0-plus4681-core.jar!/webapps/datanode to /tmp/Jetty_localhost_42952_datanode____r2jqrj/webapp
    [junit] 2009-06-16 09:18:09.478::INFO:  Started SelectChannelConnector@localhost:42952
    [junit] 2009-06-16 09:18:09,982 INFO  [main] regionserver.TestStoreFile(164): Midkey: \x00\x02mi\x14testBasicHalfMapFiletestBasicHalfMapFile\x00\x00\x01!?[?
    [junit] 2009-06-16 09:18:09,992 INFO  [main] regionserver.TestStoreFile(183): First in top: \x00\x02aa\x14testBasicHalfMapFiletestBasicHalfMapFile\x00\x00\x01!?[?
    [junit] 2009-06-16 09:18:10,000 INFO  [main] regionserver.TestStoreFile(186): Last in top: \x00\x02zz\x14testBasicHalfMapFiletestBasicHalfMapFile\x00\x00\x01!?[?
    [junit] 2009-06-16 09:18:10,221 INFO  [main] regionserver.TestStoreFile(237): First top when key < bottom: /aa/1473914524603146611
    [junit] 2009-06-16 09:18:10,228 INFO  [main] regionserver.TestStoreFile(245): Last top when key < bottom: /zz/1473914524603146611
    [junit] 2009-06-16 09:18:10,611 INFO  [main] regionserver.TestStoreFile(269): First bottom when key > top: /aa/1473914524603146611
    [junit] 2009-06-16 09:18:10,618 INFO  [main] regionserver.TestStoreFile(277): Last bottom when key > top: /zz/1473914524603146611
    [junit] 2009-06-16 09:18:10,625 INFO  [main] hbase.HBaseTestCase(612): Shutting down FileSystem
    [junit] 2009-06-16 09:18:10,626 INFO  [main] hbase.HBaseTestCase(619): Shutting down Mini DFS
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 1
    [junit] 2009-06-16 09:18:10,728 WARN  [org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@3a0ab1] datanode.DataXceiverServer(137): DatanodeRegistration(127.0.0.1:39316, storageID=DS-196895930-67.195.138.9-39316-1245143889482, infoPort=42952, ipcPort=45447):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] Shutting down DataNode 0
    [junit] 2009-06-16 09:18:11,831 WARN  [org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@1a80aea] datanode.DataXceiverServer(137): DatanodeRegistration(127.0.0.1:54498, storageID=DS-1482787314-67.195.138.9-54498-1245143888702, infoPort=40901, ipcPort=60689):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-06-16 09:18:12,033 WARN  [org.apache.hadoop.hdfs.server.namenode.FSNamesystem$ReplicationMonitor@1cd66ea] namenode.FSNamesystem$ReplicationMonitor(2306): ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 2009-06-16 09:18:12.905::INFO:  jetty-6.1.14
    [junit] 2009-06-16 09:18:12.913::INFO:  Extract jar:http://hudson.zones.apache.org/hudson/job/HBase-Patch/ws/trunk/lib/hadoop-0.20.0-plus4681-core.jar!/webapps/hdfs to /tmp/Jetty_localhost_40819_hdfs____vqvfgu/webapp
    [junit] 2009-06-16 09:18:13.110::INFO:  Started SelectChannelConnector@localhost:40819
    [junit] Starting DataNode 0 with dfs.data.dir: http://hudson.zones.apache.org/hudson/job/HBase-Patch/ws/trunk/build/test/data/dfs/data/data1,/home/hudson/hudson-slave/workspace/HBase-Patch/trunk/build/test/data/dfs/data/data2
    [junit] 2009-06-16 09:18:13.642::INFO:  jetty-6.1.14
    [junit] 2009-06-16 09:18:13.650::INFO:  Extract jar:http://hudson.zones.apache.org/hudson/job/HBase-Patch/ws/trunk/lib/hadoop-0.20.0-plus4681-core.jar!/webapps/datanode to /tmp/Jetty_localhost_45465_datanode____.3rxumf/webapp
    [junit] 2009-06-16 09:18:13.828::INFO:  Started SelectChannelConnector@localhost:45465
    [junit] Starting DataNode 1 with dfs.data.dir: http://hudson.zones.apache.org/hudson/job/HBase-Patch/ws/trunk/build/test/data/dfs/data/data3,/home/hudson/hudson-slave/workspace/HBase-Patch/trunk/build/test/data/dfs/data/data4
    [junit] 2009-06-16 09:18:14.348::INFO:  jetty-6.1.14
    [junit] 2009-06-16 09:18:14.356::INFO:  Extract jar:http://hudson.zones.apache.org/hudson/job/HBase-Patch/ws/trunk/lib/hadoop-0.20.0-plus4681-core.jar!/webapps/datanode to /tmp/Jetty_localhost_46621_datanode____jgtsii/webapp
    [junit] 2009-06-16 09:18:14.534::INFO:  Started SelectChannelConnector@localhost:46621
    [junit] 2009-06-16 09:18:14,988 INFO  [main] hbase.HBaseTestCase(612): Shutting down FileSystem
    [junit] 2009-06-16 09:18:14,990 INFO  [main] hbase.HBaseTestCase(619): Shutting down Mini DFS
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 1
    [junit] 2009-06-16 09:18:15,097 WARN  [org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@1ca5df9] datanode.DataXceiverServer(137): DatanodeRegistration(127.0.0.1:54080, storageID=DS-1670435711-67.195.138.9-54080-1245143894540, infoPort=46621, ipcPort=52579):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] Shutting down DataNode 0
    [junit] 2009-06-16 09:18:16,204 WARN  [org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@b8d09d] datanode.DataXceiverServer(137): DatanodeRegistration(127.0.0.1:51165, storageID=DS-2095719225-67.195.138.9-51165-1245143893832, infoPort=45465, ipcPort=56132):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-06-16 09:18:17,306 WARN  [org.apache.hadoop.hdfs.server.namenode.FSNamesystem$ReplicationMonitor@1a32ea4] namenode.FSNamesystem$ReplicationMonitor(2306): ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 11.12 sec
    [junit] Running org.apache.hadoop.hbase.regionserver.TestStoreScanner
    [junit] Tests run: 14, Failures: 0, Errors: 0, Time elapsed: 0.071 sec
    [junit] Running org.apache.hadoop.hbase.regionserver.TestWildcardColumnTracker
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.312 sec
    [junit] Running org.apache.hadoop.hbase.util.TestBase64
    [junit] 
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.138 sec
    [junit] Running org.apache.hadoop.hbase.util.TestBytes
    [junit] AAA
    [junit] CCC
    [junit] EEE
    [junit] AAA
    [junit] BBB
    [junit] CCC
    [junit] DDD
    [junit] http://A
    [junit] http://]
    [junit] http://z
    [junit] Tests run: 7, Failures: 0, Errors: 0, Time elapsed: 0.095 sec
    [junit] Running org.apache.hadoop.hbase.util.TestKeying
    [junit] Original url http://abc:bcd@www.example.com/index.html?query=something#middle, Transformed url r:http://abc:bcd@com.example.www/index.html?query=something#middle
    [junit] Original url file:///usr/bin/java, Transformed url file:///usr/bin/java
    [junit] Original url dns:www.powerset.com, Transformed url dns:www.powerset.com
    [junit] Original url dns://dns.powerset.com/www.powerset.com, Transformed url r:dns://com.powerset.dns/www.powerset.com
    [junit] Original url http://one.two.three/index.html, Transformed url r:http://three.two.one/index.html
    [junit] Original url https://one.two.three:9443/index.html, Transformed url r:https://three.two.one:9443/index.html
    [junit] Original url ftp://one.two.three/index.html, Transformed url r:ftp://three.two.one/index.html
    [junit] Original url filename, Transformed url filename
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.081 sec
    [junit] Running org.apache.hadoop.hbase.util.TestRootPath
    [junit] 2009-06-16 09:18:20,114 INFO  [main] util.TestRootPath(60): Got expected exception when checking invalid path:
    [junit] java.io.IOException: Root directory does not contain a scheme
    [junit] 	at org.apache.hadoop.hbase.util.FSUtils.validateRootPath(FSUtils.java:212)
    [junit] 	at org.apache.hadoop.hbase.util.TestRootPath.testRootPath(TestRootPath.java:56)
    [junit] 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at junit.framework.TestCase.runTest(TestCase.java:154)
    [junit] 	at junit.framework.TestCase.runBare(TestCase.java:127)
    [junit] 	at junit.framework.TestResult$1.protect(TestResult.java:106)
    [junit] 	at junit.framework.TestResult.runProtected(TestResult.java:124)
    [junit] 	at junit.framework.TestResult.run(TestResult.java:109)
    [junit] 	at junit.framework.TestCase.run(TestCase.java:118)
    [junit] 	at junit.framework.TestSuite.runTest(TestSuite.java:208)
    [junit] 	at junit.framework.TestSuite.run(TestSuite.java:203)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:421)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:912)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:766)
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.072 sec

BUILD FAILED
http://hudson.zones.apache.org/hudson/job/HBase-Patch/ws/trunk/build.xml:460: Tests failed!

Total time: 34 minutes 44 seconds
[locks-and-latches] Releasing all the locks
[locks-and-latches] All the locks released
Recording test results
Publishing Clover coverage report...