hbase-user mailing list archives

From: Zheng Lv <lvzheng19800...@gmail.com>
Subject: Re: Region servers down when inserting with hbase0.20.0 rc
Date: Fri, 14 Aug 2009 09:27:55 GMT
Hello,
    Thank you for your suggestions.
    Several days ago we found that our routing table had some problems; after
adjusting it, we are now sure the bandwidth is OK.
    We have also enabled LZO compression.
    So we started the test program again, but after running normally for 23
hours the master killed itself. Part of the log follows.
    By the way, this time we inserted only 10 web pages per second.

2009-08-14 13:36:31,840 INFO org.apache.hadoop.hbase.master.ServerManager: 4
region servers, 0 dead, average load 48.75
2009-08-14 13:36:32,016 INFO org.apache.hadoop.hbase.master.BaseScanner: RegionManager.metaScanner scanning meta region {server: 192.168.33.5:60020, regionname: .META.,,1, startKey: <>}
2009-08-14 13:36:32,076 INFO org.apache.hadoop.hbase.master.BaseScanner: RegionManager.rootScanner scanning meta region {server: 192.168.33.6:60020, regionname: -ROOT-,,0, startKey: <>}
2009-08-14 13:36:32,084 INFO org.apache.hadoop.hbase.master.BaseScanner: RegionManager.rootScanner scan of 1 row(s) of meta region {server: 192.168.33.6:60020, regionname: -ROOT-,,0, startKey: <>} complete
2009-08-14 13:36:32,316 INFO org.apache.hadoop.hbase.master.BaseScanner: RegionManager.metaScanner scan of 193 row(s) of meta region {server: 192.168.33.5:60020, regionname: .META.,,1, startKey: <>} complete
2009-08-14 13:36:32,316 INFO org.apache.hadoop.hbase.master.BaseScanner: All 1 .META. region(s) scanned
2009-08-14 13:37:00,366 WARN org.apache.zookeeper.ClientCnxn: Exception
closing session 0x22313002be80001 to sun.nio.ch.SelectionKeyImpl@4a407c9f
java.io.IOException: Read error rc = -1 java.nio.DirectByteBuffer[pos=0
lim=4 cap=4]
        at
org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:653)
        at
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:897)
2009-08-14 13:37:00,881 INFO org.apache.zookeeper.ClientCnxn: Attempting
connection to server ubuntu3/192.168.33.8:2222
2009-08-14 13:37:04,366 WARN org.apache.zookeeper.ClientCnxn: Exception
closing session 0x22313002be80000 to sun.nio.ch.SelectionKeyImpl@4ac6ee33
java.io.IOException: Read error rc = -1 java.nio.DirectByteBuffer[pos=0
lim=4 cap=4]
        at
org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:653)
        at
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:897)
2009-08-14 13:37:04,721 INFO org.apache.zookeeper.ClientCnxn: Attempting
connection to server ubuntu2/192.168.33.9:2222
2009-08-14 13:37:08,872 WARN org.apache.zookeeper.ClientCnxn: Exception
closing session 0x22313002be80001 to sun.nio.ch.SelectionKeyImpl@2e93ebe0
java.io.IOException: TIMED OUT
        at
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:858)
2009-08-14 13:37:08,873 WARN org.apache.zookeeper.ClientCnxn: Ignoring
exception during shutdown output
java.net.SocketException: Transport endpoint is not connected
        at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
        at
sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:651)
        at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
        at
org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:956)
        at
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:922)
2009-08-14 13:37:09,486 INFO org.apache.zookeeper.ClientCnxn: Attempting
connection to server ubuntu2/192.168.33.9:2222
2009-08-14 13:37:12,712 WARN org.apache.zookeeper.ClientCnxn: Exception
closing session 0x22313002be80000 to sun.nio.ch.SelectionKeyImpl@7162d703
java.io.IOException: TIMED OUT
        at
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:858)
2009-08-14 13:37:12,713 WARN org.apache.zookeeper.ClientCnxn: Ignoring
exception during shutdown output
java.net.SocketException: Transport endpoint is not connected
        at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
        at
sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:651)
        at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
        at
org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:956)
        at
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:922)
2009-08-14 13:37:13,032 INFO org.apache.zookeeper.ClientCnxn: Attempting
connection to server ubuntu3/192.168.33.8:2222
2009-08-14 13:37:17,482 WARN org.apache.zookeeper.ClientCnxn: Exception
closing session 0x22313002be80001 to sun.nio.ch.SelectionKeyImpl@1012401d
java.io.IOException: TIMED OUT
        at
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:858)
2009-08-14 13:37:17,483 WARN org.apache.zookeeper.ClientCnxn: Ignoring
exception during shutdown output
java.net.SocketException: Transport endpoint is not connected
        at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
        at
sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:651)
        at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
        at
org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:956)
        at
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:922)
2009-08-14 13:37:17,856 INFO org.apache.zookeeper.ClientCnxn: Attempting
connection to server ubuntu7/192.168.33.6:2222
2009-08-14 13:37:19,445 INFO org.apache.zookeeper.ClientCnxn: Priming connection to java.nio.channels.SocketChannel[connected local=/192.168.33.7:40923 remote=ubuntu7/192.168.33.6:2222]
2009-08-14 13:37:19,445 INFO org.apache.zookeeper.ClientCnxn: Server
connection successful
2009-08-14 13:37:21,022 WARN org.apache.zookeeper.ClientCnxn: Exception
closing session 0x22313002be80000 to sun.nio.ch.SelectionKeyImpl@2e101b3a
java.io.IOException: TIMED OUT
        at
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:858)
2009-08-14 13:37:21,023 WARN org.apache.zookeeper.ClientCnxn: Ignoring
exception during shutdown output
java.net.SocketException: Transport endpoint is not connected
        at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
        at
sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:651)
        at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
        at
org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:956)
        at
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:922)
2009-08-14 13:37:21,908 INFO org.apache.zookeeper.ClientCnxn: Attempting
connection to server ubuntu7/192.168.33.6:2222
2009-08-14 13:37:21,908 INFO org.apache.zookeeper.ClientCnxn: Priming connection to java.nio.channels.SocketChannel[connected local=/192.168.33.7:40926 remote=ubuntu7/192.168.33.6:2222]
2009-08-14 13:37:21,909 INFO org.apache.zookeeper.ClientCnxn: Server
connection successful
2009-08-14 13:37:21,911 WARN org.apache.zookeeper.ClientCnxn: Exception
closing session 0x22313002be80000 to sun.nio.ch.SelectionKeyImpl@6bdfe124
java.io.IOException: Session Expired
        at
org.apache.zookeeper.ClientCnxn$SendThread.readConnectResult(ClientCnxn.java:548)
        at
org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:661)
        at
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:897)
2009-08-14 13:37:21,912 ERROR org.apache.hadoop.hbase.master.HMaster: Master
lost its znode, killing itself now
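
For reference, our ZooKeeper quorum uses the default 30-second session
timeout (the log shows sessionTimeout=30000). One thing we may try next,
just as a sketch and not something we have verified yet, is raising
zookeeper.session.timeout in hbase-site.xml so that short network or GC
pauses do not expire the master's session:

```xml
<!-- hbase-site.xml: raise the ZooKeeper session timeout so that
     transient network or GC pauses do not expire the session.
     60000 ms is an illustrative value, not a tested recommendation. -->
<property>
  <name>zookeeper.session.timeout</name>
  <value>60000</value>
</property>
```

A longer timeout only masks the pause, of course; if the real cause is a
long GC or a saturated link, the server will still fall behind.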
Regards,
LvZheng




2009/8/4 Zheng Lv <lvzheng19800619@gmail.com>

> Hello Everyone,
>     We are testing hbase0.20.0 rc in a cluster with 5 nodes, inserting web
> pages at a speed of 20 pages per second. A few minutes later one region
> server shut down, and about 40 minutes later another shut down too, leaving
> 2 of 4 region servers running.
>     I noticed that the logs on the two servers differ; their contents are
> below.
>
>     ubuntu7:
>
>
> 2009-08-04 08:59:35,870 INFO org.apache.hadoop.hbase.regionserver.HLog:
> Roll /hbase/.logs/ubuntu7,60020,1249344004988/hlog.dat.1249344005673,
> entries=34, calcsize=10262, filesize=7353. New hlog
> /hbase/.logs/ubuntu7,60020,1249344004988/hlog.dat.1249347575858
> 2009-08-04 09:12:49,462 WARN org.apache.zookeeper.ClientCnxn: Exception
> closing session 0x22e2b69b350000 to sun.nio.ch.SelectionKeyImpl@25cd0888
> java.io.IOException: Read error rc = -1 java.nio.DirectByteBuffer[pos=0
> lim=4 cap=4]
>  at org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:653)
>  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:897)
> 2009-08-04 09:12:49,564 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Got ZooKeeper event,
> state: Disconnected, type: None, path: null
> 2009-08-04 09:12:49,959 INFO org.apache.zookeeper.ClientCnxn: Attempting
> connection to server ubuntu7/192.168.33.6:2222
> 2009-08-04 09:12:49,960 INFO org.apache.zookeeper.ClientCnxn: Priming connection to java.nio.channels.SocketChannel[connected local=/192.168.33.6:50860 remote=ubuntu7/192.168.33.6:2222]
> 2009-08-04 09:12:49,960 INFO org.apache.zookeeper.ClientCnxn: Server
> connection successful
> 2009-08-04 09:12:49,965 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Got ZooKeeper event,
> state: SyncConnected, type: None, path: null
> 2009-08-04 09:13:39,460 WARN org.apache.zookeeper.ClientCnxn: Exception
> closing session 0x22e2b69b350000 to sun.nio.ch.SelectionKeyImpl@3e472e76
> java.io.IOException: Read error rc = -1 java.nio.DirectByteBuffer[pos=0
> lim=4 cap=4]
>  at org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:653)
>  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:897)
> 2009-08-04 09:13:39,567 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Got ZooKeeper event,
> state: Disconnected, type: None, path: null
> 2009-08-04 09:13:39,807 INFO org.apache.zookeeper.ClientCnxn: Attempting
> connection to server ubuntu6/192.168.33.7:2222
> 2009-08-04 09:13:45,807 WARN org.apache.zookeeper.ClientCnxn: Exception
> closing session 0x22e2b69b350000 to sun.nio.ch.SelectionKeyImpl@287b2e39
> java.io.IOException: TIMED OUT
>  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:858)
> 2009-08-04 09:13:45,807 WARN org.apache.zookeeper.ClientCnxn: Ignoring
> exception during shutdown output
> java.net.SocketException: Transport endpoint is not connected
>  at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
>  at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:651)
>  at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>  at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:956)
>  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:922)
> 2009-08-04 09:13:46,192 INFO org.apache.zookeeper.ClientCnxn: Attempting
> connection to server ubuntu9/192.168.33.5:2222
> 2009-08-04 09:13:52,184 WARN org.apache.zookeeper.ClientCnxn: Exception
> closing session 0x22e2b69b350000 to sun.nio.ch.SelectionKeyImpl@2f17b4f2
> java.io.IOException: TIMED OUT
>  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:858)
> 2009-08-04 09:13:52,185 WARN org.apache.zookeeper.ClientCnxn: Ignoring
> exception during shutdown output
> java.net.SocketException: Transport endpoint is not connected
>  at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
>  at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:651)
>  at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>  at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:956)
>  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:922)
> 2009-08-04 09:13:52,923 INFO org.apache.zookeeper.ClientCnxn: Attempting
> connection to server ubuntu3/192.168.33.8:2222
> 2009-08-04 09:13:55,338 INFO org.apache.zookeeper.ClientCnxn: Priming connection to java.nio.channels.SocketChannel[connected local=/192.168.33.6:49662 remote=ubuntu3/192.168.33.8:2222]
> 2009-08-04 09:13:55,338 INFO org.apache.zookeeper.ClientCnxn: Server
> connection successful
> 2009-08-04 09:13:55,342 WARN org.apache.zookeeper.ClientCnxn: Exception
> closing session 0x22e2b69b350000 to sun.nio.ch.SelectionKeyImpl@3d689405
> java.io.IOException: Session Expired
>  at
> org.apache.zookeeper.ClientCnxn$SendThread.readConnectResult(ClientCnxn.java:548)
>  at org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:661)
>  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:897)
> 2009-08-04 09:13:55,342 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Got ZooKeeper event,
> state: Expired, type: None, path: null
> 2009-08-04 09:13:55,343 ERROR
> org.apache.hadoop.hbase.regionserver.HRegionServer: ZooKeeper session
> expired
> 2009-08-04 09:13:55,343 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Restarting Region Server
> 2009-08-04 09:13:55,375 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
> request=0.0, regions=1, stores=2, storefiles=2, storefileIndexSize=0,
> memstoreSize=0, usedHeap=24, maxHeap=2993, blockCacheSize=5151568,
> blockCacheFree=622696432, blockCacheCount=0, blockCacheHitRatio=0
> 2009-08-04 09:13:55,833 INFO
> org.apache.hadoop.hbase.regionserver.LogFlusher:
> regionserver/192.168.33.6:60020.logFlusher exiting
> 2009-08-04 09:13:55,852 INFO
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher:
> regionserver/192.168.33.6:60020.cacheFlusher exiting
> 2009-08-04 09:13:55,882 INFO
> org.apache.hadoop.hbase.regionserver.LogRoller: LogRoller exiting.
> 2009-08-04 09:13:58,339 INFO org.apache.hadoop.ipc.HBaseServer: Stopping
> server on 60020
> 2009-08-04 09:13:58,339 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 0 on 60020: exiting
> 2009-08-04 09:13:58,340 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Stopping infoServer
> 2009-08-04 09:13:58,340 INFO org.apache.hadoop.ipc.HBaseServer: Stopping
> IPC Server Responder
> 2009-08-04 09:13:58,340 INFO org.apache.hadoop.ipc.HBaseServer: Stopping
> IPC Server listener on 60020
> 2009-08-04 09:13:58,341 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 5 on 60020: exiting
> 2009-08-04 09:13:58,341 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 8 on 60020: exiting
> 2009-08-04 09:13:58,354 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 1 on 60020: exiting
> 2009-08-04 09:13:58,354 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 2 on 60020: exiting
> 2009-08-04 09:13:58,377 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 9 on 60020: exiting
> 2009-08-04 09:13:58,377 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 7 on 60020: exiting
> 2009-08-04 09:13:58,377 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 6 on 60020: exiting
> 2009-08-04 09:13:58,378 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 4 on 60020: exiting
> 2009-08-04 09:13:58,378 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 3 on 60020: exiting
> 2009-08-04 09:13:58,471 INFO
> org.apache.hadoop.hbase.regionserver.CompactSplitThread:
> regionserver/192.168.33.6:60020.compactor exiting
> 2009-08-04 09:13:58,472 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer$MajorCompactionChecker:
> regionserver/192.168.33.6:60020.majorCompactionChecker exiting
> 2009-08-04 09:14:00,187 INFO org.apache.hadoop.hbase.Leases:
> regionserver/192.168.33.6:60020.leaseChecker closing leases
> 2009-08-04 09:14:00,187 INFO org.apache.hadoop.hbase.Leases:
> regionserver/192.168.33.6:60020.leaseChecker closed leases
> 2009-08-04 09:14:01,895 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: worker thread exiting
> 2009-08-04 09:14:25,347 INFO org.apache.hadoop.hdfs.DFSClient:
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on
> /hbase/.logs/ubuntu7,60020,1249344004988/hlog.dat.1249347575858 File does
> not exist. Holder DFSClient_1310853810 does not have any open files.
>  at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1317)
>  at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1308)
>  at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1236)
>  at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>  at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source)
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>  at java.lang.reflect.Method.invoke(Method.java:597)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:396)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>  at org.apache.hadoop.ipc.Client.call(Client.java:739)
>  at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>  at $Proxy1.addBlock(Unknown Source)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>  at java.lang.reflect.Method.invoke(Method.java:597)
>  at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>  at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>  at $Proxy1.addBlock(Unknown Source)
>  at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2875)
>  at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2757)
>  at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2048)
>  at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2234)
> 2009-08-04 09:14:25,348 INFO org.apache.hadoop.hdfs.DFSClient: Waiting for
> replication for 26 seconds
> 2009-08-04 09:14:25,348 WARN org.apache.hadoop.hdfs.DFSClient:
> NotReplicatedYetException sleeping
> /hbase/.logs/ubuntu7,60020,1249344004988/hlog.dat.1249347575858 retries left
> 4
> 2009-08-04 09:14:25,753 INFO org.apache.hadoop.hdfs.DFSClient:
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on
> /hbase/.logs/ubuntu7,60020,1249344004988/hlog.dat.1249347575858 File does
> not exist. Holder DFSClient_1310853810 does not have any open files.
>  at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1317)
>  at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1308)
>  at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1236)
>  at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>  at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source)
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>  at java.lang.reflect.Method.invoke(Method.java:597)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:396)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>  at org.apache.hadoop.ipc.Client.call(Client.java:739)
>  at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>  at $Proxy1.addBlock(Unknown Source)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>  at java.lang.reflect.Method.invoke(Method.java:597)
>  at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>  at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>  at $Proxy1.addBlock(Unknown Source)
>  at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2875)
>  at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2757)
>  at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2048)
>  at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2234)
> 2009-08-04 09:14:25,753 INFO org.apache.hadoop.hdfs.DFSClient: Waiting for
> replication for 27 seconds
> 2009-08-04 09:14:25,753 WARN org.apache.hadoop.hdfs.DFSClient:
> NotReplicatedYetException sleeping
> /hbase/.logs/ubuntu7,60020,1249344004988/hlog.dat.1249347575858 retries left
> 3
> 2009-08-04 09:14:26,557 INFO org.apache.hadoop.hdfs.DFSClient:
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on
> /hbase/.logs/ubuntu7,60020,1249344004988/hlog.dat.1249347575858 File does
> not exist. Holder DFSClient_1310853810 does not have any open files.
>  at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1317)
>  at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1308)
>  at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1236)
>  at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>  at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source)
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>  at java.lang.reflect.Method.invoke(Method.java:597)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:396)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>  at org.apache.hadoop.ipc.Client.call(Client.java:739)
>  at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>  at $Proxy1.addBlock(Unknown Source)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>  at java.lang.reflect.Method.invoke(Method.java:597)
>  at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>  at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>  at $Proxy1.addBlock(Unknown Source)
>  at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2875)
>  at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2757)
>  at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2048)
>  at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2234)
> 2009-08-04 09:14:26,557 INFO org.apache.hadoop.hdfs.DFSClient: Waiting for
> replication for 28 seconds
> 2009-08-04 09:14:26,557 WARN org.apache.hadoop.hdfs.DFSClient:
> NotReplicatedYetException sleeping
> /hbase/.logs/ubuntu7,60020,1249344004988/hlog.dat.1249347575858 retries left
> 2
> 2009-08-04 09:14:28,161 INFO org.apache.hadoop.hdfs.DFSClient:
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on
> /hbase/.logs/ubuntu7,60020,1249344004988/hlog.dat.1249347575858 File does
> not exist. Holder DFSClient_1310853810 does not have any open files.
>  at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1317)
>  at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1308)
>  at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1236)
>  at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>  at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source)
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>  at java.lang.reflect.Method.invoke(Method.java:597)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:396)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>  at org.apache.hadoop.ipc.Client.call(Client.java:739)
>  at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>  at $Proxy1.addBlock(Unknown Source)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>  at java.lang.reflect.Method.invoke(Method.java:597)
>  at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>  at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>  at $Proxy1.addBlock(Unknown Source)
>  at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2875)
>  at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2757)
>  at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2048)
>  at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2234)
> 2009-08-04 09:14:28,162 INFO org.apache.hadoop.hdfs.DFSClient: Waiting for
> replication for 29 seconds
> 2009-08-04 09:14:28,162 WARN org.apache.hadoop.hdfs.DFSClient:
> NotReplicatedYetException sleeping
> /hbase/.logs/ubuntu7,60020,1249344004988/hlog.dat.1249347575858 retries left
> 1
> 2009-08-04 09:14:31,367 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer
> Exception: org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on
> /hbase/.logs/ubuntu7,60020,1249344004988/hlog.dat.1249347575858 File does
> not exist. Holder DFSClient_1310853810 does not have any open files.
>  at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1317)
>  at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1308)
>  at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1236)
>  at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>  at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source)
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>  at java.lang.reflect.Method.invoke(Method.java:597)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:396)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>  at org.apache.hadoop.ipc.Client.call(Client.java:739)
>  at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>  at $Proxy1.addBlock(Unknown Source)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>  at java.lang.reflect.Method.invoke(Method.java:597)
>  at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>  at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>  at $Proxy1.addBlock(Unknown Source)
>  at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2875)
>  at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2757)
>  at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2048)
>  at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2234)
> 2009-08-04 09:14:31,367 WARN org.apache.hadoop.hdfs.DFSClient: Error
> Recovery for block null bad datanode[0] nodes == null
> 2009-08-04 09:14:31,368 WARN org.apache.hadoop.hdfs.DFSClient: Could not
> get block locations. Source file
> "/hbase/.logs/ubuntu7,60020,1249344004988/hlog.dat.1249347575858" -
> Aborting...
> 2009-08-04 09:14:31,371 ERROR
> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to close log in
> abort
> org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException:
> org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on
> /hbase/.logs/ubuntu7,60020,1249344004988/hlog.dat.1249347575858 File does
> not exist. Holder DFSClient_1310853810 does not have any open files.
>  at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1317)
>  at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1308)
>  at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1236)
>  at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>  at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source)
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>  at java.lang.reflect.Method.invoke(Method.java:597)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:396)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>  at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>  at
> org.apache.hadoop.hbase.RemoteExceptionHandler.decodeRemoteException(RemoteExceptionHandler.java:94)
>  at
> org.apache.hadoop.hbase.RemoteExceptionHandler.checkThrowable(RemoteExceptionHandler.java:48)
>  at
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:631)
>  at java.lang.Thread.run(Thread.java:619)
> 2009-08-04 09:14:31,373 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed .META.,,1
> 2009-08-04 09:14:31,373 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: aborting server at:
> 192.168.33.6:60020
> 2009-08-04 09:14:31,374 INFO org.apache.zookeeper.ZooKeeper: Closing
> session: 0x22e2b69b350000
> 2009-08-04 09:14:31,374 INFO org.apache.zookeeper.ClientCnxn: Closing
> ClientCnxn for session: 0x22e2b69b350000
> 2009-08-04 09:14:31,374 INFO org.apache.zookeeper.ClientCnxn: Disconnecting
> ClientCnxn for session: 0x22e2b69b350000
> 2009-08-04 09:14:31,374 INFO org.apache.zookeeper.ZooKeeper: Session:
> 0x22e2b69b350000 closed
> 2009-08-04 09:14:31,374 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: regionserver/192.168.33.6:60020 exiting
> 2009-08-04 09:14:31,377 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics:
> Initializing RPC Metrics with hostName=HRegionServer, port=60020
> 2009-08-04 09:14:31,391 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Starting shutdown
> thread.
> 2009-08-04 09:14:31,392 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Shutdown thread complete
> 2009-08-04 09:14:31,394 INFO
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher:
> globalMemStoreLimit=1.2g, globalMemStoreLimitLowMark=748.5m, maxHeap=2.9g
> 2009-08-04 09:14:31,394 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Runs every 10000000ms
> 2009-08-04 09:14:31,397 INFO org.apache.zookeeper.ZooKeeper: Initiating
> client connection,
> host=ubuntu9:2222,ubuntu7:2222,ubuntu3:2222,ubuntu2:2222,ubuntu6:2222
> sessionTimeout=30000
> watcher=org.apache.hadoop.hbase.regionserver.HRegionServer@6a8c436b
> 2009-08-04 09:14:31,398 INFO org.apache.zookeeper.ClientCnxn: Attempting
> connection to server ubuntu3/192.168.33.8:2222
> 2009-08-04 09:14:31,401 INFO org.apache.zookeeper.ClientCnxn: Priming
> connection to java.nio.channels.SocketChannel[connected local=/
> 192.168.33.6:49672 remote=ubuntu3/192.168.33.8:2222]
> 2009-08-04 09:14:31,401 INFO org.apache.zookeeper.ClientCnxn: Server
> connection successful
> 2009-08-04 09:14:31,417 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Got ZooKeeper event,
> state: SyncConnected, type: None, path: null
> 2009-08-04 09:14:31,447 INFO org.apache.zookeeper.ClientCnxn: EventThread
> shut down
> 2009-08-04 09:14:31,447 INFO org.apache.hadoop.ipc.HBaseServer: Stopping
> server on 60020
> 2009-08-04 09:14:31,448 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Stopping infoServer
> 2009-08-04 09:14:31,452 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: telling master that
> region server is shutting down at: 192.168.33.6:60020
> 2009-08-04 09:14:31,453 WARN
> org.apache.hadoop.hbase.regionserver.HRegionServer: Failed to send exiting
> message to master:
> java.lang.NullPointerException
>  at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:660)
>  at java.lang.Thread.run(Thread.java:619)
> 2009-08-04 09:14:31,453 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: stopping server at:
> 192.168.33.6:60020
> 2009-08-04 09:14:31,453 INFO org.apache.zookeeper.ZooKeeper: Closing
> session: 0x122e2b7548e0087
> 2009-08-04 09:14:31,453 INFO org.apache.zookeeper.ClientCnxn: Closing
> ClientCnxn for session: 0x122e2b7548e0087
> 2009-08-04 09:14:31,460 INFO org.apache.zookeeper.ClientCnxn: Exception
> while closing send thread for session 0x122e2b7548e0087 : Read error rc = -1
> java.nio.DirectByteBuffer[pos=0 lim=4 cap=4]
>
>     ubuntu9:
>
> 2009-08-04 08:59:40,655 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN:
> webpage,,1249347647788
> 2009-08-04 08:59:40,656 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN:
> webpage,,1249347647788
> 2009-08-04 08:59:40,683 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,,1249347647788/491403556 available; sequence id is 0
> 2009-08-04 09:01:40,266 INFO org.apache.hadoop.hbase.regionserver.HLog:
> Roll /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249344130090,
> entries=3147, calcsize=29823872, filesize=29491785. New hlog
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249347700254
> 2009-08-04 09:01:40,269 INFO org.apache.hadoop.hbase.regionserver.HLog:
> removing old hlog file
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249344130090 whose
> highest sequence/edit id is 5984
> 2009-08-04 09:10:31,649 INFO org.apache.hadoop.hbase.regionserver.HLog:
> Roll /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249347700254,
> entries=6720, calcsize=63790080, filesize=63079139. New hlog
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249348231638
> 2009-08-04 09:11:30,345 INFO org.apache.hadoop.hbase.regionserver.HLog:
> Roll /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249348231638,
> entries=6715, calcsize=63794066, filesize=63090684. New hlog
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249348290324
> 2009-08-04 09:11:30,345 INFO org.apache.hadoop.hbase.regionserver.HLog:
> removing old hlog file
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249347700254 whose
> highest sequence/edit id is 12705
> 2009-08-04 09:12:28,991 INFO org.apache.hadoop.hbase.regionserver.HLog:
> Roll /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249348290324,
> entries=6710, calcsize=63797225, filesize=63093218. New hlog
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249348348979
> 2009-08-04 09:12:28,991 INFO org.apache.hadoop.hbase.regionserver.HLog:
> removing old hlog file
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249348231638 whose
> highest sequence/edit id is 19419
> 2009-08-04 09:12:40,827 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting compaction on region webpage,,1249347647788
> 2009-08-04 09:12:49,963 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> compaction completed on region webpage,,1249347647788 in 9sec
> 2009-08-04 09:13:29,072 INFO org.apache.hadoop.hbase.regionserver.HLog:
> Roll /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249348348979,
> entries=6720, calcsize=63800107, filesize=63095030. New hlog
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249348409060
> 2009-08-04 09:13:29,072 INFO org.apache.hadoop.hbase.regionserver.HLog:
> removing old hlog file
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249348290324 whose
> highest sequence/edit id is 26134
> 2009-08-04 09:14:27,438 INFO org.apache.hadoop.hbase.regionserver.HLog:
> Roll /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249348409060,
> entries=6715, calcsize=63798666, filesize=63094154. New hlog
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249348467426
> 2009-08-04 09:14:27,438 INFO org.apache.hadoop.hbase.regionserver.HLog:
> removing old hlog file
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249348348979 whose
> highest sequence/edit id is 32850
> 2009-08-04 09:14:56,116 WARN org.apache.zookeeper.ClientCnxn: Exception
> closing session 0x22e2b69b350001 to sun.nio.ch.SelectionKeyImpl@6db22920
> java.io.IOException: Read error rc = -1 java.nio.DirectByteBuffer[pos=0
> lim=4 cap=4]
>  at org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:653)
>  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:897)
> 2009-08-04 09:14:57,075 INFO org.apache.zookeeper.ClientCnxn: Attempting
> connection to server ubuntu3/192.168.33.8:2222
> 2009-08-04 09:14:57,076 INFO org.apache.zookeeper.ClientCnxn: Priming
> connection to java.nio.channels.SocketChannel[connected local=/
> 192.168.33.5:44326 remote=ubuntu3/192.168.33.8:2222]
> 2009-08-04 09:14:57,076 INFO org.apache.zookeeper.ClientCnxn: Server
> connection successful
> 2009-08-04 09:15:15,359 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting compaction on region webpage,,1249347647788
> 2009-08-04 09:15:20,972 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> compaction completed on region webpage,,1249347647788 in 5sec
> 2009-08-04 09:15:20,972 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting split of region webpage,,1249347647788
> 2009-08-04 09:15:21,963 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,,1249347647788
> 2009-08-04 09:15:24,791 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,,1249348520980/935635885 available; sequence id is 45742
> 2009-08-04 09:15:24,792 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,,1249348520980
> 2009-08-04 09:15:24,995 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com <http://x2fnews.163.com/>\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348370059_1744,1249348520980/1870783380
> available; sequence id is 45743
> 2009-08-04 09:15:24,996 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348370059_1744,1249348520980
> 2009-08-04 09:15:25,039 INFO
> org.apache.hadoop.hbase.regionserver.CompactSplitThread: region split, META
> updated, and report to master all successful. Old region=REGION => {NAME =>
> 'webpage,,1249347647788', STARTKEY => '', ENDKEY => '', ENCODED =>
> 491403556, OFFLINE => true, SPLIT => true, TABLE => {{NAME => 'webpage',
> FAMILIES => [{NAME => 'CF_CONTENT', COMPRESSION => 'NONE', VERSIONS => '2',
> TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE
> => 'true'}, {NAME => 'CF_INFORMATION', COMPRESSION => 'NONE', VERSIONS =>
> '1', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false',
> BLOCKCACHE => 'true'}]}}, new regions: webpage,,1249348520980,
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348370059_1744,1249348520980.
> Split took 4sec
> 2009-08-04 09:15:26,517 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN:
> webpage,,1249348520980
> 2009-08-04 09:15:26,517 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348370059_1744,1249348520980
> 2009-08-04 09:15:26,517 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN:
> webpage,,1249348520980
> 2009-08-04 09:15:26,689 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,,1249348520980/935635885 available; sequence id is 45742
> 2009-08-04 09:15:26,689 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting compaction on region webpage,,1249348520980
> 2009-08-04 09:15:26,690 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348370059_1744,1249348520980
> 2009-08-04 09:15:26,998 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348370059_1744,1249348520980/1870783380
> available; sequence id is 45743
> 2009-08-04 09:15:30,751 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> compaction completed on region webpage,,1249348520980 in 4sec
> 2009-08-04 09:15:30,751 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting compaction on region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348370059_1744,1249348520980
> 2009-08-04 09:16:00,015 WARN org.apache.hadoop.hbase.regionserver.HLog: IPC
> Server handler 0 on 60020 took 25134ms appending an edit to hlog;
> editcount=6668
> 2009-08-04 09:16:00,016 WARN org.apache.hadoop.hbase.regionserver.HLog:
> regionserver/192.168.33.5:60020.logFlusher took 19781ms optional sync'ing
> hlog; editcount=6674
> 2009-08-04 09:16:00,397 INFO org.apache.hadoop.hbase.regionserver.HLog:
> Roll /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249348467426,
> entries=6710, calcsize=63779032, filesize=63075244. New hlog
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249348560377
> 2009-08-04 09:16:00,397 INFO org.apache.hadoop.hbase.regionserver.HLog:
> removing old hlog file
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249348409060 whose
> highest sequence/edit id is 39565
> 2009-08-04 09:16:30,001 WARN org.apache.hadoop.hbase.regionserver.HLog: IPC
> Server handler 1 on 60020 took 27826ms appending an edit to hlog;
> editcount=204
> 2009-08-04 09:16:30,002 WARN org.apache.hadoop.hbase.regionserver.HLog:
> regionserver/192.168.33.5:60020.logFlusher took 19986ms optional sync'ing
> hlog; editcount=205
> 2009-08-04 09:16:38,451 ERROR
> org.apache.hadoop.hbase.regionserver.HRegionServer: Failed openScanner
> org.apache.hadoop.hbase.NotServingRegionException: webpage,,1249347647788
>  at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:2255)
>  at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:1871)
>  at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>  at java.lang.reflect.Method.invoke(Method.java:597)
>  at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:650)
>  at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:913)
> 2009-08-04 09:16:38,457 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 7 on 60020, call openScanner([B@47965ff3, startRow=, stopRow=,
> maxVersions=1, timeRange=[0,9223372036854775807),
> families={(family=CF_CONTENT, columns={}), (family=CF_INFORMATION,
> columns={}}) from 192.168.33.7:52275: error:
> org.apache.hadoop.hbase.NotServingRegionException: webpage,,1249347647788
> org.apache.hadoop.hbase.NotServingRegionException: webpage,,1249347647788
>  at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:2255)
>  at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:1871)
>  at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>  at java.lang.reflect.Method.invoke(Method.java:597)
>  at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:650)
>  at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:913)
> 2009-08-04 09:17:28,848 INFO org.apache.hadoop.hbase.regionserver.HLog:
> Roll /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249348560377,
> entries=6955, calcsize=66566006, filesize=65836364. New hlog
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249348648837
> 2009-08-04 09:17:28,848 INFO org.apache.hadoop.hbase.regionserver.HLog:
> removing old hlog file
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249348467426 whose
> highest sequence/edit id is 46277
> 2009-08-04 09:17:28,853 INFO org.apache.hadoop.hbase.regionserver.HLog:
> removing old hlog file
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249348560377 whose
> highest sequence/edit id is 53231
> 2009-08-04 09:17:29,981 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> compaction completed on region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348370059_1744,1249348520980
> in 1mins, 59sec
> 2009-08-04 09:17:29,981 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting split of region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348370059_1744,1249348520980
> 2009-08-04 09:17:30,218 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348370059_1744,1249348520980
> 2009-08-04 09:17:33,371 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348370059_1744,1249348649989/1176865797
> available; sequence id is 53366
> 2009-08-04 09:17:33,372 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348370059_1744,1249348649989
> 2009-08-04 09:17:33,575 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348498311_4168,1249348649989/203332105
> available; sequence id is 53367
> 2009-08-04 09:17:33,576 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348498311_4168,1249348649989
> 2009-08-04 09:17:34,580 INFO org.apache.hadoop.ipc.HBaseClient: Retrying
> connect to server: /192.168.33.6:60020. Already tried 0 time(s).
> 2009-08-04 09:17:35,581 INFO org.apache.hadoop.ipc.HBaseClient: Retrying
> connect to server: /192.168.33.6:60020. Already tried 1 time(s).
> 2009-08-04 09:17:36,582 INFO org.apache.hadoop.ipc.HBaseClient: Retrying
> connect to server: /192.168.33.6:60020. Already tried 2 time(s).
> 2009-08-04 09:17:37,582 INFO org.apache.hadoop.ipc.HBaseClient: Retrying
> connect to server: /192.168.33.6:60020. Already tried 3 time(s).
> 2009-08-04 09:17:38,583 INFO org.apache.hadoop.ipc.HBaseClient: Retrying
> connect to server: /192.168.33.6:60020. Already tried 4 time(s).
> 2009-08-04 09:17:39,584 INFO org.apache.hadoop.ipc.HBaseClient: Retrying
> connect to server: /192.168.33.6:60020. Already tried 5 time(s).
> 2009-08-04 09:17:40,585 INFO org.apache.hadoop.ipc.HBaseClient: Retrying
> connect to server: /192.168.33.6:60020. Already tried 6 time(s).
> 2009-08-04 09:17:41,586 INFO org.apache.hadoop.ipc.HBaseClient: Retrying
> connect to server: /192.168.33.6:60020. Already tried 7 time(s).
> 2009-08-04 09:17:42,586 INFO org.apache.hadoop.ipc.HBaseClient: Retrying
> connect to server: /192.168.33.6:60020. Already tried 8 time(s).
> 2009-08-04 09:17:43,587 INFO org.apache.hadoop.ipc.HBaseClient: Retrying
> connect to server: /192.168.33.6:60020. Already tried 9 time(s).
> 2009-08-04 09:17:45,611 INFO
> org.apache.hadoop.hbase.regionserver.CompactSplitThread: region split, META
> updated, and report to master all successful. Old region=REGION => {NAME =>
> 'webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348370059_1744,1249348520980',
> STARTKEY => 'http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348370059_1744',
> ENDKEY => '', ENCODED => 1870783380, OFFLINE => true, SPLIT => true, TABLE
> => {{NAME => 'webpage', FAMILIES => [{NAME => 'CF_CONTENT', COMPRESSION =>
> 'NONE', VERSIONS => '2', TTL => '2147483647', BLOCKSIZE => '65536',
> IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'CF_INFORMATION',
> COMPRESSION => 'NONE', VERSIONS => '1', TTL => '2147483647', BLOCKSIZE =>
> '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}}, new regions:
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348370059_1744,1249348649989,
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348498311_4168,1249348649989.
> Split took 15sec
> 2009-08-04 09:17:47,754 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348370059_1744,1249348649989
> 2009-08-04 09:17:47,754 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348498311_4168,1249348649989
> 2009-08-04 09:17:47,754 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348370059_1744,1249348649989
> 2009-08-04 09:17:47,943 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348370059_1744,1249348649989/1176865797
> available; sequence id is 53366
> 2009-08-04 09:17:47,944 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting compaction on region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348370059_1744,1249348649989
> 2009-08-04 09:17:47,944 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348498311_4168,1249348649989
> 2009-08-04 09:17:48,163 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348498311_4168,1249348649989/203332105
> available; sequence id is 53367
> 2009-08-04 09:17:52,777 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> compaction completed on region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348370059_1744,1249348649989
> in 4sec
> 2009-08-04 09:17:52,778 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting compaction on region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348498311_4168,1249348649989
> 2009-08-04 09:18:01,313 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> compaction completed on region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348498311_4168,1249348649989
> in 8sec
> 2009-08-04 09:24:26,812 INFO org.apache.hadoop.hbase.regionserver.HLog:
> Roll /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249348648837,
> entries=6661, calcsize=63769148, filesize=63059485. New hlog
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249349066799
> 2009-08-04 09:25:24,469 INFO org.apache.hadoop.hbase.regionserver.HLog:
> Roll /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249349066799,
> entries=6661, calcsize=63769500, filesize=63059617. New hlog
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249349124458
> 2009-08-04 09:25:24,469 INFO org.apache.hadoop.hbase.regionserver.HLog:
> removing old hlog file
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249348648837 whose
> highest sequence/edit id is 59894
> 2009-08-04 09:25:34,681 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting compaction on region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348498311_4168,1249348649989
> 2009-08-04 09:25:38,992 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> compaction completed on region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348498311_4168,1249348649989
> in 4sec
> 2009-08-04 09:25:38,992 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting split of region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348498311_4168,1249348649989
> 2009-08-04 09:25:39,244 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348498311_4168,1249348649989
> 2009-08-04 09:25:42,458 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348498311_4168,1249349139000/1497560335
> available; sequence id is 68239
> 2009-08-04 09:25:42,458 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348498311_4168,1249349139000
> 2009-08-04 09:25:42,647 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348629221_6016,1249349139000/1242271099
> available; sequence id is 68240
> 2009-08-04 09:25:42,647 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348629221_6016,1249349139000
> 2009-08-04 09:25:42,663 INFO
> org.apache.hadoop.hbase.regionserver.CompactSplitThread: region split, META
> updated, and report to master all successful. Old region=REGION => {NAME =>
> 'webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348498311_4168,1249348649989',
> STARTKEY => 'http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348498311_4168',
> ENDKEY => '', ENCODED => 203332105, OFFLINE => true, SPLIT => true, TABLE =>
> {{NAME => 'webpage', FAMILIES => [{NAME => 'CF_CONTENT', COMPRESSION =>
> 'NONE', VERSIONS => '2', TTL => '2147483647', BLOCKSIZE => '65536',
> IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'CF_INFORMATION',
> COMPRESSION => 'NONE', VERSIONS => '1', TTL => '2147483647', BLOCKSIZE =>
> '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}}, new regions:
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348498311_4168,1249349139000,
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348629221_6016,1249349139000.
> Split took 3sec
> 2009-08-04 09:30:24,918 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348939079_11112,1249349460035
> 2009-08-04 09:30:24,918 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349219044_15898,1249349460035
> 2009-08-04 09:30:24,918 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348939079_11112,1249349460035
> 2009-08-04 09:30:25,058 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348939079_11112,1249349460035/2028268999
> available; sequence id is 66248
> 2009-08-04 09:30:25,059 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting compaction on region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348939079_11112,1249349460035
> 2009-08-04 09:30:25,061 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349219044_15898,1249349460035
> 2009-08-04 09:30:25,299 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349219044_15898,1249349460035/1041724375
> available; sequence id is 66249
> 2009-08-04 09:30:28,717 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> compaction completed on region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348939079_11112,1249349460035
> in 3sec
> 2009-08-04 09:30:28,717 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting compaction on region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349219044_15898,1249349460035
> 2009-08-04 09:30:41,191 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> compaction completed on region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349219044_15898,1249349460035
> in 12sec
> 2009-08-04 09:31:14,443 INFO org.apache.hadoop.hbase.regionserver.HLog:
> Roll /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249349124458,
> entries=6662, calcsize=63774480, filesize=63064760. New hlog
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249349474432
> 2009-08-04 09:31:14,443 INFO org.apache.hadoop.hbase.regionserver.HLog:
> removing old hlog file
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249349066799 whose
> highest sequence/edit id is 66556
> 2009-08-04 09:32:12,213 INFO org.apache.hadoop.hbase.regionserver.HLog:
> Roll /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249349474432,
> entries=6661, calcsize=63776160, filesize=63066278. New hlog
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249349532174
> 2009-08-04 09:32:12,213 INFO org.apache.hadoop.hbase.regionserver.HLog:
> removing old hlog file
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249349124458 whose
> highest sequence/edit id is 73217
> 2009-08-04 09:32:35,692 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting compaction on region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349219044_15898,1249349460035
> 2009-08-04 09:32:38,318 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> compaction completed on region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349219044_15898,1249349460035
> in 2sec
> 2009-08-04 09:32:38,318 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting split of region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349219044_15898,1249349460035
> 2009-08-04 09:32:38,479 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349219044_15898,1249349460035
> 2009-08-04 09:32:40,510 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349219044_15898,1249349558326/481931133
> available; sequence id is 82900
> 2009-08-04 09:32:42,063 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349219044_15898,1249349558326
> 2009-08-04 09:32:42,256 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349361130_18298,1249349558326/922277637
> available; sequence id is 82901
> 2009-08-04 09:32:42,256 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349361130_18298,1249349558326
> 2009-08-04 09:32:42,298 INFO
> org.apache.hadoop.hbase.regionserver.CompactSplitThread: region split, META
> updated, and report to master all successful. Old region=REGION => {NAME =>
> 'webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349219044_15898,1249349460035',
> STARTKEY => 'http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349219044_15898',
> ENDKEY => '', ENCODED => 1041724375, OFFLINE => true, SPLIT => true, TABLE
> => {{NAME => 'webpage', FAMILIES => [{NAME => 'CF_CONTENT', VERSIONS => '2',
> COMPRESSION => 'NONE', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY
> => 'false', BLOCKCACHE => 'true'}, {NAME => 'CF_INFORMATION', VERSIONS =>
> '1', COMPRESSION => 'NONE', TTL => '2147483647', BLOCKSIZE => '65536',
> IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}}, new regions:
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349219044_15898,1249349558326,
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349361130_18298,1249349558326.
> Split took 3sec
> 2009-08-04 09:32:43,169 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349361130_18298,1249349558326
> 2009-08-04 09:32:43,170 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com <http://x2fnews.163.com/>
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349361130_18298,1249349558326
> 2009-08-04 09:32:43,315 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com <http://x2fnews.163.com/>\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349361130_18298,1249349558326/922277637
> available; sequence id is 82901
> 2009-08-04 09:32:43,315 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting compaction on region webpage,http:\x2F\x2Fnews.163.com<http://x2fnews.163.com/>
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349361130_18298,1249349558326
> 2009-08-04 09:32:55,285 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> compaction completed on region webpage,http:\x2F\x2Fnews.163.com<http://x2fnews.163.com/>\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349361130_18298,1249349558326
> in 11sec
> 2009-08-04 09:32:55,285 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting split of region webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349361130_18298,1249349558326
> 2009-08-04 09:32:55,912 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349361130_18298,1249349558326
> 2009-08-04 09:32:56,630 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349361130_18298,1249349575293/882105115
> available; sequence id is 83905
> 2009-08-04 09:32:56,631 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349361130_18298,1249349575293
> 2009-08-04 09:32:56,793 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349500641_20720,1249349575293/112267028
> available; sequence id is 83906
> 2009-08-04 09:32:56,794 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349500641_20720,1249349575293
> 2009-08-04 09:32:56,808 INFO
> org.apache.hadoop.hbase.regionserver.CompactSplitThread: region split, META
> updated, and report to master all successful. Old region=REGION => {NAME =>
> 'webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349361130_18298,1249349558326',
> STARTKEY => 'http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349361130_18298',
> ENDKEY => '', ENCODED => 922277637, OFFLINE => true, SPLIT => true, TABLE =>
> {{NAME => 'webpage', FAMILIES => [{NAME => 'CF_CONTENT', VERSIONS => '2',
> COMPRESSION => 'NONE', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY
> => 'false', BLOCKCACHE => 'true'}, {NAME => 'CF_INFORMATION', VERSIONS =>
> '1', COMPRESSION => 'NONE', TTL => '2147483647', BLOCKSIZE => '65536',
> IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}}, new regions:
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349361130_18298,1249349575293,
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349500641_20720,1249349575293.
> Split took 1sec
> 2009-08-04 09:32:58,205 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349500641_20720,1249349575293
> 2009-08-04 09:32:58,206 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349361130_18298,1249349575293
> 2009-08-04 09:32:58,206 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349500641_20720,1249349575293
> 2009-08-04 09:32:58,304 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349500641_20720,1249349575293/112267028
> available; sequence id is 83906
> 2009-08-04 09:32:58,304 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349361130_18298,1249349575293
> 2009-08-04 09:32:58,306 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting compaction on region webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349500641_20720,1249349575293
> 2009-08-04 09:32:58,441 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349361130_18298,1249349575293/882105115
> available; sequence id is 83905
> 2009-08-04 09:33:03,860 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> compaction completed on region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349500641_20720,1249349575293
> in 5sec
> 2009-08-04 09:33:03,860 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting compaction on region webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349361130_18298,1249349575293
> 2009-08-04 09:33:09,988 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> compaction completed on region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349361130_18298,1249349575293
> in 6sec
> 2009-08-04 09:33:25,094 INFO org.apache.hadoop.hbase.regionserver.HLog:
> Roll /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249349532174,
> entries=6658, calcsize=63774356, filesize=63065350. New hlog
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249349605082
> 2009-08-04 09:33:25,094 INFO org.apache.hadoop.hbase.regionserver.HLog:
> removing old hlog file
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249349474432 whose
> highest sequence/edit id is 79878
> 2009-08-04 09:34:22,724 INFO org.apache.hadoop.hbase.regionserver.HLog:
> Roll /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249349605082,
> entries=6666, calcsize=63777964, filesize=63067534. New hlog
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249349662713
> 2009-08-04 09:34:22,725 INFO org.apache.hadoop.hbase.regionserver.HLog:
> removing old hlog file
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249349532174 whose
> highest sequence/edit id is 86545
> 2009-08-04 09:35:06,800 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting compaction on region webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349500641_20720,1249349575293
> 2009-08-04 09:35:09,326 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> compaction completed on region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349500641_20720,1249349575293
> in 2sec
> 2009-08-04 09:35:09,326 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting split of region webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349500641_20720,1249349575293
> 2009-08-04 09:35:09,561 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349500641_20720,1249349575293
> 2009-08-04 09:35:12,780 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349500641_20720,1249349709334/316888594
> available; sequence id is 98604
> 2009-08-04 09:35:12,781 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349500641_20720,1249349709334
> 2009-08-04 09:35:12,949 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349568563_22014,1249349709334/1414734782
> available; sequence id is 98605
> 2009-08-04 09:35:12,950 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349568563_22014,1249349709334
> 2009-08-04 09:35:12,966 INFO
> org.apache.hadoop.hbase.regionserver.CompactSplitThread: region split, META
> updated, and report to master all successful. Old region=REGION => {NAME =>
> 'webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349500641_20720,1249349575293',
> STARTKEY => 'http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349500641_20720',
> ENDKEY => '', ENCODED => 112267028, OFFLINE => true, SPLIT => true, TABLE =>
> {{NAME => 'webpage', FAMILIES => [{NAME => 'CF_CONTENT', VERSIONS => '2',
> COMPRESSION => 'NONE', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY
> => 'false', BLOCKCACHE => 'true'}, {NAME => 'CF_INFORMATION', VERSIONS =>
> '1', COMPRESSION => 'NONE', TTL => '2147483647', BLOCKSIZE => '65536',
> IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}}, new regions:
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349500641_20720,1249349709334,
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349568563_22014,1249349709334.
> Split took 3sec
> 2009-08-04 09:37:58,723 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349568563_22014,1249349965622
> 2009-08-04 09:37:58,724 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349568563_22014,1249349965622
> 2009-08-04 09:38:18,205 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349568563_22014,1249349965622/1005978655
> available; sequence id is 113675
> 2009-08-04 09:38:18,206 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting compaction on region webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349568563_22014,1249349965622
> 2009-08-04 09:38:22,211 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> compaction completed on region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349568563_22014,1249349965622
> in 4sec
> 2009-08-04 09:47:32,557 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350166325_32162,1249350537732
> 2009-08-04 09:47:32,557 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350280477_34046,1249350537732
> 2009-08-04 09:47:32,557 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350166325_32162,1249350537732
> 2009-08-04 09:47:32,694 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350166325_32162,1249350537732/671938285
> available; sequence id is 174598
> 2009-08-04 09:47:32,694 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350280477_34046,1249350537732
> 2009-08-04 09:47:32,694 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting compaction on region webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350166325_32162,1249350537732
> 2009-08-04 09:47:32,913 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350280477_34046,1249350537732/469485305
> available; sequence id is 174599
> 2009-08-04 09:47:35,243 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> compaction completed on region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350166325_32162,1249350537732
> in 2sec
> 2009-08-04 09:47:35,243 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting compaction on region webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350280477_34046,1249350537732
> 2009-08-04 09:47:46,699 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> compaction completed on region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350280477_34046,1249350537732
> in 11sec
> 2009-08-04 09:47:48,978 INFO org.apache.hadoop.hbase.regionserver.HLog:
> Roll /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249349662713,
> entries=6662, calcsize=63776160, filesize=63066462. New hlog
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249350468966
> 2009-08-04 09:47:48,978 INFO org.apache.hadoop.hbase.regionserver.HLog:
> removing old hlog file
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249349605082 whose
> highest sequence/edit id is 93207
> 2009-08-04 09:48:46,548 INFO org.apache.hadoop.hbase.regionserver.HLog:
> Roll /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249350468966,
> entries=6661, calcsize=63776160, filesize=63066278. New hlog
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249350526537
> 2009-08-04 09:48:46,548 INFO org.apache.hadoop.hbase.regionserver.HLog:
> removing old hlog file
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249349662713 whose
> highest sequence/edit id is 175865
> 2009-08-04 09:49:42,602 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting compaction on region webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350280477_34046,1249350537732
> 2009-08-04 09:49:44,186 INFO org.apache.hadoop.hbase.regionserver.HLog:
> Roll /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249350526537,
> entries=6661, calcsize=63776160, filesize=63066258. New hlog
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249350584168
> 2009-08-04 09:49:44,187 INFO org.apache.hadoop.hbase.regionserver.HLog:
> removing old hlog file
> /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249350468966 whose
> highest sequence/edit id is 182525
> 2009-08-04 09:49:48,938 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> compaction completed on region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350280477_34046,1249350537732
> in 6sec
> 2009-08-04 09:49:48,938 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting split of region webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350280477_34046,1249350537732
> 2009-08-04 09:49:49,199 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350280477_34046,1249350537732
> 2009-08-04 09:49:49,954 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350280477_34046,1249350588945/2137368991
> available; sequence id is 189657
> 2009-08-04 09:49:49,955 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350280477_34046,1249350588945
> 2009-08-04 09:49:50,115 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350402915_36232,1249350588945/904839552
> available; sequence id is 189658
> 2009-08-04 09:49:50,115 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350402915_36232,1249350588945
> 2009-08-04 09:49:50,167 INFO
> org.apache.hadoop.hbase.regionserver.CompactSplitThread: region split, META
> updated, and report to master all successful. Old region=REGION => {NAME =>
> 'webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350280477_34046,1249350537732',
> STARTKEY => 'http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350280477_34046',
> ENDKEY => '', ENCODED => 469485305, OFFLINE => true, SPLIT => true, TABLE =>
> {{NAME => 'webpage', FAMILIES => [{NAME => 'CF_CONTENT', VERSIONS => '2',
> COMPRESSION => 'NONE', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY
> => 'false', BLOCKCACHE => 'true'}, {NAME => 'CF_INFORMATION', VERSIONS =>
> '1', COMPRESSION => 'NONE', TTL => '2147483647', BLOCKSIZE => '65536',
> IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}}, new regions:
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350280477_34046,1249350588945,
> webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350402915_36232,1249350588945.
> Split took 1sec
> 2009-08-04 09:49:50,799 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350280477_34046,1249350588945
> 2009-08-04 09:49:50,799 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350280477_34046,1249350588945
> 2009-08-04 09:49:50,933 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350280477_34046,1249350588945/2137368991
> available; sequence id is 189657
> 2009-08-04 09:49:50,934 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting compaction on region webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350280477_34046,1249350588945
> 2009-08-04 09:49:55,807 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> compaction completed on region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350280477_34046,1249350588945
> in 4sec
> 2009-08-04 09:54:51,308 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350669622_41010,1249350974427
> 2009-08-04 09:54:51,309 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN:
> webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350669622_41010,1249350974427
> 2009-08-04 09:54:51,486 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350669622_41010,1249350974427/2108149192
> available; sequence id is 220779
> 2009-08-04 09:54:51,487 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting compaction on region webpage,http:\x2F\x2Fnews.163.com
> \x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350669622_41010,1249350974427
> 2009-08-04 09:54:53,885 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> compaction completed on region webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350669622_41010,1249350974427
> in 2sec
> 2009-08-04 09:56:14,418 WARN org.apache.zookeeper.ClientCnxn: Exception
> closing session 0x322e2b5f53a0000 to sun.nio.ch.SelectionKeyImpl@2345f0e3
> java.io.IOException: Read error rc = -1 java.nio.DirectByteBuffer[pos=0
> lim=4 cap=4]
>  at org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:653)
>  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:897)
> 2009-08-04 09:56:14,519 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Got ZooKeeper event,
> state: Disconnected, type: None, path: null
> 2009-08-04 09:56:15,314 INFO org.apache.zookeeper.ClientCnxn: Attempting
> connection to server ubuntu7/192.168.33.6:2222
> 2009-08-04 09:56:18,137 WARN org.apache.zookeeper.ClientCnxn: Exception
> closing session 0x22e2b69b350001 to sun.nio.ch.SelectionKeyImpl@49271218
> java.io.IOException: TIMED OUT
>  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:858)
> 2009-08-04 09:56:19,033 INFO org.apache.zookeeper.ClientCnxn: Attempting
> connection to server ubuntu9/192.168.33.5:2222
> 2009-08-04 09:56:19,033 INFO org.apache.zookeeper.ClientCnxn: Priming
> connection to java.nio.channels.SocketChannel[connected local=/
> 192.168.33.5:43793 remote=ubuntu9/192.168.33.5:2222]
> 2009-08-04 09:56:19,033 INFO org.apache.zookeeper.ClientCnxn: Server
> connection successful
> 2009-08-04 09:56:19,035 WARN org.apache.zookeeper.ClientCnxn: Exception
> closing session 0x22e2b69b350001 to sun.nio.ch.SelectionKeyImpl@cf78c3d
> java.io.IOException: Read error rc = -1 java.nio.DirectByteBuffer[pos=0
> lim=4 cap=4]
>  at org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:653)
>  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:897)
> 2009-08-04 09:56:19,036 WARN org.apache.zookeeper.ClientCnxn: Ignoring
> exception during shutdown input
> java.net.SocketException: Transport endpoint is not connected
>  at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
>  at sun.nio.ch.SocketChannelImpl.shutdownInput(SocketChannelImpl.java:640)
>  at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:360)
>  at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:951)
>  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:922)
> 2009-08-04 09:56:19,036 WARN org.apache.zookeeper.ClientCnxn: Ignoring
> exception during shutdown output
> java.net.SocketException: Transport endpoint is not connected
>  at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
>  at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:651)
>  at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>  at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:956)
>  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:922)
> 2009-08-04 09:56:19,192 INFO org.apache.zookeeper.ClientCnxn: Attempting
> connection to server ubuntu7/192.168.33.6:2222
> 2009-08-04 09:56:21,307 WARN org.apache.zookeeper.ClientCnxn: Exception
> closing session 0x322e2b5f53a0000 to sun.nio.ch.SelectionKeyImpl@3c9d926a
> java.io.IOException: TIMED OUT
>  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:858)
> 2009-08-04 09:56:21,307 WARN org.apache.zookeeper.ClientCnxn: Ignoring
> exception during shutdown output
> java.net.SocketException: Transport endpoint is not connected
>  at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
>  at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:651)
>  at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>  at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:956)
>  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:922)
> 2009-08-04 09:56:21,436 INFO org.apache.zookeeper.ClientCnxn: Attempting
> connection to server ubuntu6/192.168.33.7:2222
> 2009-08-04 09:56:25,187 WARN org.apache.zookeeper.ClientCnxn: Exception
> closing session 0x22e2b69b350001 to sun.nio.ch.SelectionKeyImpl@5017ff71
> java.io.IOException: TIMED OUT
>  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:858)
> 2009-08-04 09:56:25,187 WARN org.apache.zookeeper.ClientCnxn: Ignoring
> exception during shutdown output
> java.net.SocketException: Transport endpoint is not connected
>  at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
>  at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:651)
>  at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>  at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:956)
>  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:922)
> 2009-08-04 09:56:25,316 INFO org.apache.zookeeper.ClientCnxn: Attempting
> connection to server ubuntu6/192.168.33.7:2222
> 2009-08-04 09:56:27,437 WARN org.apache.zookeeper.ClientCnxn: Exception
> closing session 0x322e2b5f53a0000 to sun.nio.ch.SelectionKeyImpl@57837ccb
> java.io.IOException: TIMED OUT
>  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:858)
> 2009-08-04 09:56:27,438 WARN org.apache.zookeeper.ClientCnxn: Ignoring
> exception during shutdown output
> java.net.SocketException: Transport endpoint is not connected
>  at sun.nio.ch.SocketChannelImpl.shutdown(Native Method)
>  at sun.nio.ch.SocketChannelImpl.shutdownOutput(SocketChannelImpl.java:651)
>  at sun.nio.ch.SocketAdaptor.shutdownOutput(SocketAdaptor.java:368)
>  at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:956)
>  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:922)
> 2009-08-04 09:56:27,667 INFO org.apache.zookeeper.ClientCnxn: Attempting
> connection to server ubuntu3/192.168.33.8:2222
> 2009-08-04 09:56:27,667 INFO org.apache.zookeeper.ClientCnxn: Priming
> connection to java.nio.channels.SocketChannel[connected local=/
> 192.168.33.5:54934 remote=ubuntu3/192.168.33.8:2222]
> 2009-08-04 09:56:27,667 INFO org.apache.zookeeper.ClientCnxn: Server
> connection successful
> 2009-08-04 09:56:27,670 WARN org.apache.zookeeper.ClientCnxn: Exception
> closing session 0x322e2b5f53a0000 to sun.nio.ch.SelectionKeyImpl@7c3edbed
> java.io.IOException: Session Expired
>  at
> org.apache.zookeeper.ClientCnxn$SendThread.readConnectResult(ClientCnxn.java:548)
>  at org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:661)
>  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:897)
> 2009-08-04 09:56:27,671 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Got ZooKeeper event,
> state: Expired, type: None, path: null
> 2009-08-04 09:56:27,671 ERROR
> org.apache.hadoop.hbase.regionserver.HRegionServer: ZooKeeper session
> expired
> 2009-08-04 09:56:27,671 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Restarting Region Server
> 2009-08-04 09:56:27,674 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
> request=0.0, regions=8, stores=16, storefiles=16, storefileIndexSize=1,
> memstoreSize=0, usedHeap=202, maxHeap=2993, blockCacheSize=97949616,
> blockCacheFree=529898384, blockCacheCount=843, blockCacheHitRatio=5
> 2009-08-04 09:56:28,307 INFO org.apache.zookeeper.ClientCnxn: Priming
> connection to java.nio.channels.SocketChannel[connected local=/
> 192.168.33.5:47976 remote=ubuntu6/192.168.33.7:2222]
> 2009-08-04 09:56:28,307 INFO org.apache.zookeeper.ClientCnxn: Server
> connection successful
> 2009-08-04 09:56:29,527 INFO org.apache.hadoop.ipc.HBaseServer: Stopping
> server on 60020
> 2009-08-04 09:56:29,528 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 9 on 60020: exiting
> 2009-08-04 09:56:29,528 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 6 on 60020: exiting
> 2009-08-04 09:56:29,529 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 8 on 60020: exiting
> 2009-08-04 09:56:29,529 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 0 on 60020: exiting
> 2009-08-04 09:56:29,529 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 1 on 60020: exiting
> 2009-08-04 09:56:29,529 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Stopping infoServer
> 2009-08-04 09:56:29,539 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 4 on 60020: exiting
> 2009-08-04 09:56:29,541 INFO org.apache.hadoop.ipc.HBaseServer: Stopping
> IPC Server Responder
> 2009-08-04 09:56:29,542 INFO org.apache.hadoop.ipc.HBaseServer: Stopping
> IPC Server listener on 60020
> 2009-08-04 09:56:29,543 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 7 on 60020: exiting
> 2009-08-04 09:56:29,575 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 5 on 60020: exiting
> 2009-08-04 09:56:29,576 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 3 on 60020: exiting
> 2009-08-04 09:56:29,576 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server
> handler 2 on 60020: exiting
> 2009-08-04 09:56:29,681 INFO
> org.apache.hadoop.hbase.regionserver.CompactSplitThread:
> regionserver/192.168.33.5:60020.compactor exiting
> 2009-08-04 09:56:29,681 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer$MajorCompactionChecker:
> regionserver/192.168.33.5:60020.majorCompactionChecker exiting
> 2009-08-04 09:56:29,682 INFO
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher:
> regionserver/192.168.33.5:60020.cacheFlusher exiting
> 2009-08-04 09:56:29,682 INFO
> org.apache.hadoop.hbase.regionserver.LogFlusher:
> regionserver/192.168.33.5:60020.logFlusher exiting
> 2009-08-04 09:56:29,683 INFO
> org.apache.hadoop.hbase.regionserver.LogRoller: LogRoller exiting.
> 2009-08-04 09:56:29,694 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to close log in abort
> java.io.IOException: java.io.IOException: Could not complete write to file /hbase/.logs/ubuntu9,60020,1249344129415/hlog.dat.1249350584168 by DFSClient_1458021990
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:449)
>  at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>  at java.lang.reflect.Method.invoke(Method.java:597)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:396)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>  at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>  at org.apache.hadoop.hbase.RemoteExceptionHandler.decodeRemoteException(RemoteExceptionHandler.java:94)
>  at org.apache.hadoop.hbase.RemoteExceptionHandler.checkThrowable(RemoteExceptionHandler.java:48)
>  at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:631)
>  at java.lang.Thread.run(Thread.java:619)
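[Editor's note, not from the original thread: the "Could not complete write to file ... hlog.dat" IOException from the NameNode was, on 0.20-era HDFS clusters, commonly associated with DataNodes running out of transceiver threads or open-file handles under HBase's write-ahead-log load. A frequently suggested hdfs-site.xml fragment for each DataNode is shown below; the value is illustrative, and note the property name really is spelled "xcievers" in Hadoop of this vintage. Raising the file-descriptor ulimit for the user running the DataNode is usually recommended alongside it. Verify both against your Hadoop version before applying.]

```xml
<!-- hdfs-site.xml on each DataNode; illustrative value, not from this thread -->
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
```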
> 2009-08-04 09:56:29,696 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350280477_34046,1249350588945
> 2009-08-04 09:56:29,696 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348370059_1744,1249348649989
> 2009-08-04 09:56:29,696 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249348939079_11112,1249349460035
> 2009-08-04 09:56:29,697 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,,1249348520980
> 2009-08-04 09:56:29,698 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350669622_41010,1249350974427
> 2009-08-04 09:56:29,698 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349361130_18298,1249349575293
> 2009-08-04 09:56:29,698 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249349568563_22014,1249349965622
> 2009-08-04 09:56:29,699 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Closed webpage,http:\x2F\x2Fnews.163.com\x2F09\x2F0803\x2F01\x2F5FOO155J0001124J.html1249350166325_32162,1249350537732
> 2009-08-04 09:56:29,699 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: aborting server at:
> 192.168.33.5:60020
> 2009-08-04 09:56:30,501 INFO org.apache.hadoop.hbase.Leases:
> regionserver/192.168.33.5:60020.leaseChecker closing leases
> 2009-08-04 09:56:30,501 INFO org.apache.hadoop.hbase.Leases:
> regionserver/192.168.33.5:60020.leaseChecker closed leases
> 2009-08-04 09:56:31,488 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: worker thread exiting
> 2009-08-04 09:56:31,488 INFO org.apache.zookeeper.ZooKeeper: Closing
> session: 0x322e2b5f53a0000
> 2009-08-04 09:56:31,488 INFO org.apache.zookeeper.ClientCnxn: Closing
> ClientCnxn for session: 0x322e2b5f53a0000
> 2009-08-04 09:56:31,489 INFO org.apache.zookeeper.ClientCnxn: Disconnecting
> ClientCnxn for session: 0x322e2b5f53a0000
> 2009-08-04 09:56:31,489 INFO org.apache.zookeeper.ZooKeeper: Session:
> 0x322e2b5f53a0000 closed
> 2009-08-04 09:56:31,489 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: regionserver/
> 192.168.33.5:60020 exiting
> 2009-08-04 09:56:31,490 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Starting shutdown
> thread.
> 2009-08-04 09:56:31,491 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Shutdown thread complete
> 2009-08-04 09:56:31,499 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics:
> Initializing RPC Metrics with hostName=HRegionServer, port=60020
> 2009-08-04 09:56:31,500 INFO
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher:
> globalMemStoreLimit=1.2g, globalMemStoreLimitLowMark=748.5m, maxHeap=2.9g
> 2009-08-04 09:56:31,500 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Runs every 10000000ms
> 2009-08-04 09:56:31,503 INFO org.apache.zookeeper.ZooKeeper: Initiating
> client connection,
> host=ubuntu9:2222,ubuntu7:2222,ubuntu3:2222,ubuntu2:2222,ubuntu6:2222
> sessionTimeout=30000
> watcher=org.apache.hadoop.hbase.regionserver.HRegionServer@68da4b71
> 2009-08-04 09:56:31,504 INFO org.apache.zookeeper.ClientCnxn: Attempting
> connection to server ubuntu6/192.168.33.7:2222
> 2009-08-04 09:56:31,519 INFO org.apache.zookeeper.ClientCnxn: Priming
> connection to java.nio.channels.SocketChannel[connected local=/
> 192.168.33.5:47982 remote=ubuntu6/192.168.33.7:2222]
> 2009-08-04 09:56:31,519 INFO org.apache.zookeeper.ClientCnxn: Server
> connection successful
>
>     contents in hbase-site.xml:
> <configuration>
>   <property>
>     <name>hbase.rootdir</name>
>     <value>hdfs://ubuntu6:9000/hbase</value>
>     <description>The directory shared by region servers.</description>
>   </property>
>   <property>
>     <name>hbase.cluster.distributed</name>
>     <value>true</value>
>     <description>The mode the cluster will be in. Possible values are
>       false: standalone and pseudo-distributed setups with managed ZooKeeper
>       true: fully-distributed with unmanaged ZooKeeper quorum (see hbase-env.sh)
>     </description>
>   </property>
>   <property>
>     <name>hbase.zookeeper.property.clientPort</name>
>     <value>2222</value>
>     <description>Property from ZooKeeper's config zoo.cfg.
>       The port at which the clients will connect.</description>
>   </property>
>   <property>
>     <name>hbase.zookeeper.quorum</name>
>     <value>ubuntu2,ubuntu3,ubuntu7,ubuntu9,ubuntu6</value>
>     <description>Comma separated list of servers in the ZooKeeper quorum.
>       For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
>       By default this is set to localhost for local and pseudo-distributed modes
>       of operation. For a fully-distributed setup, this should be set to a full
>       list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
>       this is the list of servers which we will start/stop ZooKeeper on.
>     </description>
>   </property>
> </configuration>
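[Editor's note, not from the original thread: since the ZooKeeper session errors above are the proximate cause of the abort, it can be worth confirming that every quorum member in the config is actually reachable on the client port. The sketch below builds the connect string an HBase client would derive from hbase.zookeeper.quorum and hbase.zookeeper.property.clientPort as configured here, and probes each server with ZooKeeper's standard four-letter "ruok" command; hostnames and the port are taken from the config, everything else is illustrative.]

```python
# Sketch: derive the ZooKeeper connect string from the hbase-site.xml values
# above and health-check each quorum member with the "ruok" probe.
import socket


def zk_connect_string(quorum: str, client_port: int) -> str:
    """Join each comma-separated quorum host with the client port."""
    hosts = [h.strip() for h in quorum.split(",") if h.strip()]
    return ",".join(f"{h}:{client_port}" for h in hosts)


def zk_ruok(host: str, port: int, timeout: float = 2.0) -> bool:
    """Send ZooKeeper's 'ruok' four-letter command; a healthy server replies 'imok'."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b"ruok")
            return s.recv(4) == b"imok"
    except OSError:
        # Unreachable host, refused connection, or timeout.
        return False


connect = zk_connect_string("ubuntu2,ubuntu3,ubuntu7,ubuntu9,ubuntu6", 2222)
print(connect)  # ubuntu2:2222,ubuntu3:2222,ubuntu7:2222,ubuntu9:2222,ubuntu6:2222
```

Any member that fails the probe during the test run is a candidate explanation for the session expirations in the log.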
> Any help?
>
> Thanks a lot,
> LvZheng
>
