Subject: regionserver died when using Put to insert data
From: 李佳 <tjuhenryli@gmail.com>
To: user@hbase.apache.org
Date: Wed, 14 Aug 2013 09:33:04 +0800

Hi Devs/Users,

Recently I have been using the HBase client API to insert a large amount of data (about 77 GB) into HBase. My cluster has one HBase master and two regionservers. After the program has run for a while, one regionserver shuts down by itself. I restarted the regionservers, but the same thing happened again.
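For context, the write path in my program is just the standard HTable/Put loop, roughly like the sketch below (simplified for the list; the table name, column family, row keys, and buffer size here are placeholders, not my real code):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class InsertSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // "demo_table" and "info" are placeholder names for this sketch
        HTable table = new HTable(conf, "demo_table");
        table.setAutoFlush(false);                  // buffer puts on the client side
        table.setWriteBufferSize(8 * 1024 * 1024);  // send buffered puts roughly every 8 MB
        try {
            for (long i = 0; i < 10000000L; i++) {
                Put put = new Put(Bytes.toBytes("row-" + i));
                put.add(Bytes.toBytes("info"), Bytes.toBytes("q1"), Bytes.toBytes("value-" + i));
                table.put(put);                     // queued in the write buffer until it fills
            }
            table.flushCommits();                   // push any remaining buffered puts
        } finally {
            table.close();
        }
    }
}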
From the regionserver log:

2013-08-13 12:12:08,983 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.AggregateImplementation, com.zsmar.hbase.query.rowkey.RowKey3Endpoint]
2013-08-13 12:12:08,988 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics: requestsPerSecond=49507, numberOfOnlineRegions=2260, numberOfStores=2260, numberOfStorefiles=2368, storefileIndexSizeMB=12, rootIndexSizeKB=12786, totalStaticIndexSizeKB=466828, totalStaticBloomSizeKB=54671, memstoreSizeMB=711, mbInMemoryWithoutWAL=267, numberOfPutsWithoutWAL=2409813, readRequestsCount=65095, writeRequestsCount=299008, compactionQueueSize=92, flushQueueSize=0, usedHeapMB=1182, maxHeapMB=1991, blockCacheSizeMB=54.35, blockCacheFreeMB=443.57, blockCacheCount=5779, blockCacheHitCount=1455353, blockCacheMissCount=240305, blockCacheEvictedCount=49, blockCacheHitRatio=85%, blockCacheHitCachingRatio=99%, hdfsBlocksLocalityIndex=75, slowHLogAppendCount=0, fsReadLatencyHistogramMean=0, fsReadLatencyHistogramCount=0, fsReadLatencyHistogramMedian=0, fsReadLatencyHistogram75th=0, fsReadLatencyHistogram95th=0, fsReadLatencyHistogram99th=0, fsReadLatencyHistogram999th=0, fsPreadLatencyHistogramMean=0, fsPreadLatencyHistogramCount=0, fsPreadLatencyHistogramMedian=0, fsPreadLatencyHistogram75th=0, fsPreadLatencyHistogram95th=0, fsPreadLatencyHistogram99th=0, fsPreadLatencyHistogram999th=0, fsWriteLatencyHistogramMean=0, fsWriteLatencyHistogramCount=0, fsWriteLatencyHistogramMedian=0, fsWriteLatencyHistogram75th=0, fsWriteLatencyHistogram95th=0, fsWriteLatencyHistogram99th=0, fsWriteLatencyHistogram999th=0
2013-08-13 12:12:08,991 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Replay of HLog required. Forcing server shutdown
2013-08-13 12:12:08,991 INFO org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Excluding unflushable region lbc_zte_1_nbr_index,436238C32A97DAB59D72E810C313CF4F100230,1376366743650.cf99f5047c85e155069c3970cdaf03c6. - trying to find a different region to flush.
2013-08-13 12:12:08,991 INFO org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Flush of region lbc_zte_1_imei_index,3333332A,1376364729049.4469e6b0500bf3f5ed0ac1247d249537. due to global heap pressure
2013-08-13 12:12:08,991 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60020
2013-08-13 12:12:08,992 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 2 on 60020: exiting
2013-08-13 12:12:08,992 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60020: exiting
2013-08-13 12:12:08,992 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 1 on 60020: exiting
2013-08-13 12:12:08,992 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server listener on 60020
2013-08-13 12:12:08,992 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 6 on 60020: exiting
2013-08-13 12:12:08,992 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 2 on 60020: exiting
2013-08-13 12:12:08,992 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 0 on 60020: exiting
2013-08-13 12:12:08,992 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60020: exiting
2013-08-13 12:12:08,992 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 1 on 60020: exiting
2013-08-13 12:12:08,992 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60020: exiting
2013-08-13 12:12:08,992 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 9 on 60020: exiting
2013-08-13 12:12:08,993 INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker: Sending interrupt to stop the worker thread
2013-08-13 12:12:08,993 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 8 on 60020: exiting
2013-08-13 12:12:08,992 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 3 on 60020: exiting
2013-08-13 12:12:08,992 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60020: exiting
2013-08-13 12:12:08,992 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60020: exiting
2013-08-13 12:12:08,993 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 5 on 60020: exiting
2013-08-13 12:12:08,993 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 0 on 60020: exiting
2013-08-13 12:12:08,993 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60020: exiting
2013-08-13 12:12:08,993 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020: exiting
2013-08-13 12:12:08,993 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
2013-08-13 12:12:08,994 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
2013-08-13 12:12:08,993 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60020: exiting
2013-08-13 12:12:08,993 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Stopping infoServer
2013-08-13 12:12:08,993 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 7 on 60020: exiting
2013-08-13 12:12:08,993 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60020: exiting
2013-08-13 12:12:08,993 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60020: exiting
2013-08-13 12:12:08,993 INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker: SplitLogWorker interrupted while waiting for task, exiting: java.lang.InterruptedException
2013-08-13 12:12:08,998 INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker: SplitLogWorker phd03.hadoop.audaque.com,60020,1376364509049 exiting
2013-08-13 12:12:09,023 INFO org.mortbay.log: Stopped SelectChannelConnector@0.0.0.0:60030
2013-08-13 12:12:09,029 WARN org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call delete([B@69fba6f8, {"ts":9223372036854775807,"totalColumns":2,"families":{"info":[{"timestamp":1376367128966,"qualifier":"splitA","vlen":0},{"timestamp":1376367128966,"qualifier":"splitB","vlen":0}]},"row":"lbc_zte_1,063AE3C37783FD39EE2142BEE43576C2200901,1376366676823.9b4e8b4ce35bf541fc1e9b5b77a22b62."}), rpc version=1, client version=29, methodsFingerPrint=-56040613 from 172.16.1.91:46113: output error
2013-08-13 12:12:09,032 WARN org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 4 on 60020 caught a ClosedChannelException, this means that the server was processing a request but the client went away. The error message was: null
2013-08-13 12:12:09,032 INFO org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 4 on 60020: exiting
2013-08-13 12:12:09,038 WARN org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Failed all from region=.META.,,1.1028785192, hostname=phd03.hadoop.audaque.com, port=60020
java.util.concurrent.ExecutionException: java.io.IOException: Call to phd03.hadoop.audaque.com/172.16.1.93:60020 failed on local exception: java.io.EOFException
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1544)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1396)
    at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:918)
    at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:774)
    at org.apache.hadoop.hbase.client.HTable.put(HTable.java:749)
    at org.apache.hadoop.hbase.catalog.MetaEditor.put(MetaEditor.java:99)
    at org.apache.hadoop.hbase.catalog.MetaEditor.putToMetaTable(MetaEditor.java:66)
    at org.apache.hadoop.hbase.catalog.MetaEditor.offlineParentInMeta(MetaEditor.java:188)
    at org.apache.hadoop.hbase.regionserver.SplitTransaction.createDaughters(SplitTransaction.java:327)
    at org.apache.hadoop.hbase.regionserver.SplitTransaction.execute(SplitTransaction.java:457)
    at org.apache.hadoop.hbase.regionserver.SplitRequest.run(SplitRequest.java:67)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Call to phd03.hadoop.audaque.com/172.16.1.93:60020 failed on local exception: java.io.EOFException
    at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:1056)
    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1025)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
    at com.sun.proxy.$Proxy20.multi(Unknown Source)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1373)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1371)
    at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:210)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1380)
    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1368)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    ... 3 more
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:375)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:672)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:606)
2013-08-13 12:12:09,131 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Bloom filter type for hdfs://phd01:8020/apps/hbase/data/lbc_zte_1_imei_index/4469e6b0500bf3f5ed0ac1247d249537/.tmp/e7bb489662344b26bc6de1e72c122eec: ROW, CompoundBloomFilterWriter
2013-08-13 12:12:09,131 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://phd01:8020/apps/hbase/data/lbc_zte_1_imei_index/4469e6b0500bf3f5ed0ac1247d249537/.tmp/e7bb489662344b26bc6de1e72c122eec: CompoundBloomFilterWriter
2013-08-13 12:12:09,138 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /apps/hbase/data/lbc_zte_1_imei_index/4469e6b0500bf3f5ed0ac1247d249537/.tmp/e7bb489662344b26bc6de1e72c122eec could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1322)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2369)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:469)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:300)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:45843)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1694)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1690)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1367)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1688)
    at org.apache.hadoop.ipc.Client.call(Client.java:1164)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
    at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
    at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:288)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1150)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1003)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:463)
2013-08-13 12:12:09,140 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server phd03.hadoop.audaque.com,60020,1376364509049: Replay of HLog required. Forcing server shutdown
org.apache.hadoop.hbase.DroppedSnapshotException: region: lbc_zte_1_imei_index,3333332A,1376364729049.4469e6b0500bf3f5ed0ac1247d249537.
    at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1472)
    at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1351)
    at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1292)
    at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:406)
    at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushOneForGlobalPressure(MemStoreFlusher.java:202)
    at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.run(MemStoreFlusher.java:223)
    at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /apps/hbase/data/lbc_zte_1_imei_index/4469e6b0500bf3f5ed0ac1247d249537/.tmp/e7bb489662344b26bc6de1e72c122eec could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1322)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2369)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:469)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:300)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:45843)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1694)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1690)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1367)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1688)
    at org.apache.hadoop.ipc.Client.call(Client.java:1164)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
    at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
    at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:288)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1150)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1003)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:463)
2013-08-13 12:12:09,140 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.coprocessor.AggregateImplementation, com.zsmar.hbase.query.rowkey.RowKey3Endpoint]
2013-08-13 12:12:09,140 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics: requestsPerSecond=49507, numberOfOnlineRegions=2260, numberOfStores=2260, numberOfStorefiles=2368, storefileIndexSizeMB=12, rootIndexSizeKB=12786, totalStaticIndexSizeKB=466828, totalStaticBloomSizeKB=54671, memstoreSizeMB=711, mbInMemoryWithoutWAL=267, numberOfPutsWithoutWAL=2409813, readRequestsCount=65095, writeRequestsCount=299008, compactionQueueSize=92, flushQueueSize=0, usedHeapMB=1173, maxHeapMB=1991, blockCacheSizeMB=54.35, blockCacheFreeMB=443.57, blockCacheCount=5779, blockCacheHitCount=1455353, blockCacheMissCount=240305, blockCacheEvictedCount=49, blockCacheHitRatio=85%, blockCacheHitCachingRatio=99%, hdfsBlocksLocalityIndex=75, slowHLogAppendCount=0, fsReadLatencyHistogramMean=0, fsReadLatencyHistogramCount=0, fsReadLatencyHistogramMedian=0, fsReadLatencyHistogram75th=0, fsReadLatencyHistogram95th=0, fsReadLatencyHistogram99th=0, fsReadLatencyHistogram999th=0, fsPreadLatencyHistogramMean=0, fsPreadLatencyHistogramCount=0, fsPreadLatencyHistogramMedian=0, fsPreadLatencyHistogram75th=0, fsPreadLatencyHistogram95th=0, fsPreadLatencyHistogram99th=0, fsPreadLatencyHistogram999th=0, fsWriteLatencyHistogramMean=0, fsWriteLatencyHistogramCount=0, fsWriteLatencyHistogramMedian=0, fsWriteLatencyHistogram75th=0, fsWriteLatencyHistogram95th=0, fsWriteLatencyHistogram99th=0, fsWriteLatencyHistogram999th=0
2013-08-13 12:12:09,145 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Replay of HLog required. Forcing server shutdown
2013-08-13 12:12:09,146 INFO org.apache.hadoop.hbase.regionserver.LogRoller: LogRoller exiting.

Could someone tell me why this happens and give me some pointers? Thanks.