From: Seraph Imalia
Date: Wed, 10 Nov 2010 09:51:34 +0200
To: user@hbase.apache.org
Subject: Re: org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor

These...
cat logs/hbase-root-regionserver-dynobuntu17.log.2010-11-09 | grep xciever
cat logs/hbase-root-master-dynobuntu17.log.2010-11-09 | grep xciever
cat logs/hbase-root-master-dynobuntu17.log | grep xciever
cat logs/hbase-root-regionserver-dynobuntu17.log | grep xciever

And these (because on the link you sent it is spelled both ways)...

cat logs/hbase-root-regionserver-dynobuntu17.log.2010-11-09 | grep xceiver
cat logs/hbase-root-master-dynobuntu17.log.2010-11-09 | grep xceiver
cat logs/hbase-root-master-dynobuntu17.log | grep xceiver
cat logs/hbase-root-regionserver-dynobuntu17.log | grep xceiver

Both came back with nothing at all :(

I also scanned every log for the past 7 days, and the "Got brand-new decompressor" message has only ever happened last night. Whilst that does not seem to be an error message, it may lead us to what really caused it. Under what conditions would it "Get a new decompressor"?

Scanning the logs also revealed that "649681515:java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/192.168.2.97:50010]" started happening 2 hours before the first "Got brand-new decompressor" (about 10 SocketTimeoutExceptions every 5 minutes). The message also shows three times on the 4th Nov, once on the 5th Nov, and about 10 times on the 8th - but those were not as frequent or as dense as last night's problem.

It is also interesting to note that this happened during a time when we were only at about 40% of our normal daytime load.

Seraph

On 2010/11/10 12:25 AM, "Ryan Rawson" wrote:

> This sounds like it could be the dreaded 'xciever count' issue.
> Threads are your resources here. See:
>
> http://wiki.apache.org/hadoop/Hbase/Troubleshooting#A5
>
> Let me know if you see anything like that.
>
> On Tue, Nov 9, 2010 at 2:22 PM, Seraph Imalia wrote:
>> Hi Ryan,
>>
>> Thanks for replying so soon.
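[Editor's note: since the troubleshooting page spells the setting both ways, the eight greps above can be collapsed into one case-insensitive pattern that matches both spellings across every log in one pass. A minimal sketch, assuming the same logs/ directory as the commands above:]

```shell
# Match both historical spellings ("xciever" and "xceiver"), in any case,
# across all HBase logs in a single pass.
grep -riE 'xc(ie|ei)ver' logs/ || echo "no matches"
```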
>>
>> Whatever it was, it has stopped happening, so I am breathing normally
>> again and it is not so urgent anymore. I need to try to figure out what
>> caused this though. I get the feeling it is server resource related -
>> almost like something using the HDD or CPU heavily. atop did not show
>> anything unusual, but the one regionserver/datanode was sluggish while
>> I was debugging the problem. It has stopped being sluggish, and it seems
>> too much of a coincidence that it was sluggish at the same time hbase
>> gave those errors. Also, the mention of codec and compression in the
>> logs makes me think it is related to CPU rather than HDD. Syslog and
>> kernel logs also reveal nothing unusual. Any ideas on how to figure out
>> what happened?
>>
>> Logs in hadoop seem normal. Both datanodes are showing the following:
>>
>> 2010-11-10 00:06:48,510 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.2.97:50010, dest: /192.168.2.97:36783, bytes: 15480, op: HDFS_READ, cliID: DFSClient_1620748290, srvID: DS-1090448426-192.168.2.97-50010-1282311128239, blockid: blk_3714134476848125077_129818
>> 2010-11-10 00:06:48,621 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.2.97:50010, dest: /192.168.2.97:36784, bytes: 516, op: HDFS_READ, cliID: DFSClient_1620748290, srvID: DS-1090448426-192.168.2.97-50010-1282311128239, blockid: blk_3714134476848125077_129818
>> 2010-11-10 00:06:48,688 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.2.97:50010, dest: /192.168.2.97:36785, bytes: 516, op: HDFS_READ, cliID: DFSClient_1620748290, srvID: DS-1090448426-192.168.2.97-50010-1282311128239, blockid: blk_3714134476848125077_129818
>> 2010-11-10 00:06:48,791 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.2.97:50010, dest: /192.168.2.97:36786, bytes: 516, op: HDFS_READ, cliID: DFSClient_1620748290, srvID: DS-1090448426-192.168.2.97-50010-1282311128239, blockid: blk_3714134476848125077_129818
>> 2010-11-10 00:06:48,940 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.2.97:50010, dest: /192.168.2.97:36787, bytes: 516, op: HDFS_READ, cliID: DFSClient_1620748290, srvID: DS-1090448426-192.168.2.97-50010-1282311128239, blockid: blk_3714134476848125077_129818
>> 2010-11-10 00:06:49,039 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.2.97:50010, dest: /192.168.2.97:36788, bytes: 516, op: HDFS_READ, cliID: DFSClient_1620748290, srvID: DS-1090448426-192.168.2.97-50010-1282311128239, blockid: blk_3714134476848125077_129818
>> 2010-11-10 00:06:49,110 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.2.97:50010, dest: /192.168.2.97:36789, bytes: 516, op: HDFS_READ, cliID: DFSClient_1620748290, srvID: DS-1090448426-192.168.2.97-50010-1282311128239, blockid: blk_3714134476848125077_129818
>> 2010-11-10 00:06:49,204 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.2.97:50010, dest: /192.168.2.97:36790, bytes: 516, op: HDFS_READ, cliID: DFSClient_1620748290, srvID: DS-1090448426-192.168.2.97-50010-1282311128239, blockid: blk_3714134476848125077_129818
>> 2010-11-10 00:06:49,291 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.2.97:50010, dest: /192.168.2.97:36791, bytes: 516, op: HDFS_READ, cliID: DFSClient_1620748290, srvID: DS-1090448426-192.168.2.97-50010-1282311128239, blockid: blk_3714134476848125077_129818
>> 2010-11-10 00:06:49,375 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.2.97:50010, dest: /192.168.2.97:36792, bytes: 1548, op: HDFS_READ, cliID: DFSClient_1620748290, srvID: DS-1090448426-192.168.2.97-50010-1282311128239, blockid: blk_3714134476848125077_129818
>> 2010-11-10 00:06:49,449 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.2.97:50010, dest: /192.168.2.97:36793, bytes: 516, op: HDFS_READ, cliID: DFSClient_1620748290, srvID: DS-1090448426-192.168.2.97-50010-1282311128239, blockid: blk_3714134476848125077_129818
>> 2010-11-10 00:06:49,555 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.2.97:50010, dest: /192.168.2.97:36794, bytes: 516, op:
>>
>>
>> Namenode looks like this:
>>
>> 2010-11-10 00:03:17,947 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.2.90
>> 2010-11-10 00:05:47,774 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/192.168.2.97 cmd=listStatus src=/hbase dst=null perm=null
>> 2010-11-10 00:05:47,775 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/192.168.2.97 cmd=listStatus src=/hbase/-ROOT- dst=null perm=null
>> 2010-11-10 00:05:47,775 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/192.168.2.97 cmd=listStatus src=/hbase/.META. dst=null perm=null
>> 2010-11-10 00:05:47,776 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/192.168.2.97 cmd=listStatus src=/hbase/ChannelUIDTable dst=null perm=null
>> 2010-11-10 00:05:47,777 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/192.168.2.97 cmd=listStatus src=/hbase/UrlIndex dst=null perm=null
>> 2010-11-10 00:05:47,820 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/192.168.2.97 cmd=listStatus src=/hbase/UrlIndex-hostCount dst=null perm=null
>> 2010-11-10 00:05:47,820 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/192.168.2.97 cmd=listStatus src=/hbase/UrlIndex-indexHost dst=null perm=null
>> 2010-11-10 00:05:47,864 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/192.168.2.97 cmd=listStatus src=/hbase/UrlIndex-indexUrlUID dst=null perm=null
>> 2010-11-10 00:08:17,953 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.2.90
>> 2010-11-10 00:10:43,052 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/192.168.2.97 cmd=listStatus src=/hbase dst=null perm=null
>> 2010-11-10 00:10:43,053 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/192.168.2.97 cmd=listStatus src=/hbase/-ROOT- dst=null perm=null
>> 2010-11-10 00:10:43,054 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/192.168.2.97 cmd=listStatus src=/hbase/.META. dst=null perm=null
>> 2010-11-10 00:10:43,054 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/192.168.2.97 cmd=listStatus src=/hbase/ChannelUIDTable dst=null perm=null
>> 2010-11-10 00:10:43,056 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/192.168.2.97 cmd=listStatus src=/hbase/UrlIndex dst=null perm=null
>> 2010-11-10 00:10:43,100 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/192.168.2.97 cmd=listStatus src=/hbase/UrlIndex-hostCount dst=null perm=null
>> 2010-11-10 00:10:43,101 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/192.168.2.97 cmd=listStatus src=/hbase/UrlIndex-indexHost dst=null perm=null
>> 2010-11-10 00:10:43,143 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root,root ip=/192.168.2.97 cmd=listStatus src=/hbase/UrlIndex-indexUrlUID dst=null perm=null
>> 2010-11-10 00:13:17,960 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 192.168.2.90
>>
>> Regards,
>> Seraph
>>
>> On 2010/11/10 12:08 AM, "Ryan Rawson" wrote:
>>
>>> Looks like you are running into HDFS issues, can you check the
>>> datanode logs for errors?
>>>
>>> -ryan
>>>
>>> On Tue, Nov 9, 2010 at 2:06 PM, Seraph Imalia wrote:
>>>> Hi,
>>>>
>>>> Some more info: That same Region server just showed the following in
>>>> the logs too - hope this explains it?
>>>>
>>>> Regards,
>>>> Seraph
>>>>
>>>> 649681515:java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect.
>>>> ch : java.nio.channels.SocketChannel[connection-pending remote=/192.168.2.97:50010]
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:213)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1848)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
>>>>   at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:88)
>>>>   at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:81)
>>>>   at org.apache.hadoop.io.compress.BlockDecompressorStream.rawReadInt(BlockDecompressorStream.java:121)
>>>>   at org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:66)
>>>>   at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:74)
>>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.next(HFile.java:1159)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:58)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:79)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:236)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.nextInternal(HRegion.java:1915)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.next(HRegion.java:1879)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:2500)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:2486)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1733)
>>>>   at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
>>>>
>>>> 2010-11-10 00:03:57,903
>>>> DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Cache Stats: Sizes: Total=66.45012MB (69678000), Free=341.48737MB (358075472), Max=407.9375MB (427753472), Counts: Blocks=2147, Access=42032, Hit=39143, Miss=2889, Evictions=0, Evicted=0, Ratios: Hit Ratio=93.12666654586792%, Miss Ratio=6.8733349442481995%, Evicted/Run=NaN
>>>> 2010-11-10 00:04:57,903 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Cache Stats: Sizes: Total=69.27812MB (72643376), Free=338.65936MB (355110096), Max=407.9375MB (427753472), Counts: Blocks=2192, Access=43926, Hit=40999, Miss=2927, Evictions=0, Evicted=0, Ratios: Hit Ratio=93.33652257919312%, Miss Ratio=6.663479655981064%, Evicted/Run=NaN
>>>>
>>>> On 2010/11/09 11:59 PM, "Seraph Imalia" wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> One of our region servers keeps doing the following - it only
>>>>> started doing this 40 minutes ago. Our clients are able to get data
>>>>> from hBase, but after a short while, threads lock up and they start
>>>>> waiting indefinitely for data to be returned. What is wrong? What do
>>>>> we do? I am desperate, please help as quickly as you can.
>>>>>
>>>>> Regards,
>>>>> Seraph
>>>>>
>>>>> 2010-11-09 23:49:59,102 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:49:59,159 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:49:59,224 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:49:59,226 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:50:00,269 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:50:00,730 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:50:01,157 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:50:06,916 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:50:06,917 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:50:06,917 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:50:06,918 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:50:09,106 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:50:09,106 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:50:18,271 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:50:20,924 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:50:23,151 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:50:33,792 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:50:33,793 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:50:44,161 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:50:52,489 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:50:57,903 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Cache Stats: Sizes: Total=25.640144MB (26885640), Free=382.29736MB (400867832), Max=407.9375MB (427753472), Counts: Blocks=1493, Access=31181, Hit=28954, Miss=2227, Evictions=0, Evicted=0, Ratios: Hit Ratio=92.85783171653748%, Miss Ratio=7.142169773578644%, Evicted/Run=NaN
>>>>> 2010-11-09 23:50:57,996 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:51:31,922 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:51:31,923 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:51:31,924 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:51:57,903 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Cache Stats: Sizes: Total=28.028427MB (29389936), Free=379.90906MB (398363536), Max=407.9375MB (427753472), Counts: Blocks=1531, Access=31277, Hit=29008, Miss=2269, Evictions=0, Evicted=0, Ratios: Hit Ratio=92.74546504020691%, Miss Ratio=7.254531979560852%, Evicted/Run=NaN
>>>>> 2010-11-09 23:52:57,903 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Cache Stats: Sizes: Total=31.233871MB (32751088), Free=376.7036MB (395002384), Max=407.9375MB (427753472), Counts: Blocks=1582, Access=31483, Hit=29168, Miss=2315, Evictions=0, Evicted=0, Ratios: Hit Ratio=92.64682531356812%, Miss Ratio=7.353174686431885%, Evicted/Run=NaN
>>>>> 2010-11-09 23:53:57,903 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Cache Stats: Sizes: Total=34.532898MB (36210368), Free=373.4046MB (391543104), Max=407.9375MB (427753472), Counts: Blocks=1635, Access=31612, Hit=29246, Miss=2366, Evictions=0, Evicted=0, Ratios: Hit Ratio=92.5154983997345%, Miss Ratio=7.484499365091324%, Evicted/Run=NaN
>>>>> 2010-11-09 23:54:21,831 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:54:57,903 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Cache Stats: Sizes: Total=37.375MB (39190528), Free=370.5625MB (388562944), Max=407.9375MB (427753472), Counts: Blocks=1681, Access=31761, Hit=29344, Miss=2417, Evictions=0, Evicted=0, Ratios: Hit Ratio=92.39003658294678%, Miss Ratio=7.609961926937103%, Evicted/Run=NaN
>>>>> 2010-11-09 23:55:45,289 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:55:45,289 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:55:48,079 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
>>>>> 2010-11-09 23:55:57,903 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Cache Stats: Sizes: Total=40.266388MB (42222368), Free=367.6711MB (385531104), Max=407.9375MB (427753472), Counts: Blocks=1728, Access=33834, Hit=31364, Miss=2470, Evictions=0, Evicted=0, Ratios: Hit Ratio=92.69965291023254%, Miss Ratio=7.300348579883575%, Evicted/Run=NaN
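[Editor's note: for readers hitting the same symptoms, the 'xciever count' fix behind the troubleshooting link Ryan posted is typically raising dfs.datanode.max.xcievers in hdfs-site.xml on each datanode and then restarting the datanodes. A sketch follows; the value 4096 is an illustrative starting point from that era's common advice, not a figure tuned for this cluster:]

```
<!-- hdfs-site.xml on each datanode; the property name uses the
     historical misspelling "xcievers". Restart datanodes after
     changing it. -->
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
```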