hbase-user mailing list archives

From Yi Liang <white...@gmail.com>
Subject Re: Problem connecting to region server
Date Thu, 01 Mar 2012 05:12:28 GMT
Thanks J-D.

The thread holding the lock:
"IPC Reader 0 on port 60020" prio=10 tid=0x00007f983c1aa800 nid=0x1ae9
waiting on condition [0x00007f983a915000]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for  <0x000000041d9f51f0> (a
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
        at
java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:306)
        at
org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:985)
        at
org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:946)
        at
org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:522)
        at
org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:316)
        - locked <0x000000041d964510> (a
org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)

I also put the whole dump here:  http://pastebin.com/f9BcrXUP
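For context, the pattern in these two dumps can be reproduced in miniature: the Reader parks inside LinkedBlockingQueue.put() on a full call queue while still holding its own lock, so any thread that needs that lock (the Listener calling registerChannel) blocks indefinitely. A minimal sketch, with the Reader's monitor modeled as a ReentrantLock so the stalled acquisition can time out; the names and the queue bound here are illustrative, not HBase's actual classes or configuration:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class ReaderStallDemo {
    // Returns whether a "listener" thread could take the reader's lock while
    // the "reader" is parked in put() on a full queue (it cannot).
    static boolean listenerCanAcquire() throws InterruptedException {
        BlockingQueue<String> callQueue = new LinkedBlockingQueue<>(1); // bounded call queue
        callQueue.put("pending-call"); // handlers are not draining it, so it is already full

        ReentrantLock readerLock = new ReentrantLock(); // stands in for the Reader's monitor

        Thread reader = new Thread(() -> {
            readerLock.lock(); // Reader.run() holds its lock while processing data
            try {
                callQueue.put("new-call"); // parks here: WAITING in LinkedBlockingQueue.put
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                readerLock.unlock();
            }
        }, "IPC Reader 0");
        reader.start();
        Thread.sleep(200); // let the reader park on the full queue

        // Listener.doAccept() -> registerChannel() needs the same lock: it blocks
        boolean acquired = readerLock.tryLock(300, TimeUnit.MILLISECONDS);
        if (acquired) readerLock.unlock();

        reader.interrupt(); // unblock put() so the demo terminates
        reader.join();
        return acquired;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("listener acquired reader lock: " + listenerCanAcquire()); // prints false
    }
}
```

This only shows why the Listener's BLOCKED state in the second dump follows from the Reader's WAITING state in the first; the underlying question is still why the handlers stopped draining the queue.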

About the socket timeout exceptions in the RS log: we have actually seen them
before, sometimes likely caused by datanode block reports, but they had never
caused the region server to stop responding. I will have a look at the datanode
log to double-check. What does "maxing your disks" mean here?

Thanks,
Yi
On Thu, Mar 1, 2012 at 3:42 AM, Jean-Daniel Cryans <jdcryans@apache.org> wrote:

> There's a lot going on in there, and since I don't know whether your
> selection of thread dumps/logs is the right one, my suggestions might
> be wrong.
>
> So in that thread dump the Listener thread is blocked on
> 0x000000041d964510; have you searched for which thread holds it?
>
> Most of the time (almost 100% in my experience), getting the socket
> timeout client-side means you need to look at the "IPC Server handler"
> threads in the dump since this is where the client queries are
> processed.
>
> Regarding your log, it's getting socket timeouts from the
> datanode side. Were you maxing out your disks? What was going on there?
>
> Hope this helps,
>
> J-D
>
> On Tue, Feb 28, 2012 at 10:04 PM, Yi Liang <whitesky@gmail.com> wrote:
> > We're running hbase 0.90.3 with hadoop cdh3u2. Today, we ran into a
> problem
> > connecting to one region server.
> >
> > When running hbase hbck, the following error appeared:
> > Number of Tables: 16
> > Number of live region servers: 20
> > Number of dead region servers: 0
> > .12/02/29 13:06:58 INFO ipc.HbaseRPC: Problem connecting to server: /
> > 192.168.201.13:60020
> > ERROR: RegionServer: test13.xxx.com,60020,1327993969023 Unable to fetch
> > region information. java.net.SocketTimeoutException: Call to /
> > 192.168.201.13:60020 failed on socket timeout exception:
> > java.net.SocketTimeoutException: 60000 millis timeout while waiting for
> > channel to be ready for read. ch :
> > java.nio.channels.SocketChannel[connected local=/192.168.201.13:44956
> remote=/
> > 192.168.201.13:60020]
> >
> > and the final status was INCONSISTENT. We had to kill the RS to recover.
> >
> > From the jstack output of that regionserver process, we saw that the thread
> > "IPC Server listener on 60020" was blocked. We took several dumps over
> > several minutes, but its state stayed BLOCKED:
> >
> > "IPC Server listener on 60020" daemon prio=10 tid=0x00007f983c57a800
> > nid=0x1b12 waiting for monitor entry [0x00007f98388f4000]
> >   java.lang.Thread.State: BLOCKED (on object monitor)
> >        at
> >
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.registerChannel(HBaseServer.java:347)
> >        - waiting to lock <0x000000041d964510> (a
> > org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader)
> >        at
> >
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doAccept(HBaseServer.java:496)
> >        at
> >
> org.apache.hadoop.hbase.ipc.HBaseServer$Listener.run(HBaseServer.java:422)
> >
> > Could this have caused the problem connecting to the server? And why did it
> > stay BLOCKED?
> >
> > Following is the RS log between the problem appeared and we killed the
> > process.
> >
> > 2012-02-29 12:06:12,117 INFO org.apache.hadoop.hbase.regionserver.Store:
> > Started compaction of 3 file(s) in cf=IndexInfo  into hdfs://
> >
> test02.xxx.com:30070/offline-hbase/News/4dae1f8cd991f17414ca4d86ff0884ad/.tmp
> ,
> > seqid=423340578, totalSize=8.5m
> > 2012-02-29 12:06:12,118 DEBUG org.apache.hadoop.hbase.regionserver.Store:
> > Compacting hdfs://
> >
> test02.xxx.com:30070/offline-hbase/News/4dae1f8cd991f17414ca4d86ff0884ad/IndexInfo/8324806988914852495
> ,
> > keycount=122337, bloomtype=NONE, size=8.4m
> > 2012-02-29 12:06:12,118 DEBUG org.apache.hadoop.hbase.regionserver.Store:
> > Compacting hdfs://
> >
> test02.xxx.com:30070/offline-hbase/News/4dae1f8cd991f17414ca4d86ff0884ad/IndexInfo/1116030618027381242
> ,
> > keycount=258, bloomtype=NONE, size=17.7k
> > 2012-02-29 12:06:12,118 DEBUG org.apache.hadoop.hbase.regionserver.Store:
> > Compacting hdfs://
> >
> test02.xxx.com:30070/offline-hbase/News/4dae1f8cd991f17414ca4d86ff0884ad/IndexInfo/3755533953967637627
> ,
> > keycount=372, bloomtype=NONE, size=25.8k
> > 2012-02-29 12:06:12,906 INFO org.apache.hadoop.hbase.regionserver.Store:
> > Completed major compaction of 3 file(s), new file=hdfs://
> >
> test02.xxx.com:30070/offline-hbase/News/4dae1f8cd991f17414ca4d86ff0884ad/IndexInfo/3731399222200436246
> ,
> > size=8.5m; total size for store is 8.5m
> > 2012-02-29 12:06:12,906 INFO
> org.apache.hadoop.hbase.regionserver.HRegion:
> > completed compaction on region
> > News,57addda034c334e4,1313319088489.4dae1f8cd991f17414ca4d86ff0884ad.
> after
> > 7sec
> > 2012-02-29 12:07:26,577 INFO org.apache.hadoop.hdfs.DFSClient: Could not
> > obtain block blk_1313036207534951503_65938873 from any node:
> > java.io.IOException: No live nodes contain current block. Will get new
> > block locations from namenode and retry...
> > 2012-02-29 12:07:50,103 DEBUG
> > org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=2.49 GB,
> > free=646.64 MB, max=3.12 GB, blocks=29616, accesses=80631725,
> > hits=60715195, hitRatio=75.29%%, cachingAccesses=72673671,
> > cachingHits=59497193, cachingHitsRatio=81.86%%, evictions=3584,
> > evicted=13146860, evictedPerRun=3668.208740234375
> > 2012-02-29 12:12:50,103 DEBUG
> > org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=2.62 GB,
> > free=518.67 MB, max=3.12 GB, blocks=30943, accesses=80640574,
> > hits=60722719, hitRatio=75.30%%, cachingAccesses=72682520,
> > cachingHits=59504717, cachingHitsRatio=81.86%%, evictions=3584,
> > evicted=13146860, evictedPerRun=3668.208740234375
> > 2012-02-29 12:15:06,937 DEBUG
> > org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction
> > started; Attempting to free 319.74 MB of total=2.65 GB
> > 2012-02-29 12:15:06,955 DEBUG
> > org.apache.hadoop.hbase.io.hfile.LruBlockCache: Block cache LRU eviction
> > completed; freed=319.87 MB, total=2.34 GB, single=744.45 MB, multi=1.9
> GB,
> > memory=0 KB
> > 2012-02-29 12:17:50,103 DEBUG
> > org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=2.45 GB,
> > free=692.03 MB, max=3.12 GB, blocks=28911, accesses=80652333,
> > hits=60732703, hitRatio=75.30%%, cachingAccesses=72694279,
> > cachingHits=59514701, cachingHitsRatio=81.86%%, evictions=3585,
> > evicted=13150645, evictedPerRun=3668.2412109375
> > 2012-02-29 12:18:52,867 WARN org.apache.hadoop.hdfs.DFSClient: Failed to
> > connect to /192.168.201.23:50010 for file
> >
> /offline-hbase/News/bec970594146b62ddf8bd450fc654acf/Content/621772312284239615
> > for block -1994034029269165490:java.net.SocketTimeoutException: 60000
> > millis timeout while waiting for channel to be ready for read. ch :
> > java.nio.channels.SocketChannel[connected local=/192.168.201.13:48546
> remote=/
> > 192.168.201.23:50010]
> >        at
> >
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
> >        at
> > org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
> >        at
> > org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
> >        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
> >        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
> >        at java.io.DataInputStream.readShort(DataInputStream.java:295)
> >        at
> >
> org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1462)
> >        at
> >
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2024)
> >        at
> > org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2099)
> >        at
> > org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
> >        at
> >
> org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
> >        at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
> >        at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
> >        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:102)
> >        at
> > org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1094)
> >        at
> > org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:1036)
> >        at
> >
> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1442)
> >        at
> >
> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1299)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:136)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:96)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:77)
> >        at
> > org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1345)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.<init>(HRegion.java:2274)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateInternalScanner(HRegion.java:1131)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1123)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1107)
> >        at
> > org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:2996)
> >        at
> > org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:2898)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1630)
> >        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
> >        at
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >        at java.lang.reflect.Method.invoke(Method.java:597)
> >        at
> > org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:570)
> >        at
> >
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1039)
> > 2012-02-29 12:18:55,395 WARN org.apache.hadoop.hdfs.DFSClient: Failed to
> > connect to /192.168.201.23:50010 for file
> >
> /offline-hbase/News/a959f6488cb5f8a13c5e63e0a149b18b/IndexInfo/5814475919152417643
> > for block -4812950919171511907:java.net.SocketTimeoutException: 60000
> > millis timeout while waiting for channel to be ready for read. ch :
> > java.nio.channels.SocketChannel[connected local=/192.168.201.13:48597
> remote=/
> > 192.168.201.23:50010]
> >        at
> >
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
> >        at
> > org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
> >        at
> > org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
> >        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
> >        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
> >        at java.io.DataInputStream.readShort(DataInputStream.java:295)
> >        at
> >
> org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1462)
> >        at
> >
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2024)
> >        at
> > org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2099)
> >        at
> > org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
> >        at
> >
> org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
> >        at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
> >        at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
> >        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:102)
> >        at
> > org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1094)
> >        at
> > org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:1036)
> >        at
> >
> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1442)
> >        at
> >
> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1299)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:136)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:96)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:77)
> >        at
> > org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1345)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.<init>(HRegion.java:2274)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateInternalScanner(HRegion.java:1131)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1123)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1107)
> >        at
> > org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:2996)
> >        at
> > org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:2898)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1630)
> >        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
> >        at
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >        at java.lang.reflect.Method.invoke(Method.java:597)
> >        at
> > org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:570)
> >        at
> >
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1039)
> > 2012-02-29 12:18:55,395 INFO org.apache.hadoop.hdfs.DFSClient: Could not
> > obtain block blk_-4812950919171511907_69296565 from any node:
> > java.io.IOException: No live nodes contain current block. Will get new
> > block locations from namenode and retry...
> > 2012-02-29 12:18:56,459 WARN org.apache.hadoop.hdfs.DFSClient:
> > DFSOutputStream ResponseProcessor exception  for block
> > blk_-8859664738058583740_69352641java.io.IOException: Bad response 1 for
> > block blk_-8859664738058583740_69352641 from datanode
> 192.168.201.23:50010
> >        at
> >
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2651)
> >
> > 2012-02-29 12:18:56,460 WARN org.apache.hadoop.hdfs.DFSClient: Error
> > Recovery for block blk_-8859664738058583740_69352641 bad datanode[1]
> > 192.168.201.23:50010
> > 2012-02-29 12:18:56,460 WARN org.apache.hadoop.hdfs.DFSClient: Error
> > Recovery for block blk_-8859664738058583740_69352641 in pipeline
> > 192.168.201.13:50010, 192.168.201.23:50010, 192.168.201.15:50010: bad
> > datanode 192.168.201.23:50010
> > 2012-02-29 12:18:56,460 INFO org.apache.hadoop.ipc.Client: Retrying
> connect
> > to server: /192.168.201.13:50020. Already tried 0 time(s).
> > 2012-02-29 12:19:01,111 WARN org.apache.hadoop.hdfs.DFSClient: Failed to
> > connect to /192.168.201.23:50010 for file
> >
> /offline-hbase/News/1ebe9ad2bbad7c8e584bce4cc22f8278/BasicInfo/7333739767200146616
> > for block 7663668744337108616:java.net.SocketTimeoutException: 60000
> millis
> > timeout while waiting for channel to be ready for read. ch :
> > java.nio.channels.SocketChannel[connected local=/192.168.201.13:48682
> remote=/
> > 192.168.201.23:50010]
> >        at
> >
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
> >        at
> > org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
> >        at
> > org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
> >        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
> >        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
> >        at java.io.DataInputStream.readShort(DataInputStream.java:295)
> >        at
> >
> org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1462)
> >        at
> >
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2024)
> >        at
> > org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2099)
> >        at
> > org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
> >        at
> >
> org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
> >        at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
> >        at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
> >        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:102)
> >        at
> > org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1094)
> >        at
> > org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:1036)
> >        at
> >
> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1442)
> >        at
> >
> org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1299)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:136)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:96)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:77)
> >        at
> > org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1345)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.<init>(HRegion.java:2274)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateInternalScanner(HRegion.java:1131)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1123)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1107)
> >        at
> > org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:2996)
> >        at
> > org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:2898)
> >        at
> >
> org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1630)
> >        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
> >        at
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >        at java.lang.reflect.Method.invoke(Method.java:597)
> >        at
> > org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:570)
> >        at
> >
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1039)
> > 2012-02-29 12:19:02,772 WARN org.apache.hadoop.ipc.HBaseServer: IPC
> Server
> > Responder, call get([B@665bd4b4, row=64d3ef8647252a85, maxVersions=1,
> > cacheBlocks=true, timeRange=[0,9223372036854775807),
> > families={(family=BasicInfo, columns={EntryPage, HostName, SID, URL}),
> > (family=Content, columns={ArchItem, ContentGroup, HTTPBody, TagInfo}})
> from
> > 192.168.201.27:32866: output error
> > 2012-02-29 12:19:02,772 WARN org.apache.hadoop.ipc.HBaseServer: IPC
> Server
> > handler 58 on 60020 caught: java.nio.channels.ClosedChannelException
> >        at
> > sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
> >        at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
> >        at
> > org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1389)
> >        at
> >
> org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1341)
> >        at
> >
> org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:727)
> >        at
> >
> org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:792)
> >        at
> >
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1083)
> >
> > 2012-02-29 12:22:50,103 DEBUG
> > org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=2.46 GB,
> > free=678.89 MB, max=3.12 GB, blocks=29075, accesses=80653112,
> > hits=60733340, hitRatio=75.30%%, cachingAccesses=72695058,
> > cachingHits=59515338, cachingHitsRatio=81.86%%, evictions=3585,
> > evicted=13150645, evictedPerRun=3668.2412109375
> > 2012-02-29 12:27:50,103 DEBUG
> > org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=2.46 GB,
> > free=678.89 MB, max=3.12 GB, blocks=29075, accesses=80653112,
> > hits=60733340, hitRatio=75.30%%, cachingAccesses=72695058,
> > cachingHits=59515338, cachingHitsRatio=81.86%%, evictions=3585,
> > evicted=13150645, evictedPerRun=3668.2412109375
> > 2012-02-29 12:32:50,103 DEBUG
> > org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=2.46 GB,
> > free=678.89 MB, max=3.12 GB, blocks=29075, accesses=80653112,
> > hits=60733340, hitRatio=75.30%%, cachingAccesses=72695058,
> > cachingHits=59515338, cachingHitsRatio=81.86%%, evictions=3585,
> > evicted=13150645, evictedPerRun=3668.2412109375
> > 2012-02-29 12:37:50,103 DEBUG
> > org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=2.46 GB,
> > free=678.89 MB, max=3.12 GB, blocks=29075, accesses=80653112,
> > hits=60733340, hitRatio=75.30%%, cachingAccesses=72695058,
> > cachingHits=59515338, cachingHitsRatio=81.86%%, evictions=3585,
> > evicted=13150645, evictedPerRun=3668.2412109375
> > 2012-02-29 12:42:50,103 DEBUG
> > org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=2.46 GB,
> > free=678.89 MB, max=3.12 GB, blocks=29075, accesses=80653112,
> > hits=60733340, hitRatio=75.30%%, cachingAccesses=72695058,
> > cachingHits=59515338, cachingHitsRatio=81.86%%, evictions=3585,
> > evicted=13150645, evictedPerRun=3668.2412109375
> > 2012-02-29 12:47:50,103 DEBUG
> > org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=2.46 GB,
> > free=678.89 MB, max=3.12 GB, blocks=29075, accesses=80653112,
> > hits=60733340, hitRatio=75.30%%, cachingAccesses=72695058,
> > cachingHits=59515338, cachingHitsRatio=81.86%%, evictions=3585,
> > evicted=13150645, evictedPerRun=3668.2412109375
> > 2012-02-29 12:52:50,103 DEBUG
> > org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=2.46 GB,
> > free=678.89 MB, max=3.12 GB, blocks=29075, accesses=80653112,
> > hits=60733340, hitRatio=75.30%%, cachingAccesses=72695058,
> > cachingHits=59515338, cachingHitsRatio=81.86%%, evictions=3585,
> > evicted=13150645, evictedPerRun=3668.2412109375
> > 2012-02-29 12:57:50,103 DEBUG
> > org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=2.46 GB,
> > free=678.89 MB, max=3.12 GB, blocks=29075, accesses=80653112,
> > hits=60733340, hitRatio=75.30%%, cachingAccesses=72695058,
> > cachingHits=59515338, cachingHitsRatio=81.86%%, evictions=3585,
> > evicted=13150645, evictedPerRun=3668.2412109375
> > 2012-02-29 13:05:35,372 DEBUG
> > org.apache.hadoop.hbase.regionserver.LogRoller: Hlog roll period
> 3600000ms
> > elapsed
> > 2012-02-29 13:05:35,377 INFO
> > org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter: Using
> > syncFs -- HDFS-200
> > 2012-02-29 13:07:50,103 DEBUG
> > org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=2.46 GB,
> > free=678.89 MB, max=3.12 GB, blocks=29075, accesses=80653112,
> > hits=60733340, hitRatio=75.30%%, cachingAccesses=72695058,
> > cachingHits=59515338, cachingHitsRatio=81.86%%, evictions=3585,
> > evicted=13150645, evictedPerRun=3668.2412109375
> > 2012-02-29 13:12:50,103 DEBUG
> > org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRU Stats: total=2.46 GB,
> > free=678.89 MB, max=3.12 GB, blocks=29075, accesses=80653112,
> > hits=60733340, hitRatio=75.30%%, cachingAccesses=72695058,
> > cachingHits=59515338, cachingHitsRatio=81.86%%, evictions=3585,
> > evicted=13150645, evictedPerRun=3668.2412109375
> > Wed Feb 29 13:14:07 CST 2012 Killing regionserver
> >
> > Thanks,
> > Yi
>
