hbase-user mailing list archives

From 冯宏华 <fenghong...@xiaomi.com>
Subject Re: Re: RegionTooBusyException: Above memstore limit
Date Wed, 26 Feb 2014 11:01:26 GMT
0.94 doesn't throw RegionTooBusyException when the memstore exceeds the blocking memstore size; it blocks the write inside the regionserver instead, which is why you get a SocketTimeoutException on the client side. Nicolas explained this in an earlier mail in this thread.
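
If you just need the 0.94 job to survive while you fix the write pattern, one stopgap is to raise the client-side RPC timeout above the 60000 ms you see in the trace, so the put can wait out the blocked memstore instead of failing. A rough, untested sketch (the timeout value and table name are only illustrative):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

public class PatientClient {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        // The 60000 ms in the trace matches the default hbase.rpc.timeout.
        // Raising it does not remove the write pressure; it only gives the
        // regionserver more time to flush before the client gives up.
        conf.setInt("hbase.rpc.timeout", 180000);
        HTable table = new HTable(conf, "test-table");
        // ... issue the same puts as before ...
        table.close();
    }
}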

Maybe you can try some of the actions suggested in the earlier mails, such as splitting the table into more regions to balance the write pressure or randomizing the row key to eliminate the hotspot; a rough sketch of both follows below.
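
For example, a salted row key plus a table pre-split on the salt prefixes spreads sequential ids (doc-id-1, doc-id-2, ...) across regions instead of hammering one. This is only an untested sketch against the 0.94-era client API; the table name, column family, and bucket count are all illustrative:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class SaltedTableSetup {
    // Number of salt prefixes and therefore of initial regions (illustrative).
    static final int BUCKETS = 16;

    // Prefix the natural key with a stable hash bucket so consecutive ids
    // land in different regions.
    static byte[] saltedKey(String docId) {
        int bucket = (docId.hashCode() & Integer.MAX_VALUE) % BUCKETS;
        return Bytes.toBytes(String.format("%02d-%s", bucket, docId));
    }

    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);

        // Pre-split on the salt prefixes so the write load is balanced from
        // the start rather than after splits eventually catch up.
        byte[][] splits = new byte[BUCKETS - 1][];
        for (int i = 1; i < BUCKETS; i++) {
            splits[i - 1] = Bytes.toBytes(String.format("%02d-", i));
        }
        HTableDescriptor desc = new HTableDescriptor("test-table-salted");
        desc.addFamily(new HColumnDescriptor("cf"));
        admin.createTable(desc, splits);
        admin.close();
    }
}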

How many regions does your table have? Do all regions encounter such a RegionTooBusyException (in 0.96+) or SocketTimeoutException (in 0.94)? The sketch below is one way to check.
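
Assuming HBaseAdmin.getTableRegions is available in your client version, something like this (table name again illustrative) prints the regions and their key ranges, which helps spot a single hot region:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class RegionCount {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        // List each region of the table with its start and end key.
        for (HRegionInfo region : admin.getTableRegions(Bytes.toBytes("test-table"))) {
            System.out.println(region.getRegionNameAsString()
                + " start=" + Bytes.toStringBinary(region.getStartKey())
                + " end=" + Bytes.toStringBinary(region.getEndKey()));
        }
        admin.close();
    }
}
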
________________________________________
From: shapoor [esmaili_607@yahoo.com]
Sent: 26 February 2014 18:30
To: user@hbase.apache.org
Subject: Re: Re: RegionTooBusyException: Above memstore limit

This is what I get from HBase 0.94 when running the same task that led to
org.apache.hadoop.hbase.RegionTooBusyException in HBase 0.96.1.1-hadoop2.
Sometimes I get the feeling that I might not be using HBase's full capacity
because of features I have left unconfigured.
What could solve this issue?

WARN client.HConnectionManager$HConnectionImplementation: Failed all from region=test-table,doc-id-55157,1393408719943.2c75f461955aa1a1bd319177fa82b1fa., hostname=kcs-testhadoop01, port=60020
java.util.concurrent.ExecutionException: java.net.SocketTimeoutException: Call to kcs-testhadoop01/192.168.111.210:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.111.210:37947 remote=kcs-testhadoop01/192.168.111.210:60020]
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:188)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1598)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1450)
        at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
        at org.apache.hadoop.hbase.client.HTable.put(HTable.java:750)
        at at.myPackage.backends.HbaseStorage.putDocument(HbaseStorage.java:259)
        at at.myPackage.evaluationTool.Evaluate.save(Evaluate.java:185)
        at at.myPackage.evaluationTool.Evaluate.performSaveEvaluation(Evaluate.java:136)
        at at.myPackage.evaluationTool.Evaluate.evaluate(Evaluate.java:73)
        at at.myPackage.evaluationTool.EvaluationTool.executeEvaluation(EvaluationTool.java:127)
        at at.myPackage.evaluationTool.EvaluationTool.run(EvaluationTool.java:160)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.SocketTimeoutException: Call to kcs-testhadoop01/192.168.111.210:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.111.210:37947 remote=kcs-testhadoop01/192.168.111.210:60020]
        at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:1026)
        at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:999)
        at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
        at com.sun.proxy.$Proxy20.multi(Unknown Source)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1427)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1425)
        at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:215)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1434)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1422)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        ... 3 more
Caused by: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.111.210:37947 remote=kcs-testhadoop01/192.168.111.210:60020]
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
        at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:373)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
        at java.io.DataInputStream.readInt(DataInputStream.java:387)
        at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:646)

thx,



--
View this message in context: http://apache-hbase.679495.n3.nabble.com/RegionTooBusyException-Above-memstore-limit-tp4056339p4056398.html
Sent from the HBase User mailing list archive at Nabble.com.