hbase-issues mailing list archives

From "Albert Lee (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-17906) When a huge amount of data is written to HBase through thrift2, a deadlock error occurs.
Date Wed, 12 Apr 2017 15:08:41 GMT

    [ https://issues.apache.org/jira/browse/HBASE-17906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15966020#comment-15966020 ]

Albert Lee commented on HBASE-17906:
------------------------------------

I found that the cause is that the htablePools cache and the connection cache have the same timeout.
I added a refresh mechanism, and it works for me now.
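
The comment does not include the patch itself. Below is a minimal sketch of one such refresh mechanism, assuming Guava caches and a scheduled background task; the class names, value types, and timeout values are illustrative assumptions, not the actual HBase thrift2 code. The idea is to keep the table-pool timeout shorter than the connection timeout and periodically touch the cached connections so they cannot expire underneath a table pool that is still in use.

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class CacheRefreshSketch {

        // Hypothetical placeholders for the real pooled objects.
        static class PooledTables {}
        static class PooledConnection {}

        public static void main(String[] args) {
            // Table pools expire first (shorter idle timeout)...
            Cache<String, PooledTables> htablePools = CacheBuilder.newBuilder()
                    .expireAfterAccess(10, TimeUnit.MINUTES)
                    .build();

            // ...while the underlying connections are kept noticeably longer.
            Cache<String, PooledConnection> connectionCache = CacheBuilder.newBuilder()
                    .expireAfterAccess(30, TimeUnit.MINUTES)
                    .build();

            // Refresh mechanism: a background task reads each cached connection,
            // which resets its access timestamp, so a connection cannot expire at
            // the same instant as a table pool that still hands out tables backed
            // by it.
            ScheduledExecutorService refresher = Executors.newSingleThreadScheduledExecutor();
            refresher.scheduleAtFixedRate(
                    () -> htablePools.asMap().keySet().forEach(connectionCache::getIfPresent),
                    5, 5, TimeUnit.MINUTES);
        }
    }
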

> When a huge amount of data is written to HBase through thrift2, a deadlock error occurs.
> --------------------------------------------------------------------------------------------
>
>                 Key: HBASE-17906
>                 URL: https://issues.apache.org/jira/browse/HBASE-17906
>             Project: HBase
>          Issue Type: Bug
>          Components: Client
>    Affects Versions: 0.98.21
>         Environment: hadoop 2.5.2, hbase 0.98.20, jdk1.8.0_77
>            Reporter: Albert Lee
>             Fix For: 1.2.2, 0.98.21
>
>
> When a huge amount of data is written to HBase through thrift2, a deadlock error occurs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
