Date: Tue, 23 May 2017 05:08:04 +0000 (UTC)
From: "Anoop Sam John (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Commented] (HBASE-18085) Prevent parallel purge in ObjectPool

    [ https://issues.apache.org/jira/browse/HBASE-18085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020668#comment-16020668 ]

Anoop Sam John commented on HBASE-18085:
----------------------------------------

In your test method for tryLock there is no logic other than acquiring and releasing the lock. Could that body have been eliminated as dead code by the compiler? One way to avoid that is to use the return value -- see Blackhole in JMH and its usage. The difference in the numbers reported by the JMH benchmark is huge, yet the tryLock implementation involves a volatile read and so on. Such a large gap looks strange, no? That was my doubt.
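To illustrate the dead-code concern: if a benchmark result is never used, the JIT may discard the measured work entirely. A minimal sketch (hypothetical class and method names, plain JDK rather than an actual JMH harness) of keeping the tryLock/unlock pair observable by returning its result:

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockBench {
    private final ReentrantLock lock = new ReentrantLock();

    // Returning the result makes the tryLock/unlock pair observable to the
    // caller, so the JIT cannot prove the body is dead code and remove it.
    // In JMH the same effect comes from returning the value from an
    // @Benchmark method or passing it to Blackhole.consume(result).
    public boolean tryLockOnce() {
        boolean acquired = lock.tryLock();
        if (acquired) {
            lock.unlock();
        }
        return acquired;
    }
}
```

In an uncontended run tryLock always succeeds, so a benchmark built this way measures the lock's fast path rather than nothing.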
> Prevent parallel purge in ObjectPool
> ------------------------------------
>
>                 Key: HBASE-18085
>                 URL: https://issues.apache.org/jira/browse/HBASE-18085
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Yu Li
>            Assignee: Yu Li
>         Attachments: e89l05465.st3.jstack, HBASE-18085.patch
>
>
> Parallel purge in ObjectPool is meaningless and will cause a contention issue, since {{ReferenceQueue#poll}} synchronizes internally (source code shown below):
> {code}
> public Reference<? extends T> poll() {
>     if (head == null)
>         return null;
>     synchronized (lock) {
>         return reallyPoll();
>     }
> }
> {code}
> We observed threads blocking on the purge method while using the offheap bucket cache, and we could easily reproduce this by testing the 100% cache-hit case in bucket cache with enough reader threads.
> We propose to add a purgeLock and use tryLock to avoid parallel purge.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
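The proposed fix can be sketched as follows. This is a minimal illustration, not the actual HBase patch: the class and field names (`PurgeSketch`, `staleRefs`, `purgeLock`) are hypothetical stand-ins for ObjectPool's internals, showing how a tryLock guard lets one thread drain the queue while contending threads skip instead of blocking:

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.util.concurrent.locks.ReentrantLock;

public class PurgeSketch {
    // Stand-in for the pool's queue of cleared weak/soft references.
    private final ReferenceQueue<Object> staleRefs = new ReferenceQueue<>();
    // Proposed purgeLock: only the thread that wins tryLock purges.
    private final ReentrantLock purgeLock = new ReentrantLock();

    /** @return true if this thread performed the purge, false if skipped. */
    public boolean purge() {
        if (!purgeLock.tryLock()) {
            // Another thread is already purging; since poll() serializes on
            // an internal lock anyway, waiting here would add no value.
            return false;
        }
        try {
            Reference<?> ref;
            while ((ref = staleRefs.poll()) != null) {
                // Remove the pool entry backed by the cleared reference.
            }
            return true;
        } finally {
            purgeLock.unlock();
        }
    }
}
```

With this guard, under a 100% cache-hit read workload only one reader at a time pays the purge cost; the rest return immediately rather than queueing on `ReferenceQueue`'s internal monitor.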