From: "Anastasia Braginsky (JIRA)"
To: issues@hbase.apache.org
Date: Sat, 2 Jul 2016 20:58:11 +0000 (UTC)
Subject: [jira] [Commented] (HBASE-16162) Compacting Memstore : unnecessary push of active segments to pipeline

[ https://issues.apache.org/jira/browse/HBASE-16162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15360328#comment-15360328 ]

Anastasia Braginsky commented on HBASE-16162:
---------------------------------------------

A couple of comments about the last patch:

1. In the flushInMemory() method.
   After:

   {code}
   // setting the inMemoryFlushInProgress flag again for the case this method is invoked
   // directly (only in tests) in the common path setting from true to true is idempotent
   {code}

   please add:

   {code}
   inMemoryFlushInProgress.set(true);
   {code}

   This is required for calling flushInMemory() from the tests.

2. If we set inMemoryFlushInProgress when the compaction thread starts, we should reset it once this thread is about to finish. More than that, the inMemoryFlushInProgress flag now covers not only the compaction, but also pushing the active segment to the pipeline. What I want to say is that we should remove the responsibility for resetting the inMemoryFlushInProgress flag from MemStoreCompactor and move it to the end of flushInMemory(), i.e.:

{code}
@VisibleForTesting
void flushInMemory() throws IOException {
  // Phase I: Update the pipeline
  ...
  // Phase II: Compact the pipeline
  try {
    if (allowCompaction.get()) {
      ...
    }
  } catch (IOException e) {
    ...
  } finally {  // <<<< Please add this finally block for resetting the flag
    stopCompaction();
  }
}
{code}

AND in MemStoreCompactor, in the releaseResources() method, please remove the resetting. What do you say?
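The pattern proposed above can be sketched as a small standalone toy. This is not the actual HBase code: the class name, the failure-injection parameter, and the counter are illustrative only. The point is that with the reset in a finally block at the end of flushInMemory(), the flag is cleared on every exit path, even when the compaction phase throws:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal sketch (hypothetical, not the real CompactingMemStore): one flag
// guards both the pipeline push and the compaction, and the reset lives in a
// finally block at the end of flushInMemory() rather than inside the
// compactor's releaseResources().
class InMemoryFlushSketch {
    final AtomicBoolean inMemoryFlushInProgress = new AtomicBoolean(false);
    int compactions = 0;

    void flushInMemory(boolean failCompaction) {
        // Setting from true to true is idempotent, so this is safe when the
        // method is invoked directly (e.g. from tests).
        inMemoryFlushInProgress.set(true);
        try {
            // Phase I: push the active segment to the pipeline (elided)
            // Phase II: compact the pipeline
            if (failCompaction) {
                throw new RuntimeException("compaction interrupted");
            }
            compactions++;
        } catch (RuntimeException e) {
            // the real code would log and continue
        } finally {
            // Reset happens here on every exit path, so a failed compaction
            // can never leave the memstore stuck "in flush".
            inMemoryFlushInProgress.set(false);
        }
    }
}
```

Whether the compaction succeeds or is interrupted, the flag ends up false, which is exactly why moving the reset out of releaseResources() removes a class of stuck-flag bugs.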
> Compacting Memstore : unnecessary push of active segments to pipeline
> ---------------------------------------------------------------------
>
>                 Key: HBASE-16162
>                 URL: https://issues.apache.org/jira/browse/HBASE-16162
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Anoop Sam John
>            Assignee: Anoop Sam John
>            Priority: Critical
>         Attachments: HBASE-16162.patch, HBASE-16162_V2.patch, HBASE-16162_V3.patch
>
>
> We have a flow like this:
> {code}
> protected void checkActiveSize() {
>   if (shouldFlushInMemory()) {
>     InMemoryFlushRunnable runnable = new InMemoryFlushRunnable();
>     getPool().execute(runnable);
>   }
> }
>
> private boolean shouldFlushInMemory() {
>   if (getActive().getSize() > inmemoryFlushSize) {
>     // size above flush threshold
>     return (allowCompaction.get() && !inMemoryFlushInProgress.get());
>   }
>   return false;
> }
>
> void flushInMemory() throws IOException {
>   // Phase I: Update the pipeline
>   getRegionServices().blockUpdates();
>   try {
>     MutableSegment active = getActive();
>     pushActiveToPipeline(active);
>   } finally {
>     getRegionServices().unblockUpdates();
>   }
>   // Phase II: Compact the pipeline
>   try {
>     if (allowCompaction.get() && inMemoryFlushInProgress.compareAndSet(false, true)) {
>       // setting the inMemoryFlushInProgress flag again for the case this method is invoked
>       // directly (only in tests) in the common path setting from true to true is idempotent
>       // Speculative compaction execution, may be interrupted if flush is forced while
>       // compaction is in progress
>       compactor.startCompaction();
>     }
> {code}
> So every write of a cell triggers checkActiveSize(). When we are right at the border of an in-memory flush, many threads writing to this memstore can see checkActiveSize() pass. Yes, the AtomicBoolean is still false at that point; it is turned ON only some time later, once the new thread has started running and pushed the active segment to the pipeline, etc.
> In the new in-memory-flush thread's code, we don't have any size check.
> It just takes the active segment and pushes it to the pipeline. Yes, we don't allow any new writes to the memstore at this time. But before that write lock on the region is taken, other handler threads might already have added entries to this thread pool. When the 1st flush finishes, it releases the lock on the region, and handler threads waiting to write to the memstore may get the lock and add some data. Now the 2nd in-memory flush thread may get its chance, take the lock, and simply flush the current active segment in memory! This will produce very small segments in the pipeline.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
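The race described above (two flush tasks queued for one threshold crossing, the second one flushing a tiny segment) can be illustrated with a hypothetical sketch. The class, the threshold constant, and the size re-check are illustrative assumptions, not the actual patch; they show one way the redundant second flush can be dropped, namely by re-checking the size after winning the compareAndSet:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the race: the size check and the flag flip happen
// at different times, so two flush requests can be issued for one threshold
// crossing. A size re-check inside the flush itself drops the redundant one.
class DoubleFlushSketch {
    static final int FLUSH_THRESHOLD = 100;
    final AtomicBoolean inMemoryFlushInProgress = new AtomicBoolean(false);
    final AtomicInteger activeSize = new AtomicInteger(0);
    int flushes = 0;

    void write(int cellSize) {
        activeSize.addAndGet(cellSize);
        if (activeSize.get() > FLUSH_THRESHOLD) {
            flushInMemory();  // in the real code this is queued on a pool
        }
    }

    void flushInMemory() {
        // Only one flusher wins the CAS; a concurrent loser returns at once.
        if (!inMemoryFlushInProgress.compareAndSet(false, true)) {
            return;
        }
        try {
            if (activeSize.get() <= FLUSH_THRESHOLD) {
                return;  // re-check: an earlier flush already emptied active
            }
            activeSize.set(0);  // stands in for "push active to pipeline"
            flushes++;
        } finally {
            inMemoryFlushInProgress.set(false);
        }
    }
}
```

Without the re-check, a second queued flush request would push whatever few cells arrived after the first flush, which is exactly the "very small segments" problem the issue describes.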