From: "Anoop Sam John (JIRA)"
To: issues@hbase.apache.org
Date: Mon, 22 Aug 2016 05:11:20 +0000 (UTC)
Subject: [jira] [Commented] (HBASE-16417) In-Memory MemStore Policy for Flattening and Compactions

    [ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15430054#comment-15430054 ]

Anoop Sam John commented on HBASE-16417:
----------------------------------------

Mostly looks good. Still, for the general use case (where there are not many updates/deletes), why can't we flush all the segments in the pipeline together when a flush to disk arises? In that case too, doing an in-memory compaction of the segments in the pipeline (e.g. you say when segments# > 3) is meant to reduce the number of files flushed to disk. Another way to achieve that is to flush the whole pipeline together. In fact I feel that when the flush to file comes, we should be flushing all segments in the pipeline plus the active segment. Then it is just like the default memstore, apart from the in-between flush into the in-memory flattened structure.

When MSLAB is in place, CellChunkMap would be ideal. For off heap we need it anyway. As a first step, having CellArrayMap as the default is fine.

Good to see that your tests reveal the overhead of the scan for the compaction decision. And yes, we should be doing that without any compaction-based test. And yes, it is up to the user to know the pros and cons of in-memory compaction and to choose it wisely. We should document that well.

Great, we are mostly in sync now :-)


> In-Memory MemStore Policy for Flattening and Compactions
> --------------------------------------------------------
>
>                 Key: HBASE-16417
>                 URL: https://issues.apache.org/jira/browse/HBASE-16417
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Anastasia Braginsky
>

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
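
To make the "flush the whole pipeline plus the active segment" point concrete, here is a minimal Java sketch of that idea. All class and method names are hypothetical illustrations, not the actual HBase CompactingMemStore code: a mutable active segment sits in front of a pipeline of flattened immutable segments, and when a flush to disk is triggered the snapshot drains every pipeline segment together with the active one, so the disk flush still produces a single file.

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.Deque;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.ConcurrentSkipListSet;

/**
 * Hypothetical sketch: an active mutable segment plus a pipeline of flattened
 * immutable segments, all drained together when a flush to disk is requested.
 */
public class PipelineFlushSketch {

  /** Immutable flattened segment (stand-in for a CellArrayMap/CellChunkMap backed segment). */
  static final class Segment {
    final List<String> cells;
    Segment(List<String> cells) {
      this.cells = Collections.unmodifiableList(new ArrayList<>(cells));
    }
  }

  // Active mutable segment; a sorted set of strings stands in for the skip list of Cells.
  private ConcurrentSkipListSet<String> active = new ConcurrentSkipListSet<>();
  // In-memory pipeline of flattened segments, newest first.
  private final Deque<Segment> pipeline = new ConcurrentLinkedDeque<>();

  /** Write path: add a cell to the active segment. */
  public void add(String cell) {
    active.add(cell);
  }

  /** In-memory flush: flatten the active segment and push it onto the pipeline. */
  public synchronized void flattenActive() {
    if (!active.isEmpty()) {
      pipeline.addFirst(new Segment(new ArrayList<>(active)));
      active = new ConcurrentSkipListSet<>();
    }
  }

  /**
   * Flush to disk: take the active segment AND every segment in the pipeline in one go,
   * so a single flush covers the whole in-memory state.
   */
  public synchronized List<Segment> snapshotForFlush() {
    flattenActive();
    List<Segment> toFlush = new ArrayList<>(pipeline);
    pipeline.clear();
    return toFlush;
  }
}
{code}

With this shape, the only behavioral difference from the default memstore is the in-between flattening step; the flush-to-disk path still sees the complete in-memory state at once.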