hbase-issues mailing list archives

From "Anoop Sam John (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-16417) In-Memory MemStore Policy for Flattening and Compactions
Date Mon, 22 Aug 2016 05:11:20 GMT

    [ https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15430054#comment-15430054 ]

Anoop Sam John commented on HBASE-16417:

Mostly looks good. Though for the general use case (where there are not many updates/deletes), why can't
we flush all the segments in the pipeline together when a flush to disk arises? In that case too,
doing an in-memory compaction of the segments in the pipeline (e.g., you say when #segments > 3)
is meant to reduce the #files flushed to disk. Another way to achieve that is to flush the whole pipeline together.
In fact, I feel when the flush to file comes, we should be flushing all segments in the pipeline + active.
So it is just like the default memstore, other than the in-between flush to the in-memory flattened structure.
When MSLAB is in place, CellChunkMap would be ideal. For off heap, we will need it anyway.
As a first step, CellArrayMap being the default is fine.
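The policy proposed above (on a flush-to-disk, drain every flattened segment in the pipeline plus the active segment into one snapshot, so only one file is written) can be sketched roughly as below. This is a minimal illustration, not HBase code: the class and method names (`PipelineMemStoreSketch`, `flattenActive`, `snapshotAll`) are hypothetical, and segments are modeled as plain lists of cell strings.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch of the "flush whole pipeline + active together" idea.
 *  Names and structures are illustrative only, not the HBase API. */
public class PipelineMemStoreSketch {
    // Pipeline of flattened, immutable segments (in-memory flush targets).
    final List<List<String>> pipeline = new ArrayList<>();
    // Active mutable segment receiving writes.
    List<String> active = new ArrayList<>();

    void add(String cell) {
        active.add(cell);
    }

    /** In-memory flush: flatten the active segment into the pipeline. */
    void flattenActive() {
        pipeline.add(active);
        active = new ArrayList<>();
    }

    /** Flush-to-disk per the comment: drain ALL pipeline segments plus the
     *  active segment into one snapshot, so a single file is flushed. */
    List<String> snapshotAll() {
        List<String> snapshot = new ArrayList<>();
        for (List<String> segment : pipeline) {
            snapshot.addAll(segment);
        }
        snapshot.addAll(active);
        pipeline.clear();
        active = new ArrayList<>();
        return snapshot;
    }

    public static void main(String[] args) {
        PipelineMemStoreSketch m = new PipelineMemStoreSketch();
        m.add("r1"); m.flattenActive();   // segment 1 enters pipeline
        m.add("r2"); m.flattenActive();   // segment 2 enters pipeline
        m.add("r3");                      // still in active
        List<String> snap = m.snapshotAll();
        System.out.println(snap.size());      // 3 cells, one snapshot
        System.out.println(m.pipeline.size()); // 0, pipeline drained
    }
}
```

With this shape, the memstore behaves like the default one at flush time; the only difference is the in-between flattening step into the pipeline.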
And good to see that your tests reveal the overhead of the scan for the compaction decision. And yes, we
should be doing that without any compaction-based test. And yes, it is up to the user to know
the pros and cons of in-memory compaction and select it wisely. We should be documenting that well.
Great.. We are mostly in sync now :-)

> In-Memory MemStore Policy for Flattening and Compactions
> --------------------------------------------------------
>                 Key: HBASE-16417
>                 URL: https://issues.apache.org/jira/browse/HBASE-16417
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Anastasia Braginsky

This message was sent by Atlassian JIRA
