hbase-issues mailing list archives

From "ramkrishna.s.vasudevan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-17081) Flush the entire CompactingMemStore content to disk
Date Mon, 28 Nov 2016 06:12:58 GMT

    [ https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15701052#comment-15701052 ]

ramkrishna.s.vasudevan commented on HBASE-17081:
------------------------------------------------

bq. We actually didn't try something between 1 and 10... 
Regarding this, I actually went through my reports. What I found was that, when flushing only the tail, anything above 6 gave us problems like
{code}
 Waited 91573ms on a compaction to clean up 'too many store files'; waited long enough... proceeding with flush of TestTable,00000000000000000000943713,1478010005795.a63d191e8dcef46c598dd2db6bd1425d.
{code}
But when we flush the entire pipeline I think it should be fine. I have not gone above 6. 
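For concreteness, here is a minimal sketch of the two policies (all names are illustrative, not the actual CompactingMemStore API; the real flush path goes through MemStore#snapshot()):
{code}
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Deque;
import java.util.List;

// Illustrative model only; none of these names are the real
// CompactingMemStore API.
class PipelineModel {
  static class Segment {}  // stand-in for one immutable in-memory segment

  // Head holds the newest segment; the tail is the oldest.
  private final Deque<Segment> pipeline = new ArrayDeque<>();

  // Old behavior: flush only the oldest (tail) segment, so each
  // flush produces one small HFile.
  List<Segment> flushTailOnly() {
    Segment tail = pipeline.pollLast();
    return tail == null ? Collections.<Segment>emptyList()
                        : Collections.singletonList(tail);
  }

  // Behavior discussed here: drain every immutable segment in one
  // flush, producing a single larger HFile.
  List<Segment> flushWholePipeline() {
    List<Segment> all = new ArrayList<>(pipeline);
    pipeline.clear();
    return all;
  }
}
{code}
Flushing only the tail writes one small HFile per flush, which is what drives the 'too many store files' blocking shown above; draining the whole pipeline trades that for a single larger HFile.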
The one thing that could be a problem is scans: with a threshold of 10 we need to scan 10 segments, whereas if the threshold were, say, 5, those 5 segments would have been merged into one and the scan would have to check just that one merged segment covering all 5.
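To make the scan-side cost concrete, here is a toy sketch (not HBase code; the real read path merges segment scanners through a KeyValueHeap) of a scan merge-sorting across one cursor per live segment:
{code}
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;

// Toy illustration, not HBase code: a scan merge-sorts across one
// cursor per live segment, so unmerged segments mean more streams
// to interleave on every step.
class SegmentMergeScan {

  static void scanAll(List<Iterator<Map.Entry<String, String>>> segmentScanners) {
    // Min-heap ordered by each cursor's current key, one entry per segment.
    PriorityQueue<Cursor> heap = new PriorityQueue<>(
        (a, b) -> a.current.getKey().compareTo(b.current.getKey()));
    for (Iterator<Map.Entry<String, String>> it : segmentScanners) {
      if (it.hasNext()) {
        heap.add(new Cursor(it));
      }
    }
    while (!heap.isEmpty()) {
      Cursor c = heap.poll();                      // next smallest key overall
      System.out.println(c.current.getKey() + " = " + c.current.getValue());
      if (c.advance()) {
        heap.add(c);                               // segment still has cells
      }
    }
  }

  // Wraps a segment iterator and remembers its current cell.
  static final class Cursor {
    final Iterator<Map.Entry<String, String>> it;
    Map.Entry<String, String> current;

    Cursor(Iterator<Map.Entry<String, String>> it) {
      this.it = it;
      this.current = it.next();
    }

    boolean advance() {
      if (!it.hasNext()) {
        return false;
      }
      current = it.next();
      return true;
    }
  }
}
{code}
With a threshold of 10 the heap interleaves 10 cursors on every step; if an in-memory merge had already collapsed those segments into one, a single cursor would remain and there would be nothing to interleave.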
One question - what prompted you to conclude that flushing the entire pipeline is better than flushing only the tail, as you were doing earlier? I think our main concern was that flushing only the tail creates a lot of small files. Did you observe anything else when flushing only the tail?



> Flush the entire CompactingMemStore content to disk
> ---------------------------------------------------
>
>                 Key: HBASE-17081
>                 URL: https://issues.apache.org/jira/browse/HBASE-17081
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Anastasia Braginsky
>            Assignee: Anastasia Braginsky
>         Attachments: HBASE-17081-V01.patch, HBASE-17081-V02.patch, HBASE-17081-V03.patch, Pipelinememstore_fortrunk_3.patch
>
>
> Part of CompactingMemStore's memory is held by an active segment, and another part is
> divided between immutable segments in the compacting pipeline. Upon flush-to-disk request
> we want to flush all of it to disk, in contrast to flushing only the tail of the compacting pipeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
