hbase-issues mailing list archives

From "ramkrishna.s.vasudevan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-17081) Flush the entire CompactingMemStore content to disk
Date Mon, 19 Dec 2016 18:16:58 GMT

    [ https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15761886#comment-15761886 ]

ramkrishna.s.vasudevan commented on HBASE-17081:
------------------------------------------------

Actually I can see a related test failure in the QA run here with the patch that I committed. In fact
I did not see that report because I had originally intended to commit patch V06 and was waiting
for a final +1 from Anoop. When I then went to commit V07 it did not apply cleanly, so I waited
for the updated patch and committed that instead. Had I seen its QA build I would not have
committed it. My bad. I did not realize that the rebase on top of HBASE-17294 would have this
implication; I thought it was a simple rebase.
bq. Ya, seems the TestAsyncGetMultiThread fail is not caused by this patch. It seems to be because
we changed the default memstore to be CompactingMS.
Yes.
bq. TestHRegionWithInMemoryFlush seems to be failing?
This again has something to do with it, I believe, because the V06 QA run came back clean.

> Flush the entire CompactingMemStore content to disk
> ---------------------------------------------------
>
>                 Key: HBASE-17081
>                 URL: https://issues.apache.org/jira/browse/HBASE-17081
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Anastasia Braginsky
>            Assignee: Anastasia Braginsky
>         Attachments: HBASE-15787_8.patch, HBASE-17081-V01.patch, HBASE-17081-V02.patch,
> HBASE-17081-V03.patch, HBASE-17081-V04.patch, HBASE-17081-V05.patch, HBASE-17081-V06.patch,
> HBASE-17081-V06.patch, HBASE-17081-V07.patch, HBaseMeetupDecember2016-V02.pptx, Pipelinememstore_fortrunk_3.patch
>
>
> Part of CompactingMemStore's memory is held by the active segment, and another part is
> divided between the immutable segments in the compacting pipeline. Upon a flush-to-disk request
> we want to flush all of it to disk, in contrast to flushing only the tail of the compacting pipeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
