hbase-issues mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-10514) Forward port HBASE-10466, possible data loss when failed flushes
Date Wed, 12 Mar 2014 18:37:20 GMT

    [ https://issues.apache.org/jira/browse/HBASE-10514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13932146#comment-13932146 ]

stack commented on HBASE-10514:

Thanks for the review [~anoop.hbase]

bq. So when one snapshot was already in place in MemStore and we again undergo a flush request,
now we dont decrease by the begin time memstoreSize. But we check with all MemStores.

Sorry, I'm having trouble parsing the above.  Are you asking a question?  Yes, if a snapshot
is already in place and we call flush, we flush the existing snapshot only... that is how
this stuff has always worked, but our memory accounting did not reflect it: it would subtract
the memstore size even though the memstore was still in place post-flush when an existing
snapshot was flushed.

bq. Now the MemStore returns the previous snapshot's size when one was in place. The snapshot()
to MemStore won't take a new snapshot when a snapshot is already in place.

Yes.  This is how it works.  This patch does not change that.  This patch just makes our memory
accounting align w/ how snapshotting/flush actually works.
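The snapshot/accounting behavior described above can be sketched roughly as follows. This is a simplified illustration only, not the actual HBase MemStore code; the class and method names (SketchMemStore, clearSnapshot, totalSize) are hypothetical stand-ins.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of snapshot-aware memstore accounting; not the
// real org.apache.hadoop.hbase.regionserver.MemStore implementation.
class SketchMemStore {
    private final AtomicLong activeSize = new AtomicLong(0);
    private long snapshotSize = 0;       // size of the snapshot in place, if any
    private boolean snapshotInPlace = false;

    void add(long cellSize) {
        activeSize.addAndGet(cellSize);
    }

    // Returns the size that will actually be flushed. If a snapshot is
    // already in place (e.g. a previous flush failed), we do NOT take a
    // new one: we return the existing snapshot's size, and the active
    // memstore's size stays on the books.
    synchronized long snapshot() {
        if (!snapshotInPlace) {
            snapshotSize = activeSize.getAndSet(0);
            snapshotInPlace = true;
        }
        return snapshotSize;
    }

    // Called only after the snapshot is successfully persisted; only now
    // may the snapshot's size be dropped from the accounting.
    synchronized void clearSnapshot() {
        snapshotInPlace = false;
        snapshotSize = 0;
    }

    // Total accounted size: active memstore plus any pending snapshot.
    synchronized long totalSize() {
        return activeSize.get() + (snapshotInPlace ? snapshotSize : 0);
    }
}
```

The key point the patch addresses is in totalSize(): the snapshot's size remains counted until clearSnapshot(), instead of being decremented at flush-request time when the snapshot might still be sitting in memory.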

bq. ....When a flush requested while another was in progress....

Again, pardon me for not following....  Only one flush can be going on at a time (though you
can make a request at any time).  I am not seeing how we could decrement the flush size twice.

I can add new tests no problem if you can come up with a scenario.

> Forward port HBASE-10466, possible data loss when failed flushes
> ----------------------------------------------------------------
>                 Key: HBASE-10514
>                 URL: https://issues.apache.org/jira/browse/HBASE-10514
>             Project: HBase
>          Issue Type: Bug
>            Reporter: stack
>            Assignee: stack
>            Priority: Critical
>             Fix For: 0.96.2, 0.98.1, 0.99.0, 0.94.18
>         Attachments: 10514.txt, 10514v2.txt, 10514v3.txt, 10514v3.txt, 10514v4.txt
> Critical data loss issues that we need to ensure are not in branches beyond 0.89fb.
> Assigning myself.

This message was sent by Atlassian JIRA
