hbase-issues mailing list archives

From "Phil Yang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-14004) [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin
Date Wed, 09 Dec 2015 11:49:11 GMT

    [ https://issues.apache.org/jira/browse/HBASE-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15048552#comment-15048552 ]

Phil Yang commented on HBASE-14004:

You are right, we should change the logic of the replicator.

And I am not an expert, so I have a question about the idempotency of HBase operations: what
happens if we replay an entry more than once? Consider these scenarios, where the numbers are
the sequence ids:

1, 2, 3, 4, 5---the normal order
1, 3, 2, 4, 5---the order is wrong, but each entry is read only once
1, 1, 2, 3, 4, 5---one entry is replayed twice, and the duplicates are adjacent
1, 2, 3, 1, 4, 5---one entry is replayed twice, and the duplicates are not adjacent
1, 2, 3, 1, 2, 3, 4, 5---a whole subsequence is replayed, so the relative order within it
is preserved

Are all of them wrong except the first? It seems the last one may not be wrong, since the
relative order is preserved?
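The replay question above can be illustrated with a toy model (this is not HBase internals, just a sketch): if each cell is addressed by its coordinates (here simplified to row + timestamp), replaying the same entry twice converges to the same state, while reordering two entries that write the same cell does not.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Toy model of a memstore keyed by (row, timestamp); not HBase internals.
// Each WAL entry carries a sequence id plus one cell write. Because a cell
// is addressed by its coordinates, replaying the same entry twice just
// rewrites the same cell with the same value.
public class ReplayOrderDemo {
    public record Entry(long seqId, String row, long ts, String value) {}

    public static Map<String, String> apply(List<Entry> entries) {
        Map<String, String> store = new TreeMap<>();
        for (Entry e : entries) {
            store.put(e.row() + "@" + e.ts(), e.value());
        }
        return store;
    }

    public static void main(String[] args) {
        Entry e1 = new Entry(1, "r1", 100, "a");
        Entry e2 = new Entry(2, "r2", 100, "b");
        Entry e3 = new Entry(3, "r3", 100, "c");

        // Replaying e1 twice (scenario "1, 1, 2, 3, ...") converges
        // to the same state as replaying it once.
        Map<String, String> normal = apply(List.of(e1, e2, e3));
        Map<String, String> dup = apply(List.of(e1, e1, e2, e3));
        System.out.println(normal.equals(dup)); // true: duplicates are harmless

        // But if two entries write the SAME cell (same row and timestamp),
        // order decides which value wins, so reordering is not safe.
        Entry x = new Entry(2, "r1", 100, "x");
        Map<String, String> ab = apply(List.of(e1, x));
        Map<String, String> ba = apply(List.of(x, e1));
        System.out.println(ab.equals(ba)); // false: last write wins
    }
}
```

Under this model, duplicated entries (scenarios 3-5) are benign, and reordering (scenario 2) only matters when two edits touch the same cell with the same timestamp.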

> [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin
> -------------------------------------------------------------------------------------------------------------------
>                 Key: HBASE-14004
>                 URL: https://issues.apache.org/jira/browse/HBASE-14004
>             Project: HBase
>          Issue Type: Bug
>          Components: regionserver
>            Reporter: He Liangliang
>            Priority: Critical
>              Labels: replication, wal
> Looks like the current write path can cause inconsistency between the Memstore/HFile and
> the WAL, which can leave the slave cluster with more data than the master cluster.
> The simplified write path looks like:
> 1. insert record into Memstore
> 2. write record to WAL
> 3. sync WAL
> 4. rollback Memstore if 3 fails
> It's possible for the HDFS sync RPC call to fail even though the data has already been
> (perhaps partially) transported to the DataNodes and will eventually be persisted. As a
> result, the handler rolls back the Memstore, and the later-flushed HFile will also skip
> this record.
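The failure mode in the quoted write path can be sketched as follows (a toy simulation, not the real HBase or HDFS code; the method names are invented for illustration): the sync call throws on the client side even though the bytes were durably written, so the rollback leaves the WAL with a record the Memstore no longer has.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of the quoted write path; not the real HBase code.
// The bug: the sync RPC can fail on the client side even though the bytes
// already reached the DataNodes and will be persisted. The handler then
// rolls back the Memstore, so the record is missing locally, but the WAL
// (and hence replication to the slave cluster) still carries it.
public class SyncFailureDemo {
    static List<String> memstore = new ArrayList<>();
    static List<String> persistedWal = new ArrayList<>();

    // Simulated sync: data is durably written, then the RPC "fails".
    static void syncWal(String record, boolean rpcFails) {
        persistedWal.add(record);           // bytes reached the DataNodes
        if (rpcFails) throw new RuntimeException("sync RPC timed out");
    }

    public static void main(String[] args) {
        String record = "row1=value1";
        memstore.add(record);               // step 1: insert into Memstore
        try {
            syncWal(record, true);          // steps 2-3: write + sync WAL
        } catch (RuntimeException e) {
            memstore.remove(record);        // step 4: rollback on failure
        }
        // The WAL now holds a record the Memstore (and later HFiles) lack;
        // replication will ship it to the slave cluster.
        System.out.println(memstore.contains(record));      // false
        System.out.println(persistedWal.contains(record));  // true
    }
}
```

The divergence is exactly the inconsistency the report describes: replication reads the WAL, so the slave ends up with data the master's flushed HFiles never contain.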

This message was sent by Atlassian JIRA
