hbase-issues mailing list archives

From "Heng Chen (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-14004) [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin
Date Thu, 10 Dec 2015 03:17:11 GMT

    [ https://issues.apache.org/jira/browse/HBASE-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15049947#comment-15049947 ]

Heng Chen commented on HBASE-14004:

1, 2, 3, 4, 5 --- this is the normal order
1, 3, 2, 4, 5 --- the order is wrong, but each log entry is read only once
1, 1, 2, 3, 4, 5 --- one entry is replayed twice, but the duplicates are contiguous
1, 2, 3, 1, 4, 5 --- one entry is replayed twice and the duplicates are not contiguous
1, 2, 3, 1, 2, 3, 4, 5 --- the order is wrong, but a whole subsequence is repeated, so the relative order is preserved
[~yangzhe1991], only the first case is guaranteed by the current logic.
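As a hedged sketch (illustrative names, not HBase code): the current logic effectively assumes case 1, i.e. seqIds arrive strictly increasing, and any of the other cases (reorder or duplicate) fails a check like this:

```java
import java.util.List;

// Illustrative only: checks a replayed seqId sequence against case 1 above.
// Class and method names are hypothetical, not HBase APIs.
public class SeqIdCheck {
    /** Returns true only for case 1: strictly increasing seqIds. */
    public static boolean isStrictlyIncreasing(List<Long> seqIds) {
        for (int i = 1; i < seqIds.size(); i++) {
            if (seqIds.get(i) <= seqIds.get(i - 1)) {
                return false; // reorder (e.g. 3 before 2) or duplicate (e.g. 1, 1)
            }
        }
        return true;
    }
}
```

Cases 2 through 5 all fail this check even though, as noted above, some of them would still be safe to replay.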

The WAL is at RS level, but replay is at region level. In the WAL the seqIds increase one by one, but we can't ensure that at region level during replay; that's why I mentioned I made a mistake in HBASE-14949. I will dig into it deeper.
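To make that concrete, here is a hedged sketch (hypothetical names, not HBase internals): one RS-level WAL interleaves entries from many regions, so once you demultiplex it per region, each region sees gaps in the seqIds and a per-region replayer cannot assume consecutive values:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: demultiplex an RS-level WAL into per-region seqId streams.
public class WalDemux {
    /** Group RS-level WAL entries (regionName, seqId) by region, keeping WAL order. */
    public static Map<String, List<Long>> byRegion(List<String[]> walEntries) {
        Map<String, List<Long>> perRegion = new HashMap<>();
        for (String[] e : walEntries) {
            perRegion.computeIfAbsent(e[0], k -> new ArrayList<>())
                     .add(Long.parseLong(e[1]));
        }
        return perRegion;
    }
}
```

For a WAL holding (regionA, 1), (regionB, 2), (regionA, 3), (regionB, 4), each region's stream is increasing but not consecutive, which is exactly why the RS-level "one by one" property is lost at region level.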

Would you mind updating the doc as we discussed above, Zhe?

> [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin
> -------------------------------------------------------------------------------------------------------------------
>                 Key: HBASE-14004
>                 URL: https://issues.apache.org/jira/browse/HBASE-14004
>             Project: HBase
>          Issue Type: Bug
>          Components: regionserver
>            Reporter: He Liangliang
>            Priority: Critical
>              Labels: replication, wal
> Looks like the current write path can cause inconsistency between memstore/hfile and WAL, which causes the slave cluster to have more data than the master cluster.
> The simplified write path looks like:
> 1. insert record into Memstore
> 2. write record to WAL
> 3. sync WAL
> 4. rollback Memstore if 3 fails
> It's possible that the HDFS sync RPC call fails while the data has already (perhaps partially) been transported to the DNs and finally gets persisted. As a result, the handler will roll back the Memstore, and the later-flushed HFile will also skip this record.
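The failure mode in the quoted description can be sketched as a toy simulation (all names hypothetical, not HBase code): the sync RPC reports failure and the handler rolls back the Memstore, yet the bytes already reached the DataNodes and persist, so the WAL (and hence replication) keeps an edit the origin no longer has:

```java
import java.util.ArrayList;
import java.util.List;

// Toy simulation of the race described in the issue; illustrative only.
public class SyncFailureDemo {
    final List<String> memstore = new ArrayList<>();
    final List<String> persistedWal = new ArrayList<>(); // what the DNs actually keep

    /** Simplified write path: 1. memstore insert, 2. WAL write, 3. sync, 4. rollback on failure. */
    public void write(String record, boolean syncRpcFails, boolean bytesReachedDns) {
        memstore.add(record);           // step 1: insert record into Memstore
        if (bytesReachedDns) {
            persistedWal.add(record);   // step 2: data may persist on the DNs even if...
        }
        if (syncRpcFails) {             // step 3: ...the sync RPC call reports failure
            memstore.remove(record);    // step 4: handler rolls back the Memstore
        }
    }
}
```

When `write("row1", true, true)` runs, the memstore ends up empty while the persisted WAL still contains "row1", which is the divergence the issue describes: the slave replays an edit the master's flushed HFile will never contain.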

This message was sent by Atlassian JIRA
