hbase-issues mailing list archives

From "Yu Li (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-14004) [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin
Date Thu, 10 Dec 2015 08:32:11 GMT

    [ https://issues.apache.org/jira/browse/HBASE-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15050322#comment-15050322 ]

Yu Li commented on HBASE-14004:

bq. Yu, is it still a problem after we not using hsync?
My concern was mainly about whether it's possible to have duplicated entries in the WALs of *different* RegionServers.

Think about the failover case: the replication queue will be transferred to some other RS, and the entries of the failed RS's WAL will be replicated; meanwhile the same WAL will be split, replayed, and its entries written into the WAL of the new RS serving the same region. For this situation we added an {{isReplay}} flag in WALEdit to avoid duplicated replication (see the code segment below, from {{FSHLog#append}}):
if (entry.getEdit().isReplay()) {
  // Set replication scope null so that this won't be replicated
  entry.getKey().setScopes(null);
}

I could see a similar situation here:
# hflush times out due to a network failure, but the data is actually persisted
# the WAL writer tries to re-write the buffered entries to a new WAL, but the new WAL creation fails due to the same network failure, so failure is returned to the client
# the region gets reassigned, due to balancing or an hbase shell command
# the client retries the write against the same region, now served by a new RS, and succeeds
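The retry scenario above can be sketched with a minimal model (hypothetical names only, not the actual HBase API): the sync reports failure even though the entry was persisted, the client retries on the new RS, and the same edit ends up in the WALs of two different RegionServers.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the failure scenario in the steps above (hypothetical
// names, not the HBase API).
public class DuplicateWalSketch {
    static class RegionServer {
        final List<String> wal = new ArrayList<>();
        final boolean syncTimesOut;

        RegionServer(boolean syncTimesOut) {
            this.syncTimesOut = syncTimesOut;
        }

        // Returns true on success. On a simulated hflush timeout the entry
        // is already durable in the WAL, but the caller sees a failure.
        boolean append(String edit) {
            wal.add(edit);            // data reaches the DNs and persists
            return !syncTimesOut;     // ...but the sync RPC may time out
        }
    }

    public static void main(String[] args) {
        RegionServer rs1 = new RegionServer(true);   // hflush times out here
        RegionServer rs2 = new RegionServer(false);  // region reassigned here

        String edit = "row1/cf:q=v1";
        if (!rs1.append(edit)) {
            rs2.append(edit);         // client retry after reassignment
        }
        // Both WALs now hold the same edit, so replication would ship it
        // twice, once from each RS's WAL.
        System.out.println("rs1 WAL: " + rs1.wal);
        System.out.println("rs2 WAL: " + rs2.wal);
    }
}
```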

In this case we have duplicated entries in the WALs of different RSs.

Feel free to correct me if the assumption is wrong, but if this is possible, then we need to handle it in HBASE-14949 [~chenheng]

> [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin
> -------------------------------------------------------------------------------------------------------------------
>                 Key: HBASE-14004
>                 URL: https://issues.apache.org/jira/browse/HBASE-14004
>             Project: HBase
>          Issue Type: Bug
>          Components: regionserver
>            Reporter: He Liangliang
>            Priority: Critical
>              Labels: replication, wal
> Looks like the current write path can cause inconsistency between the memstore/HFile and the WAL, which makes the slave cluster have more data than the master cluster.
> The simplified write path looks like:
> 1. insert record into Memstore
> 2. write record to WAL
> 3. sync WAL
> 4. rollback Memstore if 3 fails
> It's possible that the HDFS sync RPC call fails but the data has already been (perhaps partially) transported to the DNs, where it finally gets persisted. As a result, the handler will roll back the Memstore, and the later-flushed HFile will also skip this record.

This message was sent by Atlassian JIRA
