hbase-issues mailing list archives

From "Phil Yang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-14004) [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin
Date Mon, 07 Dec 2015 13:53:11 GMT

    [ https://issues.apache.org/jira/browse/HBASE-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15044970#comment-15044970 ]

Phil Yang commented on HBASE-14004:

Furthermore, I found another issue: since HDFS-744 added support for hsync(), it also added CreateFlag.SYNC_BLOCK
to FileSystem.create(), which will "Force closed blocks to disk". That means the client sends a syncBlock
flag in the last DFSPacket of every endBlock(). If we don't pass this flag, as HBase currently doesn't,
the files we save on HDFS are not synced to disk immediately. So we run the risk of losing
data right after we flush a MemStore into an HFile or compact some HFiles, because we think
that data has been saved and we may delete the WAL or the old HFiles, right?

If I am not wrong, we can create a new issue and have the discussion there, since it is independent of this one.

> [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin
> -------------------------------------------------------------------------------------------------------------------
>                 Key: HBASE-14004
>                 URL: https://issues.apache.org/jira/browse/HBASE-14004
>             Project: HBase
>          Issue Type: Bug
>          Components: regionserver
>            Reporter: He Liangliang
>            Priority: Critical
>              Labels: replication, wal
> Looks like the current write path can cause inconsistency between the memstore/HFile and the WAL, which makes the slave cluster end up with more data than the master cluster.
> The simplified write path looks like:
> 1. insert record into Memstore
> 2. write record to WAL
> 3. sync WAL
> 4. rollback Memstore if 3 fails
> It's possible that the HDFS sync RPC call fails but the data has already (perhaps partially) been transported to the DNs and eventually gets persisted. As a result, the handler will roll back the Memstore, and the later-flushed HFile will also skip this record.
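The four steps above can be sketched with stand-in classes (this is an illustration of the race, not HBase code; all names here are hypothetical). The sync call reports failure even though the edit already reached the WAL durably, so after the rollback the Memstore and the WAL disagree, and replication ships an edit the origin no longer has:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class WalRaceSketch {
    // Hypothetical stand-ins for the Memstore and the WAL.
    static class Memstore { final List<String> cells = new ArrayList<>(); }
    static class Wal {
        final List<String> entries = new ArrayList<>(); // what replication ships
        boolean syncFailsButPersists;                   // RPC fails, bytes survive on DNs
        void append(String e) { entries.add(e); }
        void sync() throws IOException {
            if (syncFailsButPersists) throw new IOException("sync RPC timed out");
        }
    }

    // The simplified write path from the description: insert, append, sync,
    // and roll back the Memstore if the sync fails.
    static void put(Memstore m, Wal wal, String cell) {
        m.cells.add(cell);        // 1. insert record into Memstore
        wal.append(cell);         // 2. write record to WAL
        try {
            wal.sync();           // 3. sync WAL
        } catch (IOException e) {
            m.cells.remove(cell); // 4. rollback Memstore -- but the WAL entry
        }                         //    may still be durable and get replicated
    }

    public static void main(String[] args) {
        Memstore m = new Memstore();
        Wal wal = new Wal();
        wal.syncFailsButPersists = true;
        put(m, wal, "row1/cf:q=1");
        // The WAL (and thus the slave) has the edit; the Memstore does not,
        // so the later-flushed HFile on the master skips it.
        System.out.println(wal.entries.size() - m.cells.size()); // 1
    }
}
```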

This message was sent by Atlassian JIRA
