hbase-issues mailing list archives

From "Duo Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-14004) [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin
Date Wed, 28 Oct 2015 10:00:36 GMT

    [ https://issues.apache.org/jira/browse/HBASE-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14978108#comment-14978108 ]

Duo Zhang commented on HBASE-14004:
-----------------------------------

[~carp84] This is an inherent problem of RPC-based systems under temporary network failures.
HBase uses {{hflush}} to sync the WAL. I do not know the details of whether hflush also asks
the namenode to update the file length, but in any case, the last RPC call can fail on the
client side yet succeed on the server side (a network failure while the return value is being
written back).
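
To make that failure mode concrete, here is a minimal, self-contained sketch against the
plain HDFS client API (the path and payload are made up; only the hflush/ACK behaviour
matters):

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HflushAmbiguity {
  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    FSDataOutputStream out = fs.create(new Path("/tmp/wal-like-file"));
    out.write("some WAL edit".getBytes("UTF-8"));
    try {
      out.hflush();   // pushes the bytes to all DataNodes in the pipeline
    } catch (IOException e) {
      // The failure may just be the ACK getting lost on the way back:
      // the bytes could already be durable on the DNs.
      System.out.println("hflush failed, but the edit may still be persisted");
    }
    out.close();
  }
}
{code}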

And sure, this should be a bug in HBase. I checked the code: if an exception is thrown from
hflush, {{FSHLog.SyncRunner}} simply passes it to the upper layer. So it can happen that the
hflush succeeded on HDFS, but HBase thinks it failed, which causes the inconsistency.
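
Roughly this pattern (a paraphrase for discussion, not the actual FSHLog source; the
SyncFuture is stubbed with a CompletableFuture):

{code:java}
import java.io.IOException;
import java.util.concurrent.CompletableFuture;

class SyncRunnerSketch {
  interface Syncable { void sync() throws IOException; }

  static void runOneSync(Syncable writer, long txid,
                         CompletableFuture<Long> syncFuture) {
    try {
      writer.sync();                 // may throw even if the DNs persisted
      syncFuture.complete(txid);
    } catch (IOException e) {
      // The exception is handed straight to the upper layer, which rolls
      // back the Memstore -- even though the edit may in fact be durable.
      syncFuture.completeExceptionally(e);
    }
  }
}
{code}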

I think we need to find a way to make sure whether the WAL entry was actually persisted to
HDFS. And if DFSClient already retries, then I think killing the regionserver is enough?
Any suggestions [~carp84]?
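
To illustrate the alternative (a hypothetical sketch with stubbed interfaces, not a patch):

{code:java}
// Hypothetical "abort instead of rollback" policy: if the sync outcome is
// unknown, do not touch the Memstore -- abort the regionserver and let WAL
// replay on region reassignment restore a consistent view.
import java.io.IOException;

class SyncFailurePolicySketch {
  interface Wal { void sync(long txid) throws IOException; }
  interface Server { void abort(String why, Throwable cause); }

  static void syncOrAbort(Wal wal, Server server, long txid) {
    try {
      wal.sync(txid);
    } catch (IOException e) {
      // Outcome indeterminate: the edit may or may not be durable on HDFS.
      // Rolling back the Memstore here is what creates the divergence, so
      // abort and let log replay decide.
      server.abort("WAL sync failed; durability unknown", e);
    }
  }
}
{code}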

Thanks.

> [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin
> -------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-14004
>                 URL: https://issues.apache.org/jira/browse/HBASE-14004
>             Project: HBase
>          Issue Type: Bug
>          Components: regionserver
>            Reporter: He Liangliang
>            Priority: Critical
>              Labels: replication, wal
>
> Looks like the current write path can cause an inconsistency between the memstore/HFile and the WAL, which causes the slave cluster to have more data than the master cluster.
> The simplified write path looks like:
> 1. insert record into Memstore
> 2. write record to WAL
> 3. sync WAL
> 4. roll back the Memstore if step 3 fails
> It's possible that the HDFS sync RPC call fails, but the data has already (perhaps only partially) been transported to the DataNodes, where it finally gets persisted. As a result, the handler will roll back the Memstore, and the HFile flushed later will also skip this record.



