hbase-issues mailing list archives

From "Duo Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-14004) [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin
Date Sat, 05 Dec 2015 08:03:11 GMT

    [ https://issues.apache.org/jira/browse/HBASE-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15042744#comment-15042744 ]

Duo Zhang commented on HBASE-14004:

Fixing the HBase write path so that we retry logging the WAL in a new file rather than rolling back
To be clear, this means we will hold the {{WAL.sync}} request if some entries have already
been written out but not acked, and not return until we successfully write them out and get
the ack back. And if {{WAL.sync}} or {{WAL.write}} fails (maybe due to the queue being full),
we will still roll back the MemStore, since we can confirm that the WAL entries have not been
written out. Right?
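A minimal sketch of that contract, with all names hypothetical (this is not HBase's actual WAL implementation): {{sync}} drains and "acks" every queued entry before returning, while a full queue rejects the write up front so the caller can safely roll back the MemStore.

```java
import java.util.ArrayDeque;
import java.util.Queue;

class WalSketch {
    static final int QUEUE_LIMIT = 4;
    final Queue<String> pending = new ArrayDeque<>(); // appended but not yet written out
    long ackedLength = 0;

    /** Returns false if the entry never entered the WAL queue (safe to roll back MemStore). */
    boolean write(String entry) {
        if (pending.size() >= QUEUE_LIMIT) {
            return false; // queue full: entry was never written out
        }
        pending.add(entry);
        return true;
    }

    /** Holds (here: loops) until every pending entry is written out and acked. */
    long sync() {
        while (!pending.isEmpty()) {
            // Under the proposal, a failed write-out is retried in a new WAL file
            // instead of surfacing an error, because the bytes may already sit on
            // some datanodes; sync only returns once everything is acked.
            ackedLength += pending.poll().length();
        }
        return ackedLength;
    }
}
```

The point of the asymmetry: a {{write}} failure happens before any bytes leave the queue, so rollback is safe; a failure after write-out is not observable as "not persisted", so {{sync}} must retry rather than report an error.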

And I think there is another task for us. Currently {{DFSOutputStream}} does not provide a
public method to get the acked length. We can open an issue in the HDFS project and use
reflection in HBase in the meantime. But there is still a problem: {{hflush}} and {{hsync}}
do not return the acked length, which means getting the acked length and calling {{hsync}}
are two separate operations, so it is hard to get the exact acked length after calling
{{hsync}}. Maybe we could first get the current total number of bytes written out (not the
acked length) and then call {{hsync}}; the acked length after {{hsync}} returns must be at
least this value, so it is safe to use this value as the "acked length". Any thoughts?
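The ordering trick in the last sentence can be sketched as follows. {{DfsOut}}, {{getBytesWritten}}, and the in-memory "ack" are stand-ins, since the real {{DFSOutputStream}} exposes no such public API (which is exactly the gap being discussed):

```java
// Stand-in for DFSOutputStream; field and method names are illustrative.
class DfsOut {
    long written = 0; // bytes handed to the stream so far
    long acked = 0;   // bytes acknowledged by the (simulated) datanode pipeline

    void write(byte[] b) { written += b.length; }
    long getBytesWritten() { return written; }
    void hsync() { acked = written; } // on return, everything previously written is acked
}

class AckedLengthSketch {
    /** Returns a value guaranteed to be <= the true acked length after hsync. */
    static long safeAckedLength(DfsOut out) {
        long writtenBeforeSync = out.getBytesWritten(); // read BEFORE calling hsync
        out.hsync();
        // hsync only returns once the pipeline has acked everything written before
        // the call, so writtenBeforeSync is a safe, conservative "acked length".
        return writtenBeforeSync;
    }
}
```

The value is conservative (it may lag the true acked length if more bytes were acked during the sync), but it never overstates what is durable, which is what the recovery logic needs.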


> [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin
> -------------------------------------------------------------------------------------------------------------------
>                 Key: HBASE-14004
>                 URL: https://issues.apache.org/jira/browse/HBASE-14004
>             Project: HBase
>          Issue Type: Bug
>          Components: regionserver
>            Reporter: He Liangliang
>            Priority: Critical
>              Labels: replication, wal
> Looks like the current write path can cause inconsistency between the memstore/HFile and the WAL, which can leave the slave cluster with more data than the master cluster.
> The simplified write path looks like:
> 1. insert record into Memstore
> 2. write record to WAL
> 3. sync WAL
> 4. rollback Memstore if 3 fails
> It's possible that the HDFS sync RPC call fails but the data has already been (perhaps partially) transported to the DataNodes, where it eventually gets persisted. As a result, the handler will roll back the MemStore, and the HFile flushed later will also skip this record.
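The four quoted steps and the failure mode can be condensed into a toy simulation (all class and field names are illustrative, not HBase code): a failed sync RPC does not mean the bytes failed to persist, so the rollback in step 4 leaves the WAL, and hence replication, with a record the origin no longer serves.

```java
import java.util.ArrayList;
import java.util.List;

class WritePathRace {
    static final List<String> memstore = new ArrayList<>();
    static final List<String> persistedWal = new ArrayList<>(); // bytes that reached the DNs

    static void put(String record, boolean syncRpcFails) {
        memstore.add(record);     // 1. insert record into Memstore
        persistedWal.add(record); // 2.+3. write + sync WAL: the bytes may persist on
                                  //    the DNs even when the sync RPC reply is lost...
        if (syncRpcFails) {
            memstore.remove(record); // 4. ...so this rollback leaves the WAL (and hence
        }                            //    the replicated cluster) with a record the
    }                                //    origin's MemStore and HFiles will never have
}
```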
