hbase-issues mailing list archives

From "Heng Chen (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-14004) [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin
Date Mon, 07 Dec 2015 03:08:11 GMT

    [ https://issues.apache.org/jira/browse/HBASE-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15044308#comment-15044308 ]

Heng Chen commented on HBASE-14004:
-----------------------------------

{quote}
To be clear, this means we will hold the WAL.sync request if some entries have already been written out but not yet acked, and never return until we successfully write them out and get the ack back. And if WAL.sync or WAL.write fails (maybe due to the queue being full), we will still roll back the MemStore, since we can confirm that the WAL entries have not been written out. Right?
{quote}
I have a big concern about this. If we do not configure hsync on every write (i.e. hsync only runs periodically), there will always be some entries that have been hflushed but not yet hsynced. And as our logic is designed, when one hflush fails we close the old WAL and open a new one, so the entries that were not hsynced get written into the new WAL.
If the RS crashes at that moment, what will happen? It means some entries that may already be in place (we have told the client the mutation succeeded, and the data really is on the DNs already) will be lost. I think that is a regression, because one failed mutation may cause more mutations to become inconsistent.
I think this is also [~carp84]'s concern, per the problem he raised.
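
For reference, here is a minimal sketch of such a "hflush every write, hsync periodically" policy against the real HDFS FSDataOutputStream API (the class name, the threshold, and the batching policy are hypothetical, not HBase code). Everything appended since the last hsync() is exactly the at-risk window described above:

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;

// Hypothetical sketch of a WAL writer that hflushes on every append
// but only hsyncs periodically.
public class WalSyncSketch {
  // Hypothetical threshold: hsync once per ~1 MB of appended entries.
  private static final long HSYNC_INTERVAL_BYTES = 1L << 20;

  private final FSDataOutputStream out;
  private long unsyncedBytes = 0;

  public WalSyncSketch(FSDataOutputStream out) {
    this.out = out;
  }

  public void append(byte[] walEntry) throws IOException {
    out.write(walEntry);
    // hflush() pushes the bytes to the DataNodes' memory: readers can see
    // them, but they are not guaranteed to survive a crash/power failure.
    out.hflush();
    unsyncedBytes += walEntry.length;
    if (unsyncedBytes >= HSYNC_INTERVAL_BYTES) {
      // hsync() forces the bytes to disk on the DataNodes (fsync semantics).
      // If the RS dies before this call, the entries appended above were
      // already acked to the client but may be lost: the regression
      // described in this comment.
      out.hsync();
      unsyncedBytes = 0;
    }
  }
}
{code}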





> [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin
> -------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-14004
>                 URL: https://issues.apache.org/jira/browse/HBASE-14004
>             Project: HBase
>          Issue Type: Bug
>          Components: regionserver
>            Reporter: He Liangliang
>            Priority: Critical
>              Labels: replication, wal
>
> Looks like the current write path can cause inconsistency between the Memstore/HFile and the WAL, which causes the slave cluster to have more data than the master cluster.
> The simplified write path looks like:
> 1. insert record into Memstore
> 2. write record to WAL
> 3. sync WAL
> 4. rollback Memstore if 3 fails
> It's possible that the HDFS sync RPC call fails while the data has already (perhaps partially) been transported to the DNs and is eventually persisted. As a result, the handler will roll back the Memstore, and the HFile flushed later will also skip this record.
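
For illustration, a minimal sketch of the four-step write path quoted above (the Memstore/Wal interfaces and names are hypothetical stand-ins, not actual HBase code), showing how a failed sync rolls back the Memstore even though the DNs may already hold, and later persist, the WAL bytes:

{code:java}
import java.io.IOException;

// Hypothetical stand-ins for the real Memstore and WAL components.
interface Memstore {
  void insert(byte[] record);
  void rollback(byte[] record);
}

interface Wal {
  long write(byte[] record) throws IOException; // returns a sequence id
  void sync(long seqId) throws IOException;
}

class WritePathSketch {
  private final Memstore memstore;
  private final Wal wal;

  WritePathSketch(Memstore memstore, Wal wal) {
    this.memstore = memstore;
    this.wal = wal;
  }

  void applyMutation(byte[] record) throws IOException {
    memstore.insert(record);         // 1. insert record into Memstore
    long seqId = wal.write(record);  // 2. write record to WAL
    try {
      wal.sync(seqId);               // 3. sync WAL
    } catch (IOException e) {
      // 4. rollback Memstore if 3 fails. The race: the sync RPC can fail on
      // the client side even though the bytes were already (perhaps
      // partially) shipped to the DNs and eventually persisted. Replication
      // then ships the WAL entry to the slave cluster, while the master's
      // flushed HFile skips the rolled-back record.
      memstore.rollback(record);
      throw e;
    }
  }
}
{code}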



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
