hbase-issues mailing list archives

From "Heng Chen (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-14004) [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin
Date Wed, 09 Dec 2015 02:58:11 GMT

    [ https://issues.apache.org/jira/browse/HBASE-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15047929#comment-15047929 ]

Heng Chen commented on HBASE-14004:
-----------------------------------

{quote}
Have to be careful how we do this. We have elaborate accounting now that allows only one 'sync'
type, either a hflush or a hsync, but not a mix of both.
{quote}
After reading the notes in the doc, I begin to agree with stack.
Why do we need hsync? The concern about using 'hflush' is that we may lose data when all 3 DNs and the RS crash at the same time, right? That is a really small probability. But if we introduce hsync (for example, hsync periodically), it will add latency between master and slave. Is it worth doing?
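
For context, the durability gap between the two calls is the HDFS Syncable contract. A minimal sketch, assuming a throwaway path (/tmp/wal-sketch); the hflush()/hsync() calls are the real Hadoop FSDataOutputStream API, the rest is illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SyncSemantics {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    try (FSDataOutputStream out = fs.create(new Path("/tmp/wal-sketch"))) {
      out.write("edit-1".getBytes("UTF-8"));
      // hflush: push the data to every DataNode in the pipeline and wait
      // for acks. New readers can see it, but it may still live only in
      // DN memory, so a simultaneous crash of all 3 DNs loses it.
      out.hflush();

      out.write("edit-2".getBytes("UTF-8"));
      // hsync: same as hflush, but each DataNode also fsyncs the data to
      // disk, so the edit survives a whole-pipeline power failure -- at
      // the cost of extra latency on every call.
      out.hsync();
    }
  }
}
{code}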

And the inconsistency problem this issue describes could be fixed if the replicator only reads entries that have been 'acked hflushed', just as we do in the recovery process when an hflush fails, right? And as the design stands, we only use hsync to prevent data inconsistency in replication, but data loss can still happen because we do NOT use 'hsync' in the write path. If so, why NOT just use hflush?
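
In other words, the replication source would ship an entry only once its offset is covered by a successful hflush ack. A minimal sketch of that watermark, assuming a hypothetical AckedWatermark helper shared by the WAL writer and the replication source (not a real HBase class):

{code:java}
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: the WAL writer publishes the highest file offset
// covered by an acked hflush; the replication source never reads past it.
public class AckedWatermark {
  private final AtomicLong ackedOffset = new AtomicLong(0);

  // Writer side: called after a successful hflush covering [0, offset).
  public void advance(long offset) {
    ackedOffset.accumulateAndGet(offset, Math::max);
  }

  // Replication side: only entries that end at or before the acked offset
  // are eligible for shipping to the slave cluster.
  public boolean isShippable(long entryEndOffset) {
    return entryEndOffset <= ackedOffset.get();
  }
}
{code}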
 

> [Replication] Inconsistency between Memstore and WAL may result in data in remote cluster that is not in the origin
> -------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-14004
>                 URL: https://issues.apache.org/jira/browse/HBASE-14004
>             Project: HBase
>          Issue Type: Bug
>          Components: regionserver
>            Reporter: He Liangliang
>            Priority: Critical
>              Labels: replication, wal
>
> Looks like the current write path can cause inconsistency between memstore/hfile and WAL, which can cause the slave cluster to have more data than the master cluster.
> The simplified write path looks like (sketched in code after this description):
> 1. insert record into Memstore
> 2. write record to WAL
> 3. sync WAL
> 4. rollback Memstore if 3 fails
> It's possible that the HDFS sync RPC call fails, but the data has already (perhaps partially) been transported to the DNs, where it eventually gets persisted. As a result, the handler will roll back the Memstore, and the HFile flushed later will also skip this record.
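
A minimal sketch of that write path and where the hazard sits, assuming hypothetical MemStore/WALWriter/Edit stand-ins rather than the real HBase types:

{code:java}
import java.io.IOException;

// Illustrative stand-ins, not HBase classes.
interface Edit {}
interface MemStore { void insert(Edit e); void rollback(Edit e); }
interface WALWriter {
  long append(Edit e) throws IOException;     // returns WAL offset
  void sync(long offset) throws IOException;  // the HDFS sync RPC
}

class WritePathSketch {
  static void applyEdit(MemStore memstore, WALWriter wal, Edit edit) throws IOException {
    memstore.insert(edit);           // 1. insert record into Memstore
    long offset = wal.append(edit);  // 2. write record to WAL
    try {
      wal.sync(offset);              // 3. sync WAL
    } catch (IOException e) {
      // 4. rollback Memstore if 3 fails. The hazard: the sync RPC can fail
      // on the client even though the bytes reached the DNs and eventually
      // persist, so the slave keeps an edit the origin rolled back.
      memstore.rollback(edit);
      throw e;
    }
  }
}
{code}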



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
