hbase-issues mailing list archives

From "Ted Yu (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-17290) Potential loss of data for replication of bulk loaded hfiles
Date Thu, 05 Jan 2017 17:08:58 GMT

    [ https://issues.apache.org/jira/browse/HBASE-17290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15801889#comment-15801889 ]

Ted Yu commented on HBASE-17290:

The LOG in that catch block is at ERROR level.
Consider changing it to DEBUG, since the absence of the hfile implies an error in the commit step
(which should be remedied by operator retry).
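For illustration only, a minimal sketch of the suggested level change. The class, method, and slf4j logger below are hypothetical stand-ins, not the actual code touched by the attached patch:

import java.io.FileNotFoundException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical illustration of the suggestion above: log the missing-hfile case
// at DEBUG rather than ERROR, because the absence of the hfile points at a failed
// commit step that the operator is expected to retry.
public class HFileLookupSketch {
  private static final Logger LOG = LoggerFactory.getLogger(HFileLookupSketch.class);

  void readForReplication(String hfilePath) {
    try {
      openHFile(hfilePath); // stand-in for the real hfile read in the replication path
    } catch (FileNotFoundException e) {
      // Previously an ERROR-level log; DEBUG keeps the retryable case out of the error logs.
      LOG.debug("Hfile {} not found; bulk load commit likely failed and should be retried",
          hfilePath, e);
    }
  }

  private void openHFile(String path) throws FileNotFoundException {
    // placeholder: the real code opens the bulk loaded hfile for shipping to the peer
    throw new FileNotFoundException(path);
  }
}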

> Potential loss of data for replication of bulk loaded hfiles
> ------------------------------------------------------------
>                 Key: HBASE-17290
>                 URL: https://issues.apache.org/jira/browse/HBASE-17290
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 1.3.0
>            Reporter: Ted Yu
>            Assignee: Ashish Singhi
>             Fix For: 2.0.0, 1.4.0
>         Attachments: HBASE-17290.patch
> Currently the support for replication of bulk loaded hfiles relies on the bulk load marker
> written in the WAL.
> The move of the bulk loaded hfile(s) (into the region directory) may succeed but the write of
> the bulk load marker may fail.
> This means that although the bulk loaded hfile is being served in the source cluster, the
> replication wouldn't happen.
> Normally the operator is supposed to retry the bulk load. But relying on human retry is not a
> robust solution.
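
To make the quoted failure mode concrete, a minimal sketch of the ordering it describes; the class and method names are hypothetical and only stand in for the real bulk load commit path:

import java.io.IOException;
import java.nio.file.Path;
import java.util.List;

// Sketch of the ordering hazard: the hfiles are moved into the region directory
// first, and only then is the bulk load marker appended to the WAL. Replication
// is driven by that marker, so a failure in step 2 leaves data that is served
// locally but never shipped to the peer cluster.
public class BulkLoadOrderingSketch {

  void commitBulkLoad(List<Path> hfiles) throws IOException {
    // Step 1: once this succeeds, the source cluster serves the bulk loaded data.
    moveIntoRegionDirectory(hfiles);

    // Step 2: if this write fails, no marker reaches the WAL and the files above
    // are never picked up for replication; the operator has to retry the bulk load.
    writeBulkLoadMarkerToWal(hfiles);
  }

  private void moveIntoRegionDirectory(List<Path> hfiles) throws IOException {
    // placeholder for renaming each hfile into the region's column family directory
  }

  private void writeBulkLoadMarkerToWal(List<Path> hfiles) throws IOException {
    // placeholder for appending the bulk load descriptor to the WAL
  }
}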

This message was sent by Atlassian JIRA
