hadoop-hdfs-issues mailing list archives

From "Kihwal Lee (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (HDFS-5924) Utilize OOB upgrade message processing for writes
Date Thu, 20 Feb 2014 21:31:22 GMT

    [ https://issues.apache.org/jira/browse/HDFS-5924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13907520#comment-13907520 ]

Kihwal Lee edited comment on HDFS-5924 at 2/20/14 9:27 PM:
-----------------------------------------------------------

If delivery of the OOB ack fails due to a network or hardware issue and there is only
one replica in the pipeline, the write will fail.  This is no worse than the current
behavior.  "Data loss" typically refers to situations where data was successfully written,
but part or all of it later becomes permanently unavailable.  Here it is different: the
write simply fails.

In short, OOB acking is used to make the upgrade process smoother, but (1) this feature
won't block shutdown indefinitely and (2) if an OOB ack is not delivered, things fall back
to the existing non-upgrade behavior.
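The fallback behavior described above can be sketched as a small decision helper. This is an illustrative sketch only, not the actual DFSClient code; the method and parameter names are hypothetical, and only the logic (wait for a restarting node when the pipeline would otherwise fail or the node is local, with a timeout of 0 disabling the feature) comes from the comment and issue description.

```java
// Illustrative sketch of the client-side decision, with hypothetical names.
public class RestartRecoveryDemo {
    /**
     * Wait for a restarting datanode only if losing it would kill the write
     * (single-node pipeline) or if it is the local datanode; otherwise fall
     * back to ordinary pipeline recovery.
     */
    static boolean shouldWaitForRestart(int nodesInPipeline,
                                        boolean restartingNodeIsLocal,
                                        long restartTimeoutMs) {
        if (restartTimeoutMs <= 0) {
            return false; // feature turned off by setting the timeout to 0
        }
        return nodesInPipeline == 1 || restartingNodeIsLocal;
    }

    public static void main(String[] args) {
        // Single-replica pipeline: wait, or the write fails outright.
        System.out.println(shouldWaitForRestart(1, false, 30000)); // true
        // Three-node pipeline, remote node restarting: normal recovery.
        System.out.println(shouldWaitForRestart(3, false, 30000)); // false
        // Timeout of 0 disables the feature entirely.
        System.out.println(shouldWaitForRestart(1, true, 0));      // false
    }
}
```

Note that even when the client waits, an undelivered OOB ack on a single-replica pipeline still just fails the write, matching the non-upgrade behavior.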



> Utilize OOB upgrade message processing for writes
> -------------------------------------------------
>
>                 Key: HDFS-5924
>                 URL: https://issues.apache.org/jira/browse/HDFS-5924
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode, ha, hdfs-client, namenode
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
>         Attachments: HDFS-5924_RBW_RECOVERY.patch, HDFS-5924_RBW_RECOVERY.patch
>
>
> After HDFS-5585 and HDFS-5583, clients and datanodes can coordinate shutdown and restart
in order to minimize failures or locality loss.
> In this jira, the HDFS client is made aware of the restart OOB ack and performs special
write-pipeline recovery. The datanode is also modified to load marked RBW replicas as RBW
instead of RWR, as long as the restart did not take too long.
> The client considers this kind of recovery only when there is just one node left in the
pipeline or the restarting node is a local datanode.  For both clients and datanodes, the
timeout or expiration is configurable, meaning this feature can be turned off by setting
the timeout variables to 0.
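> As a sketch, disabling the feature per the last point might look like the hdfs-site.xml
fragment below. The property names are illustrative assumptions based on the description,
not confirmed by this message; only the "set the timeouts to 0 to turn it off" behavior
comes from the issue text.

```xml
<!-- Illustrative hdfs-site.xml fragment; property names are assumptions. -->
<property>
  <name>dfs.client.datanode-restart.timeout</name>
  <value>0</value> <!-- client does not wait for restarting datanodes -->
</property>
<property>
  <name>dfs.datanode.restart.replica.expiration</name>
  <value>0</value> <!-- datanode does not load marked replicas as RBW -->
</property>
```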



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
