hadoop-hdfs-issues mailing list archives

From "Kihwal Lee (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-11856) Ability to re-add Upgrading Nodes (remote) to pipeline for future pipeline updates
Date Thu, 01 Jun 2017 21:48:04 GMT

     [ https://issues.apache.org/jira/browse/HDFS-11856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kihwal Lee updated HDFS-11856:
       Resolution: Fixed
     Hadoop Flags: Reviewed
    Fix Version/s: 2.7.4
           Status: Resolved  (was: Patch Available)

> Ability to re-add Upgrading Nodes (remote) to pipeline for future pipeline updates
> ----------------------------------------------------------------------------------
>                 Key: HDFS-11856
>                 URL: https://issues.apache.org/jira/browse/HDFS-11856
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs-client, rolling upgrades
>    Affects Versions: 2.7.3
>            Reporter: Vinayakumar B
>            Assignee: Vinayakumar B
>             Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>         Attachments: HDFS-11856-01.patch, HDFS-11856-02.branch-2.patch, HDFS-11856-02.patch,
HDFS-11856-branch-2-02.patch, HDFS-11856-branch-2.7-02.patch, HDFS-11856-branch-2.8-02.patch
> During a rolling upgrade, if a DN gets restarted, it will send a special OOB_RESTART
status to all streams opened for write.
> 1. Local clients will wait 30 seconds for the datanode to come back.
> 2. Remote clients will treat these nodes as bad nodes and continue with pipeline recovery
and writing. The restarted nodes will be marked as bad and excluded for the lifetime
of the stream.
> In a small cluster with only 3 nodes in total, each time a remote node restarts
for upgrade, it will be excluded.
> So a stream that initially writes to 3 nodes will end up writing to only one node,
since there are no other nodes left to replace the excluded ones.
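
The shrinking-pipeline behavior described above can be illustrated with a minimal sketch. This is not the actual HDFS client code; the class and method names (`PipelineExclusionSketch`, `remoteNodeRestarts`) are hypothetical, and the sketch only models the bookkeeping: a remote client that treats each restarted datanode as bad and excludes it for the stream's lifetime, with no replacement available in a 3-node cluster.

```java
import java.util.*;

// Hypothetical sketch (not HDFS source) of how a write pipeline shrinks
// when restarted remote datanodes are permanently excluded.
public class PipelineExclusionSketch {
    private final List<String> pipeline = new ArrayList<>();
    private final Set<String> excluded = new HashSet<>();

    public PipelineExclusionSketch(List<String> initialNodes) {
        pipeline.addAll(initialNodes);
    }

    // A remote datanode restarting for upgrade sends OOB_RESTART, but a
    // remote client treats it as a bad node: it is dropped from the
    // pipeline and excluded for the lifetime of the stream. In a 3-node
    // cluster there is no spare node, so no replacement is added.
    public void remoteNodeRestarts(String node) {
        pipeline.remove(node);
        excluded.add(node);
    }

    public List<String> currentPipeline() {
        return Collections.unmodifiableList(pipeline);
    }

    public static void main(String[] args) {
        PipelineExclusionSketch s =
            new PipelineExclusionSketch(Arrays.asList("dn1", "dn2", "dn3"));
        s.remoteNodeRestarts("dn2");  // first rolling-upgrade restart
        s.remoteNodeRestarts("dn3");  // second restart
        // The stream is now writing to a single node.
        System.out.println(s.currentPipeline());
    }
}
```

The fix in this issue allows such upgrading (restarting) nodes to be re-added to the pipeline for future pipeline updates instead of being excluded forever, so the pipeline does not degrade to a single replica in small clusters.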

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
