hadoop-hdfs-issues mailing list archives

From "Nanda kumar (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDDS-399) Handle pipeline discovery on SCM restart.
Date Sun, 16 Sep 2018 18:59:00 GMT

    [ https://issues.apache.org/jira/browse/HDDS-399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16616856#comment-16616856 ]

Nanda kumar commented on HDDS-399:

Thanks [~msingh] for updating the patch and splitting it into multiple jiras; it really
helps with reviewing. Overall the patch looks good to me.

We don't need Mapping#addContainerToPipeline; we can pass PipelineSelector to ContainerStateManager
and add the containers to the pipeline there.

Why do we need to create a new {{SCM}} instance during restart?
Thanks for adding {{scm.join()}} after stopping and {{waitForClusterToBeReady()}} after starting
it again.

Whenever we override the {{equals}} method, we should also override {{hashCode}}; the {{hashCode}} method is missing here.
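As a hedged illustration of the equals/hashCode contract (the class name and field below are hypothetical, not taken from the patch), the usual pairing looks like this:

```java
import java.util.Objects;

// Hypothetical value class illustrating the equals/hashCode contract:
// two objects that compare equal must return the same hash code, or
// hash-based collections (HashMap, HashSet) will misbehave.
public class PipelineKey {
    private final String id;

    public PipelineKey(String id) {
        this.id = id;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (!(o instanceof PipelineKey)) {
            return false;
        }
        return Objects.equals(id, ((PipelineKey) o).id);
    }

    @Override
    public int hashCode() {
        // Must be consistent with equals: equal ids yield equal hashes.
        return Objects.hash(id);
    }
}
```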

The {{finalizePipeline}} method implementation has to be fixed.

In {{removeContainerFromPipeline}}, we should also call {{closePipelineIfNoOpenContainers}}.
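A minimal sketch of that suggestion (the field names and class structure below are assumptions for illustration; only the two method names come from the review):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hedged sketch: after removing a container from a pipeline, check
// whether the pipeline still has open containers and close it if not.
public class PipelineTracker {
    private final Map<String, Set<String>> openContainers = new HashMap<>();
    private final Set<String> closedPipelines = new HashSet<>();

    public void addContainerToPipeline(String pipelineId, String containerId) {
        openContainers.computeIfAbsent(pipelineId, k -> new HashSet<>()).add(containerId);
    }

    public void removeContainerFromPipeline(String pipelineId, String containerId) {
        Set<String> containers = openContainers.get(pipelineId);
        if (containers != null) {
            containers.remove(containerId);
        }
        // The review suggestion: check for emptiness right after removal.
        closePipelineIfNoOpenContainers(pipelineId);
    }

    private void closePipelineIfNoOpenContainers(String pipelineId) {
        Set<String> containers = openContainers.get(pipelineId);
        if (containers == null || containers.isEmpty()) {
            closedPipelines.add(pipelineId);
        }
    }

    public boolean isClosed(String pipelineId) {
        return closedPipelines.contains(pipelineId);
    }
}
```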

> Handle pipeline discovery on SCM restart.
> -----------------------------------------
>                 Key: HDDS-399
>                 URL: https://issues.apache.org/jira/browse/HDDS-399
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: SCM
>    Affects Versions: 0.2.1
>            Reporter: Mukul Kumar Singh
>            Assignee: Mukul Kumar Singh
>            Priority: Blocker
>             Fix For: 0.2.1
>         Attachments: HDDS-399.001.patch, HDDS-399.002.patch, HDDS-399.003.patch, HDDS-399.004.patch
> On SCM restart, as part of node registration, SCM should find out the list of open pipelines
> on the node. Once all the nodes of a pipeline have reported back, it should be added as an
> active pipeline for further allocations.
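The discovery idea in the description can be sketched as follows (a hedged illustration: the class and method names are assumptions, not the actual SCM code):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hedged sketch: during node registration each datanode reports the
// open pipelines it belongs to; a pipeline becomes active for further
// allocations only once every member node has reported back.
public class PipelineDiscovery {
    // pipelineId -> full set of member nodes (known from pipeline metadata)
    private final Map<String, Set<String>> members = new HashMap<>();
    // pipelineId -> nodes that have re-registered since the SCM restart
    private final Map<String, Set<String>> reported = new HashMap<>();
    private final Set<String> activePipelines = new HashSet<>();

    public void registerPipeline(String pipelineId, Set<String> nodes) {
        members.put(pipelineId, new HashSet<>(nodes));
    }

    // Called when a datanode registers and reports an open pipeline.
    public void onNodeReport(String pipelineId, String nodeId) {
        Set<String> expected = members.get(pipelineId);
        if (expected == null) {
            return; // unknown pipeline; nothing to activate
        }
        reported.computeIfAbsent(pipelineId, k -> new HashSet<>()).add(nodeId);
        if (reported.get(pipelineId).containsAll(expected)) {
            // All members have reported back: pipeline is usable again.
            activePipelines.add(pipelineId);
        }
    }

    public boolean isActive(String pipelineId) {
        return activePipelines.contains(pipelineId);
    }
}
```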

This message was sent by Atlassian JIRA
