hadoop-hdfs-issues mailing list archives

From "Elek, Marton (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDDS-199) Implement ReplicationManager to handle underreplication of closed containers
Date Mon, 23 Jul 2018 14:12:00 GMT

     [ https://issues.apache.org/jira/browse/HDDS-199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel

Elek, Marton updated HDDS-199:
    Attachment: HDDS-199.017.patch

> Implement ReplicationManager to handle underreplication of closed containers
> ----------------------------------------------------------------------------
>                 Key: HDDS-199
>                 URL: https://issues.apache.org/jira/browse/HDDS-199
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>          Components: SCM
>            Reporter: Elek, Marton
>            Assignee: Elek, Marton
>            Priority: Major
>             Fix For: 0.2.1
>         Attachments: HDDS-199.001.patch, HDDS-199.002.patch, HDDS-199.003.patch, HDDS-199.004.patch,
HDDS-199.005.patch, HDDS-199.006.patch, HDDS-199.007.patch, HDDS-199.008.patch, HDDS-199.009.patch,
HDDS-199.010.patch, HDDS-199.011.patch, HDDS-199.012.patch, HDDS-199.013.patch, HDDS-199.014.patch,
HDDS-199.015.patch, HDDS-199.016.patch, HDDS-199.017.patch
> HDDS/Ozone supports Open and Closed containers. Under specific conditions (the container
is full, or a node has failed) the container will be closed and replicated in a different
way. The replication of Open containers is handled by Ratis and the PipelineManager.
> The ReplicationManager should handle the replication of ClosedContainers. The replication
information will be sent as an event (UnderReplicated/OverReplicated).
> The ReplicationManager will collect all of the events in a priority queue (to replicate
first the containers with the most replicas missing), calculate the destination datanode (at
first with a very simple algorithm, later by calculating scatter-width), and send the Copy/Delete
container command to the datanode (via the CommandQueue).
> A CopyCommandWatcher/DeleteCommandWatcher is also included to retry the copy/delete
in case of failure. This is an in-memory structure (based on HDDS-195) which requeues the
underreplicated/overreplicated events to the priority queue until the confirmation of the
copy/delete command arrives.
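
The priority-queue ordering described above can be sketched as follows. This is a minimal, hypothetical illustration, not the actual Ozone ReplicationManager code: the class and field names (ReplicationRequest, expectedReplicas, actualReplicas) are assumptions made for the example. It only shows the ordering idea that containers missing more replicas are handled first.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class ReplicationQueueSketch {

  // A hypothetical under-replicated container event: the more replicas
  // are missing, the more urgent the replication.
  static final class ReplicationRequest {
    final long containerId;
    final int expectedReplicas;
    final int actualReplicas;

    ReplicationRequest(long containerId, int expected, int actual) {
      this.containerId = containerId;
      this.expectedReplicas = expected;
      this.actualReplicas = actual;
    }

    int missingReplicas() {
      return expectedReplicas - actualReplicas;
    }
  }

  public static void main(String[] args) {
    // Order by the number of missing replicas, descending: containers
    // with more replicas missing come out of the queue first.
    PriorityQueue<ReplicationRequest> queue = new PriorityQueue<>(
        Comparator.comparingInt(ReplicationRequest::missingReplicas).reversed());

    queue.add(new ReplicationRequest(1L, 3, 2)); // 1 replica missing
    queue.add(new ReplicationRequest(2L, 3, 1)); // 2 replicas missing

    // Container 2 is more urgent, so it is polled first.
    System.out.println(queue.poll().containerId); // prints 2
    System.out.println(queue.poll().containerId); // prints 1
  }
}
```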
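
The watcher/requeue idea can likewise be sketched as an in-memory deadline map. This is an illustrative assumption of how such a watcher could work, not the actual HDDS-195 EventWatcher API: the names (CopyCommandWatcherSketch, PendingCopy, commandSent, completionReceived, checkTimeouts) are invented for the example.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class CopyCommandWatcherSketch {

  // A sent copy command we are still waiting on, with a deadline after
  // which it is considered failed and its event is requeued.
  static final class PendingCopy {
    final long containerId;
    final long deadlineMillis;

    PendingCopy(long containerId, long deadlineMillis) {
      this.containerId = containerId;
      this.deadlineMillis = deadlineMillis;
    }
  }

  private final Map<Long, PendingCopy> pending = new HashMap<>();

  // Stand-in for the priority queue: container ids requeued after timeout.
  final Deque<Long> requeued = new ArrayDeque<>();

  // Called when a copy command is sent to a datanode.
  void commandSent(long containerId, long nowMillis, long timeoutMillis) {
    pending.put(containerId, new PendingCopy(containerId, nowMillis + timeoutMillis));
  }

  // Called when the datanode confirms the copy completed; the command
  // is no longer tracked and will not be retried.
  void completionReceived(long containerId) {
    pending.remove(containerId);
  }

  // Periodic check: every command whose deadline has passed without a
  // confirmation is dropped from the pending map and requeued.
  void checkTimeouts(long nowMillis) {
    pending.values().removeIf(p -> {
      if (p.deadlineMillis <= nowMillis) {
        requeued.add(p.containerId); // back to the priority queue
        return true;
      }
      return false;
    });
  }
}
```

A confirmed command is simply forgotten, while an unconfirmed one re-enters the queue, which matches the description's "requeue until the confirmation arrives" behavior.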

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
