hadoop-hdfs-issues mailing list archives

From "Huafeng Wang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-12405) Clean up removed erasure coding policies from namenode
Date Tue, 12 Sep 2017 08:35:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-12405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16162668#comment-16162668 ]

Huafeng Wang commented on HDFS-12405:

I have a few questions about this issue. Why do we have to clean up the removed policies? NameNode
restarts are infrequent, so a cleanup done only at that time can cover just a small portion of the
policies. Would cleaning them up at NameNode restart really suffice?

> Clean up removed erasure coding policies from namenode
> ------------------------------------------------------
>                 Key: HDFS-12405
>                 URL: https://issues.apache.org/jira/browse/HDFS-12405
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: erasure-coding
>            Reporter: SammiChen
>            Assignee: Huafeng Wang
>              Labels: hdfs-ec-3.0-nice-to-have
> Currently, when an erasure coding policy is removed, it is transitioned to the "removed"
> state. Users can no longer apply a policy in the "removed" state to files or directories.
> The policy cannot be safely removed from the system unless we know there are no existing
> files or directories that use this "removed" policy. Finding out at runtime whether any
> files or directories are using the policy is time-consuming and might impact NameNode
> performance. A better choice is to do the work when the NameNode restarts and loads the
> Inodes. Collecting the information at that time will not introduce much extra overhead.
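To illustrate the restart-time approach the description proposes, here is a minimal, self-contained sketch (the class and method names below are hypothetical, not actual HDFS code): count how many inodes reference each erasure coding policy while the fsimage is being loaded, then purge only those "removed" policies that no inode still uses.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the proposed cleanup: tally EC policy usage while
// iterating inodes at NameNode load time, then drop "removed" policies that
// are no longer referenced by any inode.
public class EcPolicyCleanup {

    // policyId -> number of inodes still using that policy
    private final Map<Byte, Long> usage = new HashMap<>();

    // Called once per inode as the fsimage is loaded; id 0 is assumed to
    // mean "no EC policy" (plain replication).
    public void recordInode(byte ecPolicyId) {
        if (ecPolicyId != 0) {
            usage.merge(ecPolicyId, 1L, Long::sum);
        }
    }

    // After loading completes: return the "removed" policies that have a
    // zero usage count and can therefore be safely purged from the system.
    public Set<Byte> purgeUnused(Set<Byte> removedPolicyIds) {
        Set<Byte> purged = new HashSet<>();
        for (byte id : removedPolicyIds) {
            if (usage.getOrDefault(id, 0L) == 0L) {
                purged.add(id);
            }
        }
        return purged;
    }
}
```

Because the counters are updated inline during the inode scan that the NameNode already performs at startup, the extra overhead is a single map update per EC-striped inode, which matches the description's claim that collecting the information at load time is cheap.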

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
