hadoop-hdfs-issues mailing list archives

From "Jing Zhao (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-6969) Archival Storage: INode#getStoragePolicyID should always return the latest storage policy
Date Fri, 29 Aug 2014 23:53:53 GMT

     [ https://issues.apache.org/jira/browse/HDFS-6969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jing Zhao updated HDFS-6969:

    Attachment: HDFS-6969.001.patch

bq. But looks like currently the replicas cannot be placed on the correct storages even if
the block placement policy returns the correct results. Will continue digging.

The cause of the issue is that the current code does not include the storage type array in the
protobuf message. The 001 patch adds this fix, and the unit tests now pass.

The current trunk needs the same fix. However, since there is no use case for storage types
in trunk, it is not straightforward to write a unit test for it there. For now I have only
included the fix here, but I can also file a separate jira to fix trunk.
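As an illustration of the bug pattern described above (this is a hypothetical sketch, not the actual HDFS protobuf conversion code; the class and method names are invented), the problem is a wire message that carries the chosen target nodes but silently drops the per-target storage type array, so the receiver cannot place replicas on the intended storage media:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for HDFS storage media types.
enum StorageType { DISK, ARCHIVE }

// Illustrative stand-in for the protobuf message.
class LocatedBlockMsg {
    final List<String> targets = new ArrayList<>();
    // The fix: carry one storage type per target instead of dropping the array.
    final List<StorageType> storageTypes = new ArrayList<>();
}

class Converter {
    // Convert placement results into the wire message, preserving storage types.
    static LocatedBlockMsg convert(List<String> targets, List<StorageType> types) {
        LocatedBlockMsg m = new LocatedBlockMsg();
        m.targets.addAll(targets);
        // Before the fix this line was missing, so the receiver had to assume a
        // default storage type for every target.
        m.storageTypes.addAll(types);
        return m;
    }
}
```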

> Archival Storage: INode#getStoragePolicyID should always return the latest storage policy
> -----------------------------------------------------------------------------------------
>                 Key: HDFS-6969
>                 URL: https://issues.apache.org/jira/browse/HDFS-6969
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: balancer, namenode
>            Reporter: Jing Zhao
>            Assignee: Jing Zhao
>         Attachments: HDFS-6969.000.patch, HDFS-6969.001.patch
> In general, every file should provide exactly one storage policy to the Mover, regardless
of its snapshot state. Suppose a file /foo/bar is contained in snapshots s1 and s2 of the
root. If /foo/bar, /.snapshot/s1/foo/bar and /.snapshot/s2/foo/bar have different storage
policies, the Mover has to select one of them, and the latest one is the best choice. And if
/foo/bar is deleted, we should still use the storage policy it had before the deletion, since
deleting a file should not trigger data migration.
> Thus maybe what we can do is:
> 1. For a file with a policy directly specified on it, always follow the latest policy
> 2. Otherwise identify its storage policy by following its latest parent path (i.e., simply
follow the parent links)
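The two rules above can be sketched as a simple recursive lookup. This is a minimal illustration under assumed names (`Node`, `policyId`, `UNSPECIFIED`), not the actual `INode#getStoragePolicyID` implementation:

```java
// Sketch of the resolution rules: a directly specified policy always wins
// (rule 1); otherwise walk up the latest parent chain (rule 2).
class Node {
    static final byte UNSPECIFIED = 0;

    byte policyId = UNSPECIFIED; // policy directly set on this inode, if any
    final Node parent;           // link to the latest (current) parent

    Node(Node parent) { this.parent = parent; }

    byte getStoragePolicyId() {
        if (policyId != UNSPECIFIED) {
            return policyId;     // rule 1: latest directly specified policy
        }
        // rule 2: inherit from the latest parent path
        return parent == null ? UNSPECIFIED : parent.getStoragePolicyId();
    }
}
```

Note that because the walk always uses the current parent links, renames and policy changes after a snapshot was taken are reflected immediately, which matches the "latest policy wins" behavior described above.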
