hadoop-hdfs-issues mailing list archives

From "Yuanbo Liu (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-11293) [SPS]: Local DN should be given preference as source node, when target available in same node
Date Mon, 09 Jan 2017 03:50:58 GMT

    [ https://issues.apache.org/jira/browse/HDFS-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15810584#comment-15810584 ]

Yuanbo Liu commented on HDFS-11293:

[~umamaheswararao] Thanks for your patch. Most of your code looks good to me.
Please add
after {{existing.remove(datanodeStorageInfo.getStorageType());}} in line 307. Otherwise the
source node will be chosen twice.
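For context, the effect of that remove can be illustrated with a minimal standalone sketch (hypothetical names and types, not the actual SPS code): if the chosen storage type is left in {{existing}}, a second required move can match the same local source again.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the double-selection problem the review comment
// describes: the selected storage type must be removed from `existing`,
// or the same local source matches again on the next iteration.
public class SourceSelectionSketch {
  enum StorageType { DISK, ARCHIVE, SSD }

  // Pick a source storage for each required move.
  static List<StorageType> chooseSources(List<StorageType> existing,
                                         List<StorageType> required) {
    List<StorageType> chosen = new ArrayList<>();
    for (StorageType want : required) {
      if (existing.contains(want)) {
        chosen.add(want);
        // The fix: consume the matched storage so it is not chosen twice.
        existing.remove(want);
      }
    }
    return chosen;
  }

  public static void main(String[] args) {
    // One DISK replica available, but two required moves that both want DISK.
    List<StorageType> existing = new ArrayList<>(List.of(StorageType.DISK));
    List<StorageType> required = List.of(StorageType.DISK, StorageType.DISK);
    System.out.println(chooseSources(existing, required));
  }
}
```

Without the {{remove}} call, both iterations would match the single DISK entry and the node would be returned as a source twice.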
Here is the test case:
  @Test(timeout = 300000)
  public void testBlockMoveInSameDatanodeWithWARM() throws Exception {
    StorageType[][] diskTypes =
        new StorageType[][]{{StorageType.DISK, StorageType.ARCHIVE},
            {StorageType.ARCHIVE, StorageType.SSD},
            {StorageType.DISK, StorageType.DISK},
            {StorageType.DISK, StorageType.DISK}};

    config.setLong("dfs.block.size", DEFAULT_BLOCK_SIZE);
    hdfsCluster = startCluster(config, diskTypes, diskTypes.length,
        storagesPerDatanode, capacity);
    dfs = hdfsCluster.getFileSystem();

    // Change storage policy to WARM
    dfs.setStoragePolicy(new Path(file), "WARM");
    FSNamesystem namesystem = hdfsCluster.getNamesystem();
    INode inode = namesystem.getFSDirectory().getINode(file);


    waitExpectedStorageType(file, StorageType.DISK, 1, 30000);
    waitExpectedStorageType(file, StorageType.ARCHIVE, 2, 30000);
  }
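The two expectations at the end follow from the WARM policy layout (one replica on DISK, the remaining replicas on ARCHIVE); a minimal sketch of the expected counts, assuming the default replication factor of 3:

```java
// Sketch of the replica counts the test asserts: under the WARM storage
// policy, one replica stays on DISK and the rest go to ARCHIVE.
public class WarmPolicyCounts {
  static int[] expectedCounts(int replication) {
    int disk = 1;                    // one replica on DISK
    int archive = replication - 1;   // remaining replicas on ARCHIVE
    return new int[] { disk, archive };
  }

  public static void main(String[] args) {
    int[] c = expectedCounts(3);
    System.out.println("DISK=" + c[0] + " ARCHIVE=" + c[1]);
  }
}
```

With replication 3 this yields 1 DISK and 2 ARCHIVE replicas, matching the two {{waitExpectedStorageType}} calls above.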

> [SPS]: Local DN should be given preference as source node, when target available in same node
> --------------------------------------------------------------------------------------------
>                 Key: HDFS-11293
>                 URL: https://issues.apache.org/jira/browse/HDFS-11293
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Yuanbo Liu
>            Assignee: Uma Maheswara Rao G
>            Priority: Critical
>         Attachments: HDFS-11293-HDFS-10285-00.patch
> In {{FsDatasetImpl#createTemporary}}, we use {{volumeMap}} to get replica info by block pool id. But in this situation:
> {code}
> datanode A => {DISK, SSD}, datanode B => {DISK, ARCHIVE}.
> 1. the same block replica exists in A[DISK] and B[DISK].
> 2. the block pool id of datanode A and datanode B are the same.
> {code}
> Then we start to change the file's storage policy and move the block replicas in the cluster.
Very likely we have to move a block from B[DISK] to A[SSD]; at this time, datanode A throws
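The collision described in the quoted issue can be sketched as follows (hypothetical class and method names; the real {{FsDatasetImpl}} logic is more involved): because the replica map is keyed by block pool id and block id, datanode A already holds the block on DISK, so creating a temporary replica of the same block on A[SSD] is rejected.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the createTemporary collision: a datanode that
// already holds a block (keyed by block pool id + block id) rejects a new
// temporary replica of that block, regardless of the target storage type.
public class ReplicaMapSketch {
  static class ReplicaAlreadyExistsException extends RuntimeException {
    ReplicaAlreadyExistsException(String msg) { super(msg); }
  }

  private final Map<String, String> volumeMap = new HashMap<>();

  private static String key(String bpid, long blockId) {
    return bpid + "/" + blockId;
  }

  void addReplica(String bpid, long blockId, String storage) {
    volumeMap.put(key(bpid, blockId), storage);
  }

  void createTemporary(String bpid, long blockId, String storage) {
    if (volumeMap.containsKey(key(bpid, blockId))) {
      throw new ReplicaAlreadyExistsException("block " + blockId
          + " already exists on " + volumeMap.get(key(bpid, blockId)));
    }
    volumeMap.put(key(bpid, blockId), storage);
  }

  public static void main(String[] args) {
    ReplicaMapSketch dnA = new ReplicaMapSketch();
    dnA.addReplica("BP-1", 1001L, "DISK");      // replica already on A[DISK]
    try {
      dnA.createTemporary("BP-1", 1001L, "SSD"); // move B[DISK] -> A[SSD]
    } catch (ReplicaAlreadyExistsException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
```

This is why the patch prefers the local datanode as the source when a target storage is available on the same node: moving within A avoids creating a second replica of a block A already has.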

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
