hadoop-hdfs-issues mailing list archives

From "Rakesh R (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-11965) [SPS]: Should give chance to satisfy the low redundant blocks before removing the xattr
Date Mon, 03 Jul 2017 11:20:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-11965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16072272#comment-16072272 ]

Rakesh R commented on HDFS-11965:
---------------------------------

Thanks [~surendrasingh]. We have a few more {{under replicated}} occurrences that should be changed; please take care of them.
{code}

+  //Check if file is under-replicated or some blocks are not
+  //satisfy the policy. If file is under-replicate, SPS will

+  /**
+   * Increase retry count for underReplicated file.
+   */

+  /**
+   * Check if can retry for under replicated file.
+   */

+  //Check if some blocks are under-replicated.

+   * Test SPS for under replicated file .

+   * Test retry failure for under replicated file .

+   * Test SPS for over replicated file .
{code}

Please also change {{over replicated file}} to {{extra redundant file blocks}}.
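
For example, the flagged comments could be reworded along these lines (just a suggested wording under the new "low redundancy" terminology, adjust as you see fit):
{code}
  // Check if the file has low redundancy blocks or some blocks do not
  // satisfy the policy. If the file has low redundancy, SPS will ...

  /**
   * Increase retry count for a low redundancy file.
   */

  /**
   * Check whether we can retry for a low redundancy file.
   */
{code}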

> [SPS]: Should give chance to satisfy the low redundant blocks before removing the xattr
> ---------------------------------------------------------------------------------------
>
>                 Key: HDFS-11965
>                 URL: https://issues.apache.org/jira/browse/HDFS-11965
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: namenode
>    Affects Versions: HDFS-10285
>            Reporter: Surendra Singh Lilhore
>            Assignee: Surendra Singh Lilhore
>         Attachments: HDFS-11965-HDFS-10285.001.patch, HDFS-11965-HDFS-10285.002.patch, HDFS-11965-HDFS-10285.003.patch, HDFS-11965-HDFS-10285.004.patch
>
>
> The test case is failing because all the required replicas are not moved to the expected storage. This happens because of a delay in DataNode registration after the cluster restart.
> Scenario :
> 1. Start cluster with 3 DataNodes.
> 2. Create file and set storage policy to WARM.
> 3. Restart the cluster.
> 4. The NameNode and two DataNodes start first and register with the NameNode (one DataNode has not yet registered).
> 5. SPS schedules block movement based on the available DataNodes (it will move one replica to ARCHIVE based on the policy).
> 6. The block movement succeeds and the xattr is removed from the file because the condition {{itemInfo.isAllBlockLocsAttemptedToSatisfy()}} is true.
> {code}
> if (itemInfo != null
>     && !itemInfo.isAllBlockLocsAttemptedToSatisfy()) {
>   blockStorageMovementNeeded
>       .add(storageMovementAttemptedResult.getTrackId());
>   ....................
>   ......................
> } else {
>   ....................
>   ......................
>   this.sps.postBlkStorageMovementCleanup(
>       storageMovementAttemptedResult.getTrackId());
> }
> {code}
> 7. Now the third DataNode registers with the NameNode and reports one more DISK replica, so the NameNode has two DISK and one ARCHIVE replica.
> The test case has a condition to check the number of DISK replicas:
> {code}DFSTestUtil.waitExpectedStorageType(testFileName, StorageType.DISK, 1, timeout, fs);{code}
> This condition never becomes true and the test case times out.
>  
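
On the test side, one way to illustrate (and reduce) the race in steps 3-5 is to block until every DataNode has re-registered after the restart before asserting the storage types. This is only a sketch, not the fix this issue proposes (the fix is to give the low redundant blocks a chance to be satisfied before removing the xattr on the NameNode side); {{cluster}}, {{fs}}, {{testFileName}} and {{timeout}} are assumed to come from the surrounding MiniDFSCluster based test.
{code}
// Illustrative only -- not the patch for this issue.
// After the cluster restart, wait until all DataNodes are registered so that
// SPS sees the complete replica map, then assert the WARM layout
// (replication 3 => 1 DISK + 2 ARCHIVE).
cluster.waitActive();
DFSTestUtil.waitExpectedStorageType(testFileName, StorageType.DISK, 1, timeout, fs);
DFSTestUtil.waitExpectedStorageType(testFileName, StorageType.ARCHIVE, 2, timeout, fs);
{code}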



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


