hadoop-hdfs-issues mailing list archives

From "Uma Maheswara Rao G (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-11965) [SPS]: Should give chance to satisfy the low redundant blocks before removing the xattr
Date Fri, 07 Jul 2017 22:49:02 GMT

    [ https://issues.apache.org/jira/browse/HDFS-11965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078804#comment-16078804 ]

Uma Maheswara Rao G commented on HDFS-11965:
--------------------------------------------

Thank you [~surendrasingh] for the update. A quick comment:
{code}
case FEW_LOW_REDUNDENCY_BLOCKS:
+                LOG.info("Adding trackID " + blockCollectionID
+                    + " back to retry queue as some of the blocks"
+                    + " are low redundant.");
+                this.storageMovementNeeded.add(blockCollectionID);
{code}
When there are no other elements in the storageMovementNeeded list, this element comes back
every 300ms and logs this message each time. So, shall we make this a debug-level log to avoid
excessive logging in that case?
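The suggested change can be sketched as follows. This is a hypothetical, simplified stand-in (not the actual SPS code): the point is that a message emitted on every 300ms re-queue pass should be suppressed unless debug logging is on, while the re-queue itself still happens. The class and method names here are illustrative only.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch of the retry-queue logging suggestion: the trackID is
// always re-queued, but the per-pass message is guarded behind a debug flag
// (standing in for LOG.isDebugEnabled()) so it cannot flood the log at INFO.
public class RetryQueueSketch {
    // Stand-in for the SPS storageMovementNeeded queue of trackIDs.
    private final Queue<Long> storageMovementNeeded = new ArrayDeque<>();
    private final boolean debugEnabled; // stand-in for LOG.isDebugEnabled()

    public RetryQueueSketch(boolean debugEnabled) {
        this.debugEnabled = debugEnabled;
    }

    /**
     * Re-queues a block collection ID. Returns the message that would be
     * logged at debug level, or null when debug logging is disabled.
     */
    public String requeue(long blockCollectionID) {
        storageMovementNeeded.add(blockCollectionID);
        if (debugEnabled) {
            return "Adding trackID " + blockCollectionID
                + " back to retry queue as some of the blocks are low redundant.";
        }
        return null; // message suppressed, but the element is still re-queued
    }

    public int queueSize() {
        return storageMovementNeeded.size();
    }

    public static void main(String[] args) {
        RetryQueueSketch quiet = new RetryQueueSketch(false);
        System.out.println(quiet.requeue(42L) == null); // suppressed at INFO
        System.out.println(quiet.queueSize());          // still re-queued
    }
}
```

Either way the element stays in the retry queue; only the log volume changes.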

Please file a JIRA for the definite retry implementation which we discussed in the previous comments.

> [SPS]: Should give chance to satisfy the low redundant blocks before removing the xattr
> ---------------------------------------------------------------------------------------
>
>                 Key: HDFS-11965
>                 URL: https://issues.apache.org/jira/browse/HDFS-11965
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: namenode
>    Affects Versions: HDFS-10285
>            Reporter: Surendra Singh Lilhore
>            Assignee: Surendra Singh Lilhore
>         Attachments: HDFS-11965-HDFS-10285.001.patch, HDFS-11965-HDFS-10285.002.patch,
HDFS-11965-HDFS-10285.003.patch, HDFS-11965-HDFS-10285.004.patch, HDFS-11965-HDFS-10285.005.patch,
HDFS-11965-HDFS-10285.006.patch
>
>
> The test case is failing because all the required replicas are not moved to the expected
storage. This happened because of a delay in DataNode registration after the cluster restart.
> Scenario :
> 1. Start cluster with 3 DataNodes.
> 2. Create file and set storage policy to WARM.
> 3. Restart the cluster.
> 4. Now the NameNode and two DataNodes started first and got registered with the NameNode
(one DataNode not yet registered).
> 5. SPS scheduled block movement based on the available DataNodes (it will move one replica
to ARCHIVE based on the policy).
> 6. Block movement also succeeded and the xattr was removed from the file because the condition
{{itemInfo.isAllBlockLocsAttemptedToSatisfy()}} was true.
> {code}
> if (itemInfo != null
>                 && !itemInfo.isAllBlockLocsAttemptedToSatisfy()) {
>               blockStorageMovementNeeded
>                   .add(storageMovementAttemptedResult.getTrackId());
>             ....................
>             ......................
>             } else {
>             ....................
>             ......................
>               this.sps.postBlkStorageMovementCleanup(
>                   storageMovementAttemptedResult.getTrackId());
>             }
> {code}
> 7. Now the third DN registered with the NameNode and reported one more DISK replica. Now
the NameNode has two DISK and one ARCHIVE replica.
> In the test case we have a condition to check the number of DISK replicas:
> {code} DFSTestUtil.waitExpectedStorageType(testFileName, StorageType.DISK, 1, timeout,
fs);{code}
> This condition never becomes true and the test case times out.
>  
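The timeout in the scenario above follows the usual poll-until-deadline pattern that wait helpers such as {{DFSTestUtil.waitExpectedStorageType}} are built on. A minimal sketch, assuming a generic condition rather than the real storage-type check (the class name {{WaitUtil}} is hypothetical):

```java
import java.util.function.BooleanSupplier;

// Hypothetical sketch of a poll-until-timeout helper: re-evaluate a
// condition at a fixed interval until it becomes true or the deadline
// passes. If the cluster ends up with two DISK replicas instead of one,
// the condition never holds and this returns false, i.e. the test times out.
public class WaitUtil {
    /** Returns true iff the condition became true within timeoutMs. */
    public static boolean waitFor(BooleanSupplier condition,
                                  long intervalMs, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(intervalMs); // wait before re-checking
        }
        return condition.getAsBoolean(); // one final check at the deadline
    }
}
```

Under this pattern, giving low-redundancy blocks a retry window before dropping the xattr means the condition can eventually be satisfied once the late DataNode registers, instead of the movement being declared complete too early.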



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


