hadoop-hdfs-issues mailing list archives

From "Jing Zhao (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-9876) shouldProcessOverReplicated should not count number of pending replicas
Date Tue, 01 Mar 2016 23:11:18 GMT

     [ https://issues.apache.org/jira/browse/HDFS-9876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jing Zhao updated HDFS-9876:
    Attachment: HDFS-9876.001.patch

Remove unused internalBlock.

> shouldProcessOverReplicated should not count number of pending replicas
> -----------------------------------------------------------------------
>                 Key: HDFS-9876
>                 URL: https://issues.apache.org/jira/browse/HDFS-9876
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Takuya Fukudome
>            Assignee: Jing Zhao
>         Attachments: HDFS-9876.000.patch, HDFS-9876.001.patch, HDFS-9876.001.patch
> Currently, when checking whether we should process an over-replicated block in {{addStoredBlock}},
> we count both the reported replicas and the pending replicas. However, {{processOverReplicatedBlock}}
> chooses excess replicas only among the reported storages of the block. So in a situation
> where the over-replicated replicas/internal blocks reside only in the pending queue,
> we cannot choose any extra replica to delete.
> For contiguous blocks, this causes {{chooseExcessReplicasContiguous}} to do nothing.
> But for striped blocks, it may cause an endless loop in {{chooseExcessReplicasStriped}},
> in the following while loop:
> {code}
>       while (candidates.size() > 1) {
>         List<DatanodeStorageInfo> replicasToDelete = placementPolicy
>             .chooseReplicasToDelete(nonExcess, candidates, (short) 1,
>                 excessTypes, null, null);
> {code}
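To make the failure mode concrete, here is a minimal, self-contained sketch (not HDFS code; the {{Storage}} class and method names are hypothetical stand-ins) of why that loop can spin forever: the delete-chooser only ever picks among reported storages, so when every candidate is pending-only, nothing is removed and {{candidates.size()}} never shrinks. A bounded-iteration guard is added here purely so the sketch terminates.

```java
import java.util.ArrayList;
import java.util.List;

public class OverReplicatedLoopSketch {
    // Hypothetical stand-in for a DatanodeStorageInfo: just a name plus a
    // flag saying whether the replica on it has been reported by a DataNode.
    static class Storage {
        final String name;
        final boolean reported;
        Storage(String name, boolean reported) {
            this.name = name;
            this.reported = reported;
        }
    }

    // Mimics the placement policy's chooseReplicasToDelete: excess replicas
    // are only ever chosen among *reported* storages, so a candidate set
    // containing only pending replicas yields nothing to delete.
    static Storage pickReplicaToDelete(List<Storage> candidates) {
        for (Storage s : candidates) {
            if (s.reported) {
                return s;
            }
        }
        return null; // no reported replica => nothing deletable
    }

    // Mirrors the shape of the while-loop in the issue description. Returns
    // how many replicas were actually removed. The maxIters bound and the
    // null-check break are guards this sketch adds; the original loop has
    // neither, which is exactly why it never terminates in the pending-only case.
    static int drainExcess(List<Storage> candidates, int maxIters) {
        int removed = 0;
        while (candidates.size() > 1 && removed < maxIters) {
            Storage victim = pickReplicaToDelete(candidates);
            if (victim == null) {
                break; // without this guard: infinite loop
            }
            candidates.remove(victim);
            removed++;
        }
        return removed;
    }

    public static void main(String[] args) {
        // Pending-only candidates: nothing is ever chosen, 0 removals.
        List<Storage> pendingOnly = new ArrayList<>();
        pendingOnly.add(new Storage("dn1", false));
        pendingOnly.add(new Storage("dn2", false));
        System.out.println(drainExcess(pendingOnly, 100));

        // Reported candidates: the set shrinks to 1 as intended.
        List<Storage> reported = new ArrayList<>();
        reported.add(new Storage("dn1", true));
        reported.add(new Storage("dn2", true));
        System.out.println(drainExcess(reported, 100));
    }
}
```

In the real code the fix described by this issue is simpler than adding such a guard: stop counting pending replicas in the {{shouldProcessOverReplicated}} check, so the loop is never entered when only pending replicas are in excess.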

This message was sent by Atlassian JIRA
