hadoop-hdfs-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-8884) Fail-fast check in BlockPlacementPolicyDefault#chooseTarget
Date Wed, 19 Aug 2015 04:15:46 GMT

    [ https://issues.apache.org/jira/browse/HDFS-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14702419#comment-14702419 ]

Hadoop QA commented on HDFS-8884:
---------------------------------

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  13m 44s | Pre-patch trunk JavaDoc compilation may be broken. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to include 1 new or modified test files. |
| {color:red}-1{color} | javac |   0m 11s | The patch appears to cause the build to fail. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12751189/HDFS-8884.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 7ecbfd4 |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12038/console |


This message was automatically generated.

> Fail-fast check in BlockPlacementPolicyDefault#chooseTarget
> -----------------------------------------------------------
>
>                 Key: HDFS-8884
>                 URL: https://issues.apache.org/jira/browse/HDFS-8884
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Yi Liu
>            Assignee: Yi Liu
>         Attachments: HDFS-8884.001.patch, HDFS-8884.002.patch
>
>
> In the current BlockPlacementPolicyDefault, when choosing a datanode storage to place a block, we have the following logic:
> {code}
>         final DatanodeStorageInfo[] storages = DFSUtil.shuffle(
>             chosenNode.getStorageInfos());
>         int i = 0;
>         boolean search = true;
>         for (Iterator<Map.Entry<StorageType, Integer>> iter = storageTypes
>             .entrySet().iterator(); search && iter.hasNext(); ) {
>           Map.Entry<StorageType, Integer> entry = iter.next();
>           for (i = 0; i < storages.length; i++) {
>             StorageType type = entry.getKey();
>             final int newExcludedNodes = addIfIsGoodTarget(storages[i],
> {code}
> We iterate over all storages of the candidate datanode (actually two {{for}} loops, although their bounds are usually small) even if the datanode itself is not a good target (e.g. decommissioned, stale, too busy), since currently all the checks are done in {{addIfIsGoodTarget}}.
> We can fail fast: check the datanode-related conditions first; if the datanode is not a good target, there is no need to shuffle and iterate its storages. That is more efficient.
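Below is a minimal, self-contained sketch of that fail-fast ordering, for illustration only; it is not the HDFS-8884 patch. The types, field names, and the {{isGoodDatanode}} / {{isGoodStorage}} / {{chooseStorage}} helpers (and their parameters) are simplified stand-ins for the real BlockPlacementPolicyDefault internals, and the {{DFSUtil.shuffle}} step is omitted.

{code}
// Illustrative sketch only, NOT the actual HDFS-8884 patch: the datanode-level
// checks are hoisted in front of the per-storage loops, so a bad node is
// rejected before its storages are shuffled and iterated. All types and the
// isGoodDatanode/isGoodStorage helpers are simplified stand-ins for the real
// BlockPlacementPolicyDefault internals; DFSUtil.shuffle is omitted for brevity.
import java.util.Iterator;
import java.util.Map;

class FailFastChooseTargetSketch {

  enum StorageType { DISK, SSD }

  static class DatanodeStorageInfo {
    StorageType type;
    long remaining;          // bytes still available on this storage
  }

  static class DatanodeDescriptor {
    boolean decommissioned;
    boolean stale;
    int xceiverCount;        // rough "how busy" indicator
    DatanodeStorageInfo[] storages;
  }

  /** Hypothetical node-level check: everything that does not depend on a storage. */
  static boolean isGoodDatanode(DatanodeDescriptor node, int maxXceivers) {
    return !node.decommissioned && !node.stale && node.xceiverCount <= maxXceivers;
  }

  /** Hypothetical storage-level check: type match and enough remaining space. */
  static boolean isGoodStorage(DatanodeStorageInfo s, StorageType wanted, long blockSize) {
    return s.type == wanted && s.remaining >= blockSize;
  }

  /** Returns a suitable storage on the node, or null if none (or the node is bad). */
  static DatanodeStorageInfo chooseStorage(DatanodeDescriptor chosenNode,
      Map<StorageType, Integer> storageTypes, long blockSize, int maxXceivers) {
    // Fail fast: reject the node once, instead of re-discovering that it is a
    // bad target inside every iteration of the storage loops below.
    if (!isGoodDatanode(chosenNode, maxXceivers)) {
      return null;
    }
    // Only a good node pays the cost of scanning its storages.
    for (Iterator<Map.Entry<StorageType, Integer>> iter =
        storageTypes.entrySet().iterator(); iter.hasNext(); ) {
      Map.Entry<StorageType, Integer> entry = iter.next();
      for (DatanodeStorageInfo storage : chosenNode.storages) {
        if (isGoodStorage(storage, entry.getKey(), blockSize)) {
          return storage;
        }
      }
    }
    return null;
  }
}
{code}

The only point of the sketch is the ordering: the node-wide conditions are evaluated once, before the shuffle and the two storage loops, so an unsuitable node is rejected in constant time instead of after scanning every (storage type, storage) pair.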



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
