hadoop-hdfs-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-8946) Improve choosing datanode storage for block placement
Date Thu, 27 Aug 2015 16:35:47 GMT

    [ https://issues.apache.org/jira/browse/HDFS-8946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14717005#comment-14717005 ]

Hadoop QA commented on HDFS-8946:
---------------------------------

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  17m 24s | Pre-patch trunk has 4 extant Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 40s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc |   9m 52s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 19s | There were no new checkstyle issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install |   1m 27s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 26s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m  7s | Pre-build of native portion |
| {color:green}+1{color} | hdfs tests | 161m 10s | Tests passed in hadoop-hdfs. |
| | | 205m 25s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12752741/HDFS-8946.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0bf2854 |
| Pre-patch Findbugs warnings | https://builds.apache.org/job/PreCommit-HDFS-Build/12162/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/12162/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/12162/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/12162/console |


This message was automatically generated.

> Improve choosing datanode storage for block placement
> -----------------------------------------------------
>
>                 Key: HDFS-8946
>                 URL: https://issues.apache.org/jira/browse/HDFS-8946
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Yi Liu
>            Assignee: Yi Liu
>         Attachments: HDFS-8946.001.patch, HDFS-8946.002.patch
>
>
> This JIRA is to:
> Improve choosing a datanode storage for block placement:
> In {{BlockPlacementPolicyDefault}} ({{chooseLocalStorage}}, {{chooseRandom}}), we have the following logic to choose a datanode storage for placing a block.
> For a given storage type, we iterate over the storages of the datanode. But at the datanode level, only the storage type matters: inside the loop we check against the storage type and return the first storage if the storages of that type on the datanode satisfy the requirement. So we can remove the iteration over storages and do a single check to find a good storage of the given type. This is more efficient when the storages of that type on the datanode do not satisfy the requirement, since we no longer loop over all the storages repeating the same check.
> Besides, there is no need to shuffle the storages, since we only need to check once per storage type on the datanode.
> This also improves the logic and makes it clearer; a rough sketch of the simplified check follows the quoted code below.
> {code}
>       if (excludedNodes.add(localMachine) // was not in the excluded list
>           && isGoodDatanode(localDatanode, maxNodesPerRack, false,
>               results, avoidStaleNodes)) {
>         for (Iterator<Map.Entry<StorageType, Integer>> iter = storageTypes
>             .entrySet().iterator(); iter.hasNext(); ) {
>           Map.Entry<StorageType, Integer> entry = iter.next();
>           for (DatanodeStorageInfo localStorage : DFSUtil.shuffle(
>               localDatanode.getStorageInfos())) {
>             StorageType type = entry.getKey();
>             if (addIfIsGoodTarget(localStorage, excludedNodes, blocksize,
>                 results, type) >= 0) {
>               int num = entry.getValue();
>               ...
> {code}
> (current logic above)
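
As a rough illustration of the proposal quoted above, here is a minimal standalone sketch of the per-type check. The names used here ({{SimpleDatanode}}, {{SimpleStorage}}, {{StorageKind}}, {{chooseStorageForBlock}}) are hypothetical stand-ins rather than the actual Hadoop classes or the committed patch; the sketch only shows the idea of scanning a datanode's storages once per requested type, without shuffling, and deciding on the aggregated remaining space of that type.
{code}
// Hypothetical, simplified sketch of the per-type storage check described in
// HDFS-8946. Not the Hadoop implementation; types are stand-ins.
import java.util.Arrays;
import java.util.List;

public class ChooseStorageSketch {

  enum StorageKind { DISK, SSD, ARCHIVE }

  static class SimpleStorage {
    final StorageKind kind;
    final long remaining;               // bytes still free on this volume
    SimpleStorage(StorageKind kind, long remaining) {
      this.kind = kind;
      this.remaining = remaining;
    }
  }

  static class SimpleDatanode {
    final List<SimpleStorage> storages;
    SimpleDatanode(List<SimpleStorage> storages) { this.storages = storages; }

    /** One pass over the storages: no shuffling, no repeated per-storage checks. */
    SimpleStorage chooseStorageForBlock(StorageKind requested, long blockSize) {
      SimpleStorage first = null;       // first storage of the requested type
      long totalRemaining = 0;          // free space of that type on this node
      for (SimpleStorage s : storages) {
        if (s.kind == requested) {
          if (first == null) {
            first = s;
          }
          totalRemaining += s.remaining;
        }
      }
      // The datanode-level decision depends only on the storage type, so a
      // single check against the aggregated remaining space is enough.
      return (first != null && totalRemaining >= blockSize) ? first : null;
    }
  }

  public static void main(String[] args) {
    SimpleDatanode dn = new SimpleDatanode(Arrays.asList(
        new SimpleStorage(StorageKind.DISK, 64L << 30),   // 64 GiB of DISK
        new SimpleStorage(StorageKind.SSD, 16L << 30)));  // 16 GiB of SSD
    SimpleStorage chosen = dn.chooseStorageForBlock(StorageKind.SSD, 128L << 20);
    System.out.println(chosen == null ? "no storage fits" : chosen.kind + " chosen");
  }
}
{code}
Compared with the quoted logic, the shuffle and the per-storage check inside the type loop collapse into one pass over the datanode's storages per requested type.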



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
