[ https://issues.apache.org/jira/browse/HDFS-385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12727865#action_12727865
]
Tsz Wo (Nicholas), SZE commented on HDFS-385:
---------------------------------------------
> Why is excludedNodes a HashMap? A HashSet should perform at least as well and use less memory.
Surprisingly, java.util.HashSet is indeed backed by a HashMap. I learned this some time ago.
The following is copied from the [HashSet API|http://java.sun.com/javase/6/docs/api/java/util/HashSet.html]:
"This class implements the Set interface, backed by a hash table (actually a HashMap instance)..."
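A short sketch of what this means in practice: a HashSet<E> stores its elements as the keys of an internal HashMap<E, Object>, with every value pointing at one shared sentinel object, so the per-entry memory cost is essentially that of a HashMap entry. The node names below are made up for illustration:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Set;

public class HashSetBackedByMap {
    public static void main(String[] args) {
        // The set of nodes to exclude from placement (hypothetical names).
        Set<String> excludedNodes = new HashSet<>();
        excludedNodes.add("datanode-1");
        excludedNodes.add("datanode-1"); // duplicate add is a no-op
        excludedNodes.add("datanode-2");

        // Roughly what HashSet does internally: a HashMap whose values
        // are all the same dummy object, so only the keys matter.
        Object present = new Object();   // shared sentinel value
        HashMap<String, Object> asMap = new HashMap<>();
        asMap.put("datanode-1", present);
        asMap.put("datanode-1", present);
        asMap.put("datanode-2", present);

        System.out.println(excludedNodes.size());                  // prints 2
        System.out.println(asMap.keySet().equals(excludedNodes));  // prints true
    }
}
```

So a HashSet would not save memory over a HashMap here; the observable difference is only the API surface.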
> Design a pluggable interface to place replicas of blocks in HDFS
> ----------------------------------------------------------------
>
> Key: HDFS-385
> URL: https://issues.apache.org/jira/browse/HDFS-385
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: dhruba borthakur
> Assignee: dhruba borthakur
> Fix For: 0.21.0
>
> Attachments: BlockPlacementPluggable.txt, BlockPlacementPluggable2.txt, BlockPlacementPluggable3.txt, BlockPlacementPluggable4.txt, BlockPlacementPluggable4.txt
>
>
> The current HDFS code typically places one replica on the local rack, the second replica on a random remote rack, and the third replica on a random node of that remote rack. This algorithm is baked into the NameNode's code. It would be nice to make the block placement algorithm a pluggable interface. This would allow experimentation with different placement algorithms based on workloads, availability guarantees, and failure models.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.