[ https://issues.apache.org/jira/browse/HDFS-385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12728594#action_12728594 ]
dhruba borthakur commented on HDFS-385:
---------------------------------------
@Hong:
* javadoc for chooseTarget contains an unused parameter: I will fix.
* two versions of chooseTarget: I will make the default the one you suggested.
* asymmetry for the chooseTarget that takes a list of DatanodeDescriptor: I would like to leave it as it is, because a lot of code in the NameNode depends on this behaviour. Is this ok with you?
@Matei:
* HashMap vs HashSet: not much difference, as explained by Nicholas. Also, the latest patch does not have any excludedNodes parameters.
* verifyBlockPlacement part of the abstract API? This method is used by fsck to verify that a block satisfies the placement policy.
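To make the API shape under discussion concrete, here is a minimal sketch of an abstract placement policy with the two chooseTarget overloads (the convenience default delegating to the full form, as suggested) and the verifyBlockPlacement hook used by fsck. All names and signatures are simplified stand-ins, not the actual patch; the real code uses HDFS types such as DatanodeDescriptor.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the abstract placement API; illustrative only.
abstract class BlockPlacementPolicy {

    // Full form: excludedNodes maps a node to the reason it must be skipped.
    abstract List<String> chooseTarget(int numReplicas,
                                       String writer,
                                       Map<String, String> excludedNodes);

    // Convenience overload: no exclusions, delegating to the full form.
    List<String> chooseTarget(int numReplicas, String writer) {
        return chooseTarget(numReplicas, writer, new HashMap<String, String>());
    }

    // Called by fsck: does this block's replica set satisfy the policy
    // (here modelled as: replicas spread over at least minRacks racks)?
    abstract boolean verifyBlockPlacement(List<String> racks, int minRacks);
}

// Toy concrete policy: pick the first numReplicas non-excluded nodes.
class FirstFitPolicy extends BlockPlacementPolicy {
    private final List<String> nodes;

    FirstFitPolicy(List<String> nodes) { this.nodes = nodes; }

    @Override
    List<String> chooseTarget(int numReplicas, String writer,
                              Map<String, String> excludedNodes) {
        List<String> chosen = new ArrayList<>();
        for (String n : nodes) {
            if (chosen.size() == numReplicas) break;
            if (!excludedNodes.containsKey(n)) chosen.add(n);
        }
        return chosen;
    }

    @Override
    boolean verifyBlockPlacement(List<String> racks, int minRacks) {
        return new HashSet<>(racks).size() >= minRacks;
    }
}
```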
> Design a pluggable interface to place replicas of blocks in HDFS
> ----------------------------------------------------------------
>
> Key: HDFS-385
> URL: https://issues.apache.org/jira/browse/HDFS-385
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: dhruba borthakur
> Assignee: dhruba borthakur
> Fix For: 0.21.0
>
> Attachments: BlockPlacementPluggable.txt, BlockPlacementPluggable2.txt, BlockPlacementPluggable3.txt, BlockPlacementPluggable4.txt
>
>
> The current HDFS code typically places one replica on the local rack, the second replica
> on a random remote rack, and the third replica on a random node of that remote rack. This
> algorithm is baked into the NameNode's code. It would be nice to make the block placement
> algorithm a pluggable interface. This would allow experimentation with different placement
> algorithms based on workloads, availability guarantees, and failure models.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.