hadoop-hdfs-issues mailing list archives

From "Sanjay Radia (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-385) Design a pluggable interface to place replicas of blocks in HDFS
Date Thu, 09 Jul 2009 22:28:15 GMT

    [ https://issues.apache.org/jira/browse/HDFS-385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12729431#action_12729431 ]

Sanjay Radia commented on HDFS-385:

>The usage model is to define a policy for the entire cluster when you create the cluster.
This is especially useful when you have an HDFS instance on Amazon EC2, for example.
This is not intended to be dynamic in any shape or form for a specified cluster.

Given the above, should the system record the policy in the fsImage to prevent it from being changed inadvertently?
Similarly, should the balancer check to see if it has the same policy as the NN?
In the past folks have complained that Hadoop is too easy to misconfigure.

>> I'm a little concerned that the Balancer and Fsck will contradict a policy based
on filename, because BlockPlacementPolicy.isValidMove() and BlockPlacementPolicy.verifyBlockPlacement()
do not have access to the filename.
>I agree that there isn't an elegant way of materializing a filename during a rebalance
operation. One workaround is to run fsck to find the mapping from blocks to files.
Then you can use this information in your modified Balancer to do what is appropriate for
you. You can also use the tool in HADOOP-5019 for this purpose.
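A minimal sketch of that fsck workaround: scan the output of `hadoop fsck <path> -files -blocks` and build a block-id-to-filename map the modified Balancer could consult. The sample output format below is an assumption modeled on 0.20-era fsck output; the exact field layout varies across Hadoop versions, so treat the parsing as illustrative only.

```java
import java.util.*;
import java.util.regex.*;

/**
 * Sketch: build a block-id -> filename map from "hadoop fsck <path>
 * -files -blocks" output. The line formats assumed here are illustrative.
 */
public class FsckBlockMap {
    // Matches block tokens such as "blk_12345_1001" in fsck output.
    private static final Pattern BLOCK = Pattern.compile("blk_(-?\\d+)_\\d+");

    public static Map<String, String> parse(String fsckOutput) {
        Map<String, String> blockToFile = new HashMap<>();
        String currentFile = null;
        for (String line : fsckOutput.split("\n")) {
            if (line.startsWith("/")) {
                // Assumed file header: "/user/foo/f1 268435456 bytes, 2 block(s): OK"
                currentFile = line.split("\\s+")[0];
            } else if (currentFile != null) {
                // Assumed block line: "0. blk_100_1001 len=134217728 repl=3"
                Matcher m = BLOCK.matcher(line);
                while (m.find()) {
                    blockToFile.put("blk_" + m.group(1), currentFile);
                }
            }
        }
        return blockToFile;
    }

    public static void main(String[] args) {
        String sample =
            "/user/foo/f1 268435456 bytes, 2 block(s):  OK\n" +
            "0. blk_100_1001 len=134217728 repl=3\n" +
            "1. blk_101_1002 len=134217728 repl=3\n" +
            "/user/foo/f2 134217728 bytes, 1 block(s):  OK\n" +
            "0. blk_200_1003 len=134217728 repl=3\n";
        Map<String, String> map = parse(sample);
        System.out.println(map.get("blk_100") + " " + map.get("blk_200"));
    }
}
```

In practice the Balancer would refresh this map periodically, since blocks can be created or deleted between fsck runs.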

These are hard problems which indicate that this work is experimental and that it will be
a while before we figure out the right APIs.
However the experimentation is useful, and as long as it does not impact the base code in a negative
way, we should be able to add such features to Hadoop after careful review.
We should mark such new experimental APIs as "unstable" so that we are free to change them
down the road.

> Design a pluggable interface to place replicas of blocks in HDFS
> ----------------------------------------------------------------
>                 Key: HDFS-385
>                 URL: https://issues.apache.org/jira/browse/HDFS-385
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>             Fix For: 0.21.0
>         Attachments: BlockPlacementPluggable.txt, BlockPlacementPluggable2.txt, BlockPlacementPluggable3.txt,
BlockPlacementPluggable4.txt, BlockPlacementPluggable4.txt, BlockPlacementPluggable5.txt
> The current HDFS code typically places one replica on the local rack, the second replica
on a random remote rack, and the third replica on a random node of that remote rack. This algorithm
is baked into the NameNode's code. It would be nice to make the block placement algorithm a
pluggable interface. This would allow experimentation with different placement algorithms based
on workloads, availability guarantees, and failure models.
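The default policy described above (first replica local, second on a remote rack, third on another node of that remote rack) can be sketched behind a minimal pluggable interface. The names below (`PlacementPolicy`, `chooseTargets`, `Node`) are hypothetical and are not the API of the attached patches; this is only a sketch of the shape such a plug point could take.

```java
import java.util.*;

/** A datanode: a name plus the id of the rack it sits on. */
class Node {
    final String name, rack;
    Node(String name, String rack) { this.name = name; this.rack = rack; }
}

/** Hypothetical pluggable placement interface (illustrative names only). */
interface PlacementPolicy {
    List<Node> chooseTargets(String srcPath, int numReplicas, Node writer, List<Node> cluster);
}

/** Sketch of the default rack-aware policy: local node, remote rack, same remote rack. */
class DefaultPolicy implements PlacementPolicy {
    public List<Node> chooseTargets(String srcPath, int numReplicas, Node writer, List<Node> cluster) {
        List<Node> targets = new ArrayList<>();
        targets.add(writer);                        // 1st replica: the writer's own node
        Node remote = null;
        for (Node n : cluster) {                    // 2nd replica: a node on a different rack
            if (!n.rack.equals(writer.rack)) { remote = n; targets.add(n); break; }
        }
        if (remote != null) {
            for (Node n : cluster) {                // 3rd replica: another node on that remote rack
                if (n != remote && n.rack.equals(remote.rack)) { targets.add(n); break; }
            }
        }
        return targets.subList(0, Math.min(numReplicas, targets.size()));
    }
}

public class PlacementDemo {
    public static void main(String[] args) {
        List<Node> cluster = Arrays.asList(
            new Node("dn1", "r1"), new Node("dn2", "r2"), new Node("dn3", "r2"));
        List<Node> t = new DefaultPolicy().chooseTargets("/f", 3, cluster.get(0), cluster);
        System.out.println(t.get(0).name + " " + t.get(1).name + " " + t.get(2).name);
    }
}
```

A custom policy (rack-count constraints, EC2 availability zones, filename-based rules) would simply provide another implementation of the interface, which is the experimentation this issue is after.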

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
