hadoop-hdfs-issues mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-385) Design a pluggable interface to place replicas of blocks in HDFS
Date Thu, 09 Jul 2009 00:42:15 GMT

    [ https://issues.apache.org/jira/browse/HDFS-385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12728986#action_12728986 ]

dhruba borthakur commented on HDFS-385:
---------------------------------------

The usage model is to define a policy for the entire cluster when you create the cluster.
This is especially useful when you have an HDFS instance on Amazon EC2, for example.
This is not intended to be dynamic in any shape or form for a given cluster.
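For example, a custom policy could be selected cluster-wide in hdfs-site.xml. (The key name `dfs.block.replicator.classname` is an assumption based on how later HDFS releases expose this setting, and `org.example.MyBlockPlacementPolicy` is a hypothetical placeholder class.)

```xml
<!-- hdfs-site.xml: select a custom block placement policy for the whole
     cluster. Key name and class are illustrative assumptions. -->
<property>
  <name>dfs.block.replicator.classname</name>
  <value>org.example.MyBlockPlacementPolicy</value>
</property>
```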

> Existing files will retain policy 1. Fsck will report violations for policy 2 for the
> old files; correct?
Correct.

> It would be an admin error to configure NN and Balancer with different policies; correct?
There is no check for this; correct?
Correct.  

> Q. The policy manager is global to the file system. Can it have its own config to
> do different policies for different subtrees?
Sure can. I do not have a use-case for now that needs different policies for different files.
But when it is required, we can always do that.

You could also use this to co-locate blocks of the same file in the same set of datanodes.
But here again, I do not see a need for different policies for different files.

> Design a pluggable interface to place replicas of blocks in HDFS
> ----------------------------------------------------------------
>
>                 Key: HDFS-385
>                 URL: https://issues.apache.org/jira/browse/HDFS-385
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>             Fix For: 0.21.0
>
>         Attachments: BlockPlacementPluggable.txt, BlockPlacementPluggable2.txt, BlockPlacementPluggable3.txt,
BlockPlacementPluggable4.txt, BlockPlacementPluggable4.txt, BlockPlacementPluggable5.txt
>
>
> The current HDFS code typically places one replica on local rack, the second replica
on remote random rack and the third replica on a random node of that remote rack. This algorithm
is baked in the NameNode's code. It would be nice to make the block placement algorithm a
pluggable interface. This will allow experimentation of different placement algorithms based
on workloads, availability guarantees and failure models.
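> The default algorithm described above can be sketched as a small, self-contained Java
> simulation. This is not the NameNode's implementation, just an illustration of the three
> placement steps (local rack, random remote rack, second node on that remote rack); the
> class and method names are made up for the sketch.

```java
import java.util.*;

// Toy sketch (not the HDFS code) of the default three-replica placement:
// 1st replica on the writer's rack, 2nd on a random remote rack,
// 3rd on a different node of that same remote rack.
public class DefaultPlacementSketch {
    public static List<String> place(String localRack,
                                     Map<String, List<String>> racks,
                                     Random rnd) {
        List<String> replicas = new ArrayList<>();

        // 1st replica: a node on the writer's local rack.
        List<String> local = racks.get(localRack);
        replicas.add(local.get(rnd.nextInt(local.size())));

        // 2nd replica: a node on a randomly chosen remote rack.
        List<String> remoteRacks = new ArrayList<>(racks.keySet());
        remoteRacks.remove(localRack);
        String remote = remoteRacks.get(rnd.nextInt(remoteRacks.size()));
        List<String> remoteNodes = new ArrayList<>(racks.get(remote));
        replicas.add(remoteNodes.remove(rnd.nextInt(remoteNodes.size())));

        // 3rd replica: a different node on the same remote rack.
        replicas.add(remoteNodes.get(rnd.nextInt(remoteNodes.size())));
        return replicas;
    }

    public static void main(String[] args) {
        Map<String, List<String>> racks = new HashMap<>();
        racks.put("rackA", Arrays.asList("a1", "a2"));
        racks.put("rackB", Arrays.asList("b1", "b2"));
        racks.put("rackC", Arrays.asList("c1", "c2"));
        System.out.println(place("rackA", racks, new Random()));
    }
}
```

> A pluggable interface would let this `place` step be swapped out wholesale, which is
> exactly what makes experimentation with other topologies practical.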

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

