hadoop-hdfs-issues mailing list archives

From "Rodrigo Schmidt (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-1094) Intelligent block placement policy to decrease probability of block loss
Date Sat, 10 Jul 2010 17:02:55 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12887061#action_12887061
] 

Rodrigo Schmidt commented on HDFS-1094:
---------------------------------------

According to your definition, what prevents me from saying that there is only one node group,
namely the set of all nodes in the cluster? Or why not say that all triples are node groups?
Or any combination of k > 3 nodes?

What makes me think this definition is not sound is that it doesn't link node groups to
replication policies. For the rationale to be sound, there should be something saying which
node groups a given policy generates.

Your examples seem to suggest maximal sets from which any triple is a valid selection for
replication. If that's the case, you are oversimplifying things. The restriction that one
replica is on one rack and the other two are on another rules out many triples in both
approaches, and this has a non-negligible impact on the math.
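
To make that concrete, here is a quick sketch (my own illustration, with a hypothetical group
spanning two racks of r1 and r2 nodes) counting how many triples survive the
one-replica-on-one-rack, two-on-the-other restriction versus all triples in the group:

from math import comb

def constrained_triples(r1, r2):
    # placements with 1 replica on one rack and 2 on the other
    return r1 * comb(r2, 2) + r2 * comb(r1, 2)

def all_triples(r1, r2):
    # every triple in the group, ignoring the rack restriction
    return comb(r1 + r2, 3)

print(constrained_triples(6, 6))  # 180
print(all_triples(6, 6))          # 220

Whether that pruning helps or hurts the final number depends on the rest of the calculation,
but it is not something a "maximal set" view captures.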

Your argument about disjoint and non-disjoint groups is valid only if you compare groups of
equal size. A strategy with non-disjoint groups can be as good as or better than one with
disjoint groups if you reduce the size of the groups it defines.
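
As a rough illustration of that last point (my own sketch, assuming every triple inside a group
ends up hosting some block), compare disjoint groups of size 12 with overlapping "window" groups
of size 6 on the same hypothetical 120-node cluster:

from math import comb

N = 120  # hypothetical cluster size

# Disjoint groups of 12 nodes: any triple inside a group is a possible replica set.
disjoint = (N // 12) * comb(12, 3)   # 10 * 220 = 2200 exposed triples

# Overlapping "window" groups of 6 nodes: group i covers nodes i..i+5 (mod N).
# A triple fits in some window iff it is a node plus two of its next five neighbors.
overlapping = N * comb(5, 2)         # 120 * 10 = 1200 exposed triples

print(disjoint, overlapping)

Fewer exposed triples means fewer 3-node failures that can destroy a block, so once group sizes
are allowed to differ, "disjoint beats non-disjoint" is not automatic.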



> Intelligent block placement policy to decrease probability of block loss
> ------------------------------------------------------------------------
>
>                 Key: HDFS-1094
>                 URL: https://issues.apache.org/jira/browse/HDFS-1094
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>            Reporter: dhruba borthakur
>            Assignee: Rodrigo Schmidt
>         Attachments: prob.pdf, prob.pdf
>
>
> The current HDFS implementation specifies that the first replica is local and the other
> two replicas are on any two random nodes on a random remote rack. This means that if any
> three datanodes die together, there is a non-trivial probability of losing at least one
> block in the cluster. This JIRA is to discuss whether there is a better algorithm that can
> lower the probability of losing a block.
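
For reference, a back-of-the-envelope sketch of that probability (my own numbers and
simplifications: replication factor 3, every block placed on an independent uniformly random
triple of datanodes, rack constraints ignored):

from math import comb

N = 1000        # hypothetical number of datanodes
B = 10_000_000  # hypothetical number of blocks

triples = comb(N, 3)  # ~1.66e8 possible replica sets
# Probability that one specific simultaneous 3-node failure wipes out at least one block:
p_loss = 1 - (1 - 1 / triples) ** B
print(p_loss)   # ~0.058 for these numbers

With enough blocks that most possible triples are actually used, this probability approaches 1,
which is the intuition behind restricting placements to a smaller set of candidate triples.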

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

