[ https://issues.apache.org/jira/browse/HDFS-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:commenttabpanel&focusedCommentId=12856716#action_12856716 ]
Jitendra Nath Pandey commented on HDFS-1094:

> I think you are throwing one replica at a time on the cluster. The probability of the first
> missing the failed nodes is (N-r)/N. The probability of the second falling on a live node,
> excluding the one that already has the first replica, is (N-r-1)/N.

Shouldn't it be (N-r)/N, (N-r-1)/(N-1), (N-r-2)/(N-2) and so on?

Similarly, for the probability that all replicas reside on the failed nodes, it would be
r/N * (r-1)/(N-1) * ... * 1/(N-r+1) = 1/C(N,r), which is the same as in Karthik's formula.
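As a quick sanity check (a sketch, not from the thread), the product above can be verified numerically against 1/C(N,r). The function name and the example values N=100, r=3 are illustrative assumptions:

```python
# Sketch: check that r/N * (r-1)/(N-1) * ... * 1/(N-r+1) equals 1/C(N,r),
# i.e. the probability that r replicas, placed one at a time on distinct
# nodes chosen uniformly at random, all land on the r failed nodes.
from math import comb

def prob_all_on_failed(N, r):
    """Place r replicas one at a time; each must hit a remaining failed node."""
    p = 1.0
    for i in range(r):
        p *= (r - i) / (N - i)  # (r-i) failed nodes left among (N-i) free nodes
    return p

N, r = 100, 3  # illustrative cluster size and replication factor
assert abs(prob_all_on_failed(N, r) - 1 / comb(N, r)) < 1e-12
```

For N=100 and r=3 this gives 1/161700, matching C(100,3) = 161700.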
> Intelligent block placement policy to decrease probability of block loss
> 
>
> Key: HDFS-1094
> URL: https://issues.apache.org/jira/browse/HDFS-1094
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Reporter: dhruba borthakur
> Assignee: dhruba borthakur
>
> The current HDFS implementation specifies that the first replica is local and the other
> two replicas are on two random nodes on a random remote rack. This means that if any three
> datanodes die together, there is a nontrivial probability of losing at least one block
> in the cluster. This JIRA is to discuss whether there is a better algorithm that can lower
> the probability of losing a block.
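To illustrate why the aggregate loss probability is nontrivial, here is a small Monte Carlo sketch (not from the issue; it assumes fully random 3-node placement rather than HDFS's rack-aware policy, and the cluster size, block count, and function name are illustrative). A block is lost exactly when its replica set coincides with the set of three failed nodes, so with B blocks the loss probability scales roughly as 1 - (1 - 1/C(N,3))^B:

```python
# Sketch: estimate P(at least one block lost | 3 random datanodes fail)
# when each block's 3 replicas sit on a uniformly random 3-node subset.
import random

def estimate_loss_prob(N=50, B=2000, trials=2000, seed=0):
    rng = random.Random(seed)
    # Fix one random placement of B blocks, then fail 3 nodes per trial.
    placements = [frozenset(rng.sample(range(N), 3)) for _ in range(B)]
    block_sets = set(placements)  # distinct replica sets actually in use
    lost = 0
    for _ in range(trials):
        failed = frozenset(rng.sample(range(N), 3))
        if failed in block_sets:  # some block had all 3 replicas on failed nodes
            lost += 1
    return lost / trials

print(estimate_loss_prob())
```

With N=50 and B=2000 the per-failure loss probability already approaches 10%, which is the motivation for constraining how replica sets are chosen.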

This message is automatically generated by JIRA.

If you think it was sent incorrectly contact one of the administrators: https://issues.apache.org/jira/secure/Administrators.jspa

For more information on JIRA, see: http://www.atlassian.com/software/jira
