[ https://issues.apache.org/jira/browse/HDFS-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:commenttabpanel&focusedCommentId=12856263#action_12856263 ]
Karthik Ranganathan commented on HDFS-1094:

Just synced up with some folks (including Rodrigo); I think the mismatch came from the following:
The expected number of blocks lost is the same in both cases when 3 nodes die:
In scheme 1, there is a high probability that data is lost, but little of it is lost when that happens.
In scheme 2, there is a low probability that data is lost, but more of it is lost when that happens.
The product of the probability and the number of blocks lost is the same, which gives us two
choices: reduce the probability, or reduce the number of blocks lost.
The underlying issue is that any data loss is bad, be it a little or a lot (especially for
some kinds of applications). So we are better off lowering the probability that any data is
lost rather than the number of blocks lost at a time. From that perspective, this is a good change.
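The probability/size trade-off above can be checked with a back-of-the-envelope calculation. The sketch below is illustrative only: the cluster size, block count, and the idealized "disjoint groups" model for scheme 2 are assumptions, not the actual placement policy proposed in this JIRA.

```python
from math import comb

N = 100          # nodes in the cluster (hypothetical)
B = 1_000_000    # blocks in the cluster, replication factor 3 (hypothetical)
triples = comb(N, 3)   # number of distinct 3-node sets that could die together

# Scheme 1: each block's replicas land on an effectively random 3-node set.
p_block_lost = 1 / triples                  # a given block dies with the chosen triple
p_any_loss_1 = 1 - (1 - p_block_lost) ** B  # probability that *some* block is lost
exp_loss_1   = B * p_block_lost             # expected number of blocks lost

# Scheme 2 (idealized): replicas are confined to G disjoint 3-node groups,
# so a dead triple only loses data if it is exactly one of the groups.
G = N // 3
p_any_loss_2 = G / triples                  # low probability of any loss
blocks_per_group = B / G                    # but a whole group's blocks go at once
exp_loss_2 = p_any_loss_2 * blocks_per_group

# Same expected loss, very different loss profile.
print(f"scheme 1: P(any loss)={p_any_loss_1:.3f}, E[blocks lost]={exp_loss_1:.2f}")
print(f"scheme 2: P(any loss)={p_any_loss_2:.6f}, E[blocks lost]={exp_loss_2:.2f}")
```

Under these assumptions the expected loss is identical (the product is invariant), while the probability that any data is lost at all drops by several orders of magnitude in scheme 2, which is the argument for the change.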
> Intelligent block placement policy to decrease probability of block loss
> 
>
> Key: HDFS-1094
> URL: https://issues.apache.org/jira/browse/HDFS-1094
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Reporter: dhruba borthakur
> Assignee: dhruba borthakur
>
> The current HDFS implementation specifies that the first replica is local and the other
two replicas are on any two random nodes on a random remote rack. This means that if any three
datanodes die together, there is a nontrivial probability of losing at least one block
in the cluster. This JIRA is to discuss whether there is a better algorithm that can lower the
probability of losing a block.

This message is automatically generated by JIRA.