hadoop-hdfs-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-1094) Intelligent block placement policy to decrease probability of block loss
Date Wed, 01 Sep 2010 16:59:58 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12905093#action_12905093 ]

Steve Loughran commented on HDFS-1094:
--------------------------------------

I'm trying to understand this; it'll take me a while to get round the maths.
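For my own intuition, here's a rough back-of-the-envelope version of the maths, ignoring rack
constraints completely and assuming every block picks its three replica nodes uniformly at random;
the node and block counts below are made up, and presumably calculate_probs.py does this properly:

{code}
# Toy model only: each block's 3 replicas land on a uniformly random 3-node
# subset (rack awareness ignored). A block is lost iff its 3 replica nodes
# are exactly the 3 nodes that die together.
from math import comb

def p_any_block_lost(nodes, blocks):
    """P(at least one block has all 3 replicas on the 3 failed nodes)."""
    triples = comb(nodes, 3)        # number of possible replica sets
    p_one = 1.0 / triples           # a given block sits exactly on the failed triple
    return 1.0 - (1.0 - p_one) ** blocks

if __name__ == "__main__":
    for n, b in [(100, 10_000_000), (1000, 10_000_000)]:
        print("%5d nodes, %d blocks -> P(some block lost) ~ %.4f"
              % (n, b, p_any_block_lost(n, b)))
{code}

With 100 nodes and 10M blocks the probability is effectively 1; with 1000 nodes it drops to a few
percent, which is still non-trivial.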

# If you keep stuff all hanging off the same switch, replication costs are lower and maybe P(loss)
is lower, but the cost of a network partition is higher, because the probability of any copy of the
data being visible is then reduced. That may or may not be something to worry about. Konstantin
clearly does; I think it may be best to detect a partition event and maybe drop into safe mode if a
significant percentage of the cluster just goes away. It depends on what you want: a replication
storm and possible cascade failures vs a hard outage. Either way, the ops team get paged.

# If your cluster is partitioned into independent power sources/UPSes, then again there is less
independence than it looks: failure of one power source takes a big chunk of the cluster offline.
This won't look significantly different to the NN/JT unless they are hooked into events from the UPS.

# Batches of HDDs may suffer from the same flaws. In an ideal world, you'd know the history of every
HDD and avoid having all copies of the data on the same batch of disks until you consider them
bedded in. This would imply knowing about the internal disk state of every datanode, though...
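
Purely as a sketch of what that could look like (nothing like this exists in HDFS today; the classes,
the per-disk batch/age reporting and the 90-day threshold are all invented for illustration), a
placement filter could veto a candidate datanode whose disks share a not-yet-bedded-in batch with an
already-chosen replica:

{code}
from dataclasses import dataclass, field
from typing import List

MIN_BEDDING_IN_DAYS = 90   # invented: after this long in service, a batch counts as bedded in

@dataclass
class Disk:
    batch_id: str            # manufacturing batch, hypothetically reported by the datanode
    age_days: int

@dataclass
class DataNode:
    name: str
    disks: List[Disk] = field(default_factory=list)

def suspect_batches(node):
    """Batch ids on this node that haven't been in service long enough to trust."""
    return {d.batch_id for d in node.disks if d.age_days < MIN_BEDDING_IN_DAYS}

def acceptable(candidate, chosen_replicas):
    """Reject a candidate sharing a suspect disk batch with any already-chosen replica."""
    suspect = suspect_batches(candidate)
    return all(not (suspect & suspect_batches(r)) for r in chosen_replicas)
{code}

The normal rack-awareness would still apply; this would just be one more veto when picking targets.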



> Intelligent block placement policy to decrease probability of block loss
> ------------------------------------------------------------------------
>
>                 Key: HDFS-1094
>                 URL: https://issues.apache.org/jira/browse/HDFS-1094
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>            Reporter: dhruba borthakur
>            Assignee: Rodrigo Schmidt
>         Attachments: calculate_probs.py, failure_rate.py, prob.pdf, prob.pdf
>
>
> The current HDFS implementation specifies that the first replica is local and the other
two replicas are on any two random nodes on a random remote rack. This means that if any three
datanodes die together, then there is a non-trivial probability of losing at least one block
> in the cluster. This JIRA is to discuss whether there is a better algorithm that can lower the
> probability of losing a block.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

