Message-ID: <4288570.292551278706614962.JavaMail.jira@thor>
In-Reply-To: <24967451.18401271056063147.JavaMail.jira@thor>
Date: Fri, 9 Jul 2010 16:16:54 -0400 (EDT)
From: "Joydeep Sen Sarma (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] Commented: (HDFS-1094) Intelligent block placement policy to decrease probability of block loss

    [ https://issues.apache.org/jira/browse/HDFS-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12886861#action_12886861 ]

Joydeep Sen Sarma commented on HDFS-1094:
-----------------------------------------

One can't have a different node-group for each block/file; that would defeat the whole point. (In fact, every block today effectively lives in its own 3-node node-group, and there are gazillions of such node-groups, all overlapping.) The reduction in data-loss probability comes from the fact that the odds of 3 failed nodes falling into the same node-group are small; if they don't fall into the same node-group, there is no data loss. If the number of node-groups is very large (because of overlaps), then the probability of 3 failing nodes landing in the same node-group starts going up, simply because there are more node-groups for them to fall into. The more exclusive the node-groups, the better; that means minimizing the number of node-groups for a fixed number of nodes.

As I mentioned, the size of the node-group is dictated to some extent by re-replication bandwidth. One wants very small node-groups, but that doesn't work because a small group doesn't have enough re-replication bandwidth (a familiar problem in RAID).

If you take a standard cluster (say 8 racks x 40 nodes), how many distinct node-groups would your algorithm end up with?
> Intelligent block placement policy to decrease probability of block loss
> ------------------------------------------------------------------------
>
>                 Key: HDFS-1094
>                 URL: https://issues.apache.org/jira/browse/HDFS-1094
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>            Reporter: dhruba borthakur
>            Assignee: Rodrigo Schmidt
>         Attachments: prob.pdf, prob.pdf
>
>
> The current HDFS implementation specifies that the first replica is local and the other two replicas are on any two random nodes on a random remote rack. This means that if any three datanodes die together, then there is a non-trivial probability of losing at least one block in the cluster. This JIRA is to discuss if there is a better algorithm that can lower probability of losing a block.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.