Message-ID: <2103554405.1254327443492.JavaMail.jira@brutus>
Date: Wed, 30 Sep 2009 09:17:23 -0700 (PDT)
From: "Cosmin Lehene (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] Updated: (HDFS-630) In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific datanodes when locating the next block.
In-Reply-To: <794054798.1253257617910.JavaMail.jira@brutus>

     [ https://issues.apache.org/jira/browse/HDFS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cosmin Lehene updated HDFS-630:
-------------------------------

    Attachment: HDFS-630.patch

Patch for the 0.20 branch:

- Added the following to ClientProtocol and implemented it in both DFSClient and NameNode (a matching method was added to FSNamesystem as well):

      public LocatedBlock addBlock(String src, String clientName,
                                   DatanodeInfo[] excludedNodes) throws IOException;

- DFSClient keeps track of nodes that time out while creating a new block and passes that list when it retries.
- NameNode passes the excludedNodes list on to FSNamesystem, and so on.
- Fixed /src/test/org/apache/hadoop/hdfs/TestDFSClientRetries.java to reflect the changes in DFSClient.
- Kept the old interface as well on the server side.

We've tested this on a cluster with HBase on top and it worked fine.
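To make the change concrete, the interface side amounts to roughly the sketch below. This is a simplified illustration against the 0.20 class layout rather than the patch itself: the old two-argument addBlock stays for compatibility, and the FSNamesystem delegation shown (getAdditionalBlock with an excludedNodes argument) is an assumption about the plumbing.

    // Sketch only: the new ClientProtocol overload and the NameNode delegation.
    // LocatedBlock and DatanodeInfo are the existing types from
    // org.apache.hadoop.hdfs.protocol; this is not the exact patch.

    // In ClientProtocol.java:
    public interface ClientProtocol extends org.apache.hadoop.ipc.VersionedProtocol {
      // existing method, kept unchanged for old clients
      LocatedBlock addBlock(String src, String clientName) throws IOException;

      // new overload: the client names the datanodes it could not reach so the
      // namenode can leave them out of the next set of block targets
      LocatedBlock addBlock(String src, String clientName,
                            DatanodeInfo[] excludedNodes) throws IOException;
    }

    // In NameNode.java: the old entry point forwards with no exclusions, and the
    // new one hands the excluded list down to FSNamesystem (assumed plumbing).
    public LocatedBlock addBlock(String src, String clientName) throws IOException {
      return addBlock(src, clientName, null);
    }

    public LocatedBlock addBlock(String src, String clientName,
                                 DatanodeInfo[] excludedNodes) throws IOException {
      return namesystem.getAdditionalBlock(src, clientName, excludedNodes);
    }

Keeping the old two-argument signature on the server side means older clients keep working against a patched namenode.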
> In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific datanodes when locating the next block.
> -------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-630
>                 URL: https://issues.apache.org/jira/browse/HDFS-630
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: hdfs client
>    Affects Versions: 0.20.1, 0.21.0
>            Reporter: Ruyue Ma
>            Assignee: Ruyue Ma
>            Priority: Minor
>             Fix For: 0.21.0
>
>         Attachments: HDFS-630.patch
>
>
> Created from HDFS-200.
> If, during a write, the DFSClient sees that a block replica location for a newly allocated block is not connectable, it re-requests the NN for a fresh set of replica locations for the block.
> It tries this dfs.client.block.write.retries times (default 3), sleeping 6 seconds between each retry (see DFSClient.nextBlockOutputStream).
> This works well on a reasonably sized cluster; with only a few datanodes in the cluster, every retry may pick the same dead datanode and the logic above bails out.
> Our solution: when getting block locations from the namenode, we give the NN the excluded datanodes. The list of dead datanodes applies only to a single block allocation.
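For reference, the retry loop described above, with the exclusion added, looks roughly like the sketch below. This is a simplified illustration of the logic around DFSClient.nextBlockOutputStream, not the actual patch: the method name and the connectToFirstBadNode helper are hypothetical stand-ins for the code that sets up the write pipeline and reports the first datanode it could not connect to.

    // Simplified sketch of the retry-with-exclusion idea inside DFSClient.
    // "conf" and "namenode" are DFSClient's existing Configuration and
    // ClientProtocol fields; connectToFirstBadNode is a hypothetical helper.
    private LocatedBlock allocateBlockExcludingDeadNodes(String src, String clientName)
        throws IOException {
      int retries = conf.getInt("dfs.client.block.write.retries", 3);
      List<DatanodeInfo> excludedNodes = new ArrayList<DatanodeInfo>();

      while (true) {
        // Ask the namenode for targets, leaving out datanodes that have
        // already failed for this block allocation.
        LocatedBlock lb = namenode.addBlock(src, clientName,
            excludedNodes.toArray(new DatanodeInfo[excludedNodes.size()]));

        // Try to open the write pipeline; null means all targets were reachable.
        DatanodeInfo badNode = connectToFirstBadNode(lb);
        if (badNode == null) {
          return lb;
        }

        excludedNodes.add(badNode);      // exclude it on the next attempt only
        if (--retries == 0) {
          throw new IOException("Unable to create new block.");
        }
        try {
          Thread.sleep(6000);            // 6 seconds between retries
        } catch (InterruptedException ie) {
          // ignore and retry
        }
      }
    }

Because the exclusion list is scoped to a single block allocation, a datanode that was only briefly unreachable is still considered for later blocks.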