Return-Path: 
X-Original-To: apmail-hadoop-hdfs-issues-archive@minotaur.apache.org
Delivered-To: apmail-hadoop-hdfs-issues-archive@minotaur.apache.org
Received: from mail.apache.org (hermes.apache.org [140.211.11.3])
    by minotaur.apache.org (Postfix) with SMTP id 6647EE202
    for ; Mon, 3 Dec 2012 22:12:01 +0000 (UTC)
Received: (qmail 45163 invoked by uid 500); 3 Dec 2012 22:12:01 -0000
Delivered-To: apmail-hadoop-hdfs-issues-archive@hadoop.apache.org
Received: (qmail 45121 invoked by uid 500); 3 Dec 2012 22:12:01 -0000
Mailing-List: contact hdfs-issues-help@hadoop.apache.org; run by ezmlm
Precedence: bulk
List-Help: 
List-Unsubscribe: 
List-Post: 
List-Id: 
Reply-To: hdfs-issues@hadoop.apache.org
Delivered-To: mailing list hdfs-issues@hadoop.apache.org
Received: (qmail 45038 invoked by uid 99); 3 Dec 2012 22:12:01 -0000
Received: from arcas.apache.org (HELO arcas.apache.org) (140.211.11.28)
    by apache.org (qpsmtpd/0.29) with ESMTP; Mon, 03 Dec 2012 22:12:01 +0000
Date: Mon, 3 Dec 2012 22:12:01 +0000 (UTC)
From: "Harsh J (JIRA)" 
To: hdfs-issues@hadoop.apache.org
Message-ID: <124634114.55494.1354572721080.JavaMail.jiratomcat@arcas>
In-Reply-To: <1060197130.44869.1354276200065.JavaMail.jiratomcat@arcas>
Subject: [jira] [Updated] (HDFS-4246) The exclude node list should be more forgiving, for each output stream
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
X-JIRA-FingerPrint: 30527f35849b9dde25b450d4833f0394

     [ https://issues.apache.org/jira/browse/HDFS-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Harsh J updated HDFS-4246:
--------------------------
    Status: Open  (was: Patch Available)

> The exclude node list should be more forgiving, for each output stream
> ----------------------------------------------------------------------
>
>                 Key: HDFS-4246
>                 URL: https://issues.apache.org/jira/browse/HDFS-4246
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs-client
>    Affects Versions: 2.0.0-alpha
>            Reporter: Harsh J
>            Assignee: Harsh J
>            Priority: Minor
>         Attachments: HDFS-4246.patch
>
>
> Originally observed by Inder on the mailing lists:
> {quote}
> Folks,
> I was wondering if there is any mechanism/logic to move a node back from the excludedNodeList to the live nodes, to be tried for new block creation. In the current DFSOutputStream code I do not see this. The use case: if the write timeout is reduced and certain nodes get aggressively added to the excludedNodeList, and the client caches the DFSOutputStream, then the excluded nodes never get tried again in the lifetime of the application caching the DFSOutputStream.
> {quote}
> What this leads to is a special scenario that may impact smaller clusters more than larger ones:
> 1. A file is opened for continuous hflush/sync-based writes, such as an HBase WAL. By design, such a file is kept open for a very long time.
> 2. Over time, nodes are excluded for various errors, such as DN crashes, network failures, etc.
> 3. Eventually the exclude list equals (or nearly equals) the live nodes list, and the write suffers. Once they are equal, the write fails outright with an error saying no block allocation could be obtained.
> We should perhaps make the excludeNodes list a timed-cache collection, so that even as it fills up, older excludes are pruned away and those nodes get tried again later.
> One place we have to be careful, though, is rack failures. Those sometimes never come back fast enough, and an eventually-forgiving list can be problematic for retry code. Perhaps we can remember forgiven nodes and, if they are excluded again, double or triple the forgiveness value (in time units) to counter this? It's just one idea.
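> A rough, minimal sketch of what such a timed, escalating exclude cache could look like follows. The class and member names are made up purely for illustration; this is not the actual DFSOutputStream/DFSClient code, and the base expiry is an arbitrary placeholder:
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
> import java.util.concurrent.TimeUnit;
>
> /**
>  * Illustrative sketch only: exclusions lapse after a timeout, and a
>  * node that is excluded again after being forgiven has its timeout
>  * doubled, so persistent failures (e.g. a dead rack) are not retried
>  * too eagerly. Not the real DFSOutputStream/DFSClient implementation.
>  */
> class ForgivingExcludeList {
>
>   // Placeholder base forgiveness interval, not a real HDFS default.
>   private static final long BASE_EXPIRY_MS = TimeUnit.MINUTES.toMillis(10);
>
>   /** When the current exclusion lapses, and the penalty it used. */
>   private static final class Entry {
>     final long expiresAtMs;
>     final long penaltyMs;
>     Entry(long expiresAtMs, long penaltyMs) {
>       this.expiresAtMs = expiresAtMs;
>       this.penaltyMs = penaltyMs;
>     }
>   }
>
>   // Entries are kept after they lapse so a repeat offender's penalty
>   // can be escalated; pruning of very old entries is omitted here.
>   private final Map<String, Entry> excluded = new HashMap<>();
>
>   /** Exclude a node, doubling the penalty on each repeat exclusion. */
>   synchronized void exclude(String node) {
>     Entry prior = excluded.get(node);
>     long penalty = (prior == null) ? BASE_EXPIRY_MS : prior.penaltyMs * 2;
>     excluded.put(node, new Entry(System.currentTimeMillis() + penalty, penalty));
>   }
>
>   /** True while the node's current exclusion has not yet lapsed. */
>   synchronized boolean isExcluded(String node) {
>     Entry e = excluded.get(node);
>     return e != null && System.currentTimeMillis() < e.expiresAtMs;
>   }
> }
> {code}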
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira