From: "Arun C Murthy (JIRA)"
To: core-dev@hadoop.apache.org
Reply-To: core-dev@hadoop.apache.org
Mailing-List: contact core-dev-help@hadoop.apache.org; run by ezmlm
Date: Mon, 5 May 2008 16:27:56 -0700 (PDT)
Message-ID: <1461490276.1210030076178.JavaMail.jira@brutus>
In-Reply-To: <245009369.1209662815864.JavaMail.jira@brutus>
Subject: [jira] Updated: (HADOOP-3333) job failing because of reassigning same tasktracker to failing tasks

    [ https://issues.apache.org/jira/browse/HADOOP-3333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-3333:
----------------------------------

    Comment: was deleted

> job failing because of reassigning same tasktracker to failing tasks
> --------------------------------------------------------------------
>
>                 Key: HADOOP-3333
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3333
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.16.3
>            Reporter: Christian Kunz
>            Assignee: Arun C Murthy
>            Priority: Critical
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3333_0_20080503.patch, HADOOP-3333_1_20080505.patch
>
>
> We have a long-running job on its second attempt. The previous job failed, and the current job risks failing as well, because reduce tasks that fail on marginal TaskTrackers are repeatedly reassigned to those same TaskTrackers (probably because they hold the only available slots), eventually running out of attempts.
> Reduce tasks should be assigned to the same TaskTracker at most twice, or TaskTrackers need better smarts to detect failing hardware.
> BTW, mapred.reduce.max.attempts=12, which is high, but does not help in this case.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
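
For illustration only, here is a minimal sketch (not taken from the attached patches) of how the two per-job knobs mentioned above can be set through the old org.apache.hadoop.mapred API: JobConf.setMaxReduceAttempts() corresponds to mapred.reduce.max.attempts, and JobConf.setMaxTaskFailuresPerTracker() controls how many task failures a job tolerates on one TaskTracker before that tracker is blacklisted for the job. The class name, paths, and the threshold of 2 are assumptions chosen to mirror the reporter's "at most twice" suggestion, not the committed fix.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    // Hypothetical driver class; names and values are illustrative.
    public class MaxAttemptsExample {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(MaxAttemptsExample.class);
        conf.setJobName("max-attempts-example");

        // mapred.reduce.max.attempts: how many times a single reduce task
        // may be re-attempted before the whole job is declared failed.
        conf.setMaxReduceAttempts(12);

        // mapred.max.tracker.failures: how many task failures this job
        // tolerates on one TaskTracker before that tracker stops being
        // assigned the job's tasks (per-job blacklisting).
        conf.setMaxTaskFailuresPerTracker(2);

        // Identity map/reduce with text input/output is the JobConf default,
        // so only the paths are needed for a runnable sketch.
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
      }
    }

Lowering the per-tracker failure threshold keeps a marginal TaskTracker from consuming all of a task's attempts, which is the failure mode described in this issue; whether the committed patch takes that approach or changes the scheduler itself is not stated in this message.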