Subject: Re: Capacity Scheduler problem
From: Bai Shen <baishen.lists@gmail.com>
To: mapreduce-user@hadoop.apache.org
Date: Wed, 11 Jan 2012 16:10:52 -0500

Nope. Like I said, all I did was change mapred.fairscheduler.assignmultiple
to false on my cluster and that fixed the issue.

On Wed, Jan 11, 2012 at 5:31 AM, Marek Miglinski <mmiglinski@seven.com> wrote:
>
> That's not the case. I've even removed hadoop-fairscheduler-0.20.2-cdh3u2.jar
> from the hadoop lib folder and fair-scheduler.xml from the hadoop conf
> folder, and it didn't help... Any ideas?
>
> ________________________________
> From: Bai Shen [baishen.lists@gmail.com]
> Sent: Tuesday, January 10, 2012 9:35 PM
> To: mapreduce-user@hadoop.apache.org
> Subject: Re: Capacity Scheduler problem
>
> Turn off the fair scheduler's multiple-task-assign setting. I just had the
> same problem with my cluster.
>
> On Tue, Jan 10, 2012 at 11:34 AM, Marek Miglinski <mmiglinski@seven.com> wrote:
> Hello guys,
>
> 1. I have a concern with my 3-node cluster. I run the capacity scheduler
> with 4 queues, one of which has 30% of the cluster resources. The problem
> is that when I schedule a job, all tasks are assigned to a single node,
> which takes up all of its mapper slots and runs quite slowly. Are there any
> settings for the mapred/capacity scheduler to spread mappers across all
> nodes with an even number of tasks per node?
>
> 2. I've set my capacity-scheduler.xml settings
> "mapred.capacity-scheduler.queue.job1.capacity" for all queues as wanted,
> but the jobtracker doesn't reallocate resources from free queues when only
> one is busy. Why?
>
> Thanks,
> Marek M.
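For question 2 above, a minimal capacity-scheduler.xml sketch may make the
knobs clearer. This is an illustration, not a verified configuration: it
assumes the MR1 capacity scheduler shipped with CDH3u2, the queue name
"job1" is taken from the mail, and the second queue name and the
percentages are invented. How far a busy queue can grow into capacity left
idle by other queues is governed by the per-queue maximum-capacity limit;
a value of -1 (the usual default) means no ceiling.

<configuration>

  <!-- Guaranteed share for the "job1" queue (30%, as in the mail above).
       Capacities across all queues should sum to 100. -->
  <property>
    <name>mapred.capacity-scheduler.queue.job1.capacity</name>
    <value>30</value>
  </property>

  <!-- Ceiling for "job1": -1 means no hard limit, so the queue may borrow
       slots that other queues are not using; a positive percentage caps
       that growth. -->
  <property>
    <name>mapred.capacity-scheduler.queue.job1.maximum-capacity</name>
    <value>-1</value>
  </property>

  <!-- Hypothetical second queue; in a four-queue setup each remaining
       queue would get its own capacity entry. -->
  <property>
    <name>mapred.capacity-scheduler.queue.default.capacity</name>
    <value>70</value>
  </property>

</configuration>

The queues themselves are declared through mapred.queue.names in
mapred-site.xml, and mapred.jobtracker.taskScheduler has to point at
org.apache.hadoop.mapred.CapacityTaskScheduler for capacity-scheduler.xml
to be read at all.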

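The fix described at the top of the thread comes down to a single
mapred-site.xml property. A sketch follows, again as an illustration rather
than a verified configuration: it only matters when the fair scheduler is
the active task scheduler, and in CDH3 the setting reportedly defaults to
true, which lets one tasktracker pick up several tasks in a single
heartbeat and can produce exactly the "everything lands on one node"
symptom from question 1.

<!-- mapred-site.xml fragment (sketch): relevant only when the
     FairScheduler is the task scheduler in use. -->
<property>
  <name>mapred.fairscheduler.assignmultiple</name>
  <!-- false = hand out at most one assignment per tasktracker heartbeat
       instead of several, which spreads a job's maps across nodes rather
       than filling one node's slots first. -->
  <value>false</value>
</property>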