From: "Naganarasimha G R (Naga)" <garlanaganarasimha@huawei.com>
To: user@hadoop.apache.org
Subject: RE: Is there any way to limit the concurrent running mappers per job?
Date: Tue, 21 Apr 2015 07:49:12 +0000

Hi Sanjeev,
YARN already maps deprecated configuration names to their new ones, so even if "mapreduce.jobtracker.taskscheduler.maxrunningtasks.perjob" is used, the behavior will be the same.
Also note that "mapreduce.jobtracker.taskscheduler.maxrunningtasks.perjob" is a JobTracker config, so it will have no impact on YARN.
The only way to limit this is to configure the schedulers so that they restrict the headroom of a user or application. Please refer to
http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html & http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/FairScheduler.html.
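For example, a minimal capacity-scheduler.xml sketch along these lines would cap how much of the cluster a job submitted to a given queue can occupy at any time, and therefore how many mappers it can run concurrently. The queue name "small" and the percentages below are only placeholders; only the property names come from the CapacityScheduler documentation.

<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>default,small</value>
</property>
<property>
  <!-- guaranteed share for the capped queue; child capacities must sum to 100 -->
  <name>yarn.scheduler.capacity.root.small.capacity</name>
  <value>20</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>80</value>
</property>
<property>
  <!-- hard ceiling: the queue can never use more than 20% of the cluster -->
  <name>yarn.scheduler.capacity.root.small.maximum-capacity</name>
  <value>20</value>
</property>

A job submitted with mapreduce.job.queuename=small can then only hold as many containers as fit into that 20%, which bounds its concurrently running mappers without any per-job task-count setting.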

Regards,
Naga
From: Sanjeev Tripurari [sanjeev.tripurari@inmobi.com]
Sent: Tuesday, April 21, 2015 11:54
To: user@hadoop.apache.org
Subject: Re: Is there any way to limit the concurrent running mappers per job?

Hi,

Check if this works for you, 
mapreduce.jobtracker.taskscheduler.maxrunningtasks.perjob

Some properties have been changed with the YARN implementation:
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/DeprecatedProperties.html
-Sanjeev



On Tue, Apr 21, 2015 at 4:32 AM, Zhe Li <allenlee.lz@gmail.com> wrote:
Hi, after upgrading to Hadoop 2 (YARN), I found that 'mapred.jobtracker.taskScheduler.maxRunningTasksPerJob' no longer works, right?

One workaround is to use a queue to limit it, but it's not easy to control that from the job submitter (a sketch of such a queue cap follows below).
Is there any way to limit the concurrent running mappers per job?
Are there any documents or discussions about this from before?

BTW, is there any way to search this mailing list before I post a new question?

Thanks very much.
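Regarding the queue workaround mentioned above: with the Fair Scheduler the same kind of cap can be sketched in the allocation file (the one pointed to by yarn.scheduler.fair.allocation.file). The queue name "small" and the limits are placeholder assumptions; maxResources and maxRunningApps are the elements described in the FairScheduler documentation.

<?xml version="1.0"?>
<allocations>
  <queue name="small">
    <!-- hard cap on the resources all apps in this queue may hold at once -->
    <maxResources>8192 mb,4 vcores</maxResources>
    <!-- also cap how many applications may run in the queue concurrently -->
    <maxRunningApps>2</maxRunningApps>
  </queue>
</allocations>

A job submitted with mapreduce.job.queuename=small cannot get more containers than fit into maxResources, which indirectly limits its concurrently running mappers; the drawback, as noted above, is that the limit lives in the scheduler configuration rather than with the job submitter.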


_____________________________________________________________
The information contained in this communication is intended solely for the use of the individual or entity to whom it is addressed and others authorized to receive it. It may contain confidential or legally privileged information. If you are not the intended recipient you are hereby notified that any disclosure, copying, distribution or taking any action in reliance on the contents of this information is strictly prohibited and may be unlawful. If you have received this communication in error, please notify us immediately by responding to this email and then delete it from your system. The firm is neither liable for the proper and complete transmission of the information contained in this communication nor for any delay in its receipt.