Date: Tue, 19 Nov 2013 14:12:08 -0800
Subject: Re: Limit on total jobs running using fair scheduler
From: Sandy Ryza <sandy.ryza@cloudera.com>
To: user@hadoop.apache.org

Unfortunately, this is not possible in the MR1 fair scheduler without
setting the limit for individual pools. In MR2, fair scheduler
hierarchical queues will allow setting maxRunningApps at the top of the
hierarchy, which would have the effect you're looking for.

-Sandy

On Tue, Nov 19, 2013 at 2:01 PM, Omkar Joshi <ojoshi@hortonworks.com> wrote:

> Not sure about the fair scheduler, but in the capacity scheduler you can
> achieve this by controlling the number of jobs/applications per queue.
>
> Thanks,
> Omkar Joshi
> *Hortonworks Inc.*
>
>
> On Tue, Nov 19, 2013 at 3:26 AM, Ivan Tretyakov <
> itretyakov@griddynamics.com> wrote:
>
>> Hello!
>>
>> We are using CDH 4.1.1 (Version: 2.0.0-mr1-cdh4.1.1) with the fair scheduler.
>> We need to limit the total number of jobs that can run at the same time
>> on the cluster.
>> I can see the maxRunningJobs option, but it sets a limit per pool or user.
>> We don't want to limit each pool or user; we just need a limit on the
>> total number of jobs running.
>>
>> Is it possible to do this with the fair scheduler?
>> Can the capacity scheduler help here?
>> Are there other options to achieve the goal?
>>
>> Thanks in advance!
>>
>> --
>> Best Regards
>> Ivan Tretyakov
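
For anyone finding this thread later: below is a minimal sketch of the MR2/YARN
fair-scheduler allocation file that Sandy's suggestion points at, assuming the
standard <maxRunningApps> element; the queue names and the limits (20 and 10)
are illustrative only, not taken from this thread.

  <?xml version="1.0"?>
  <allocations>
    <!-- maxRunningApps on the root queue caps the total number of
         applications running concurrently across the whole cluster. -->
    <queue name="root">
      <maxRunningApps>20</maxRunningApps>
      <!-- Child queues can still carry their own, lower limits. -->
      <queue name="analytics">
        <maxRunningApps>10</maxRunningApps>
      </queue>
    </queue>
  </allocations>

On the capacity scheduler side, a roughly comparable cluster-wide cap can be
set with yarn.scheduler.capacity.maximum-applications in capacity-scheduler.xml
(verify the property name against your Hadoop version).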