Subject: Re: Containers and CPU
From: Sandy Ryza <sandy.ryza@cloudera.com>
To: user@hadoop.apache.org
Date: Tue, 2 Jul 2013 10:56:26 -0700

CPU limits are only enforced if cgroups is turned on. With cgroups on, tasks
are only limited when there is contention, in which case they are given CPU
time in proportion to the number of cores requested for/allocated to them.
Does that make sense?
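For reference, a minimal sketch of the NodeManager settings this involves,
written as Java against a YarnConfiguration purely to show the property names
(the NodeManager actually reads them from yarn-site.xml, and the executor
class, handler class, and cgroup hierarchy path shown here are the commonly
documented values, not the only possible ones):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.yarn.conf.YarnConfiguration;

  public class CgroupsCpuConfigSketch {
    public static void main(String[] args) {
      // Sketch only: these properties normally live in yarn-site.xml on
      // each NodeManager; setting them on a Configuration object here just
      // shows which knobs enable cgroup-based CPU enforcement.
      Configuration conf = new YarnConfiguration();

      // The LinuxContainerExecutor is what places containers into cgroups.
      conf.set("yarn.nodemanager.container-executor.class",
          "org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor");

      // Hand resource enforcement to the cgroups resources handler.
      conf.set("yarn.nodemanager.linux-container-executor.resources-handler.class",
          "org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler");

      // Cgroup hierarchy that containers are created under; "/hadoop-yarn"
      // is a common choice, not a requirement.
      conf.set("yarn.nodemanager.linux-container-executor.cgroups.hierarchy",
          "/hadoop-yarn");

      // Let the NodeManager mount the cgroup controllers if the OS has not
      // already mounted them.
      conf.setBoolean("yarn.nodemanager.linux-container-executor.cgroups.mount",
          true);

      System.out.println("container executor = "
          + conf.get("yarn.nodemanager.container-executor.class"));
    }
  }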
-Sandy


On Tue, Jul 2, 2013 at 9:50 AM, Chuan Liu <chuanliu@microsoft.com> wrote:

> I believe this is the default behavior.
>
> By default, only the memory limit on resources is enforced.
>
> The capacity scheduler will use the DefaultResourceCalculator to compute
> resource allocation for containers by default, which also does not take
> CPU into account.
>
> -Chuan
>
> *From:* John Lilley [mailto:john.lilley@redpoint.net]
> *Sent:* Tuesday, July 02, 2013 8:57 AM
> *To:* user@hadoop.apache.org
> *Subject:* Containers and CPU
>
> I have YARN tasks that benefit from multicore scaling. However, they
> don't *always* use more than one core. I would like to allocate
> containers based only on memory, and let each task use as many cores as
> needed, without allocating exclusive CPU "slots" in the scheduler. For
> example, on an 8-core node with 16GB memory, I'd like to be able to run
> 3 tasks each consuming 4GB memory and each using as much CPU as they
> like. Is this the default behavior if I don't specify CPU restrictions
> to the scheduler?
>
> Thanks
>
> John
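To tie this back to the 8-core/16GB example in the quoted question: with the
DefaultResourceCalculator, only the memory figure in a request drives
placement, so asking for 4GB and a nominal single vcore lets three such
containers land on an idle 16GB node while each task uses as many cores as it
likes. A rough application-master-side sketch (the registration boilerplate
and the three-container loop are illustrative assumptions, not something
prescribed in this thread):

  import org.apache.hadoop.yarn.api.records.Priority;
  import org.apache.hadoop.yarn.api.records.Resource;
  import org.apache.hadoop.yarn.client.api.AMRMClient;
  import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
  import org.apache.hadoop.yarn.conf.YarnConfiguration;

  public class MemoryOnlyRequestSketch {
    public static void main(String[] args) throws Exception {
      AMRMClient<ContainerRequest> amRMClient = AMRMClient.createAMRMClient();
      amRMClient.init(new YarnConfiguration());
      amRMClient.start();
      amRMClient.registerApplicationMaster("", 0, "");

      // 4096 MB of memory and a nominal 1 vcore per container.  With the
      // DefaultResourceCalculator the vcore count does not affect placement;
      // it only matters for CPU shares if cgroups enforcement is turned on.
      Resource capability = Resource.newInstance(4096, 1);
      for (int i = 0; i < 3; i++) {
        amRMClient.addContainerRequest(
            new ContainerRequest(capability, null, null, Priority.newInstance(0)));
      }
      // ... allocate() heartbeat loop and container launches omitted ...
    }
  }

If you later do want vcores to gate placement, the capacity scheduler can be
pointed at org.apache.hadoop.yarn.util.resource.DominantResourceCalculator via
yarn.scheduler.capacity.resource-calculator in capacity-scheduler.xml.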