Subject: Re: One query related to cgroups in 2.4.1
From: Sufi Nawaz <sufi@eaiti.com>
To: user@hadoop.apache.org
Date: Fri, 7 Nov 2014 08:28:05 -0500

Please advise how to remove myself from this list.

Thank you,

Sufi Nawaz
Application Innovator
e: sufi@eaiti.com / w: www.eaiti.com
o: (571) 306-4683 / c: (940) 595-1285

On Fri, Nov 7, 2014 at 5:37 AM, Naganarasimha G R (Naga)
<garlanaganarasimha@huawei.com> wrote:

> Hi All,
>
> In 2.4.1, when multiple components such as HBase and YARN are running on
> the same node, can we restrict the CPU usage of all YARN containers on
> that NM through cgroups?
> Assume a node has 10 cores and the CPU vcores for this NM is configured
> to 7. If 7 containers, each requesting 1 core, are run, does cgroups
> ensure that the combined CPU usage of all the containers does not exceed
> 700%?
>
> Basically, I want to restrict the CPU usage of all the containers on a
> given node so that the system processes and other components such as HDFS
> and HBase run well alongside YARN. How can this be achieved?
>
> Regards,
>
> Naganarasimha G R
> Huawei Technologies Co., Ltd.
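
For reference, below is a minimal yarn-site.xml sketch of the NodeManager cgroups settings relevant to this question. The properties through cgroups.mount exist in the 2.4.x line; the last property (yarn.nodemanager.resource.percentage-physical-cpu-limit) is listed as an assumption, since a hard cap on the aggregate CPU of all containers was added in a later release rather than in 2.4.1:

  <!-- yarn-site.xml: cgroups CPU isolation for a NodeManager (sketch, not a verified 2.4.1 deployment) -->
  <configuration>
    <!-- Advertise 7 of the node's 10 cores to YARN, leaving headroom for HDFS/HBase -->
    <property>
      <name>yarn.nodemanager.resource.cpu-vcores</name>
      <value>7</value>
    </property>
    <!-- Run containers under the LinuxContainerExecutor so they are placed in cgroups -->
    <property>
      <name>yarn.nodemanager.container-executor.class</name>
      <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
    </property>
    <property>
      <name>yarn.nodemanager.linux-container-executor.group</name>
      <value>hadoop</value>
    </property>
    <!-- Have the cgroups resources handler create a cpu cgroup per container -->
    <property>
      <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
      <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
    </property>
    <property>
      <name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
      <value>/hadoop-yarn</value>
    </property>
    <property>
      <name>yarn.nodemanager.linux-container-executor.cgroups.mount</name>
      <value>false</value>
    </property>
    <!-- Assumption: available only in releases newer than 2.4.1; caps all YARN
         containers together at 70% of the node's physical CPU -->
    <property>
      <name>yarn.nodemanager.resource.percentage-physical-cpu-limit</name>
      <value>70</value>
    </property>
  </configuration>

With only the settings available in 2.4.1, cgroups cpu.shares keeps containers roughly proportional to their requested vcores but does not stop them from using otherwise idle cores, so the 700% ceiling described in the question is not strictly enforced on that version.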