Subject: Re: Using Yarn in end to end tests
From: Ted Yu <yuzhihong@gmail.com>
To: user@hadoop.apache.org
Date: Mon, 29 Sep 2014 09:43:54 -0700

I got the following message after clicking on the link:

    must be logged in

Can you give login information?

Cheers

On Mon, Sep 29, 2014 at 9:40 AM, Alex Newman <posix4e@gmail.com> wrote:
> I am currently developing tests that use a mini YARN cluster. Because it
> is running on CircleCI, I need to use the absolute minimum amount of
> memory.
>
> I'm currently setting:
>
>     conf.setFloat("yarn.nodemanager.vmem-pmem-ratio", 8.0f);
>     conf.setBoolean("mapreduce.map.speculative", false);
>     conf.setBoolean("mapreduce.reduce.speculative", false);
>     conf.setInt("yarn.scheduler.minimum-allocation-mb", 128);
>     conf.setInt("yarn.scheduler.maximum-allocation-mb", 256);
>     conf.setInt("yarn.nodemanager.resource.memory-mb", 256);
>     conf.setInt("mapreduce.map.memory.mb", 128);
>     conf.set("mapreduce.map.java.opts", "-Xmx128m");
>
>     conf.setInt("mapreduce.reduce.memory.mb", 128);
>     conf.set("mapreduce.reduce.java.opts", "-Xmx128m");
>     conf.setInt("mapreduce.task.io.sort.mb", 64);
>
>     conf.setInt("yarn.app.mapreduce.am.resource.mb", 128);
>     conf.set("yarn.app.mapreduce.am.command-opts", "-Xmx109m");
>
>     conf.setInt("yarn.scheduler.minimum-allocation-vcores", 1);
>     conf.setInt("yarn.scheduler.maximum-allocation-vcores", 1);
>     conf.setInt("yarn.nodemanager.resource.cpu-vcores", 1);
>     conf.setInt("mapreduce.map.cpu.vcores", 1);
>     conf.setInt("mapreduce.reduce.cpu.vcores", 1);
>
>     conf.setInt("mapreduce.tasktracker.map.tasks.maximum", 1);
>     conf.setInt("mapreduce.tasktracker.reduce.tasks.maximum", 1);
>
>     conf.setInt("yarn.scheduler.capacity.root.capacity", 1);
>     conf.setInt("yarn.scheduler.capacity.maximum-applications", 1);
>
>     conf.setInt("mapreduce.jobtracker.taskscheduler.maxrunningtasks.perjob", 1);
>
> but I am still seeing many child tasks running:
>
> https://circle-artifacts.com/gh/OhmData/hbase-public/314/artifacts/2/tmp/memory-usage.txt
>
> Any ideas on how to actually limit YARN to one or two children at a time?
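[Editor's note: for reference, a minimal sketch of how a low-memory configuration
like the one quoted above might be wired into Hadoop's MiniMRYarnCluster test
harness (from the Hadoop 2.x hadoop-mapreduce test jars). The class name
"LowMemMiniClusterTest" and the job-submission step are illustrative, not from
the original thread.]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.v2.MiniMRYarnCluster;

    public class LowMemMiniClusterTest {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // 256 MB per NodeManager leaves room for the 128 MB
            // MRAppMaster container plus at most one 128 MB task
            // container at a time.
            conf.setInt("yarn.nodemanager.resource.memory-mb", 256);
            conf.setInt("yarn.scheduler.minimum-allocation-mb", 128);
            conf.setInt("yarn.scheduler.maximum-allocation-mb", 256);
            conf.setInt("yarn.app.mapreduce.am.resource.mb", 128);
            conf.setInt("mapreduce.map.memory.mb", 128);
            conf.setInt("mapreduce.reduce.memory.mb", 128);

            // A single NodeManager keeps the total container count bounded.
            MiniMRYarnCluster cluster = new MiniMRYarnCluster("low-mem-test", 1);
            cluster.init(conf);
            cluster.start();
            try {
                // Submit test jobs against cluster.getConfig() here.
            } finally {
                cluster.stop();
            }
        }
    }

[The key point in this sketch is that the NodeManager memory budget, not the
per-job task limits, is what caps concurrency: the scheduler can only launch
as many containers as fit under yarn.nodemanager.resource.memory-mb.]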