Subject: Re: Dynamically set map / reducer memory
From: Girish Lingappa <glingappa@pivotal.io>
To: user@hadoop.apache.org
Date: Fri, 17 Oct 2014 13:29:46 -0700

Peter,

If you are using Oozie to launch the MR jobs, you can specify the memory requirements in the workflow action for each job, in the workflow XML you use to launch it. If you are writing your own driver program, you can still set these parameters per job in the job configuration you use to submit it.
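For the Oozie case, a rough sketch of what a map-reduce action could look like (the action name, transitions, and values are made up; the property names are the Hadoop 2.x / YARN ones):

    <action name="small-memory-job">
      <map-reduce>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <configuration>
          <!-- Container size and JVM heap for this action's mappers only -->
          <property>
            <name>mapreduce.map.memory.mb</name>
            <value>512</value>
          </property>
          <property>
            <name>mapreduce.map.java.opts</name>
            <value>-Xmx400m</value>
          </property>
        </configuration>
      </map-reduce>
      <ok to="big-memory-job"/>
      <error to="fail"/>
    </action>

Each action in the workflow gets its own <configuration> block, so Job 1 and Job 2 can carry different memory settings.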
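If you're submitting from your own driver, something along these lines should work (untested sketch; the class and job names are placeholders, and the job setup — mapper/reducer classes, input/output paths — is elided). Keep the heap in *.java.opts comfortably below the container size in *.memory.mb:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class PerJobMemoryDriver {
      public static void main(String[] args) throws Exception {
        // Job 1: small mappers
        Configuration smallConf = new Configuration();
        smallConf.set("mapreduce.map.memory.mb", "512");      // YARN container size
        smallConf.set("mapreduce.map.java.opts", "-Xmx400m"); // JVM heap inside it
        Job job1 = Job.getInstance(smallConf, "small-mappers");
        // ... set mapper class, input/output paths, etc., then:
        job1.waitForCompletion(true);

        // Job 2: bigger mappers and reducers
        Configuration bigConf = new Configuration();
        bigConf.set("mapreduce.map.memory.mb", "2048");
        bigConf.set("mapreduce.map.java.opts", "-Xmx1638m");
        bigConf.set("mapreduce.reduce.memory.mb", "2048");
        bigConf.set("mapreduce.reduce.java.opts", "-Xmx1638m");
        Job job2 = Job.getInstance(bigConf, "big-mappers-and-reducers");
        // ... job setup as above, then:
        job2.waitForCompletion(true);
      }
    }

If your driver goes through ToolRunner, you can also pass these per run without recompiling, e.g. hadoop jar myjob.jar MyDriver -Dmapreduce.map.memory.mb=512. One more thing I noticed in your message: mapreduce.reduce.java.opts only affects the reducer JVMs; the mapper heap is controlled by mapreduce.map.java.opts, which may be why your setting wasn't picked up by the mappers.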
In the case where you modified mapred-site.xml to set your memory requirements, did you change it on the client machine from which you are launching the job?

Please share more details on the setup and the way you are launching the jobs so we can better understand the problem you are facing.

Girish

On Fri, Oct 17, 2014 at 11:24 AM, peter 2 <regestrer@gmail.com> wrote:
> Hi guys,
> I am trying to run a few MR jobs in succession, and some of the jobs
> don't need as much memory as others. I want to be able to tell Hadoop
> how much memory should be allocated for the mappers of each job.
> I know how to increase the memory for a mapper JVM through the mapred
> XML.
> I tried manually setting mapreduce.reduce.java.opts=-Xmx<someNumber>m,
> but it wasn't picked up by the mapper JVM; the global setting was
> always picked up instead.
>
> In summation:
> Job 1 - Mappers need only 250 MB of RAM
> Job 2 - Mapper and Reducer need around 2 GB
>
> I don't want to have to set those restrictions statically before
> submitting the jobs to my Hadoop cluster.