hadoop-user mailing list archives

From Girish Lingappa <glinga...@pivotal.io>
Subject Re: Dynamically set map / reducer memory
Date Fri, 17 Oct 2014 20:29:46 GMT
Peter

If you are using Oozie to launch the MR jobs, you can specify the memory
requirements for each job in that job's workflow action, in the workflow
XML you use to launch it. If you are writing your own driver program to
launch the jobs, you can still set these parameters in the job
configuration before submitting each job.
 In the case where you modified mapred-site.xml to set your memory
requirements, did you change it on the client machine from which you
launch the jobs?
 Please share more details about your setup and the way you launch the
jobs so we can better understand the problem you are facing.
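For the Oozie route, the per-action memory settings might look roughly like the fragment below (a sketch: the action name, values, and transitions are illustrative, not from this thread):

```xml
<action name="job2">
  <map-reduce>
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <configuration>
      <!-- Container size for each reduce task, in MB -->
      <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>2048</value>
      </property>
      <!-- JVM heap for each reduce task; keep it below the container size -->
      <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx1638m</value>
      </property>
    </configuration>
  </map-reduce>
  <ok to="end"/>
  <error to="fail"/>
</action>
```

Because these properties live in the action, each job in the workflow can request a different amount of memory.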

Girish
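The driver-program approach could be sketched as below, assuming Hadoop 2.x property names; the class and job names are illustrative, and this needs the Hadoop client jars and a cluster (or local runner) to actually execute:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class Job2Driver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Container size for each map task, in MB
        conf.set("mapreduce.map.memory.mb", "512");
        // JVM heap for each map task; keep it below the container size
        conf.set("mapreduce.map.java.opts", "-Xmx400m");
        // The memory-hungry reducers for this job
        conf.set("mapreduce.reduce.memory.mb", "2048");
        conf.set("mapreduce.reduce.java.opts", "-Xmx1638m");

        // Settings made on conf before Job.getInstance apply to this job only,
        // overriding whatever mapred-site.xml says globally.
        Job job = Job.getInstance(conf, "job2");
        // ... set mapper/reducer classes, input/output paths ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

A separate driver (or a second Configuration) with smaller values would cover the lightweight job, so each submission carries its own limits.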

On Fri, Oct 17, 2014 at 11:24 AM, peter 2 <regestrer@gmail.com> wrote:

>  Hi guys,
> I am trying to run a few MR jobs in succession; some of the jobs don't
> need much memory and others do. I want to be able to tell Hadoop how
> much memory should be allocated for the mappers of each job.
> I know how to increase the memory for a mapper JVM through the mapred
> XML.
> I tried manually setting mapreduce.reduce.java.opts = -Xmx<someNumber>m,
> but it wasn't picked up by the mapper JVM; the global setting was always
> picked up.
>
> In summary:
> Job 1 - mappers need only 250 MB of RAM
> Job 2 - mapper and reducer need around 2 GB
>
> I want to be able to set those restrictions prior to submitting each
> job to my Hadoop cluster.
>
>
