hadoop-user mailing list archives

From peter 2 <regest...@gmail.com>
Subject Dynamically set map / reducer memory
Date Fri, 17 Oct 2014 18:24:44 GMT
Hi guys,
I am trying to run a few MR jobs in succession; some of the jobs don't
need much memory and others do. I want to be able to tell Hadoop how
much memory should be allocated for the mappers of each job.
I know how to increase the memory for a mapper JVM globally, through
mapred-site.xml.
I tried manually setting mapreduce.map.java.opts=-Xmx<someNumber>m on a
per-job basis, but it wasn't picked up by the mapper JVM; the global
setting was always used.
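For reference, here is roughly the per-job setup I am attempting from
the driver. This is only a minimal sketch: the class name and paths are
placeholders, the -Xmx values are examples, and I am also setting
mapreduce.map.memory.mb (the YARN container size) alongside the heap
opts, on the assumption that both matter.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SubmitWithPerJobMemory {
  public static void main(String[] args) throws Exception {
    // Each job gets its own Configuration, so these memory settings
    // should apply only to this job, not cluster-wide.
    Configuration conf = new Configuration();
    conf.set("mapreduce.map.memory.mb", "250");       // YARN container size for the mappers
    conf.set("mapreduce.map.java.opts", "-Xmx200m");  // heap inside that container

    Job job = Job.getInstance(conf, "small-mapper-job");
    job.setJarByClass(SubmitWithPerJobMemory.class);
    // Identity map/reduce by default; real mapper/reducer classes omitted.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    job.waitForCompletion(true);
  }
}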

In summary:
Job 1 - Mappers need only 250 MB of RAM
Job 2 - Mappers and reducers need around 2 GB

I want to be able to set those restrictions per job, at the time I
submit it to my Hadoop cluster, without touching the cluster-wide
configuration.
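For what it's worth, the other route I'm considering is passing the
properties per invocation on the command line. As I understand it, that
only works if the driver goes through ToolRunner so the generic -D
options get parsed. A rough sketch (MyDriver is a placeholder name):

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyDriver extends Configured implements Tool {
  @Override
  public int run(String[] args) throws Exception {
    // getConf() already contains any -D overrides from the command line.
    Job job = Job.getInstance(getConf(), "per-invocation-memory");
    job.setJarByClass(MyDriver.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    return job.waitForCompletion(true) ? 0 : 1;
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new MyDriver(), args));
  }
}

Which would then be invoked per job, e.g.:

hadoop jar myjob.jar MyDriver \
  -Dmapreduce.map.memory.mb=2048 \
  -Dmapreduce.map.java.opts=-Xmx1800m \
  /input /output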
