hadoop-common-user mailing list archives

From: Juan Pino <juancitomiguel...@gmail.com>
Subject: mapred.child.java.opts and mapreduce.reduce.java.opts
Date: Mon, 02 Apr 2012 09:30:22 GMT
Hello,

I have a job whose reducers (but not mappers) need more memory than the
default. To handle this, I set the following property in my configuration
file:

mapreduce.reduce.java.opts=-Xmx4000m
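
In case the file format matters, the entry looks roughly like this (a
sketch, assuming the usual mapred-site.xml layout, inside the
<configuration> element):

    <property>
      <name>mapreduce.reduce.java.opts</name>
      <value>-Xmx4000m</value>
    </property>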

When I run the job, the web interface shows its configuration, and
mapreduce.reduce.java.opts is indeed set to -Xmx4000m as expected. However,
mapred.child.java.opts is still set to -Xmx200m, and when I run ps -ef on
the child java process, it is actually running with -Xmx200m.

So to make my job work, I had to set mapred.child.java.opts=-Xmx4000m in my
configuration file. However, I don't need that much memory for the mapper.
How can I set more memory only for the reducer? Is the only solution to set
mapred.child.java.opts to -Xmx4000m, mapreduce.reduce.java.opts to -Xmx4000m,
and mapreduce.map.java.opts to -Xmx200m?

I am using Hadoop 1.0.1.

Thank you very much,

Juan
