hadoop-common-user mailing list archives

From "Saptarshi Guha" <saptarshi.g...@gmail.com>
Subject Re: OutofMemory Error, inspite of large amounts provided
Date Mon, 29 Dec 2008 03:47:36 GMT
Caught it in action. Running

  ps -e -o 'vsz pid ruser args' | sort -nr | head -5

on a machine where the map task was running shows:
04812 16962 sguha    /home/godhuli/custom/jdk1.6.0_11/jre/bin/java
-Xmx200m -Djava.io.tmpdir=/home/godhuli/custom/hdfs/mapred/local/taskTracker/jobcache/job_200812282102_0003/attempt_200812282102_0003_m_000000_0/work/tmp
-classpath /attempt_200812282102_0003_m_000000_0/work
-Dhadoop.tasklog.totalLogFileSize=0 org.apache.hadoop.mapred.Child 40443 attempt_200812282102_0003_m_000000_0 1525207782

Also, the reducer only used 540 MB. I notice -Xmx200m was passed; how
do I change it?
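(The -Xmx200m seen above matches the default of the mapred.child.java.opts
property, which controls the JVM options given to every map/reduce child in
Hadoop of this era. A sketch of raising it, assuming you can edit the
cluster's mapred-site.xml; the 512m value is just an example:)

```xml
<!-- mapred-site.xml: options passed to every map/reduce child JVM.
     The default, -Xmx200m, matches the flag observed in the ps output. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>
```

The same property can also be set per job on the command line, e.g.
`hadoop jar myjob.jar -Dmapred.child.java.opts=-Xmx512m`, assuming the job
goes through ToolRunner/GenericOptionsParser so -D options are picked up.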

On Sun, Dec 28, 2008 at 10:19 PM, Saptarshi Guha
<saptarshi.guha@gmail.com> wrote:
> On Sun, Dec 28, 2008 at 4:33 PM, Brian Bockelman <bbockelm@cse.unl.edu> wrote:
>> Hey Saptarshi,
>> Watch the running child process while using "ps", "top", or Ganglia
>> monitoring.  Does the map task actually use 16GB of memory, or is the memory
>> not getting set properly?
>> Brian
> I haven't figured out how to run Ganglia; also, the children quit
> before I can see their memory usage. The trackers all use 16 GB (from
> the ps command). However, I noticed some use only 512 MB (when I
> managed to catch them in time).
> Regards
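(Since the child JVMs exit before a single ps invocation can catch them, one
workaround is to poll in a loop. A minimal sketch, reusing the ps fields from
earlier in the thread; the grep pattern matches the
org.apache.hadoop.mapred.Child process seen above, and the iteration count is
arbitrary:)

```shell
# Sample memory of Hadoop child JVMs once a second so short-lived
# tasks are caught. Bracketing the first letter ([m]apred) keeps
# grep from matching its own process entry.
for i in 1 2 3 4 5; do
    ps -e -o 'vsz pid ruser args' | grep '[m]apred.Child' || true
    sleep 1
done
```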

Saptarshi Guha - saptarshi.guha@gmail.com
