hadoop-common-user mailing list archives

From Arun C Murthy <...@yahoo-inc.com>
Subject Re: JobTracker Failing to respond with OutOfMemory error
Date Sat, 06 Dec 2008 19:54:11 GMT

On Dec 6, 2008, at 11:40 AM, charles du wrote:

> I used the default value, which I believe is 1000 MB. My cluster has about
> 30 machines. Each machine is configured to run up to 5 tasks. We run hourly
> and daily jobs on the cluster. When the OOM happened, I was running a job
> with 1500-1600 mappers and 40 reducers.
>
> I noticed that the memory usage of the job tracker keeps going up. In one
> or two days, the job tracker uses about 1 GB of memory and stops responding
> to any request. Thanks.
>

Do you know how many total tasks (across all jobs) were executed by the JT in that day or two?

A couple of workarounds:
1. Move to hadoop-0.18 - we've fixed https://issues.apache.org/jira/browse/HADOOP-3670
2. Increase the JT's heapsize to 2G or 3G.
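For workaround 2, the JobTracker heap is typically raised in conf/hadoop-env.sh on the JT host. A minimal sketch, assuming the per-daemon HADOOP_JOBTRACKER_OPTS hook from the stock hadoop-env.sh template of that era (the exact variable may differ by release):

```shell
# conf/hadoop-env.sh -- sketch, assuming the stock template's per-daemon hooks.
# Give only the JobTracker daemon a 3 GB heap, rather than raising
# HADOOP_HEAPSIZE, which would apply to every Hadoop daemon on the node.
export HADOOP_JOBTRACKER_OPTS="-Xmx3072m $HADOOP_JOBTRACKER_OPTS"
```

The JobTracker must be restarted for the new heap setting to take effect.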

Arun

