hadoop-common-user mailing list archives

From "charles du" <taiping...@gmail.com>
Subject Re: JobTracker Failing to respond with OutOfMemory error
Date Sun, 07 Dec 2008 22:48:00 GMT
Thanks for the information. It helps a lot.

On Sat, Dec 6, 2008 at 11:54 AM, Arun C Murthy <acm@yahoo-inc.com> wrote:

>
> On Dec 6, 2008, at 11:40 AM, charles du wrote:
>
>> I used the default value, which I believe is 1000 MB. My cluster has
>> about 30 machines. Each machine is configured to run up to 5 tasks. We
>> run hourly and daily jobs on the cluster. When the OOM happened, I was
>> running a job with 1500-1600 mappers and 40 reducers.
>>
>> I noticed that the memory usage of the JobTracker keeps going up. In one
>> or two days, the JobTracker uses about 1 GB of memory and stops
>> responding to any requests. Thanks.
>>
> Do you know how many total tasks (across all jobs) were executed by the
> JT in that day or two?
>
> A couple of workarounds:
> 1. Move to hadoop-0.18 - we've fixed
> https://issues.apache.org/jira/browse/HADOOP-3670.
> 2. Increase the JT's heap size to 2G or 3G.
>
> Arun
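A quick sanity check on Arun's question above: at roughly 1,640 tasks per
job (1,600 mappers plus 40 reducers), and assuming the hourly jobs are of
similar size, a day or two of runs puts on the order of 40,000-80,000
completed tasks in the JobTracker's bookkeeping, which is consistent with
a 1000 MB heap filling up over that window.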
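For anyone hitting this later: workaround 2 above is a one- or two-line
edit to conf/hadoop-env.sh. A minimal sketch follows; the 2000 MB figure
is illustrative, and the daemon-specific HADOOP_JOBTRACKER_OPTS variable
is assumed to be honoured by your release's start scripts (if not, the
global HADOOP_HEAPSIZE raises the heap for every daemon started from this
conf directory):

  # conf/hadoop-env.sh

  # Global maximum heap for all Hadoop daemons, in MB (default is 1000).
  export HADOOP_HEAPSIZE=2000

  # Or, assuming per-daemon *_OPTS are supported by this release, give
  # only the JobTracker a larger heap and leave the other daemons alone:
  export HADOOP_JOBTRACKER_OPTS="-Xmx3g $HADOOP_JOBTRACKER_OPTS"

The JobTracker only picks up the new heap after a restart, e.g.
bin/stop-mapred.sh followed by bin/start-mapred.sh.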


-- 
tp
