hadoop-common-user mailing list archives

From Alex Loddengaard <a...@cloudera.com>
Subject Re: jobtracker.jsp reports "GC overhead limit exceeded"
Date Fri, 30 Jul 2010 19:19:45 GMT
err, "ps aux", not "ps".

Alex

On Fri, Jul 30, 2010 at 3:19 PM, Alex Loddengaard <alex@cloudera.com> wrote:

> What does "ps" show you?  How much memory is being used by the jobtracker,
> and how large is its heap (look for HADOOP_HEAPSIZE in hadoop-env.sh)?  Also
> consider turning on GC logging, which will find its way to the jobtracker
> .out log in /var/log/hadoop:
>
> <http://java.sun.com/developer/technicalArticles/Programming/GCPortal/>
>
> Alex
>
>
> On Fri, Jul 30, 2010 at 3:10 PM, jiang licht <licht_jiang@yahoo.com> wrote:
>
>> http://server:50030/jobtracker.jsp generates the following error message:
>>
>> HTTP ERROR: 500
>>
>> GC overhead limit exceeded
>>
>> RequestURI=/jobtracker.jsp
>> Caused by:
>>
>> java.lang.OutOfMemoryError: GC overhead limit exceeded
>>
>> Powered by Jetty://
>>
>> The jobtracker is running below the limit. But "hadoop job -status" seems
>> to hang and does not respond ...
>>
>> The last 2 lines of jobtracker logs:
>>
>> 2010-07-30 13:53:18,482 DEBUG org.apache.hadoop.mapred.JobTracker: Got
>> heartbeat from: tracker_host1:localhost.localdomain/127.0.0.1:53914
>> (restarted: false initialContact: false acceptNewTasks: true) with
>> responseId: -31252
>> 2010-07-30 13:55:32,917 DEBUG org.apache.hadoop.mapred.JobTracker:
>> Starting launching task sweep
>>
>> Any thoughts about this?
>>
>> Thanks!
>> --Michael
>>
>>
>>
>
>
>
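[Editor's note] The advice above (check the jobtracker's memory with "ps aux", look at HADOOP_HEAPSIZE in hadoop-env.sh, and turn on GC logging) could look roughly like the following hadoop-env.sh fragment. This is a sketch, not verbatim from the thread: the heap size and log path are placeholder examples to adapt, and HADOOP_JOBTRACKER_OPTS is the stock hadoop-env.sh hook for jobtracker-only JVM flags.

```shell
# hadoop-env.sh -- example settings (adapt sizes and paths to your cluster)

# Raise the daemon heap if the jobtracker is close to its limit.
# HADOOP_HEAPSIZE is in MB and applies to the Hadoop daemons on this node.
export HADOOP_HEAPSIZE=2000

# Enable verbose GC logging for the jobtracker only. The output goes to the
# jobtracker's .out file under the log directory (e.g. /var/log/hadoop),
# as mentioned in the thread.
export HADOOP_JOBTRACKER_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps $HADOOP_JOBTRACKER_OPTS"
```

After restarting the jobtracker, "ps aux | grep JobTracker" shows its resident memory (RSS column), which can be compared against the configured heap.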
