hadoop-mapreduce-user mailing list archives

From Arun C Murthy <...@hortonworks.com>
Subject Re: Out of memory (heap space) errors on job tracker
Date Fri, 08 Jun 2012 18:59:21 GMT
This shouldn't be happening at all...

What version of hadoop are you running? You are potentially missing configs that protect the JT;
with those in place, a hadoop-1.x JT should be very reliable.
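(For reference, the JT-protection settings Arun alludes to live in mapred-site.xml. The sketch below uses Hadoop 1.x property names as best recalled; the values shown are illustrative, not recommendations — check mapred-default.xml for your release before copying any of these.)

```xml
<!-- mapred-site.xml: illustrative Hadoop 1.x JobTracker-protection settings.
     Completed jobs are held in JT heap until retired, so bounding how many
     are retained is the usual first defense against JT OOMs. -->
<configuration>
  <!-- Cap completed jobs kept in memory per user (hypothetical value). -->
  <property>
    <name>mapred.jobtracker.completeuserjobs.maximum</name>
    <value>50</value>
  </property>
  <!-- Retire completed jobs from JT memory after this many ms (1 hour here). -->
  <property>
    <name>mapred.jobtracker.retirejob.interval</name>
    <value>3600000</value>
  </property>
  <!-- Reject pathologically large jobs before they can swamp the JT. -->
  <property>
    <name>mapred.jobtracker.maxtasks.per.job</name>
    <value>100000</value>
  </property>
</configuration>
```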


On Jun 8, 2012, at 8:26 AM, David Rosenstrauch wrote:

> Our job tracker has been seizing up with Out of Memory (heap space) errors for the past
> 2 nights.  After the first night's crash, I doubled the heap space (from the default of 1GB)
> to 2GB before restarting the job.  After last night's crash I doubled it again to 4GB.
>
> This all seems a bit puzzling to me.  I wouldn't have thought that the job tracker should
> require so much memory.  (The NameNode, yes, but not the job tracker.)
>
> Just wondering if this behavior sounds reasonable, or if perhaps there might be a bigger
> problem at play here.  Anyone have any thoughts on the matter?
>
> Thanks,
> DR
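(The heap increases described above are typically made in hadoop-env.sh; a minimal sketch, assuming the Hadoop 1.x HADOOP_JOBTRACKER_OPTS hook is available in your release:)

```shell
# hadoop-env.sh -- raise the heap for the JobTracker daemon only (4 GB here),
# rather than bumping HADOOP_HEAPSIZE, which would affect every daemon.
export HADOOP_JOBTRACKER_OPTS="-Xmx4g ${HADOOP_JOBTRACKER_OPTS}"
```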

Arun C. Murthy
Hortonworks Inc.
