hadoop-common-user mailing list archives

From Sameer Paranjpye <same...@yahoo-inc.com>
Subject Re: OutOfMemory exception
Date Thu, 26 Oct 2006 21:56:50 GMT
Which config file did you change mapred.child.java.opts in? For 
map/reduce tasks, the order in which Hadoop applies config files is:

Server config files:
hadoop-default.xml
mapred-default.xml

Client config files:
hadoop-default.xml
mapred-default.xml
hadoop-site.xml

Server config files:
hadoop-site.xml

So if you changed the Java opts in the server's mapred-default.xml, they 
would be clobbered by your client config. If you set the Java opts in your 
server's hadoop-site.xml, your tasks should get enough heap to work with.
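
For reference, a minimal property block for the server's hadoop-site.xml 
might look like the sketch below. The -Xmx512m value is only an 
illustration, not something from this thread; size the heap to fit 
your nodes:

  <property>
    <name>mapred.child.java.opts</name>
    <!-- illustrative heap size, not a recommendation; adjust to your hardware -->
    <value>-Xmx512m</value>
  </property>

Because the server's hadoop-site.xml is applied last, a value set there 
won't be overridden by the client-side config files.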

David Pollak wrote:
> Howdy,
> 
> I'm new to Hadoop.  I've got a network of 8 machines with ~1.8TB of 
> storage.  My first Hadoop test run is to count the URLs in a set of 
> crawled pages (~1.6M pages consuming about 70GB of space.)  When I run 
> my app (or just run the Grep example) on the data set, the map task gets 
> to 100%, then I get an IOException. When I review the logs, there's 
> an OutOfMemory error listed in the tasktracker logs ("INFO 
> org.apache.hadoop.mapred.TaskRunner: task_0001_m_000258_0 
> java.lang.OutOfMemoryError: Java heap space.")
> 
> I've tried upping mapred.child.java.opts, but that doesn't seem to make 
> a difference.
> 
> Any suggestions on what I can do?
> 
> Thanks,
> 
> David