hadoop-common-user mailing list archives

From 麦树荣 <shurong....@qunar.com>
Subject Re: How to troubleshoot OutOfMemoryError
Date Tue, 25 Dec 2012 07:31:43 GMT

I guess it means out of memory.

From: Junior Mint [mailto:junior.minto.0@gmail.com]
Sent: 24 December 2012 11:39
To: user@hadoop.apache.org
Subject: Re: How to troubleshoot OutOfMemoryError


On Mon, Dec 24, 2012 at 11:30 AM, 周梦想 <ablozhou@gmail.com> wrote:
I encountered the OOM problem because I hadn't set the ulimit open-files limit. It had nothing to do with memory; memory was sufficient.

Best Regards,

2012/12/22 Manoj Babu <manoj444@gmail.com> wrote:

I faced the same issue due to too much logging, which filled the task tracker log folder.


On Sat, Dec 22, 2012 at 9:10 PM, Stephen Fritz <stephenf@cloudera.com> wrote:
Troubleshooting OOMs in the map/reduce tasks can be tricky; see page 118 of Hadoop Operations<http://books.google.com/books?id=W5VWrrCOuQ8C&pg=PA123&lpg=PA123&dq=mapred+child+address+space+size&source=bl&ots=PCdqGFbU-Z&sig=ArgpJroU7UEmMqMB_hwXoCq7whk&hl=en&sa=X&ei=TNPVUMjjHsS60AGHtoHQDA&ved=0CEUQ6AEwAw#v=onepage&q=mapred%20child%20address%20space%20size&f=false>
for a couple of settings which could affect the frequency of OOMs which aren't necessarily …

To answer your question about getting the heap dump, you should be able to add "-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/some/path" to your mapred.child.java.opts, then look for the heap dump in
that path next time you see the OOM.
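Concretely, the advice above could look like the following mapred-site.xml fragment (a sketch only: the heap size value is an illustrative example, and /some/path is a placeholder from the thread, not a real path):

```xml
<!-- Enable a heap dump on OOM for child map/reduce JVMs (MRv1). -->
<property>
  <name>mapred.child.java.opts</name>
  <!-- -Xmx value is an example; the dump path is a placeholder from the thread. -->
  <value>-Xmx512m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/some/path</value>
</property>
```

The resulting .hprof file can then be inspected with a heap analyzer such as jhat or Eclipse MAT.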

On Fri, Dec 21, 2012 at 11:33 PM, David Parks <davidparks21@yahoo.com> wrote:
I’m pretty consistently seeing a few reduce tasks fail with OutOfMemoryError (below). It
doesn’t kill the job, but it slows it down.

In my current case the reducer is pretty darn simple, the algorithm basically does:

1. Do you have 2 values for this key?

2. If so, build a JSON string and emit a NullWritable and Text value.

The string buffer I use to build the json is re-used, and I can’t see anywhere in my code
that would be taking more than ~50k of memory at any point in time.
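The two steps above, with the reused buffer, can be sketched roughly as follows. This is a hypothetical, Hadoop-free reconstruction for illustration (class and field names are invented, and the JSON layout is assumed), not the author's actual reducer:

```java
import java.util.List;

public class PairToJson {
    // Reused buffer, mirroring the author's note that the string buffer
    // used to build the JSON is re-used across calls.
    private final StringBuilder buf = new StringBuilder();

    /**
     * Step 1: only keys with exactly 2 values produce output.
     * Step 2: build a JSON string for that pair (shape is assumed here).
     * Returns null when nothing should be emitted.
     */
    public String toJson(String key, List<String> values) {
        if (values.size() != 2) {
            return null; // no output for this key
        }
        buf.setLength(0); // reset the reused buffer instead of allocating a new one
        buf.append("{\"key\":\"").append(key)
           .append("\",\"a\":\"").append(values.get(0))
           .append("\",\"b\":\"").append(values.get(1))
           .append("\"}");
        return buf.toString();
    }

    public static void main(String[] args) {
        PairToJson r = new PairToJson();
        System.out.println(r.toJson("k1", List.of("v1", "v2")));
        System.out.println(r.toJson("k2", List.of("v1"))); // prints null
    }
}
```

In a real MRv1 reducer the emit step would be `context`-free: `output.collect(NullWritable.get(), new Text(json))`. As the author notes, logic like this holds only one small string at a time, which is why the OOM is surprising.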

But I want to verify: is there a way to get the heap dump and all after this error? I'm running Hadoop v1.0.3 on AWS MapReduce.

Error: java.lang.OutOfMemoryError: Java heap space
        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.shuffleInMemory(ReduceTask.java:1711)
        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.getMapOutput(ReduceTask.java:1571)
        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.copyOutput(ReduceTask.java:1412)
        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.run(ReduceTask.java:1344)
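Notably, the stack trace points at the in-memory shuffle (map outputs being copied into the reducer's heap), not at the reduce function itself, which would explain why the reducer's own ~50k footprint looks innocent. In MRv1 the fraction of reducer heap reserved for this shuffle buffer is controlled by mapred.job.shuffle.input.buffer.percent; lowering it is one thing worth trying (the value below is an illustrative example, not from the thread):

```xml
<!-- Shrink the in-memory shuffle buffer from the 0.70 default (MRv1). -->
<property>
  <name>mapred.job.shuffle.input.buffer.percent</name>
  <value>0.50</value>
</property>
```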
