hadoop-hdfs-user mailing list archives

From Pramod N <npramo...@gmail.com>
Subject Re: Out of memory error by Node Manager, and shut down
Date Thu, 23 May 2013 16:05:06 GMT
Looks like the problem is with the JVM heap size. The JVM is trying to create a new
thread, and threads require native (non-heap) memory for their stacks and internal JVM bookkeeping.

One possible solution is to reduce the Java heap size (to increase free
native memory). Is there any other information about the memory status
(malloc debug information etc.) on the NM? That would give more information
about the NM's memory status.
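To illustrate the point above: thread stacks come out of native memory, outside the `-Xmx` Java heap, so a large heap can starve thread creation even when heap usage looks healthy. A minimal sketch (class name and flag values are illustrative, not recommendations):

```java
// Sketch: the Java heap (-Xmx) and native memory are separate pools.
// Thread stacks (-Xss each) are allocated natively, so shrinking the
// heap and/or the per-thread stack size leaves room for more threads.
public class HeapVsNative {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("max heap (-Xmx): " + rt.maxMemory() / (1024 * 1024) + " MB");
        System.out.println("free heap:       " + rt.freeMemory() / (1024 * 1024) + " MB");
        // Everything the OS gives the process beyond the heap (stacks,
        // JIT code cache, direct buffers) is native memory. E.g. a
        // smaller footprint could be requested with:
        //   java -Xmx512m -Xss256k ...
    }
}
```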

Hope this helps.

*Pramod N*
Bruce Wayne of web
@machinelearner <https://twitter.com/machinelearner>

On Thu, May 23, 2013 at 6:42 PM, Krishna Kishore Bonagiri <
write2kishore@gmail.com> wrote:

> Hi,
>   I have got the following error in node manager's log, and it got shut
> down, after about 10000 application were run after it was started. Any clue
> why does it occur... or is this a bug?
> 2013-05-22 11:53:34,456 FATAL
> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread Thread[process
> reaper,5,main] threw an Error.  Shutting down now...
> java.lang.OutOfMemoryError: Failed to create a thread: retVal -1073741830,
> errno 11
>         at java.lang.Thread.startImpl(Native Method)
>         at java.lang.Thread.start(Thread.java:887)
>         at java.lang.ProcessInputStream.<init>(UNIXProcess.java:472)
>         at java.lang.UNIXProcess$1$1$1.run(UNIXProcess.java:157)
>         at
> java.security.AccessController.doPrivileged(AccessController.java:202)
>         at java.lang.UNIXProcess$1$1.run(UNIXProcess.java:137)
> Thanks,
> Kishore
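For context on the log above: this `OutOfMemoryError` is not a heap exhaustion; `Thread.start()` throws it when the OS refuses to create a new native thread, and errno 11 on Linux is EAGAIN ("resource temporarily unavailable", i.e. a process/thread limit or native memory was hit). A small sketch of the same failure mode (class name is hypothetical; on an unloaded machine the thread starts fine):

```java
// Sketch: Thread.start() surfaces a native thread-creation failure
// as OutOfMemoryError, which is what the NodeManager's
// YarnUncaughtExceptionHandler treated as fatal.
public class ThreadStartFailure {
    public static void main(String[] args) throws InterruptedException {
        try {
            Thread t = new Thread(() -> {});
            t.start();   // asks the OS for a native thread + stack
            t.join();
            System.out.println("thread started fine; limits not hit");
        } catch (OutOfMemoryError e) {
            // Reached when the OS returns EAGAIN (errno 11) or native
            // memory for the stack cannot be allocated.
            System.out.println("unable to create native thread: " + e.getMessage());
        }
    }
}
```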
