hadoop-user mailing list archives

From 谢良 <xieli...@xiaomi.com>
Subject Re: OOM/crashes due to process number limit
Date Fri, 19 Oct 2012 04:10:48 GMT
What's the exact OOM error message? Is it something like "OutOfMemoryError: unable to create new
native thread"?
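For reference, that error is what the JVM throws when the operating system refuses to back a new Java thread with a native one. A minimal, self-contained sketch that reproduces it on Linux once the per-user task limit (`ulimit -u`) is exhausted — the class name is just illustrative, nothing from Hadoop:

    import java.util.concurrent.CountDownLatch;

    public class ThreadExhaustion {
        public static void main(String[] args) {
            final CountDownLatch never = new CountDownLatch(1);
            int count = 0;
            try {
                while (true) {
                    // Every Java thread is backed by a native OS thread,
                    // so each start() counts against the kernel's task limits.
                    Thread t = new Thread(new Runnable() {
                        public void run() {
                            try { never.await(); } catch (InterruptedException e) {}
                        }
                    });
                    t.start();
                    count++;
                }
            } catch (OutOfMemoryError e) {
                // Typically: java.lang.OutOfMemoryError: unable to create new native thread
                System.err.println("Failed after " + count + " threads: " + e);
            }
        }
    }

If your failing tasks show the same message, the problem is a thread/process cap, not heap.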
________________________________
From: Aiden Bell [aiden449@gmail.com]
Sent: October 18, 2012 22:24
To: user@hadoop.apache.org
Subject: OOM/crashes due to process number limit

Hi All,

I'm running quite a basic map/reduce job with 10 or so map tasks. During the task's execution,
the entire stack (and my OS, for that matter) starts failing because it is unable to fork() new
processes. It seems Hadoop (1.0.3) is creating 700+ threads and exhausting this resource; RAM
utilisation is fine, however.
This still occurs with ulimit set to unlimited.
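One thing worth checking: on Linux, threads count against `ulimit -u` (max user processes) and the kernel-wide threads-max/pid_max ceilings, which are separate from the limits people usually raise to "unlimited". A small diagnostic sketch (assuming a Linux /proc layout; the class name is illustrative, not part of Hadoop) to compare the JVM's live thread count against those ceilings:

    import java.lang.management.ManagementFactory;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class ThreadDiagnostics {
        public static void main(String[] args) throws Exception {
            // Live threads inside this JVM; each is backed by a native thread.
            System.out.println("JVM live threads: "
                    + ManagementFactory.getThreadMXBean().getThreadCount());
            // Kernel-wide ceilings that fork()/Thread.start() count against.
            // The per-user cap is `ulimit -u`, checked from the shell.
            System.out.println("kernel.threads-max = " + readProc("/proc/sys/kernel/threads-max"));
            System.out.println("kernel.pid_max     = " + readProc("/proc/sys/kernel/pid_max"));
        }

        private static String readProc(String path) throws Exception {
            return new String(Files.readAllBytes(Paths.get(path))).trim();
        }
    }

Running this inside (or alongside) the TaskTracker's user account, next to `ulimit -u` for that user, should show whether the 700+ threads are actually hitting a cap.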

Any ideas or advice would be great; it seems very flaky for a job that doesn't require
much grunt.

Cheers!
