hadoop-user mailing list archives

From Aiden Bell <aiden...@gmail.com>
Subject Re: Reply: OOM/crashes due to process number limit
Date Fri, 19 Oct 2012 11:04:22 GMT
Yep, and then the entire OS can't fork new processes.
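
For what it's worth, the standard java.lang.management API is a quick way to
confirm how many threads a task JVM is really holding. A minimal sketch
(nothing Hadoop-specific, the class name is just illustrative):

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    // Prints thread counts for the current JVM; running (or logging) the same
    // values inside a suspect task would confirm the 700+ thread figure.
    public class ThreadCountCheck {
        public static void main(String[] args) {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            System.out.println("live threads:  " + threads.getThreadCount());
            System.out.println("peak threads:  " + threads.getPeakThreadCount());
            System.out.println("total started: " + threads.getTotalStartedThreadCount());
        }
    }

Also worth remembering that on Linux each native thread counts against the
per-user process limit (ulimit -u), which is a separate knob from the open-file
limit (ulimit -n), and raising it in one shell doesn't necessarily cover the
user your TaskTrackers actually run as.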

On 19 October 2012 05:10, 谢良 <xieliang@xiaomi.com> wrote:

>  What's the exact OOM error message? Is it something like "OutOfMemoryError:
> unable to create new native thread"?
>  ------------------------------
> *From:* Aiden Bell [aiden449@gmail.com]
> *Sent:* 18 October 2012 22:24
> *To:* user@hadoop.apache.org
> *Subject:* OOM/crashes due to process number limit
>
>  Hi All,
>
> I'm running quite a basic map/reduce job with 10 or so map tasks. During
> the task's execution, the
> entire stack (and my OS, for that matter) starts failing because it is unable
> to fork() new processes.
> It seems Hadoop (1.0.3) is creating 700+ threads and exhausting this
> resource. RAM utilisation is fine, however.
> This still occurs with ulimit set to unlimited.
>
> Any ideas or advice would be great; it seems like very odd behaviour for a
> task that doesn't require much grunt.
>
> Cheers!
>
>
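
To check whether it's the error asked about above, here's a rough sketch (not
Hadoop code, just an illustration, and deliberately disruptive, so run it on a
scratch box or lower the cap) that reproduces "unable to create new native
thread" once the per-user process/thread limit is hit:

    // Illustration only: starts idle daemon threads until thread creation
    // fails, surfacing the same OutOfMemoryError discussed in this thread.
    // The cap just keeps it from running forever on very high limits.
    public class ThreadExhaustDemo {
        public static void main(String[] args) {
            int started = 0;
            try {
                while (started < 100000) {
                    Thread t = new Thread(new Runnable() {
                        public void run() {
                            try {
                                Thread.sleep(Long.MAX_VALUE); // park the thread
                            } catch (InterruptedException ignored) {
                            }
                        }
                    });
                    t.setDaemon(true); // daemon so the JVM can still exit
                    t.start();
                    started++;
                }
                System.out.println("Cap reached: " + started + " threads created without error");
            } catch (OutOfMemoryError e) {
                System.out.println("Thread creation failed after " + started + " threads: " + e);
            }
        }
    }

If that falls over well before a few hundred threads for the user your
TaskTrackers run as, the per-user limit (not RAM) is the thing to raise.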


-- 
------------------------------------------------------------------
Never send sensitive or private information via email unless it is
encrypted. http://www.gnupg.org
