hadoop-user mailing list archives

From Aiden Bell <aiden...@gmail.com>
Subject OOM/crashes due to process number limit
Date Thu, 18 Oct 2012 14:24:50 GMT
Hi All,

I'm running quite a basic map/reduce job with 10 or so map tasks. During the
task's execution, the entire stack (and my OS, for that matter) starts failing
due to being unable to fork() new processes. It seems Hadoop (1.0.3) is
creating 700+ threads and exhausting this resource; RAM utilisation is fine,
however. This still occurs with ulimit set to unlimited.
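For reference, a minimal sketch (not part of the original message) of one way
to confirm the "700+ threads" observation from inside a JVM, using the
standard java.lang.management ThreadMXBean API. Running it as a standalone
probe class on the TaskTracker host, or from inside a task, is an assumption
for illustration only:

    // Sketch only: report live/peak thread counts for this JVM so the
    // thread-exhaustion claim can be checked against the OS process limit.
    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    public class ThreadCountProbe {
        public static void main(String[] args) {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            // Threads alive in this JVM right now, and the high-water mark.
            System.out.println("live threads:  " + threads.getThreadCount());
            System.out.println("peak threads:  " + threads.getPeakThreadCount());
            // Total threads ever started since the JVM came up.
            System.out.println("total started: " + threads.getTotalStartedThreadCount());
        }
    }

Note that on Linux each Java thread counts against the per-user process limit
(ulimit -u / nproc), so an "unlimited" setting in an interactive shell may not
be the limit actually inherited by the user the Hadoop daemons run as.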

Any ideas or advice would be great; it seems very sketchy for a task that
doesn't require much grunt.

