hadoop-common-user mailing list archives

From Stas Oskin <stas.os...@gmail.com>
Subject Re: "Too many open files" error, which gets resolved after some time
Date Mon, 03 Aug 2009 20:52:05 GMT
Hi Raghu.

Thanks for the clarification and for explaining the potential issue.

> It is not just the fds, the applications that hit fd limits hit thread
> limits as well. Obviously Hadoop can not sustain this as the range of
> applications increases. It will be fixed one way or the other.

Can you please clarify the thread limit matter?

AFAIK it only happens when the allocated thread stack is too large and there
are thousands of threads (a possible solution is described here:
http://candrews.integralblue.com/2009/01/preventing-outofmemoryerror-native-thread/
).
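
To illustrate the workaround I have in mind, here is a rough, hypothetical
sketch (not from this thread): it caps the per-thread stack via the
four-argument Thread constructor so that thousands of threads fit in the
process address space. The class name, the 64 KB stack size, and the
5000-thread count are just illustrative assumptions, and the JVM may treat the
requested stack size only as a hint; the same effect is usually achieved with
the -Xss option.

// Illustrative sketch only: cap each thread's stack instead of relying on the
// default (often 512 KB or more per thread), so many threads can be created
// without exhausting native memory. Numbers below are arbitrary assumptions.
public class ManyThreadsSketch {
    public static void main(String[] args) throws InterruptedException {
        final long stackSizeBytes = 64 * 1024; // small per-thread stack (assumption)
        Runnable task = new Runnable() {
            public void run() {
                try {
                    Thread.sleep(60000); // keep the thread alive so it counts
                } catch (InterruptedException ignored) {
                }
            }
        };
        for (int i = 0; i < 5000; i++) {
            // The four-argument constructor lets us request a stack size;
            // the JVM is free to treat it as a hint.
            Thread t = new Thread(null, task, "worker-" + i, stackSizeBytes);
            t.setDaemon(true);
            t.start();
        }
        System.out.println("Started 5000 threads");
        Thread.sleep(5000);
    }
}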

So how is it tied to fds?

Thanks.
