hadoop-mapreduce-user mailing list archives

Site index · List index
Message view « Date » · « Thread »
Top « Date » · « Thread »
From John Lilley <john.lil...@redpoint.net>
Subject Ubuntu open file limits
Date Wed, 30 Sep 2015 12:07:19 GMT
Greetings,

We are starting to support Ubuntu 12.04 LTS servers and HDP. But we are hitting the "open
file limits" problem. Unfortunately setting this system-wide for ubuntu seems difficult --
no matter what we try, YARN tasks always show the result of ulimit -n as 1024 (or if we attempt
to override, 4096). Something is setting a system-wide hard open-file limit to 4096 before
the ResourceManager and NodeManagers start, and our tasks also get that limit. But this causes
all sorts of problems, as you must know Hadoop really wants this limit to be 65536 or more.

What I want is to change the system-wide default open-file limit for everything so that Hadoop
services and everything else pick that up. How do we do that?

We're tried all of the obvious stuff from stackoverflow etc, like:


# vi /etc/security/limits.conf

* soft nofile 65536

* hard nofile 65536

root soft nofile 65536

root hard nofile 65536

But none of this seems to affect the RM/NM limits.

Thanks
john


Mime
View raw message