hadoop-hdfs-user mailing list archives

From Sandy Ryza <sandy.r...@cloudera.com>
Subject Re: Too many open files error with YARN
Date Wed, 20 Mar 2013 17:39:21 GMT
Hi Kishore,

50010 is the datanode port. Does your lsof indicate that the sockets are in
CLOSE_WAIT?  I had come across an issue like this where that was a symptom.
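As a sketch of the check being suggested here: CLOSE_WAIT means the remote end (the datanode on port 50010, the default HDFS data transfer port) has closed the connection but the local process never called close() on its socket, so the descriptor leaks. Something like the following can count such sockets; the exact lsof/netstat flags vary slightly by platform, so treat this as an illustration rather than a definitive recipe:

```shell
# Count sockets to the datanode port stuck in CLOSE_WAIT.
# -nP: skip DNS/port-name resolution; -i TCP:50010: only that port.
lsof -nP -i TCP:50010 | grep -c CLOSE_WAIT

# Alternatively, summarize connection states for port 50010 with netstat;
# on a leaking process the CLOSE_WAIT count grows with each run.
netstat -ant | awk '/:50010/ {print $6}' | sort | uniq -c
```

If the counts grow steadily across runs, the fix is on the client side of those connections (whatever opened them must close them), not in raising ulimit, which only delays the failure.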


On Wed, Mar 20, 2013 at 4:24 AM, Krishna Kishore Bonagiri <
write2kishore@gmail.com> wrote:

> Hi,
>  I am running a date command with YARN's distributed shell example in a
> loop of 1000 times in this way:
> yarn jar
> /home/kbonagir/yarn/hadoop-2.0.0-alpha/share/hadoop/mapreduce/hadoop-yarn-applications-distributedshell-2.0.0-alpha.jar
> org.apache.hadoop.yarn.applications.distributedshell.Client --jar
> /home/kbonagir/yarn/hadoop-2.0.0-alpha/share/hadoop/mapreduce/hadoop-yarn-applications-distributedshell-2.0.0-alpha.jar
> --shell_command date --num_containers 2
> Around the 730th run or so, I get an error in the node manager's log
> saying that it failed to launch a container because there are "Too many
> open files". When I observe through the lsof command, I find that one
> socket of this kind is left behind for each run of the Application
> Master, and the count keeps growing as I run in a loop:
> node1:44871->node1:50010
> Is this a known issue? Or am I missing something? Please help.
> Note: I am working on hadoop-2.0.0-alpha
> Thanks,
> Kishore
