hadoop-common-user mailing list archives

From: Mark Kerzner <markkerz...@gmail.com>
Subject: Re: "Too many open files" in 0.18.3
Date: Thu, 12 Feb 2009 20:05:38 GMT
I once had "too many open files" when I was opening too many sockets and not
closing them...
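
One way to see whether the DataNode's descriptors are actually sockets rather than files (a rough check, assuming lsof is installed and a single DataNode process; the pid lookup is borrowed from your command below):

    # Count the DataNode's open descriptors by type:
    # IPv4/IPv6 rows are sockets, REG rows are regular files.
    PID=$(ps ux | awk '/dfs\.DataNode/ { print $2 }')
    lsof -p "$PID" | awk 'NR > 1 { print $5 }' | sort | uniq -c | sort -rn

If sockets dominate, something is holding connections open rather than leaking file handles.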

On Thu, Feb 12, 2009 at 1:56 PM, Sean Knapp <sean@ooyala.com> wrote:

> Hi all,
> I'm continually running into the "Too many open files" error on 18.3:
>
> DataXceiveServer: java.io.IOException: Too many open files
>         at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
>         at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:145)
>         at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:96)
>         at org.apache.hadoop.dfs.DataNode$DataXceiveServer.run(DataNode.java:997)
>         at java.lang.Thread.run(Thread.java:619)
>
> I'm writing thousands of files in the course of a few minutes, but nothing
> that seems too unreasonable, especially given the numbers below. I begin
> getting a surge of these warnings right as I hit 1024 files open by the
> DataNode:
>
> hadoop@u10:~$ ps ux | awk '/dfs\.DataNode/ { print $2 }' | xargs -i ls /proc/{}/fd | wc -l
> 1023
>
> This is a bit unexpected, however, since I've configured my open file limit
> to be 16k:
>
> hadoop@u10:~$ ulimit -a
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 0
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 268288
> max locked memory       (kbytes, -l) 32
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 16384
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> real-time priority              (-r) 0
> stack size              (kbytes, -s) 8192
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 268288
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
>
> Note that I've also set dfs.datanode.max.xcievers to 8192 in hadoop-site.xml.
>
> Thanks in advance,
> Sean
>
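
One thing worth ruling out here: ulimit -a reports the limits of the interactive shell it runs in, not necessarily those of the DataNode. If the daemon was started before the limit was raised, or from an environment (init script, different user) that never got the new value, it can still be running with the old 1024 default, which would match the plateau at 1023 exactly. On Linux kernels that expose /proc/<pid>/limits (2.6.24 and later), the effective limit of the running process can be checked directly; a quick sketch, reusing the pid lookup from the thread:

    # Show the limit the running DataNode actually has, not the shell's.
    PID=$(ps ux | awk '/dfs\.DataNode/ { print $2 }')
    grep 'Max open files' /proc/$PID/limits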

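If the daemon does turn out to be stuck at 1024, the limit has to be raised in the environment the DataNode is started from, and the daemon restarted so it inherits the new value. A minimal sketch, assuming a stock install started via bin/hadoop-daemon.sh (paths and values are illustrative):

    # Raise the limit for this shell, then restart the datanode so it inherits it.
    ulimit -n 16384
    bin/hadoop-daemon.sh stop datanode
    bin/hadoop-daemon.sh start datanode

For a persistent fix, a nofile entry for the hadoop user in /etc/security/limits.conf covers login sessions, though daemons started outside PAM won't pick it up.
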