hadoop-common-user mailing list archives

From: Raghu Angadi <rang...@yahoo-inc.com>
Subject: Re: "Too many open files" in 0.18.3
Date: Thu, 12 Feb 2009 22:07:44 GMT

You are most likely hitting
https://issues.apache.org/jira/browse/HADOOP-4346 . I hope it gets
backported; there is a 0.18 patch posted there.

btw, does 16k help in your case?
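
One thing worth double-checking: whether the running DataNode actually
inherited the 16k limit, since daemons started from init scripts often
keep the default 1024. A sketch, assuming a kernel recent enough to have
/proc/<pid>/limits and reusing your pid match:

  pid=$(ps ux | awk '/dfs\.DataNode/ { print $2 }')
  grep 'Max open files' /proc/$pid/limits   # soft and hard limits as the daemon sees them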

Ideally, 1k should be enough (with a small number of clients). Please try
the above patch with a 1k limit.
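
A quick way to run that test (a sketch, assuming the DataNode is started
from a shell where the limit can be lowered first):

  ulimit -n 1024                       # lower the fd limit for this shell and its children
  bin/hadoop-daemon.sh start datanode  # start the DataNode under that limit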

Raghu.

Sean Knapp wrote:
> Hi all,
> I'm continually running into the "Too many open files" error on 0.18.3:
> 
> DataXceiveServer: java.io.IOException: Too many open files
>         at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
>         at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:145)
>         at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:96)
>         at org.apache.hadoop.dfs.DataNode$DataXceiveServer.run(DataNode.java:997)
>         at java.lang.Thread.run(Thread.java:619)
> 
> 
> I'm writing thousands of files over the course of a few minutes, but nothing
> that seems too unreasonable, especially given the numbers below. I begin
> getting a surge of these warnings right as the DataNode hits 1024 open
> files:
> 
> hadoop@u10:~$ ps ux | awk '/dfs\.DataNode/ { print $2 }' | xargs -i ls /proc/{}/fd | wc -l
> 1023
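
A breakdown by descriptor type can help here; my guess is that with the
selector problem above, FIFO entries (the pipes NIO selectors use) dominate
rather than block files. Something like this, reusing the same pid match:

  pid=$(ps ux | awk '/dfs\.DataNode/ { print $2 }')
  lsof -p $pid | awk 'NR > 1 { print $5 }' | sort | uniq -c | sort -rn   # count fds by TYPE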
> 
> 
> This is a bit unexpected, however, since I've configured my open file limit
> to be 16k:
> 
> hadoop@u10:~$ ulimit -a
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 0
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 268288
> max locked memory       (kbytes, -l) 32
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 16384
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> real-time priority              (-r) 0
> stack size              (kbytes, -s) 8192
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 268288
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
> 
> 
> Note that I've also set dfs.datanode.max.xcievers to 8192 in hadoop-site.xml.
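
For reference, that setting is a standard property block in hadoop-site.xml:

  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>8192</value>
  </property>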
> 
> Thanks in advance,
> Sean
> 

