hadoop-common-dev mailing list archives

From André Martin (JIRA) <j...@apache.org>
Subject [jira] Commented: (HADOOP-3051) DataXceiver: java.io.IOException: Too many open files
Date Thu, 20 Mar 2008 22:03:24 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12580942#action_12580942 ]

André Martin commented on HADOOP-3051:

I increased the fd limit to 4096 :-) It performs way better now...
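For anyone who would rather watch fd consumption than just raise the limit: on a Sun/Oracle JDK on a Unix-like OS you can poll the platform MXBean for it. A minimal sketch; the cast to the com.sun.management interface is Sun-JDK-specific, so this is an assumption about the JVM, not portable code:

{noformat}
import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdUsage {
    public static void main(String[] args) {
        // Sun-JDK-specific: the platform bean implements the Unix
        // interface only on Unix-like systems under a Sun/Oracle JVM.
        UnixOperatingSystemMXBean os = (UnixOperatingSystemMXBean)
            ManagementFactory.getOperatingSystemMXBean();
        // Open vs. maximum file descriptors for this JVM process.
        System.out.println("open fds: " + os.getOpenFileDescriptorCount()
            + " / max: " + os.getMaxFileDescriptorCount());
    }
}
{noformat}

Running this inside (or alongside) the DataNode JVM shows how close the process is getting to its limit.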

Anyway, I would recommend making "do not use extra fds" the default, since I assume
a bunch of other users will run into the same thing when upgrading from earlier releases.
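
The "extra fds" in question are the temporary NIO selectors the 0.17 write path opens; on Linux each Selector.open() pins an epoll descriptor plus a wakeup pipe, so they add up quickly against a 1024-fd default. A toy sketch (not DataNode code, just an illustration of the per-selector cost) that exhausts the limit the same way:

{noformat}
import java.io.IOException;
import java.nio.channels.Selector;
import java.util.ArrayList;
import java.util.List;

public class SelectorCost {
    public static void main(String[] args) throws IOException {
        List<Selector> selectors = new ArrayList<Selector>();
        try {
            // Each Selector.open() consumes several descriptors (epoll fd
            // plus a wakeup pipe pair on Linux), so this fails long before
            // the count of selectors reaches the fd limit itself.
            while (true) {
                selectors.add(Selector.open());
            }
        } catch (IOException e) {
            // With a 1024-fd ulimit this typically fires after a few
            // hundred selectors: "java.io.IOException: Too many open files"
            System.out.println("failed after " + selectors.size()
                + " selectors: " + e.getMessage());
        } finally {
            for (Selector s : selectors) {
                s.close();
            }
        }
    }
}
{noformat}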

> DataXceiver: java.io.IOException: Too many open files
> -----------------------------------------------------
>                 Key: HADOOP-3051
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3051
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.17.0
>            Reporter: André Martin
>            Assignee: Raghu Angadi
> I just ran an experiment with the latest available nightly build (hadoop-2008-03-15), and
> after 2 minutes I'm getting tons of "java.io.IOException: Too many open files" exceptions,
> as shown here:
> {noformat} 2008-03-19 20:08:09,303 ERROR org.apache.hadoop.dfs.DataNode: 
> 141.30.xxx.xxx:50010:DataXceiver: java.io.IOException: Too many open files
>      at sun.nio.ch.IOUtil.initPipe(Native Method)
>      at sun.nio.ch.EPollSelectorImpl.<init>(Unknown Source)
>      at sun.nio.ch.EPollSelectorProvider.openSelector(Unknown Source)
>      at sun.nio.ch.Util.getTemporarySelector(Unknown Source)
>      at sun.nio.ch.SocketAdaptor.connect(Unknown Source)
>      at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1114)
>      at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:956)
>      at java.lang.Thread.run(Unknown Source){noformat}
> I ran the same experiment with the same high workload (50 DFS clients with 40 streams each,
> writing files concurrently to an 8-node DFS cluster) against the 0.16.1 release, and no
> exception was thrown. So it looks like a bug to me...
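
The trace bottoms out in IOUtil.initPipe because, on these JDKs, a timed connect through a SocketChannel's socket() adaptor borrows a temporary selector, and creating that selector opens a pipe. A hedged repro sketch of that code path; the address is a placeholder (nothing is assumed to be listening there, so expect a ConnectException rather than a successful connect):

{noformat}
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;

public class TimedConnect {
    public static void main(String[] args) throws Exception {
        SocketChannel ch = SocketChannel.open();
        // The adaptor's timed connect requires blocking mode.
        ch.configureBlocking(true);
        // A connect with a timeout on the socket() adaptor is what walks
        // through sun.nio.ch.SocketAdaptor.connect ->
        // Util.getTemporarySelector -> IOUtil.initPipe in the trace above.
        ch.socket().connect(new InetSocketAddress("127.0.0.1", 50010), 5000);
        ch.close();
    }
}
{noformat}

Under an fd limit that is already exhausted, it is this selector allocation, not the data transfer itself, that throws "Too many open files".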

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
