From: "Raghu Angadi (JIRA)"
To: core-dev@hadoop.apache.org
Reply-To: core-dev@hadoop.apache.org
Date: Wed, 19 Mar 2008 15:36:24 -0700 (PDT)
Message-ID: <45056518.1205966184359.JavaMail.jira@brutus>
In-Reply-To: <2082866017.1205964984801.JavaMail.jira@brutus>
Subject: [jira] Issue Comment Edited: (HADOOP-3051) DataXceiver: java.io.IOException: Too many open files
Mailing-List: contact core-dev-help@hadoop.apache.org; run by ezmlm

    [ https://issues.apache.org/jira/browse/HADOOP-3051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12580590#action_12580590 ]

rangadi edited comment on HADOOP-3051 at 3/19/08 3:35 PM:
----------------------------------------------------------

What is the fd limit you have for your JVM? 0.17 uses NIO sockets. It looks like the JVM uses a selector to wait for connect, and each selector in Java eats up 3 extra fds :(. The DataNode will also use selectors if it needs to wait on sockets.

> (50 dfs clients with 40 streams each writing files concurrently on an 8-node DFS cluster)

2000 writes across 8 datanodes?

Yes, 0.17 eats more file descriptors, especially with loads that result in a lot of threads waiting on sockets (as opposed to disk I/O). If this is a big problem, we might default to not using the extra fds and provide a config option to enable them. One would enable such an option if a 'write timeout' is required on sockets (HADOOP-2346).

was (Author: rangadi):
What is the fd limit you have for your JVM? 0.17 uses NIO sockets. It looks like the JVM uses a selector to wait for connect, and each selector in Java eats up 3 extra fds :(. Also, the DataNode will use selectors if it needs to wait on sockets, and while writing data, there

> (50 dfs clients with 40 streams each writing files concurrently on an 8-node DFS cluster)

2000 writes across 8 datanodes?

Yes, 0.17 eats more file descriptors, especially with loads that result in a lot of threads waiting on sockets (as opposed to disk I/O). If this is a big problem, we might default to not using the extra fds and provide a config option to enable them. One would enable such an option if a 'write timeout' is required on sockets (HADOOP-2346).
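To make the selector cost above concrete, here is a minimal sketch (not the actual DataNode code; the class and method names are made up for illustration) of how a connect with a timeout is typically done on a non-blocking NIO SocketChannel. The temporary Selector is backed by an epoll instance plus a wakeup pipe, which is where the extra file descriptors per waiting thread come from:

{noformat}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class ConnectWithTimeout {
  // Connect 'channel' to 'addr', waiting at most 'timeoutMs' milliseconds.
  static void connect(SocketChannel channel, InetSocketAddress addr,
                      long timeoutMs) throws IOException {
    channel.configureBlocking(false);
    if (channel.connect(addr)) {
      return;                                  // connected immediately
    }
    // The temporary selector is what costs the extra fds
    // (the epoll instance and its wakeup pipe).
    Selector selector = Selector.open();
    try {
      channel.register(selector, SelectionKey.OP_CONNECT);
      if (selector.select(timeoutMs) == 0) {
        throw new IOException("connect timed out after " + timeoutMs + " ms");
      }
      channel.finishConnect();                 // completes or throws
    } finally {
      selector.close();                        // releases the extra fds
    }
  }
}
{noformat}

A plain blocking connect would not need the selector (or the extra descriptors), but then there is no way to bound how long the thread waits, which is the trade-off behind the config option mentioned for HADOOP-2346.
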
> DataXceiver: java.io.IOException: Too many open files
> ------------------------------------------------------
>
>                 Key: HADOOP-3051
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3051
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.17.0
>            Reporter: André Martin
>
> I just ran an experiment with the latest nightly build hadoop-2008-03-15 available, and after 2 minutes I'm getting tons of "java.io.IOException: Too many open files" exceptions, as shown here:
> {noformat}
> 2008-03-19 20:08:09,303 ERROR org.apache.hadoop.dfs.DataNode: 141.30.xxx.xxx:50010:DataXceiver: java.io.IOException: Too many open files
>     at sun.nio.ch.IOUtil.initPipe(Native Method)
>     at sun.nio.ch.EPollSelectorImpl.<init>(Unknown Source)
>     at sun.nio.ch.EPollSelectorProvider.openSelector(Unknown Source)
>     at sun.nio.ch.Util.getTemporarySelector(Unknown Source)
>     at sun.nio.ch.SocketAdaptor.connect(Unknown Source)
>     at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1114)
>     at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:956)
>     at java.lang.Thread.run(Unknown Source)
> {noformat}
> I ran the same experiment with the same high workload (50 dfs clients with 40 streams each writing files concurrently on an 8-node DFS cluster) against the 0.16.1 release, and no exception is thrown. So it looks like a bug to me...

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
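As a quick way to answer the "What is the fd limit you have for your JVM?" question above, here is a small sketch, assuming a Sun JVM on a Unix-like platform where the com.sun.management extension is available (it is not part of Hadoop), that prints the process's file-descriptor limit and current usage:

{noformat}
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdUsage {
  public static void main(String[] args) {
    OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
    if (os instanceof UnixOperatingSystemMXBean) {
      UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
      // The maximum corresponds to the per-process limit (ulimit -n);
      // the open count is what the JVM currently holds.
      System.out.println("max fds:  " + unix.getMaxFileDescriptorCount());
      System.out.println("open fds: " + unix.getOpenFileDescriptorCount());
    } else {
      System.out.println("fd counts not available on this platform/JVM");
    }
  }
}
{noformat}

Watching the open count approach the maximum while the 50-client/40-stream workload runs would confirm that descriptor exhaustion, rather than something else, is what triggers the exception above.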