Message-ID: <6025935.1202253667919.JavaMail.jira@brutus>
Date: Tue, 5 Feb 2008 15:21:07 -0800 (PST)
From: "Jim Kellerman (JIRA)"
To: hbase-dev@hadoop.apache.org
Reply-To: hbase-dev@hadoop.apache.org
Subject: [jira] Updated: (HBASE-24) [hbase] Scaling: Too many open file handles to datanodes

     [ https://issues.apache.org/jira/browse/HBASE-24?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jim Kellerman updated HBASE-24:
-------------------------------

    Priority: Critical  (was: Major)

Changing priority to Critical to emphasize that this is one of the major
roadblocks to scalability that we have. The problem is not only the number of connections the region servers hold open to the DFS, but also the datanode connections for each open file.

> [hbase] Scaling: Too many open file handles to datanodes
> --------------------------------------------------------
>
>                 Key: HBASE-24
>                 URL: https://issues.apache.org/jira/browse/HBASE-24
>             Project: Hadoop HBase
>          Issue Type: Bug
>          Components: regionserver
>            Reporter: stack
>            Priority: Critical
>
> We've been here before (HADOOP-2341).
> Today Rapleaf gave me an lsof listing from a regionserver. It had thousands of open sockets to datanodes, all in ESTABLISHED and CLOSE_WAIT state. On average they seem to have about ten file descriptors/sockets open per region (they have 3 column families IIRC; each family can have between 1 and 5 or so mapfiles open -- 3 is the max, but while compacting we open a new one, etc.).
> They have thousands of regions. 400 regions -- ~100G, which is not that much -- takes about 4k open file handles.
> If they want a regionserver to serve a decent disk's worth -- 300-400G -- then that's maybe 1600 regions... 16k file handles. With more than just 3 column families, we are in danger of blowing out limits if they are 32k.
> A dfsclient that used non-blocking i/o would help applications like hbase. (The datanode doesn't have the problem as bad -- the CLOSE_WAIT sockets on the regionserver side, the bulk of the open fds in the Rapleaf log, don't have a corresponding open resource on the datanode end.)
> Could also just open mapfiles as needed, but that'd kill our random read performance, and it's bad enough already.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
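The file-handle arithmetic in the issue (regions x column families x open mapfiles per family, roughly ten descriptors per region) can be sketched as follows. The function name and defaults are illustrative only, not anything from the HBase code base:

```python
def estimate_open_handles(regions, families=3, mapfiles_per_family=3):
    """Rough count of datanode sockets/fds a regionserver holds open,
    assuming each open mapfile costs about one descriptor."""
    return regions * families * mapfiles_per_family

# 400 regions (~100G): about 3.6k handles, in line with the ~4k observed.
print(estimate_open_handles(400))       # 3600
# 1600 regions (300-400G): about 14.4k, close to the 16k estimate --
# uncomfortably near a 32k ulimit once extra compaction files are counted.
print(estimate_open_handles(1600))      # 14400
```

The point of the sketch is that handle count grows multiplicatively, so adding a fourth or fifth column family moves the 1600-region case past typical fd limits.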
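The non-blocking-i/o suggestion amounts to multiplexing many datanode connections in one event loop instead of dedicating a blocked reader to each open file. A toy illustration of the pattern using Python's selectors module and local socket pairs, with no DFS specifics at all:

```python
import selectors
import socket

def read_all_nonblocking(read_socks):
    """Drain one message from each socket with a single selector loop --
    the non-blocking multiplexing pattern, not a real dfsclient."""
    sel = selectors.DefaultSelector()
    for s in read_socks:
        s.setblocking(False)
        sel.register(s, selectors.EVENT_READ)
    received = {}
    while len(received) < len(read_socks):
        for key, _ in sel.select():
            received[key.fileobj] = key.fileobj.recv(1024)
            sel.unregister(key.fileobj)
    sel.close()
    return received

# Three "datanode" connections served by one loop in one thread.
pairs = [socket.socketpair() for _ in range(3)]
for i, (_, writer) in enumerate(pairs):
    writer.sendall(b"block-%d" % i)
data = read_all_nonblocking([r for r, _ in pairs])
print(sorted(data.values()))   # [b'block-0', b'block-1', b'block-2']
for r, w in pairs:
    r.close()
    w.close()
```

This addresses the per-connection thread/blocking cost on the client side; the descriptors themselves still count against the ulimit, which is why the issue treats it as a help rather than a complete fix.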
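The "open mapfiles as needed" alternative mentioned at the end trades descriptors for reopen latency. A minimal sketch of that idea, assuming a hypothetical cache that caps concurrently open handles and evicts the least-recently-used one (this is not HBase's actual MapFile handling):

```python
import os
import tempfile
from collections import OrderedDict

class FileHandleCache:
    """Keep at most max_open files open; reopen on demand and evict the
    least-recently-used handle when over the cap. A hypothetical sketch
    of the trade-off described in the issue."""

    def __init__(self, max_open):
        self.max_open = max_open
        self._open = OrderedDict()   # path -> open file object, LRU order

    def get(self, path):
        f = self._open.pop(path, None)
        if f is None:
            # Cache miss: this reopen is the random-read latency hit the
            # issue warns about when handles are not kept open.
            f = open(path, "rb")
            while len(self._open) >= self.max_open:
                _, lru = self._open.popitem(last=False)
                lru.close()
        self._open[path] = f         # re-insert as most recently used
        return f

    def close_all(self):
        for f in self._open.values():
            f.close()
        self._open.clear()

# Demo: five files, but never more than two descriptors open at once.
d = tempfile.mkdtemp()
paths = [os.path.join(d, "mapfile-%d" % i) for i in range(5)]
for p in paths:
    with open(p, "wb") as f:
        f.write(b"data")
cache = FileHandleCache(max_open=2)
for p in paths:
    cache.get(p)
print(len(cache._open))   # 2
cache.close_all()
```

With a random-read workload the cache would thrash (every read becomes a miss plus an open), which is why the issue dismisses this as something that would "kill our random read performance".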