Return-Path:
Delivered-To: apmail-hadoop-hbase-dev-archive@minotaur.apache.org
Received: (qmail 58219 invoked from network); 27 Mar 2009 22:57:13 -0000
Received: from hermes.apache.org (HELO mail.apache.org) (140.211.11.3)
    by minotaur.apache.org with SMTP; 27 Mar 2009 22:57:13 -0000
Received: (qmail 30774 invoked by uid 500); 27 Mar 2009 22:57:13 -0000
Delivered-To: apmail-hadoop-hbase-dev-archive@hadoop.apache.org
Received: (qmail 30729 invoked by uid 500); 27 Mar 2009 22:57:13 -0000
Mailing-List: contact hbase-dev-help@hadoop.apache.org; run by ezmlm
Precedence: bulk
List-Help:
List-Unsubscribe:
List-Post:
List-Id:
Reply-To: hbase-dev@hadoop.apache.org
Delivered-To: mailing list hbase-dev@hadoop.apache.org
Received: (qmail 30719 invoked by uid 99); 27 Mar 2009 22:57:13 -0000
Received: from nike.apache.org (HELO nike.apache.org) (192.87.106.230)
    by apache.org (qpsmtpd/0.29) with ESMTP; Fri, 27 Mar 2009 22:57:13 +0000
X-ASF-Spam-Status: No, hits=-2000.0 required=10.0 tests=ALL_TRUSTED
X-Spam-Check-By: apache.org
Received: from [140.211.11.140] (HELO brutus.apache.org) (140.211.11.140)
    by apache.org (qpsmtpd/0.29) with ESMTP; Fri, 27 Mar 2009 22:57:11 +0000
Received: from brutus (localhost [127.0.0.1])
    by brutus.apache.org (Postfix) with ESMTP id 9BFAC234C044
    for ; Fri, 27 Mar 2009 15:56:50 -0700 (PDT)
Message-ID: <912016106.1238194610637.JavaMail.jira@brutus>
Date: Fri, 27 Mar 2009 15:56:50 -0700 (PDT)
From: "ryan rawson (JIRA)"
To: hbase-dev@hadoop.apache.org
Subject: [jira] Created: (HBASE-1293) hfile doesn't recycle decompressors
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
X-JIRA-FingerPrint: 30527f35849b9dde25b450d4833f0394
X-Virus-Checked: Checked by ClamAV on apache.org

hfile doesn't recycle decompressors
-----------------------------------

                 Key: HBASE-1293
                 URL: https://issues.apache.org/jira/browse/HBASE-1293
             Project: Hadoop HBase
          Issue Type: Bug
    Affects Versions: 0.20.0
         Environment: - all -
            Reporter: ryan rawson
             Fix For: 0.20.0


The compression codec code from Hadoop supports recycling compressors and decompressors, because a codec allocates "direct buffers" that live outside the regular JVM heap. Under heavy concurrent load there is a risk of running out of that direct-buffer space in the JVM.

HFile does not call algorithm.returnDecompressor and returnCompressor. We should fix that; a sketch of the recycling pattern follows this message.

I found this bug via OOM crashes under jdk 1.7 - it appears to be partly due to the size of my cluster (200 GB, 800 regions, 19 servers) and partly due to weaknesses in JVM 1.7.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
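
For reference, here is a minimal sketch of the borrow/return pattern the report asks for, written against Hadoop's CodecPool rather than HBase's own Compression.Algorithm wrappers. The class name DecompressorRecycling, the method readCompressedBlock, and the byte-copy loop are illustrative assumptions, not HFile's actual read path.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.Decompressor;

// Illustrative sketch only -- not HFile's actual code.
public class DecompressorRecycling {

  // Decompresses one block, borrowing a pooled Decompressor and always
  // handing it back so its direct (off-heap) buffers get reused.
  public static byte[] readCompressedBlock(CompressionCodec codec, InputStream raw)
      throws IOException {
    // Borrow from the pool instead of letting the codec allocate a fresh
    // decompressor; each fresh one pins direct-buffer memory outside the heap.
    Decompressor decompressor = CodecPool.getDecompressor(codec);
    try {
      InputStream in = codec.createInputStream(raw, decompressor);
      ByteArrayOutputStream out = new ByteArrayOutputStream();
      byte[] buf = new byte[4096];
      for (int n = in.read(buf); n != -1; n = in.read(buf)) {
        out.write(buf, 0, n);
      }
      return out.toByteArray();
    } finally {
      // The fix this issue asks for: return the decompressor after use.
      CodecPool.returnDecompressor(decompressor);
    }
  }
}

The write path would follow the same shape with CodecPool.getCompressor and CodecPool.returnCompressor around codec.createOutputStream.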