Return-Path: 
X-Original-To: apmail-hive-dev-archive@www.apache.org
Delivered-To: apmail-hive-dev-archive@www.apache.org
Received: from mail.apache.org (hermes.apache.org [140.211.11.3]) by minotaur.apache.org (Postfix) with SMTP id B7A8CE7AA for ; Wed, 9 Jan 2013 10:28:56 +0000 (UTC)
Received: (qmail 36694 invoked by uid 500); 9 Jan 2013 10:28:47 -0000
Delivered-To: apmail-hive-dev-archive@hive.apache.org
Received: (qmail 36369 invoked by uid 500); 9 Jan 2013 10:28:47 -0000
Mailing-List: contact dev-help@hive.apache.org; run by ezmlm
Precedence: bulk
List-Help: 
List-Unsubscribe: 
List-Post: 
List-Id: 
Reply-To: dev@hive.apache.org
Delivered-To: mailing list dev@hive.apache.org
Received: (qmail 36157 invoked by uid 500); 9 Jan 2013 10:28:47 -0000
Delivered-To: apmail-hadoop-hive-dev@hadoop.apache.org
Received: (qmail 36075 invoked by uid 99); 9 Jan 2013 10:28:47 -0000
Received: from arcas.apache.org (HELO arcas.apache.org) (140.211.11.28) by apache.org (qpsmtpd/0.29) with ESMTP; Wed, 09 Jan 2013 10:28:46 +0000
Date: Wed, 9 Jan 2013 10:28:46 +0000 (UTC)
From: "Hudson (JIRA)" 
To: hive-dev@hadoop.apache.org
Message-ID: 
In-Reply-To: 
References: 
Subject: [jira] [Commented] (HIVE-3098) Memory leak from large number of FileSystem instances in FileSystem.CACHE
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
X-JIRA-FingerPrint: 30527f35849b9dde25b450d4833f0394

    [ https://issues.apache.org/jira/browse/HIVE-3098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548112#comment-13548112 ]

Hudson commented on HIVE-3098:
------------------------------

Integrated in Hive-trunk-hadoop2 #54 (See [https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3098 : Memory leak from large number of FileSystem instances in FileSystem.CACHE (Mithun R via Ashutosh Chauhan) (Revision 1382040)

Result = ABORTED
hashutosh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1382040
Files :
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/TUGIBasedProcessor.java
* /hive/trunk/shims/src/0.20/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java
* /hive/trunk/shims/src/common-secure/java/org/apache/hadoop/hive/shims/HadoopShimsSecure.java
* /hive/trunk/shims/src/common-secure/java/org/apache/hadoop/hive/thrift/HadoopThriftAuthBridge20S.java
* /hive/trunk/shims/src/common/java/org/apache/hadoop/hive/shims/HadoopShims.java


> Memory leak from large number of FileSystem instances in FileSystem.CACHE
> --------------------------------------------------------------------------
>
> Key: HIVE-3098
> URL: https://issues.apache.org/jira/browse/HIVE-3098
> Project: Hive
> Issue Type: Bug
> Components: Shims
> Affects Versions: 0.9.0
> Environment: Running with Hadoop 20.205.0.3+ / 1.0.x with security turned on.
> Reporter: Mithun Radhakrishnan
> Assignee: Mithun Radhakrishnan
> Fix For: 0.10.0, 0.9.1
>
> Attachments: Hive-3098_(FS_closeAllForUGI()).patch, hive-3098.patch, Hive_3098.patch
>
>
> The problem manifested from stress-testing HCatalog 0.4.1 (as part of testing the Oracle backend). The HCatalog server ran out of memory (-Xmx2048m) in under 24 hours when pounded by 60 threads. The heap dump indicates that hadoop::FileSystem.CACHE had 1000000 instances of FileSystem, whose combined retained memory consumed the entire heap.
> It boiled down to hadoop::UserGroupInformation::equals() being implemented such that the "Subject" member is compared for identity ("=="), and not equivalence (".equals()").
> This causes equivalent UGI instances to compare as unequal, so a new FileSystem instance is created and cached for each one.
> UGI.equals() is implemented this way, incidentally, as the fix for a different problem (HADOOP-6670), so it is unlikely that that implementation can be modified.
> The solution is to check for UGI equivalence in HCatalog (i.e. in the Hive metastore), using a cache for UGI instances in the shims.
> I have a patch to fix this. I'll upload it shortly. I just ran an overnight test to confirm that the memory leak has been arrested.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
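
A minimal sketch of the failure mode described in the quoted report, assuming Hadoop 1.x-era APIs (the user name "hcat_client" and the class name FsCacheLeakDemo are illustrative, not from the issue): because UserGroupInformation.equals() compares the wrapped Subject by identity, two UGIs built for the same user never compare equal, so FileSystem.get() caches a separate FileSystem instance under each of them.

    import java.security.PrivilegedExceptionAction;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.security.UserGroupInformation;

    public class FsCacheLeakDemo {
        public static void main(String[] args) throws Exception {
            final Configuration conf = new Configuration();

            // Two UGIs for the *same* remote user. UGI.equals() compares the wrapped
            // Subject by identity (a consequence of HADOOP-6670), so these are unequal.
            UserGroupInformation ugi1 = UserGroupInformation.createRemoteUser("hcat_client");
            UserGroupInformation ugi2 = UserGroupInformation.createRemoteUser("hcat_client");

            FileSystem fs1 = ugi1.doAs(new PrivilegedExceptionAction<FileSystem>() {
                public FileSystem run() throws Exception {
                    return FileSystem.get(conf);   // cached under ugi1
                }
            });
            FileSystem fs2 = ugi2.doAs(new PrivilegedExceptionAction<FileSystem>() {
                public FileSystem run() throws Exception {
                    return FileSystem.get(conf);   // cache miss: ugi2 is not "equal" to ugi1
                }
            });

            // Prints "false": the FileSystem cache now holds two instances for one user.
            System.out.println(fs1 == fs2);
        }
    }

A server that builds a fresh UGI per client request therefore adds one cache entry per request, which matches the unbounded growth seen in the heap dump.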
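
The attachment name Hive-3098_(FS_closeAllForUGI()).patch points at Hadoop's FileSystem.closeAllForUGI(). A rough sketch of that approach, not the committed patch (the class and method names UgiScopedRequestHandler, handleRequest and processWithUgi are illustrative), is to evict a client's cached FileSystem instances once its request has been served:

    import java.io.IOException;

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.security.UserGroupInformation;

    // Illustrative request handler, not the actual TUGIBasedProcessor code.
    public class UgiScopedRequestHandler {

        public void handleRequest(UserGroupInformation clientUgi) throws Exception {
            try {
                processWithUgi(clientUgi);           // do the real work as the client's UGI
            } finally {
                try {
                    // Drop every FileSystem that the cache holds for this UGI, so
                    // per-request UGIs cannot accumulate cache entries.
                    FileSystem.closeAllForUGI(clientUgi);
                } catch (IOException ioe) {
                    // Best effort: a failed close should not fail the request itself.
                }
            }
        }

        private void processWithUgi(UserGroupInformation ugi) throws Exception {
            // ... metastore/HCatalog work executed via ugi.doAs(...) ...
        }
    }

The committed change routes this cleanup through the Hadoop shims (the files listed in the Hudson comment above), presumably because the available cleanup hooks differ across the Hadoop versions Hive supports.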