Mailing-List: contact dev-help@hive.apache.org; run by ezmlm
Reply-To: dev@hive.apache.org
Date: Thu, 3 Oct 2013 03:46:44 +0000 (UTC)
From: "Hudson (JIRA)"
To: hive-dev@hadoop.apache.org
Subject: [jira] [Commented] (HIVE-5296) Memory leak: OOM Error after multiple open/closed JDBC connections.
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

    [ https://issues.apache.org/jira/browse/HIVE-5296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13784792#comment-13784792 ]

Hudson commented on HIVE-5296:
------------------------------

FAILURE: Integrated in Hive-trunk-h0.21 #2375 (See [https://builds.apache.org/job/Hive-trunk-h0.21/2375/])
HIVE-5296: Memory leak: OOM Error after multiple open/closed JDBC connections.
(Kousuke Saruta via Thejas Nair) (thejas: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1528678)
* /hive/trunk/service/src/java/org/apache/hive/service/cli/session/HiveSessionImpl.java

> Memory leak: OOM Error after multiple open/closed JDBC connections.
> --------------------------------------------------------------------
>
>                 Key: HIVE-5296
>                 URL: https://issues.apache.org/jira/browse/HIVE-5296
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2
>    Affects Versions: 0.12.0, 0.13.0
>        Environment: Hive 0.12.0, Hadoop 1.1.2, Debian.
>            Reporter: Douglas
>            Assignee: Kousuke Saruta
>              Labels: hiveserver
>             Fix For: 0.12.0, 0.13.0
>
>         Attachments: HIVE-5296.1.patch, HIVE-5296.2.patch, HIVE-5296.patch, HIVE-5296.patch, HIVE-5296.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Multiple connections to HiveServer2, all of which are closed and disposed of properly, cause the Java heap to grow extremely quickly.
> This issue can be recreated using the following code:
> {code}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import java.sql.SQLException;
> import java.sql.Statement;
> import java.util.Properties;
> import org.apache.hive.service.cli.HiveSQLException;
> import org.apache.log4j.Logger;
>
> /*
>  * Class which encapsulates the lifecycle of a query or statement.
>  * Provides functionality which allows you to create a connection.
>  */
> public class HiveClient {
>
>     Connection con;
>     Logger logger;
>     private static String driverName = "org.apache.hive.jdbc.HiveDriver";
>     private String db;
>
>     public HiveClient(String db)
>     {
>         logger = Logger.getLogger(HiveClient.class);
>         this.db = db;
>
>         try{
>             Class.forName(driverName);
>         }catch(ClassNotFoundException e){
>             logger.info("Can't find Hive driver");
>         }
>
>         String hiveHost = GlimmerServer.config.getString("hive/host");
>         String hivePort = GlimmerServer.config.getString("hive/port");
>         String connectionString = "jdbc:hive2://"+hiveHost+":"+hivePort+"/default";
>         logger.info(String.format("Attempting to connect to %s", connectionString));
>         try{
>             con = DriverManager.getConnection(connectionString, "", "");
>         }catch(Exception e){
>             logger.error("Problem instantiating the connection: "+e.getMessage());
>         }
>     }
>
>     public int update(String query)
>     {
>         Integer res = 0;
>         Statement stmt = null;
>         try{
>             stmt = con.createStatement();
>             String switchdb = "USE "+db;
>             logger.info(switchdb);
>             stmt.executeUpdate(switchdb);
>             logger.info(query);
>             res = stmt.executeUpdate(query);
>             logger.info("Query passed to server");
>             stmt.close();
>         }catch(HiveSQLException e){
>             // HiveSQLException must be caught before its parent SQLException.
>             logger.info(String.format("HiveSQLException thrown, this can be valid, " +
>                 "but check the error: %s from the query %s", e.toString(), query));
>         }catch(SQLException e){
>             logger.error(String.format("Unable to execute query %s. SQLException: %s", query, e));
>         }catch(Exception e){
>             logger.error(String.format("Unable to execute query %s. Error: %s", query, e));
>         }
>
>         if(stmt != null)
>             try{
>                 stmt.close();
>             }catch(SQLException e){
>                 logger.error("Cannot close the statement; potential memory leak: "+e);
>             }
>
>         return res;
>     }
>
>     public void close()
>     {
>         if(con != null){
>             try {
>                 con.close();
>             } catch (SQLException e) {
>                 logger.info("Problem closing connection: "+e);
>             }
>         }
>     }
> }
> {code}
> And by creating and closing many HiveClient objects. The heap space used by the hiveserver2 runjar process is seen to increase extremely quickly, without that space being released.

-- 
This message was sent by Atlassian JIRA
(v6.1#6144)
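The symptom above (client closes everything correctly, yet server heap keeps growing) is characteristic of per-session state that the server registers in a long-lived collection but never removes on close. The sketch below is a hypothetical, simplified illustration of that general pattern — it is not Hive's actual code, and the class and field names are invented for this example:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the leak pattern: per-session state is added to a
// long-lived map on open, but closeSession() never removes the entry, so
// every open/close cycle leaves an object the GC can never reclaim.
public class LeakySessionManager {
    private final Map<Integer, byte[]> sessionState = new HashMap<>();
    private int nextId = 0;

    public int openSession() {
        // Pretend each session carries some server-side state.
        sessionState.put(nextId, new byte[16 * 1024]);
        return nextId++;
    }

    public void closeSession(int id) {
        // Bug: a sessionState.remove(id) is missing here, so the entry
        // (and its payload) stays strongly reachable forever.
    }

    public int retainedSessions() {
        return sessionState.size();
    }

    public static void main(String[] args) {
        LeakySessionManager mgr = new LeakySessionManager();
        for (int i = 0; i < 1000; i++) {
            int id = mgr.openSession();
            mgr.closeSession(id);   // client "closes properly"; state remains
        }
        System.out.println("Sessions still retained: " + mgr.retainedSessions());
    }
}
```

Consistent with this, the commit referenced above touches HiveSessionImpl.java on the server side; the remedy for this class of bug is to release all per-session resources when the session is closed, rather than relying on the client's close() call alone.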