Date: Mon, 30 Sep 2013 16:28:25 +0000 (UTC)
From: "Douglas (JIRA)"
To: hive-dev@hadoop.apache.org
Subject: [jira] [Commented] (HIVE-5296) Memory leak: OOM Error after multiple open/closed JDBC connections.

[ https://issues.apache.org/jira/browse/HIVE-5296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13781985#comment-13781985 ]

Douglas commented on HIVE-5296:
-------------------------------

Once again, thanks for your interest in this. I've pushed the latest patch to my production system and will keep an eye on heap space over the next couple of days.

> Memory leak: OOM Error after multiple open/closed JDBC connections.
> --------------------------------------------------------------------
>
>                 Key: HIVE-5296
>                 URL: https://issues.apache.org/jira/browse/HIVE-5296
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2
>    Affects Versions: 0.12.0, 0.13.0
>         Environment: Hive 0.12.0, Hadoop 1.1.2, Debian.
>            Reporter: Douglas
>              Labels: hiveserver
>             Fix For: 0.12.0
>
>         Attachments: HIVE-5296.1.patch, HIVE-5296.2.patch, HIVE-5296.patch, HIVE-5296.patch, HIVE-5296.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> This error seems to relate to https://issues.apache.org/jira/browse/HIVE-3481
> However, on inspection of the related patch and my built version of Hive (patch carried forward to 0.12.0), I am still seeing the described behaviour.
> Multiple connections to HiveServer2, all of which are closed and disposed of properly, cause the Java heap to grow extremely quickly.
> This issue can be recreated using the following code:
> {code}
> import java.sql.DriverManager;
> import java.sql.Connection;
> import java.sql.ResultSet;
> import java.sql.SQLException;
> import java.sql.Statement;
> import java.util.Properties;
> import org.apache.hive.service.cli.HiveSQLException;
> import org.apache.log4j.Logger;
>
> /*
>  * Class which encapsulates the lifecycle of a query or statement.
>  * Provides functionality which allows you to create a connection.
>  */
> public class HiveClient {
>
>     Connection con;
>     Logger logger;
>     private static String driverName = "org.apache.hive.jdbc.HiveDriver";
>     private String db;
>
>     public HiveClient(String db)
>     {
>         logger = Logger.getLogger(HiveClient.class);
>         this.db = db;
>
>         try{
>             Class.forName(driverName);
>         }catch(ClassNotFoundException e){
>             logger.info("Can't find Hive driver");
>         }
>
>         String hiveHost = GlimmerServer.config.getString("hive/host");
>         String hivePort = GlimmerServer.config.getString("hive/port");
>         String connectionString = "jdbc:hive2://"+hiveHost+":"+hivePort+"/default";
>         logger.info(String.format("Attempting to connect to %s", connectionString));
>         try{
>             con = DriverManager.getConnection(connectionString, "", "");
>         }catch(Exception e){
>             logger.error("Problem instantiating the connection: "+e.getMessage());
>         }
>     }
>
>     public int update(String query)
>     {
>         int res = 0;
>         Statement stmt = null;
>         try{
>             stmt = con.createStatement();
>             String switchdb = "USE "+db;
>             logger.info(switchdb);
>             stmt.executeUpdate(switchdb);
>             logger.info(query);
>             res = stmt.executeUpdate(query);
>             logger.info("Query passed to server");
>         }catch(HiveSQLException e){
>             logger.info(String.format("HiveSQLException thrown, this can be valid, " +
>                     "but check the error: %s from the query %s", e.toString(), query));
>         }catch(SQLException e){
>             logger.error(String.format("Unable to execute query %s. SQLException: %s", query, e));
>         }catch(Exception e){
>             logger.error(String.format("Unable to execute query %s. Error: %s", query, e));
>         }finally{
>             // Close the statement on all paths, including exceptions.
>             if(stmt != null){
>                 try{
>                     stmt.close();
>                 }catch(SQLException e){
>                     logger.error("Cannot close the statement, potential memory leak: "+e);
>                 }
>             }
>         }
>
>         return res;
>     }
>
>     public void close()
>     {
>         if(con != null){
>             try {
>                 con.close();
>             } catch (SQLException e) {
>                 logger.info("Problem closing connection: "+e);
>             }
>         }
>     }
> }
> {code}
> The leak can be reproduced by creating and closing many HiveClient objects. The heap space used by the HiveServer2 RunJar process is seen to increase extremely quickly, without that space being released.

-- 
This message was sent by Atlassian JIRA
(v6.1#6144)
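The failure mode the reporter describes — heap growth even though every client calls close() — is typical of a server-side registry that keeps per-session state and never drops its reference when the session ends. The sketch below illustrates that pattern in a self-contained way; the names (SessionRegistry, SessionState) are hypothetical stand-ins, not the actual HiveServer2 internals, and the byte array merely simulates per-session allocations.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (hypothetical names): a registry whose close() forgets
// to remove the session entry retains every SessionState ever created, so the
// heap grows with each open/close cycle, exactly the symptom in the report.
public class LeakSketch {

    // Stand-in for per-session server state (buffers, operation handles, ...).
    static class SessionState {
        final byte[] payload = new byte[1024]; // simulated expensive state
    }

    static class SessionRegistry {
        private final Map<Integer, SessionState> sessions = new HashMap<>();
        private int nextId = 0;

        int open() {
            int id = nextId++;
            sessions.put(id, new SessionState());
            return id;
        }

        // Buggy close: the entry stays in the map, so the GC can never reclaim it.
        void closeLeaky(int id) {
            // forgot: sessions.remove(id);
        }

        // Correct close: drop the registry reference so the state becomes collectable.
        void closeProperly(int id) {
            sessions.remove(id);
        }

        int liveSessions() {
            return sessions.size();
        }
    }

    public static void main(String[] args) {
        SessionRegistry leaky = new SessionRegistry();
        SessionRegistry fixed = new SessionRegistry();

        // Simulate many clients connecting and disconnecting, as in the report.
        for (int i = 0; i < 10_000; i++) {
            leaky.closeLeaky(leaky.open());
            fixed.closeProperly(fixed.open());
        }

        System.out.println("leaky live sessions: " + leaky.liveSessions()); // 10000
        System.out.println("fixed live sessions: " + fixed.liveSessions()); // 0
    }
}
```

In this sketch the "leaky" registry still holds 10,000 SessionState objects after every session has been closed, while the "fixed" registry holds none; a heap dump of the real server would show the analogous retained per-connection objects.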