Date: Mon, 23 Sep 2013 19:02:08 +0000 (UTC)
From: "Kousuke Saruta (JIRA)"
To: hive-dev@hadoop.apache.org
Reply-To: dev@hive.apache.org
Subject: [jira] [Updated] (HIVE-5296) Memory leak: OOM Error after multiple open/closed JDBC connections.

     [ https://issues.apache.org/jira/browse/HIVE-5296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kousuke Saruta updated HIVE-5296:
---------------------------------

    Attachment: HIVE-5296.patch

I've created a patch with a first idea.

> Memory leak: OOM Error after multiple open/closed JDBC connections.
> --------------------------------------------------------------------
>
>                 Key: HIVE-5296
>                 URL: https://issues.apache.org/jira/browse/HIVE-5296
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2
>    Affects Versions: 0.12.0
>         Environment: Hive 0.12.0, Hadoop 1.1.2, Debian.
>            Reporter: Douglas
>              Labels: hiveserver
>             Fix For: 0.12.0
>
>         Attachments: HIVE-5296.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> This error seems to relate to https://issues.apache.org/jira/browse/HIVE-3481
> However, on inspection of the related patch and my built version of Hive (patch carried forward to 0.12.0), I am still seeing the described behaviour.
> Multiple connections to HiveServer2, all of which are closed and disposed of properly, still cause the Java heap of the server to grow extremely quickly.
> This issue can be recreated using the following code:
> {code}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.SQLException;
> import java.sql.Statement;
>
> import org.apache.hive.service.cli.HiveSQLException;
> import org.apache.log4j.Logger;
>
> /*
>  * Class which encapsulates the lifecycle of a query or statement.
>  * Provides functionality which allows you to create a connection.
>  */
> public class HiveClient {
>
>     Connection con;
>     Logger logger;
>     private static String driverName = "org.apache.hive.jdbc.HiveDriver";
>     private String db;
>
>     public HiveClient(String db) {
>         logger = Logger.getLogger(HiveClient.class);
>         this.db = db;
>
>         try {
>             Class.forName(driverName);
>         } catch (ClassNotFoundException e) {
>             logger.info("Can't find Hive driver");
>         }
>
>         // GlimmerServer.config is the reporting application's own configuration.
>         String hiveHost = GlimmerServer.config.getString("hive/host");
>         String hivePort = GlimmerServer.config.getString("hive/port");
>         String connectionString = "jdbc:hive2://" + hiveHost + ":" + hivePort + "/default";
>         logger.info(String.format("Attempting to connect to %s", connectionString));
>         try {
>             con = DriverManager.getConnection(connectionString, "", "");
>         } catch (Exception e) {
>             logger.error("Problem instantiating the connection " + e.getMessage());
>         }
>     }
>
>     public int update(String query) {
>         Integer res = 0;
>         Statement stmt = null;
>         try {
>             stmt = con.createStatement();
>             String switchdb = "USE " + db;
>             logger.info(switchdb);
>             stmt.executeUpdate(switchdb);
>             logger.info(query);
>             res = stmt.executeUpdate(query);
>             logger.info("Query passed to server");
>             stmt.close();
>         } catch (HiveSQLException e) {
>             logger.info(String.format("HiveSQLException thrown, this can be valid, " +
>                     "but check the error: %s from the query %s", e.toString(), query));
>         } catch (SQLException e) {
>             logger.error(String.format("Unable to execute query %s. SQLException: %s", query, e));
>         } catch (Exception e) {
>             logger.error(String.format("Unable to execute query %s. Error: %s", query, e));
>         }
>
>         if (stmt != null) {
>             try {
>                 stmt.close();
>             } catch (SQLException e) {
>                 logger.error("Cannot close the statement, potential memory leak " + e);
>             }
>         }
>
>         return res;
>     }
>
>     public void close() {
>         if (con != null) {
>             try {
>                 con.close();
>             } catch (SQLException e) {
>                 logger.info("Problem closing connection " + e);
>             }
>         }
>     }
> }
> {code}
> By creating and closing many HiveClient objects in this way, the heap space used by the hiveserver2 RunJar process is seen to increase extremely quickly, without that space being released.
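For reference, the open/close cycle above can be driven with a small loop like the one below. This is only a sketch: the HiveClientLeakDriver class name, the iteration count, and the probe statement are illustrative and not part of this ticket; it assumes the HiveClient class quoted above is on the classpath and that GlimmerServer.config points at a running HiveServer2 whose heap is being watched while the loop runs.

{code}
/*
 * Hypothetical driver that exercises the connection open/close cycle described
 * in the report. The class name, iteration count, and probe statement are
 * illustrative only; HiveClient is the class quoted above.
 */
public class HiveClientLeakDriver {

    public static void main(String[] args) {
        for (int i = 0; i < 10000; i++) {
            // Each iteration opens a fresh JDBC connection to HiveServer2 ...
            HiveClient client = new HiveClient("default");
            try {
                // ... optionally runs a trivial statement ...
                client.update("DROP TABLE IF EXISTS leak_probe");
            } finally {
                // ... and closes the connection again. The client releases its
                // resources, yet the hiveserver2 heap grows with each cycle.
                client.close();
            }
        }
    }
}
{code}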
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira