Date: Fri, 28 Sep 2018 05:16:00 +0000 (UTC)
From: "ASF GitHub Bot (JIRA)"
To: issues@hive.apache.org
Reply-To: dev@hive.apache.org
Subject: [jira] [Commented] (HIVE-20627) Concurrent async queries intermittently fails with LockException and cause memory leak.

    [ https://issues.apache.org/jira/browse/HIVE-20627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631363#comment-16631363 ]

ASF GitHub Bot commented on HIVE-20627:
---------------------------------------

Github user sankarh closed the pull request at:

    https://github.com/apache/hive/pull/435

> Concurrent async queries intermittently fails with LockException and cause memory leak.
> ---------------------------------------------------------------------------------------
>
>                 Key: HIVE-20627
>                 URL: https://issues.apache.org/jira/browse/HIVE-20627
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2, Transactions
>    Affects Versions: 4.0.0
>            Reporter: Sankar Hariappan
>            Assignee: Sankar Hariappan
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.0.0
>
>         Attachments: HIVE-20627.01.patch
>
>
> When multiple async queries are executed from the same session, the multiple async query execution DAGs share the same Hive object, which is set by the caller for all threads.
> In the case of loading dynamic partitions, a MoveTask is created which re-creates the Hive object and closes the shared Hive object, causing metastore connection issues for the other async execution threads that still access it. This is also seen if ReplDumpTask and ReplLoadTask are part of the DAG.
> *Call Stack:*
> {code:java}
> 2018-09-16T04:38:04,280 ERROR [load-dynamic-partitions-7]: metadata.Hive (Hive.java:call(2436)) - Exception when loading partition with parameters partPath=hdfs://mycluster/warehouse/tablespace/managed/hive/tbl_3bcvvdubni/.hive-staging_hive_2018-09-16_04-35-50_708_7776079613819042057-1147/-ext-10000/age=55, table=tbl_3bcvvdubni, partSpec={age=55}, loadFileType=KEEP_EXISTING, listBucketingLevel=0, isAcid=true, hasFollowingStatsTask=true
> org.apache.hadoop.hive.ql.lockmgr.LockException: Error communicating with the metastore
> at org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.getValidWriteIds(DbTxnManager.java:714) ~[hive-exec-3.1.0.3.0.1.0-184.jar:3.1.0.3.0.1.0-184]
> at org.apache.hadoop.hive.ql.io.AcidUtils.getTableValidWriteIdListWithTxnList(AcidUtils.java:1791) ~[hive-exec-3.1.0.3.0.1.0-184.jar:3.1.0.3.0.1.0-184]
> at org.apache.hadoop.hive.ql.io.AcidUtils.getTableSnapshot(AcidUtils.java:1756) ~[hive-exec-3.1.0.3.0.1.0-184.jar:3.1.0.3.0.1.0-184]
> at org.apache.hadoop.hive.ql.io.AcidUtils.getTableSnapshot(AcidUtils.java:1714) ~[hive-exec-3.1.0.3.0.1.0-184.jar:3.1.0.3.0.1.0-184]
> at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1976) ~[hive-exec-3.1.0.3.0.1.0-184.jar:3.1.0.3.0.1.0-184]
> at org.apache.hadoop.hive.ql.metadata.Hive$5.call(Hive.java:2415) [hive-exec-3.1.0.3.0.1.0-184.jar:3.1.0.3.0.1.0-184]
> at org.apache.hadoop.hive.ql.metadata.Hive$5.call(Hive.java:2406) [hive-exec-3.1.0.3.0.1.0-184.jar:3.1.0.3.0.1.0-184]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_171]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_171]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_171]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
> Caused by: org.apache.thrift.protocol.TProtocolException: Required field 'validTxnList' is unset! Struct:GetValidWriteIdsRequest(fullTableNames:[default.tbl_3bcvvdubni], validTxnList:null)
> at org.apache.hadoop.hive.metastore.api.GetValidWriteIdsRequest.validate(GetValidWriteIdsRequest.java:396) ~[hive-exec-3.1.0.3.0.1.0-184.jar:3.1.0.3.0.1.0-184]
> at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_valid_write_ids_args.validate(ThriftHiveMetastore.java) ~[hive-exec-3.1.0.3.0.1.0-184.jar:3.1.0.3.0.1.0-184]
> at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_valid_write_ids_args$get_valid_write_ids_argsStandardScheme.write(ThriftHiveMetastore.java) ~[hive-exec-3.1.0.3.0.1.0-184.jar:3.1.0.3.0.1.0-184]
> at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_valid_write_ids_args$get_valid_write_ids_argsStandardScheme.write(ThriftHiveMetastore.java) ~[hive-exec-3.1.0.3.0.1.0-184.jar:3.1.0.3.0.1.0-184]
> at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_valid_write_ids_args.write(ThriftHiveMetastore.java) ~[hive-exec-3.1.0.3.0.1.0-184.jar:3.1.0.3.0.1.0-184]
> at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:71) ~[hive-exec-3.1.0.3.0.1.0-184.jar:3.1.0.3.0.1.0-184]
> at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62) ~[hive-exec-3.1.0.3.0.1.0-184.jar:3.1.0.3.0.1.0-184]
> at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.send_get_valid_write_ids(ThriftHiveMetastore.java:5443) ~[hive-exec-3.1.0.3.0.1.0-184.jar:3.1.0.3.0.1.0-184]
> at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_valid_write_ids(ThriftHiveMetastore.java:5435) ~[hive-exec-3.1.0.3.0.1.0-184.jar:3.1.0.3.0.1.0-184]
> at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getValidWriteIds(HiveMetaStoreClient.java:2589) ~[hive-exec-3.1.0.3.0.1.0-184.jar:3.1.0.3.0.1.0-184]
> at sun.reflect.GeneratedMethodAccessor125.invoke(Unknown Source) ~[?:?]
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_171]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_171]
> at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212) ~[hive-exec-3.1.0.3.0.1.0-184.jar:3.1.0.3.0.1.0-184]
> at com.sun.proxy.$Proxy57.getValidWriteIds(Unknown Source) ~[?:?]
> at sun.reflect.GeneratedMethodAccessor125.invoke(Unknown Source) ~[?:?]
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_171]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_171]
> at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2934) ~[hive-exec-3.1.0.3.0.1.0-184.jar:3.1.0.3.0.1.0-184]
> at com.sun.proxy.$Proxy57.getValidWriteIds(Unknown Source) ~[?:?]
> at org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.getValidWriteIds(DbTxnManager.java:712) ~[hive-exec-3.1.0.3.0.1.0-184.jar:3.1.0.3.0.1.0-184]
> ... 10 more{code}
> *Root cause:*
> For async query execution from SQLOperation.runInternal, we set the thread-local Hive object for all the child threads to parentHive (parentSession.getSessionHive()).
> {code:java}
> @Override
> public void run() {
>   PrivilegedExceptionAction<Object> doAsAction = new PrivilegedExceptionAction<Object>() {
>     @Override
>     public Object run() throws HiveSQLException {
>       Hive.set(parentHive); // Setting parentHive for all async operations.
>       // TODO: can this result in cross-thread reuse of session state?
>       SessionState.setCurrentSessionState(parentSessionState);
>       PerfLogger.setPerfLogger(parentPerfLogger);
>       LogUtils.registerLoggingContext(queryState.getConf());
>       try {
>         if (asyncPrepare) {
>           prepare(queryState);
>         }
>         runQuery();
>       } catch (HiveSQLException e) {
>         // TODO: why do we invent our own error path on top of the one from Future.get?
>         setOperationException(e);
>         LOG.error("Error running hive query: ", e);
>       } finally {
>         LogUtils.unregisterLoggingContext();
>       }
>       return null;
>     }
>   };
> {code}
> Now, while async execution is in progress, if one of the threads re-creates the Hive object, it first closes the parentHive object, which impacts the other threads still using it; the conf object it refers to gets cleaned up, and hence we get null for the VALID_TXNS_KEY value.
> {code:java}
> private static Hive create(HiveConf c, boolean needsRefresh, Hive db, boolean doRegisterAllFns)
>     throws HiveException {
>   if (db != null) {
>     LOG.debug("Creating new db. db = " + db + ", needsRefresh = " + needsRefresh +
>         ", db.isCurrentUserOwner = " + db.isCurrentUserOwner());
>     db.close();
>   }
>   closeCurrent();
>   if (c == null) {
>     c = createHiveConf();
>   }
>   c.set("fs.scheme.class", "dfs");
>   Hive newdb = new Hive(c, doRegisterAllFns);
>   hiveDB.set(newdb);
>   return newdb;
> }
> {code}
> *Fix:*
> We shouldn't close the old Hive object if it is shared by multiple threads. We shall use a flag on the Hive object to track this.
> *Memory leak issue:*
> A memory leak occurs if one of the threads in Hive.loadDynamicPartitions throws an exception. rawStoreMap stores the RawStore objects that have to be cleaned up, but it is populated only in the success flow; when there are exceptions it is not, and hence there is a leak.
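The flag-based fix described under *Fix* above can be sketched as follows. This is a minimal, hypothetical illustration, not Hive's actual code: the class name `SharedHive` and the `allowClose` flag are invented here to show the idea that a Hive object marked as shared across async threads refuses to tear itself down when one thread re-creates its own copy.

```java
// Hypothetical sketch of the HIVE-20627 fix idea: a Hive-like object that is
// shared by several async execution threads carries a flag telling close()
// whether tearing down the metastore connection is allowed.
public class SharedHive {
    private boolean allowClose = true;   // set to false when shared across threads
    private boolean closed = false;

    public void setAllowClose(boolean allowClose) {
        this.allowClose = allowClose;
    }

    public boolean isClosed() {
        return closed;
    }

    // Mirrors the role of Hive.close(): only release resources when this
    // instance is not shared by other async execution threads.
    public void close() {
        if (allowClose) {
            closed = true;               // safe: nobody else uses this object
        }
        // else: another thread still holds this object; leave it open
    }

    public static void main(String[] args) {
        SharedHive shared = new SharedHive();
        shared.setAllowClose(false);     // parent session marks it as shared
        shared.close();                  // e.g. a MoveTask re-creating Hive
        System.out.println(shared.isClosed() ? "closed" : "still open");
    }
}
```

With the flag cleared, the close() triggered by another thread's re-create becomes a no-op, so the remaining async threads keep a valid metastore connection.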
> {code:java}
> futures.add(pool.submit(new Callable<Void>() {
>   @Override
>   public Void call() throws Exception {
>     try {
>       // move file would require session details (needCopy() invokes SessionState.get)
>       SessionState.setCurrentSessionState(parentSession);
>       LOG.info("New loading path = " + partPath + " with partSpec " + fullPartSpec);
>       // load the partition
>       Partition newPartition = loadPartition(partPath, tbl, fullPartSpec, loadFileType,
>           true, false, numLB > 0, false, isAcid, hasFollowingStatsTask, writeId, stmtId,
>           isInsertOverwrite);
>       partitionsMap.put(fullPartSpec, newPartition);
>       if (inPlaceEligible) {
>         synchronized (ps) {
>           InPlaceUpdate.rePositionCursor(ps);
>           partitionsLoaded.incrementAndGet();
>           InPlaceUpdate.reprintLine(ps, "Loaded : " + partitionsLoaded.get() + "/"
>               + partsToLoad + " partitions.");
>         }
>       }
>       // Add embedded rawstore, so we can cleanup later to avoid memory leak
>       if (getMSC().isLocalMetaStore()) {
>         if (!rawStoreMap.containsKey(Thread.currentThread().getId())) {
>           rawStoreMap.put(Thread.currentThread().getId(), HiveMetaStore.HMSHandler.getRawStore());
>         }
>       }
>       return null;
>     } catch (Exception t) {
>     }
> {code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
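The rawStoreMap leak described above comes from registering the per-thread RawStore handle only on the success path. A minimal sketch of the fix idea, with invented names (`rawStoreMap` holding strings instead of real RawStore handles, `loadOnePartition` standing in for the partition-load callable) rather than Hive's actual code, is to do the registration in a finally block so it happens on failure too:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: record the per-thread resource in finally so it
// is tracked for cleanup even when the partition load throws.
public class RawStoreCleanup {
    static final Map<Long, String> rawStoreMap = new ConcurrentHashMap<>();

    static void loadOnePartition(boolean fail) throws Exception {
        try {
            if (fail) {
                throw new Exception("simulated loadPartition failure");
            }
        } finally {
            // In the buggy flow this registration happened only on success,
            // so a failing thread leaked its RawStore. Doing it in finally
            // guarantees the handle is captured for later cleanup.
            rawStoreMap.putIfAbsent(Thread.currentThread().getId(), "rawStore");
        }
    }

    public static void main(String[] args) {
        try {
            loadOnePartition(true);   // failing path
        } catch (Exception ignored) {
            // the query fails, but the resource must still be tracked
        }
        // The handle was registered despite the exception, so the caller
        // can shut it down and no leak remains.
        System.out.println(rawStoreMap.size());
    }
}
```

The same pattern applies to any per-thread resource populated inside a thread-pool callable: register it unconditionally, then clean all registered entries when the futures complete or fail.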