Date: Thu, 8 Nov 2018 15:02:00 +0000 (UTC)
From: "Yeliang Cang (JIRA)"
To: common-issues@hadoop.apache.org
Subject: [jira] [Commented] (HADOOP-15913) xml parsing error in a heavily multi-threaded environment

    [ https://issues.apache.org/jira/browse/HADOOP-15913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16679875#comment-16679875 ]

Yeliang Cang commented on HADOOP-15913:
---------------------------------------

We have already applied https://issues.apache.org/jira/browse/HADOOP-12404 and still see the error.
Based on the comments in https://github.com/mikiobraun/jblas/issues/103, the likely cause is a ZipFile object being shared between multiple threads and closed while still in use.
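For illustration only, here is a minimal sketch (not taken from the report) of the sharing pattern that jblas issue describes: one thread is still reading a compressed entry from a ZipFile while another thread closes it, which ends the Inflater backing the open stream. The jar name is a placeholder; any compressed jar on disk shows the pattern.

{code}
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class ZipFileCloseRace {
    public static void main(String[] args) throws Exception {
        // Placeholder jar name; any compressed jar or zip on disk works.
        ZipFile jar = new ZipFile("mykeytest-1.0-SNAPSHOT.jar");
        ZipEntry entry = jar.entries().nextElement();

        Thread reader = new Thread(() -> {
            try (InputStream in = jar.getInputStream(entry)) {
                byte[] buf = new byte[64];
                while (in.read(buf) != -1) {
                    Thread.sleep(1); // widen the race window between reads
                }
            } catch (Exception e) {
                // Depending on where close() lands, this surfaces as either
                // java.io.IOException: Stream closed, or (on JDK 7/8) the
                // java.lang.NullPointerException: Inflater has been closed
                // reported in this issue.
                e.printStackTrace();
            }
        });

        reader.start();
        Thread.sleep(5); // let the reader start inflating
        jar.close();     // ends the shared Inflater out from under the reader
        reader.join();
    }
}
{code}

The FactoryFinder frames in the stack trace below are doing exactly this kind of read: they pull a service-provider file out of a jar on the classpath, which is why a jar closed by another thread can surface as an XML-parsing failure.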
> xml parsing error in a heavily multi-threaded environment
> ---------------------------------------------------------
>
>                 Key: HADOOP-15913
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15913
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: common
>    Affects Versions: 2.7.3
>            Reporter: Yeliang Cang
>            Priority: Critical
>
> We met this problem in a production environment; the stack trace looks like this:
> {code}
> ERROR org.apache.hadoop.hive.ql.exec.Task: Ended Job = job_1541600895081_0580 with exception 'java.lang.NullPointerException(Inflater has been closed)'
> java.lang.NullPointerException: Inflater has been closed
> 	at java.util.zip.Inflater.ensureOpen(Inflater.java:389)
> 	at java.util.zip.Inflater.inflate(Inflater.java:257)
> 	at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:152)
> 	at java.io.FilterInputStream.read(FilterInputStream.java:133)
> 	at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
> 	at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
> 	at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
> 	at java.io.InputStreamReader.read(InputStreamReader.java:184)
> 	at java.io.BufferedReader.fill(BufferedReader.java:154)
> 	at java.io.BufferedReader.readLine(BufferedReader.java:317)
> 	at java.io.BufferedReader.readLine(BufferedReader.java:382)
> 	at javax.xml.parsers.FactoryFinder.findJarServiceProvider(FactoryFinder.java:319)
> 	at javax.xml.parsers.FactoryFinder.find(FactoryFinder.java:255)
> 	at javax.xml.parsers.DocumentBuilderFactory.newInstance(DocumentBuilderFactory.java:121)
> 	at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2524)
> 	at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2501)
> 	at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2407)
> 	at org.apache.hadoop.conf.Configuration.get(Configuration.java:983)
> 	at org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2007)
> 	at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:479)
> 	at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:469)
> 	at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:188)
> 	at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:601)
> 	at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:599)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
> 	at org.apache.hadoop.mapred.JobClient.getJobUsingCluster(JobClient.java:599)
> 	at org.apache.hadoop.mapred.JobClient.getJobInner(JobClient.java:609)
> 	at org.apache.hadoop.mapred.JobClient.getJob(JobClient.java:639)
> 	at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:294)
> 	at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:558)
> 	at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:457)
> 	at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:141)
> 	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197)
> {code}
> We can reproduce it in our test environment with the steps below:
> 1. Set these configs:
> {code}
> hive.server2.async.exec.threads = 50
> hive.server2.async.exec.wait.queue.size = 100
> {code}
> 2. Open 4 beeline terminals on 4 different nodes.
> 3. Create 30 queries in each beeline terminal.
Each query include "add jar xxx.jar" like this: > {code} > add jar mykeytest-1.0-SNAPSHOT.jar; > create temporary function ups as 'com.xxx.manager.GetCommentNameOrId'; > insert into test partition(tjrq = ${my_no}, ywtx = '${my_no2}' ) > select dt.d_year as i_brand > ,item.i_brand_id as i_item_sk > ,ups(item.i_brand) as i_product_name > ,sum(ss_ext_sales_price) as i_category_id > from date_dim dt > ,store_sales > ,item > where dt.d_date_sk = store_sales.ss_sold_date_sk > and store_sales.ss_item_sk = item.i_item_sk > and item.i_manufact_id = 436 > and dt.d_moy=12 > group by dt.d_year > ,item.i_brand > ,item.i_brand_id > order by dt.d_year > {code} > and all these 120 queries connect to one hiveserver2 > Run all the query concurrently, and will see the stack trace abover in hiveserver2 log -- This message was sent by Atlassian JIRA (v7.6.3#76005) --------------------------------------------------------------------- To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org For additional commands, e-mail: common-issues-help@hadoop.apache.org