hadoop-common-issues mailing list archives

From "frank luo (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-13809) hive: 'java.lang.IllegalStateException(zip file closed)'
Date Thu, 27 Jul 2017 17:08:00 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-13809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16103509#comment-16103509 ]

frank luo commented on HADOOP-13809:
------------------------------------

I believe HIVE-11681 and this issue are both related to https://bugs.openjdk.java.net/browse/JDK-6947916,
for which a fix has not yet been released.

I am able to reproduce it with Oracle JDK 1.8.0_131.
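To make the failure mode concrete: the JarFile behind a cached jar: URL connection is shared JVM-wide, so one thread closing it invalidates it for every other thread reading through the same cache entry. A minimal single-threaded sketch of the resulting exception (the jar and entry names here are invented purely for illustration; in HiveServer2 the close is done by a different thread sharing the cached JarFile):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

public class ZipClosedDemo {
    public static void main(String[] args) throws Exception {
        // Build a throwaway jar with one entry, purely for illustration.
        File jar = File.createTempFile("demo", ".jar");
        try (JarOutputStream out = new JarOutputStream(new FileOutputStream(jar))) {
            out.putNextEntry(new JarEntry("conf.xml"));
            out.write("<configuration/>".getBytes("UTF-8"));
            out.closeEntry();
        }

        JarFile shared = new JarFile(jar);
        // In the HS2 stack trace, this close is performed by another thread
        // that shares the same cached JarFile via JarURLConnection.
        shared.close();

        try {
            shared.getEntry("conf.xml"); // any access after close()...
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // ...fails this way
        } finally {
            jar.delete();
        }
    }
}
```

Running this prints `zip file closed`, matching the first line of the stack trace quoted below.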

> hive: 'java.lang.IllegalStateException(zip file closed)'
> --------------------------------------------------------
>
>                 Key: HADOOP-13809
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13809
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: conf
>    Affects Versions: 2.8.0, 3.0.0-alpha1
>            Reporter: Adriano
>
> Randomly some of the hive queries are failing with the below exception on HS2: 
> {code}
> 2016-11-07 02:36:40,996 ERROR org.apache.hadoop.hive.ql.exec.Task: [HiveServer2-Background-Pool: Thread-1823748]: Ended Job = job_1478336955303_31030 with exception 'java.lang.IllegalStateException(zip file closed)'
> java.lang.IllegalStateException: zip file closed
>         at java.util.zip.ZipFile.ensureOpen(ZipFile.java:634)
>         at java.util.zip.ZipFile.getEntry(ZipFile.java:305)
>         at java.util.jar.JarFile.getEntry(JarFile.java:227)
>         at sun.net.www.protocol.jar.URLJarFile.getEntry(URLJarFile.java:128)
>         at sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:132)
>         at sun.net.www.protocol.jar.JarURLConnection.getInputStream(JarURLConnection.java:150)
>         at java.net.URLClassLoader.getResourceAsStream(URLClassLoader.java:233)
>         at javax.xml.parsers.SecuritySupport$4.run(SecuritySupport.java:94)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.xml.parsers.SecuritySupport.getResourceAsStream(SecuritySupport.java:87)
>         at javax.xml.parsers.FactoryFinder.findJarServiceProvider(FactoryFinder.java:283)
>         at javax.xml.parsers.FactoryFinder.find(FactoryFinder.java:255)
>         at javax.xml.parsers.DocumentBuilderFactory.newInstance(DocumentBuilderFactory.java:121)
>         at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2526)
>         at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2503)
>         at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2409)
>         at org.apache.hadoop.conf.Configuration.get(Configuration.java:982)
>         at org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2032)
>         at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:484)
>         at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:474)
>         at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:210)
>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:596)
>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:594)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>         at org.apache.hadoop.mapred.JobClient.getJobUsingCluster(JobClient.java:594)
>         at org.apache.hadoop.mapred.JobClient.getTaskReports(JobClient.java:665)
>         at org.apache.hadoop.mapred.JobClient.getReduceTaskReports(JobClient.java:689)
>         at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:272)
>         at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:549)
>         at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:435)
>         at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
>         at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
>         at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1770)
>         at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1527)
>         at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1306)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1115)
>         at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1108)
>         at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:178)
>         at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:72)
>         at org.apache.hive.service.cli.operation.SQLOperation$2$1.run(SQLOperation.java:232)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>         at org.apache.hive.service.cli.operation.SQLOperation$2.run(SQLOperation.java:245)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
> {code}
> Most probably HADOOP-12404, merged at:
> {code}
> cdh5-2.6.0_5.7.4/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L2520
> {code}
> hasn't fully fixed the issue:
> HADOOP-12404 only fixes the case where the jar file is closed while parse is being called, but for the code link above and the customer's stack, the close happens during
> DocumentBuilderFactory docBuilderFactory = DocumentBuilderFactory.newInstance();
> That call is on line 2526, which runs before parse is called, so the setUseCaches(false) applied by HADOOP-12404 happens too late for the customer's case.
> On a heavily loaded system the close can happen at any time, so setUseCaches(false) should be set before the following connect is called:
> http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/7u40-b43/sun/net/www/protocol/jar/JarURLConnection.java#119
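In other words, the ordering the report argues for is: disable the connection's JarFile cache before connect() runs, so the reader gets a private JarFile that no other thread can close underneath it. A hedged sketch of that ordering (the jar and entry names are invented for illustration; this is not the actual Configuration.loadResource code):

```java
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import java.util.jar.JarEntry;
import java.util.jar.JarOutputStream;

public class NoCacheDemo {
    public static void main(String[] args) throws Exception {
        // Throwaway jar with one resource, standing in for a conf file on the classpath.
        File jar = File.createTempFile("demo", ".jar");
        try (JarOutputStream out = new JarOutputStream(new FileOutputStream(jar))) {
            out.putNextEntry(new JarEntry("conf.xml"));
            out.write("<configuration/>".getBytes("UTF-8"));
            out.closeEntry();
        }

        URL url = new URL("jar:" + jar.toURI() + "!/conf.xml");
        URLConnection conn = url.openConnection();
        // The point made above: this must run before connect()/getInputStream().
        // Once connect() has run, the connection may already hold the shared,
        // cached JarFile that another thread can close.
        conn.setUseCaches(false);

        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (InputStream in = conn.getInputStream()) {
            int b;
            while ((b = in.read()) != -1) {
                buf.write(b);
            }
        }
        System.out.println(buf.toString("UTF-8"));
        jar.delete();
    }
}
```

With caching off, the entry is read through a JarFile private to this connection, so a concurrent close of the cached instance elsewhere in the JVM cannot produce the "zip file closed" failure here.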



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


