hive-issues mailing list archives

From "Peter Vary (JIRA)" <>
Subject [jira] [Commented] (HIVE-20695) HoS Query fails with hive.exec.parallel=true
Date Mon, 08 Oct 2018 12:14:00 GMT


Peter Vary commented on HIVE-20695:

[~ychena]: I am ok with removing the "half working" locks and making refreshLocalResources
synchronized. If I understand correctly, this is called once per query, so it is most probably
not a problem. On the other hand, there are several static methods which suffer from the same
synchronization problem. If we find them, I think it would be a good idea to document these
issues by adding at least a javadoc noting this "feature" :)
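As a rough sketch of the suggestion above (not the actual Hive code — the class, method body, and file names here are illustrative), marking the shared refresh step `synchronized` makes concurrent query tasks queue up instead of racing on the shared session directory, which is the kind of race that produces the LeaseExpiredException below:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

public class SynchronizedRefreshSketch {
    // Detects whether two threads ever enter the critical section at once.
    private static final AtomicBoolean inCriticalSection = new AtomicBoolean(false);

    // `synchronized` on a static method locks the Class object, so all
    // callers across threads serialize here. Without it, parallel tasks
    // could interleave writes to the shared _spark_session_dir.
    static synchronized void refreshLocalResources(String jarName) {
        if (!inCriticalSection.compareAndSet(false, true)) {
            throw new IllegalStateException("concurrent entry detected");
        }
        try {
            // Simulate uploading jarName to the shared session directory.
            Thread.sleep(5);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            inCriticalSection.set(false);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < 4; i++) {
            final int id = i;
            threads.add(new Thread(() -> refreshLocalResources("job-" + id + ".jar")));
        }
        for (Thread t : threads) t.start();
        for (Thread t : threads) t.join();
        System.out.println("all refreshes completed without concurrent entry");
    }
}
```

The trade-off is the one noted above: a per-query serialization point is cheap, but the same pattern on a hot static method would become a bottleneck, hence the suggestion to at least document it in javadoc.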

Otherwise LGTM +1

PS: I would like to keep the habit of committing only after green results. If there are known
issues, please file a JIRA for them and disable those tests.


> HoS Query fails with hive.exec.parallel=true
> --------------------------------------------
>                 Key: HIVE-20695
>                 URL:
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 1.2.1
>            Reporter: Yongzhi Chen
>            Assignee: Yongzhi Chen
>            Priority: Major
>         Attachments: HIVE-20695.1.patch, HIVE-20695.2.patch
> Hive queries which fail when running a HiveOnSpark job:
> {noformat}
> ERROR : Failed to execute spark task, with exception 'java.lang.Exception(Failed to submit
Spark work, please retry later)'
> java.lang.Exception: Failed to submit Spark work, please retry later
>         at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.execute(
>         at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.submit(
>         at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(
>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(
>         at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(
>         at
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
No lease on /tmp/hive/dbname/_spark_session_dir/e202c452-8793-4e4e-ad55-61e3d4965c69/somename.jar
(inode 725730760): File does not exist. [Lease.  Holder: DFSClient_NONMAPREDUCE_-1981084042_486659,
pending creates: 7]
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(
> {noformat}

This message was sent by Atlassian JIRA
