hive-issues mailing list archives

From "Rui Li (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-16456) Kill spark job when InterruptedException happens or driverContext.isShutdown is true.
Date Tue, 02 May 2017 03:09:04 GMT

    [ https://issues.apache.org/jira/browse/HIVE-16456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15992120#comment-15992120 ]

Rui Li commented on HIVE-16456:
-------------------------------

Thanks [~zxu] for working on this. One question: could you explain in what situations an
InterruptedException can happen in the monitors? I think one case is the sleep between check
intervals. In other cases, however, e.g. {{sparkJobStatus.getState()}}, the InterruptedException
may be wrapped and thrown as a HiveException, which your patch doesn't handle.
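For the wrapped case, the caller would have to inspect the cause chain to notice the
interruption. A minimal sketch of such a check; note {{hasInterruptedCause}} is a hypothetical
helper, not an existing Hive method, and the HiveException-style wrapper is simulated here
with a plain RuntimeException:

```java
public class InterruptCauseCheck {
    // Walk the cause chain to see whether an InterruptedException is buried inside.
    static boolean hasInterruptedCause(Throwable t) {
        for (Throwable c = t; c != null; c = c.getCause()) {
            if (c instanceof InterruptedException) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Simulate a HiveException-style wrapper around an InterruptedException.
        RuntimeException wrapped =
            new RuntimeException("query failed", new InterruptedException("interrupted"));
        System.out.println(hasInterruptedCause(wrapped));                   // true
        System.out.println(hasInterruptedCause(new RuntimeException("x"))); // false
    }
}
```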
And a minor improvement: we could check {{if (jobRef != null && !jobKilled)}} before entering
the synchronized block, right?
{code}
  private void killJob() {
    boolean needToKillJob = false;
    synchronized (this) {
      if (jobRef != null && !jobKilled) {
        jobKilled = true;
        needToKillJob = true;
      }
    }
{code}
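The suggested early check could look like the following. This is only a sketch, assuming (as
in the patch) that {{jobRef}} and {{jobKilled}} are only mutated under the same lock, so the
racy pre-check is safe: the final decision is always re-made inside the synchronized block,
and at worst we take the lock unnecessarily. The field types and the kill counter are
placeholders for illustration:

```java
public class KillJobSketch {
    Object jobRef = new Object(); // stand-in for the real Spark job reference
    boolean jobKilled = false;
    int killCount = 0;            // counts how many times we actually kill (for the demo)

    void killJob() {
        // Cheap unsynchronized pre-check: skip the lock when there is clearly
        // nothing to do. This read is racy, but the decisive check below is
        // repeated under the lock, so correctness is unaffected.
        if (jobRef == null || jobKilled) {
            return;
        }
        boolean needToKillJob = false;
        synchronized (this) {
            if (jobRef != null && !jobKilled) {
                jobKilled = true;
                needToKillJob = true;
            }
        }
        if (needToKillJob) {
            killCount++; // the real code would cancel the Spark job here
        }
    }

    public static void main(String[] args) {
        KillJobSketch s = new KillJobSketch();
        s.killJob();
        s.killJob(); // second call is a no-op thanks to jobKilled
        System.out.println(s.killCount); // prints 1
    }
}
```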

> Kill spark job when InterruptedException happens or driverContext.isShutdown is true.
> -------------------------------------------------------------------------------------
>
>                 Key: HIVE-16456
>                 URL: https://issues.apache.org/jira/browse/HIVE-16456
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: zhihai xu
>            Assignee: zhihai xu
>            Priority: Minor
>         Attachments: HIVE-16456.000.patch
>
>
> Kill the Spark job when an InterruptedException happens or driverContext.isShutdown is true.
> If an InterruptedException happens in RemoteSparkJobMonitor or LocalSparkJobMonitor, it is
> better to kill the job. There is also a race condition between submitting the Spark job and
> query/operation cancellation, so it is better to check driverContext.isShutdown right after
> submitting the Spark job. This guarantees the job is killed no matter when shutdown is
> called. It is similar to HIVE-15997.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
