hive-issues mailing list archives

From "Rui Li (JIRA)" <>
Subject [jira] [Commented] (HIVE-12650) Increase default value of hive.spark.client.server.connect.timeout to exceeds
Date Wed, 03 Feb 2016 01:56:39 GMT


Rui Li commented on HIVE-12650:

bq. Regarding your last question, I tried submitting application when no container is available.
Spark-submit will wait until timeout (90s).
Sorry, that comment is misleading. What I actually meant is that Hive will time out after 90s. But after that, we'll interrupt the driver thread:
    try {
      // The RPC server will take care of timeouts here.
      this.driverRpc = rpcServer.registerClient(clientId, secret, protocol).get();
    } catch (Throwable e) {
      LOG.warn("Error while waiting for client to connect.", e);
      driverThread.interrupt();
      try {
        driverThread.join();
      } catch (InterruptedException ie) {
        // Give up.
        LOG.debug("Interrupted before driver thread was finished.");
      }
      throw Throwables.propagate(e);
    }
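The interrupt-then-join pattern above can be sketched in isolation. This is a minimal, hypothetical demo (not Hive code, names are made up): a thread blocked in a long wait is interrupted, and its `InterruptedException` handler is where cleanup would run.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class InterruptDemo {
    // Returns true if the worker thread observed the interrupt.
    static boolean interruptWorker() throws InterruptedException {
        CountDownLatch interrupted = new CountDownLatch(1);
        Thread driverThread = new Thread(() -> {
            try {
                Thread.sleep(60_000);        // stands in for a long blocking wait
            } catch (InterruptedException ie) {
                interrupted.countDown();     // cleanup path runs here
            }
        });
        driverThread.start();
        driverThread.interrupt();            // simulate the 90s timeout firing
        return interrupted.await(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("driver thread interrupted: " + interruptWorker());
    }
}
```

Note that interrupting works even if the interrupt lands before the worker enters `sleep()`: the interrupt status is set on the thread, and the blocking call throws immediately.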
which in turn will destroy the SparkSubmit process:
        public void run() {
          try {
            int exitCode = child.waitFor();
            if (exitCode != 0) {
              rpcServer.cancelClient(clientId, "Child process exited before connecting back");
              LOG.warn("Child process exited with code {}.", exitCode);
            }
          } catch (InterruptedException ie) {
            LOG.warn("Waiting thread interrupted, killing child process.");
            child.destroy();
          } catch (Exception e) {
            LOG.warn("Exception while waiting for child process.", e);
          }
        }
So on my machine, after the timeout, SparkSubmit is terminated.
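The kill-on-interrupt behavior can be reproduced with a standalone sketch (hypothetical, assumes a Unix `sleep` binary standing in for the spark-submit child): a watcher thread blocks in `waitFor()`, gets interrupted, and destroys the child from the exception handler.

```java
import java.util.concurrent.TimeUnit;

public class ChildWatcherDemo {
    // Returns true if interrupting the watcher caused the child to die.
    static boolean interruptKillsChild() throws Exception {
        // Hypothetical long-lived child in place of the SparkSubmit process.
        Process child = new ProcessBuilder("sleep", "60").start();
        Thread watcher = new Thread(() -> {
            try {
                int exitCode = child.waitFor();
                System.out.println("child exited with code " + exitCode);
            } catch (InterruptedException ie) {
                System.out.println("watcher interrupted, killing child");
                child.destroy();             // same cleanup as the Hive snippet
            }
        });
        watcher.start();
        Thread.sleep(200);                   // let the watcher block in waitFor()
        watcher.interrupt();                 // simulate Hive giving up on the client
        watcher.join();
        child.waitFor(5, TimeUnit.SECONDS);  // allow the SIGTERM to take effect
        return !child.isAlive();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("child terminated: " + interruptKillsChild());
    }
}
```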
I think the {{Client closed before SASL negotiation finished.}} exception is worth investigating
and may be the root cause here.

> Increase default value of hive.spark.client.server.connect.timeout to exceeds
> ----------------------------------------------------------------------------------------------------
>                 Key: HIVE-12650
>                 URL:
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 1.1.1, 1.2.1
>            Reporter: JoneZhang
>            Assignee: Xuefu Zhang
> I think hive.spark.client.server.connect.timeout should be set greater than
> The default value for
> is 100s, and the default value for hive.spark.client.server.connect.timeout
> is 90s, which is not good. We can increase it to a larger value such as 120s.
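For reference, the change proposed above would look roughly like this in hive-site.xml (illustrative sketch; the property takes a duration in milliseconds, and 120s is the value suggested in the issue):

```xml
<property>
  <name>hive.spark.client.server.connect.timeout</name>
  <!-- default is 90000 (90s); raised to 120s as suggested -->
  <value>120000</value>
</property>
```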

This message was sent by Atlassian JIRA
