From "Xuefu Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-16071) Spark remote driver misuses the timeout in RPC handshake
Date Tue, 07 Mar 2017 14:08:37 GMT

    [ https://issues.apache.org/jira/browse/HIVE-16071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15899483#comment-15899483 ]

Xuefu Zhang commented on HIVE-16071:
------------------------------------

[~lirui], did you see my hypothetical example above where we may need cancelTask? I'm not convinced
that we should remove cancelTask. server.connect.timeout is usually much larger than client.connect.timeout;
in our case, 1 hour for the former and 1 second for the latter. If cluster resources are available
but the network is very busy at the moment, the SASL handshake may have issues. In such a case,
we don't want HiveServer to wait for 1 hour before declaring a failure. The same applies on the
remote driver side. In HIVE-15671 we fixed the problem on the driver side. Of course, we could rely
on the remote driver to disconnect and have Hive detect the connection loss, but this is again
unreliable on a busy network. It's good for the server side to have its own SASL timeout regardless.
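
To illustrate, here is a minimal sketch of the kind of server-side handshake deadline that cancelTask provides (assuming Netty's EventLoopGroup scheduling; this is not the actual RpcServer code, and the names are illustrative):

{code}
// Minimal sketch, not the actual RpcServer code: give the server its own
// SASL handshake deadline instead of relying on the remote side to disconnect.
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import io.netty.channel.Channel;
import io.netty.channel.EventLoopGroup;
import io.netty.util.concurrent.Promise;
import io.netty.util.concurrent.ScheduledFuture;

class SaslTimeoutSketch {
  // handshakeTimeoutMs stands in for whichever config value is chosen;
  // the proposal below is to use hive.spark.client.connect.timeout.
  static ScheduledFuture<?> scheduleCancelTask(EventLoopGroup group,
      final Channel channel, final Promise<Void> handshakeDone,
      long handshakeTimeoutMs) {
    return group.schedule(new Runnable() {
      @Override
      public void run() {
        if (!handshakeDone.isDone()) {
          // Fail fast on a busy network instead of waiting the full hour.
          handshakeDone.tryFailure(
              new TimeoutException("Timed out waiting for SASL handshake."));
          channel.close();
        }
      }
    }, handshakeTimeoutMs, TimeUnit.MILLISECONDS);
  }
}
{code}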

My proposal is to change the timeout value for cancelTask to client.connect.timeout. Please
share your concerns about this; a sketch of the two settings involved follows.
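
For reference, a hypothetical hive-site.xml snippet with the two settings discussed here, using the 1 second / 1 hour values from the example above (the defaults shipped with Hive differ):

{code}
<!-- Illustrative values only, matching the 1s/1h example above. -->
<property>
  <name>hive.spark.client.connect.timeout</name>
  <value>1000ms</value>
</property>
<property>
  <name>hive.spark.client.server.connect.timeout</name>
  <value>1h</value>
</property>
{code}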

Also, ping [~vanzin] for comments.

> Spark remote driver misuses the timeout in RPC handshake
> --------------------------------------------------------
>
>                 Key: HIVE-16071
>                 URL: https://issues.apache.org/jira/browse/HIVE-16071
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>            Reporter: Chaoyu Tang
>            Assignee: Chaoyu Tang
>         Attachments: HIVE-16071.patch
>
>
> Based on its property description in HiveConf and the comments in HIVE-12650 (https://issues.apache.org/jira/browse/HIVE-12650?focusedCommentId=15128979&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15128979),
> hive.spark.client.connect.timeout is the timeout for the Spark remote driver to make a socket
> connection (channel) to the RPC server. But currently it is also used by the remote driver for
> the RPC client/server handshake, which is not right. Instead, hive.spark.client.server.connect.timeout
> should be used, and it is already used by the RpcServer in the handshake.
> An error like the following is usually caused by this issue, since the default hive.spark.client.connect.timeout
> value (1000ms) used by the remote driver for the handshake is a little too short.
> {code}
> 17/02/20 08:46:08 ERROR yarn.ApplicationMaster: User class threw exception: java.util.concurrent.ExecutionException: javax.security.sasl.SaslException: Client closed before SASL negotiation finished.
> java.util.concurrent.ExecutionException: javax.security.sasl.SaslException: Client closed before SASL negotiation finished.
>         at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:37)
>         at org.apache.hive.spark.client.RemoteDriver.<init>(RemoteDriver.java:156)
>         at org.apache.hive.spark.client.RemoteDriver.main(RemoteDriver.java:556)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:542)
> Caused by: javax.security.sasl.SaslException: Client closed before SASL negotiation finished.
>         at org.apache.hive.spark.client.rpc.Rpc$SaslClientHandler.dispose(Rpc.java:453)
>         at org.apache.hive.spark.client.rpc.SaslHandler.channelInactive(SaslHandler.java:90)
> {code}
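
To make the quoted failure concrete: the driver blocks on a handshake future, and bounding that wait with the short client.connect.timeout is what trips the SaslException above. A minimal, hypothetical sketch of the intended behavior (the method and parameter names are illustrative, not the actual RemoteDriver code):

{code}
// Hypothetical sketch; names are illustrative, not the RemoteDriver fields.
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

class DriverHandshakeSketch {
  static <T> T awaitHandshake(Future<T> handshakeFuture,
      long serverConnectTimeoutMs) throws Exception {
    // Bound the wait with hive.spark.client.server.connect.timeout; the
    // shorter hive.spark.client.connect.timeout should only cover the
    // socket connect itself.
    return handshakeFuture.get(serverConnectTimeoutMs, TimeUnit.MILLISECONDS);
  }
}
{code}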



