hive-dev mailing list archives

From Andrew Lee <>
Subject Hard Coded 0 to assign RPC Server port number when hive.execution.engine=spark
Date Mon, 19 Oct 2015 14:20:55 GMT
Hi All,

I notice that in the RpcServer constructor of the spark-client module (quoted below), the port number is hard-coded to 0, which means a random port is assigned every time the RPC Server is created to talk to Spark in the same session.
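For illustration only (plain java.net, not the Hive code): binding a server socket to port 0 asks the OS for any free ephemeral port, so the chosen port differs on every run, which is exactly why it cannot be whitelisted in a firewall.

    import java.net.InetSocketAddress;
    import java.net.ServerSocket;

    // Standalone demo of the bind-to-0 behavior: the kernel assigns an
    // arbitrary free ephemeral port, different on each run.
    public class EphemeralPortDemo {
      public static void main(String[] args) throws Exception {
        try (ServerSocket s = new ServerSocket()) {
          s.bind(new InetSocketAddress(0));  // same effect as Netty's .bind(0)
          System.out.println("OS-assigned port: " + s.getLocalPort());
        }
      }
    }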

Is there a reason why this port number is not a configurable property that follows the usual rule of retrying on port + 1 if the port is taken, just like Spark's configuration for the Spark Driver, etc.? Because of this, it is hard to configure a firewall between the HiveCLI RPC Server and Spark, since the port numbers are unpredictable. In other words, users need to open the whole ephemeral port range from the Data Nodes to the HiveCLI (edge node).

The binding happens in the RpcServer constructor:

    this.channel = new ServerBootstrap()
      .group(group)
      .channel(NioServerSocketChannel.class)
      .childHandler(new ChannelInitializer<SocketChannel>() {
          @Override
          public void initChannel(SocketChannel ch) throws Exception {
            SaslServerHandler saslHandler = new SaslServerHandler(config);
            final Rpc newRpc = Rpc.createServer(saslHandler, config, ch, group);
            saslHandler.rpc = newRpc;

            Runnable cancelTask = new Runnable() {
                @Override
                public void run() {
                  LOG.warn("Timed out waiting for hello from client.");
                  newRpc.close();
                }
            };
            saslHandler.cancelTask = group.schedule(cancelTask,
                config.getServerConnectTimeoutMs(),
                TimeUnit.MILLISECONDS);
          }
      })
      .option(ChannelOption.SO_BACKLOG, 1)
      .option(ChannelOption.SO_REUSEADDR, true)
      .childOption(ChannelOption.SO_KEEPALIVE, true)
      .bind(0)   // hard-coded 0: the OS picks a random free port
      .sync()
      .channel();
    this.port = ((InetSocketAddress) channel.localAddress()).getPort();
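A minimal sketch of the behavior I am proposing, assuming a configurable base port and retry count (the property name below is invented for illustration; Spark does the equivalent with spark.driver.port and spark.port.maxRetries):

    import java.net.BindException;
    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.Channel;

    // Hypothetical helper: try basePort, basePort + 1, ... until a bind
    // succeeds, instead of always binding to 0. basePort would come from a
    // property such as "hive.spark.client.rpc.server.port" (name invented).
    static Channel bindWithRetries(ServerBootstrap bootstrap, int basePort,
                                   int maxRetries) throws Exception {
      for (int offset = 0; offset <= maxRetries; offset++) {
        try {
          // ServerBootstrap.bind() may be called again with a new port.
          return bootstrap.bind(basePort + offset).sync().channel();
        } catch (Exception e) {
          boolean portTaken = e instanceof BindException
              || e.getCause() instanceof BindException;
          if (!portTaken) {
            throw e;  // unrelated failure, don't keep retrying
          }
        }
      }
      throw new BindException(
          "No free port in range " + basePort + "-" + (basePort + maxRetries));
    }

With something like this in place, the firewall would only need to allow basePort through basePort + maxRetries between the Data Nodes and the edge node, instead of the entire ephemeral range.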

I'd appreciate any feedback, and let me know if a JIRA is required to keep track of this conversation. Thanks.
