zookeeper-dev mailing list archives

From: GitBox <...@apache.org>
Subject: [GitHub] ivmaykov commented on a change in pull request #753: ZOOKEEPER-3204: Reconfig tests are constantly failing on 3.5 after applying Java 11 fix
Date: Wed, 06 Feb 2019 20:30:13 GMT
ivmaykov commented on a change in pull request #753: ZOOKEEPER-3204: Reconfig tests are constantly failing on 3.5 after applying Java 11 fix
URL: https://github.com/apache/zookeeper/pull/753#discussion_r254438888

 ##########
 File path: zookeeper-server/src/main/java/org/apache/zookeeper/server/NettyServerCnxnFactory.java
 ##########
 @@ -324,30 +260,42 @@ public void operationComplete(ChannelFuture future)
     
     CnxnChannelHandler channelHandler = new CnxnChannelHandler();
 
-    NettyServerCnxnFactory() {
-        bootstrap = new ServerBootstrap(
-                new NioServerSocketChannelFactory(
-                        Executors.newCachedThreadPool(),
-                        Executors.newCachedThreadPool()));
-        // parent channel
-        bootstrap.setOption("reuseAddress", true);
-        // child channels
-        bootstrap.setOption("child.tcpNoDelay", true);
-        /* set socket linger to off, so that socket close does not block */
-        bootstrap.setOption("child.soLinger", -1);
-        bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
-            @Override
-            public ChannelPipeline getPipeline() throws Exception {
-                ChannelPipeline p = Channels.pipeline();
-                if (secure) {
-                    initSSL(p);
-                }
-                p.addLast("servercnxnfactory", channelHandler);
+    private ServerBootstrap configureBootstrapAllocator(ServerBootstrap bootstrap) {
+        ByteBufAllocator testAllocator = TEST_ALLOCATOR.get();
+        if (testAllocator != null) {
+            return bootstrap
+                    .option(ChannelOption.ALLOCATOR, testAllocator)
+                    .childOption(ChannelOption.ALLOCATOR, testAllocator);
+        } else {
+            return bootstrap;
+        }
+    }
 
-                return p;
-            }
-        });
+    NettyServerCnxnFactory() {
         x509Util = new ClientX509Util();
+
+        EventLoopGroup bossGroup = NettyUtils.newNioOrEpollEventLoopGroup();
+        EventLoopGroup workerGroup = NettyUtils.newNioOrEpollEventLoopGroup();
 
 Review comment:
   I think we probably want a separate accept group since ZK could have a lot of clients (tens of thousands) reconnecting within a very short time after a machine fails and a new leader is elected.
   
   Maybe I could set the accept group to use 4 threads for now? I think it's unlikely that a ZK server will bind on more than 4 network interfaces. It's a bit more conservative than using 1 thread, and if we only bind on 1 address then we end up "wasting" 3 threads instead of dozens. What do you think?
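
   For illustration, a minimal sketch of that wiring with plain Netty 4 APIs (the class name and the fixed thread count are assumptions for discussion, not necessarily what the PR does):

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.nio.NioServerSocketChannel;

    class AcceptGroupSketch {
        static ServerBootstrap newBootstrap() {
            // Dedicated boss (accept) group: 4 threads covers a server
            // bound on up to 4 network interfaces.
            EventLoopGroup bossGroup = new NioEventLoopGroup(4);
            // Worker group for established connections; with no argument
            // Netty sizes it to 2 * availableProcessors by default.
            EventLoopGroup workerGroup = new NioEventLoopGroup();
            return new ServerBootstrap()
                    .group(bossGroup, workerGroup)
                    .channel(NioServerSocketChannel.class);
        }
    }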
   
   I guess we could also make the number of bind threads configurable. Or maybe there's a dynamic way to detect the number of network interfaces present?
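
   A sketch of either alternative (the system property name and the NIC-counting heuristic are hypothetical, purely to illustrate):

    import java.net.NetworkInterface;
    import java.net.SocketException;
    import java.util.Collections;

    class BindThreadsSketch {
        // Option 1: configurable via a system property
        // ("zookeeper.netty.server.bindThreads" is a made-up name).
        static int configuredBindThreads() {
            return Integer.getInteger("zookeeper.netty.server.bindThreads", 4);
        }

        // Option 2: derive the accept-group size from the NICs present.
        static int detectedBindThreads() throws SocketException {
            int nics = Collections.list(NetworkInterface.getNetworkInterfaces()).size();
            return Math.max(1, nics);
        }
    }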

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services
