From: "John Sherman (Code Review)"
To: impala-cr@cloudera.com, reviews@impala.incubator.apache.org
CC: Sailesh Mukil, Henry Robinson, Matthew Jacobs, Dan Hecht
Reply-To: jfs@arcadiadata.com
Date: Thu, 27 Jul 2017 16:30:43 +0000
Subject: [Impala-ASF-CR] IMPALA-5394: Handle blocked HS2 connections
Message-Id: <201707271630.v6RGUhwh029557@ip-10-146-233-104.ec2.internal>
X-Gerrit-MessageType: comment
X-Gerrit-Change-Id: I56a5f3d9cf931cff14eae7f236fea018236a6255
X-Gerrit-Commit: c9a5f4ba59ce1797b2e274120e0554598235c3f7
User-Agent: Gerrit/2.12.7

John Sherman has posted comments on this change.

Change subject: IMPALA-5394: Handle blocked HS2 connections
......................................................................


Patch Set 3:

I'd like some advice/feedback on the approach for enforcing fe_service_threads. In my local diff, I've added a maxTasks, a numTasks, and a taskLimitMonitor to the TAcceptQueueServer class.
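Roughly, the additions look like this (a simplified sketch only, not the exact diff; the base class and the existing members/methods are omitted):

  // Sketch only: the members added to TAcceptQueueServer.
  #include <cstdint>
  #include <thrift/concurrency/Monitor.h>

  class TAcceptQueueServer {
    class Task;  // one Task per accepted client connection

    // Maximum number of concurrently live Tasks, intended to be driven by
    // --fe_service_threads (0 means no limit).
    int32_t maxTasks = 0;

    // Number of Tasks currently alive; guarded by taskLimitMonitor.
    int32_t numTasks = 0;

    // New connections wait on this monitor while numTasks == maxTasks and are
    // woken when a Task goes away.
    apache::thrift::concurrency::Monitor taskLimitMonitor;
  };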
From there it seems like I have two choices:

1)
a) Inside the Task constructor, synchronize with the taskLimitMonitor, check the passed-in server's numTasks against maxTasks, wait if we've reached the limit, and then increment the server's numTasks.
b) Inside the Task destructor, synchronize with the taskLimitMonitor, decrement the server's numTasks, and notifyAll if we're transitioning from maxTasks to < maxTasks.
-- I like this because the code is somewhat symmetrical, but I do not know if it is a good idea to do these blocking operations inside a constructor/destructor.

2)
a) Inside the SetupConnection function, before I create a new Task, synchronize with the taskLimitMonitor and check/block/increment numTasks.
b) Inside the Task's run method, decrement/notify at the point where the Task removes itself from the TAcceptQueueServer's tasks set.

My local diff currently uses approach 1 (rough sketch below), but I'm a bit unsure about the best practices around that approach.

--
To view, visit http://gerrit.cloudera.org:8080/7061
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I56a5f3d9cf931cff14eae7f236fea018236a6255
Gerrit-PatchSet: 3
Gerrit-Project: Impala-ASF
Gerrit-Branch: master
Gerrit-Owner: John Sherman
Gerrit-Reviewer: Dan Hecht
Gerrit-Reviewer: Henry Robinson
Gerrit-Reviewer: John Sherman
Gerrit-Reviewer: Matthew Jacobs
Gerrit-Reviewer: Sailesh Mukil
Gerrit-HasComments: No
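For concreteness, here is roughly what I mean by approach 1, using the members sketched earlier in this message (illustrative only; the real Task constructor also takes the client/processor handles, and run() is unchanged):

  // Sketch of approach 1: the Task constructor blocks while the server is at
  // its limit; the destructor releases the slot and wakes any waiters.
  // Uses Thrift's Monitor/Synchronized from thrift/concurrency/Monitor.h.
  using apache::thrift::concurrency::Synchronized;

  class TAcceptQueueServer::Task {
   public:
    explicit Task(TAcceptQueueServer& server /* , client, processor, ... */)
      : server_(server) {
      // 1a) Synchronize with taskLimitMonitor: wait while the server is at its
      //     limit, then claim a slot by incrementing numTasks.
      Synchronized s(server_.taskLimitMonitor);
      while (server_.maxTasks > 0 && server_.numTasks >= server_.maxTasks) {
        server_.taskLimitMonitor.wait();
      }
      ++server_.numTasks;
    }

    ~Task() {
      // 1b) Release the slot; notifyAll only on the transition from maxTasks
      //     down to maxTasks - 1, since that is when a waiter can proceed.
      Synchronized s(server_.taskLimitMonitor);
      --server_.numTasks;
      if (server_.maxTasks > 0 && server_.numTasks == server_.maxTasks - 1) {
        server_.taskLimitMonitor.notifyAll();
      }
    }

    void run() { /* handle the connection as before */ }

   private:
    TAcceptQueueServer& server_;
  };

Approach 2 would move the 1a block into SetupConnection (before the Task is created) and the 1b block into the point in run() where the Task removes itself from the server's tasks set.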