From: Yann Ylavic
Date: Sat, 25 Jun 2016 20:58:56 +0200
Subject: Re: MinSpareThreads lower bound calculation and docs
To: httpd-dev

On Sat, Jun 18, 2016 at 11:53 AM, Luca Toscano wrote:
> Hi Apache devs!
>
> I have a question for you about the following users@ email thread:
>
> - https://lists.apache.org/thread.html/ba26440a53773426e29296569bec17692c77a4a3bd07e8b5331474c4@1464703063@%3Cusers.httpd.apache.org%3E
>
> This one is about Yann's fix for the MinSpareThreads lower bound
> calculation for worker/event:
> http://svn.apache.org/viewvc?view=revision&revision=1737447
>
> I was able to explain how it works in the case of buckets = 1, but I have
> doubts when buckets > 1. I've read the event/worker code and IIUC:
>
> 1) min_spare_threads = threads_per_child * (num_buckets - 1) + num_buckets
> 2) max_spare_threads = min_spare_threads + threads_per_child * num_buckets
> 3) idle_spawn_rate controls the number of new children created in each
>    perform_idle_server_maintenance run.
> 4) perform_idle_server_maintenance is called per bucket and calculates
>    max|min_spare_threads accordingly.
>
> Rick's question in the email thread is about what happens when, with two
> buckets and two processes for example, httpd reaches 50% idle threads. I
> made some calculations and an extra process should indeed be created, but
> then killed because of the max_spare_threads limit. Is it going to keep
> creating/destroying child processes, or does idle_spawn_rate prevent it?

Indeed.

    StartServers = 1
    ThreadsPerChild = 4
    MinSpareThreads = 1
    MaxSpareThreads = 1
    ListenCoresBucketRatio = 1

with 2 CPU cores => num_buckets = 2

1. Startup:
   min_spare_threads = 4 * (2 - 1) + 2 = 6
   max_spare_threads = 6 + 4 * 2 = 14
   min_spare_threads_per_bucket = min_spare_threads / num_buckets = 3
   max_spare_threads_per_bucket = max_spare_threads / num_buckets = 7
   idle_thread_count = StartServers * ThreadsPerChild = 4
   idle_thread_count_per_bucket = idle_thread_count / num_buckets = 4 / 2 = 2
   => idle_thread_count_per_bucket < min_spare_threads_per_bucket
   => create 1 child
   => idle_thread_count += ThreadsPerChild
   => idle_thread_count_per_bucket += ThreadsPerChild / num_buckets
   => idle_thread_count_per_bucket = 4
   => min_spare_threads_per_bucket <= idle_thread_count_per_bucket
      <= max_spare_threads_per_bucket
   => 2 children (1 per bucket), fine

2. Two new connections (SO_REUSEPORT guarantees even distribution, hence
   one connection per bucket/child):
   => idle_thread_count_per_bucket -= 2 / num_buckets
   => idle_thread_count_per_bucket = 4 - 1 = 3
   => min_spare_threads_per_bucket <= idle_thread_count_per_bucket
      <= max_spare_threads_per_bucket
   => 2 children (1 per bucket), fine

3. Two new connections (total 4):
   => idle_thread_count_per_bucket -= 2 / num_buckets
   => idle_thread_count_per_bucket = 3 - 1 = 2
   => idle_thread_count_per_bucket < min_spare_threads_per_bucket
   => create 2 children (the above is true for the 2 existing children)
   => idle_thread_count += ThreadsPerChild * num_buckets
   => idle_thread_count_per_bucket += ThreadsPerChild * num_buckets / num_buckets
   => idle_thread_count_per_bucket = 4 + 4 = 8
   => 4 children (2 per bucket), fine
4. Next perform_idle_server_maintenance() round (still 4 connections):
   => idle_thread_count_per_bucket > max_spare_threads_per_bucket
   => kill 2 children (the above is true for the 2 existing children)
   => same state as step 3 with the same number of connections, not fine...

So I think Rick is right, we should set max_spare_threads at least to:
    min_spare_threads + ThreadsPerChild * num_buckets + num_buckets.
For the example above, this would give max_spare_threads_per_bucket = 8, so
step 3 would remain valid until the number of connections changes.

Actually I first implemented this in r1737447 (for consistency with the
min_spare_threads changes), but finally kept the original formula (can't
remember why, I should have run the maths as above...).

Anyway, fixed in trunk (r1750218, with a comment). Thanks Luca and Rick for
the follow-up!

Regards,
Yann.
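
P.S. For anyone who wants to replay the arithmetic, here is a small
standalone C sketch (not httpd code; the variable names simply mirror the
ones used above, and the maintenance logic is reduced to the single
per-bucket comparison) contrasting the old max_spare_threads lower bound
with the one proposed above:

/* Standalone sketch (not httpd code): replays the per-bucket spare-thread
 * arithmetic from the example above for ThreadsPerChild=4, num_buckets=2,
 * comparing the old and the proposed max_spare_threads lower bounds. */
#include <stdio.h>

int main(void)
{
    const int threads_per_child = 4;
    const int num_buckets = 2;

    /* Lower bound applied to MinSpareThreads (r1737447). */
    int min_spare_threads = threads_per_child * (num_buckets - 1)
                            + num_buckets;                          /* 6 */

    /* Old lower bound for MaxSpareThreads vs. the proposed one. */
    int max_spare_old = min_spare_threads
                        + threads_per_child * num_buckets;          /* 14 */
    int max_spare_new = min_spare_threads
                        + threads_per_child * num_buckets
                        + num_buckets;                              /* 16 */

    int min_per_bucket     = min_spare_threads / num_buckets;       /* 3 */
    int max_old_per_bucket = max_spare_old / num_buckets;           /* 7 */
    int max_new_per_bucket = max_spare_new / num_buckets;           /* 8 */

    /* Per-bucket idle thread count right after step 3 above:
     * 2 children per bucket, all their threads counted as idle. */
    int idle_per_bucket = 2 * threads_per_child;                    /* 8 */

    printf("min/bucket=%d old max/bucket=%d new max/bucket=%d idle/bucket=%d\n",
           min_per_bucket, max_old_per_bucket, max_new_per_bucket,
           idle_per_bucket);

    /* Old bound: 8 > 7, so maintenance kills a child per bucket and the
     * next round recreates it, i.e. the oscillation described in step 4. */
    printf("old bound: %s\n",
           idle_per_bucket > max_old_per_bucket ? "kill a child (oscillates)"
                                                : "stable");

    /* Proposed bound: 8 <= 8, so the two children per bucket are kept. */
    printf("new bound: %s\n",
           idle_per_bucket > max_new_per_bucket ? "kill a child (oscillates)"
                                                : "stable");
    return 0;
}

With the old bound the second check reports a kill, matching step 4; with
the bound used in r1750218 the per-bucket idle count stays within the limit
and the children are kept.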