httpd-dev mailing list archives

From: Yann Ylavic <ylavic....@gmail.com>
Subject: Re: MinSpareThreads lower bound calculation and docs
Date: Sat, 25 Jun 2016 18:58:56 GMT
On Sat, Jun 18, 2016 at 11:53 AM, Luca Toscano <toscano.luca@gmail.com> wrote:
> Hi Apache devs!
>
> I have a question for you about the following users@ email thread:
>
> -
> https://lists.apache.org/thread.html/ba26440a53773426e29296569bec17692c77a4a3bd07e8b5331474c4@1464703063@%3Cusers.httpd.apache.org%3E
>
> This one is about Yann's fix for the MinSpareThreads lower bound calculation
> for worker/event:
> http://svn.apache.org/viewvc?view=revision&revision=1737447
>
> I was able to explain how it works in the case of num_buckets = 1, but I have
> doubts when num_buckets > 1. I've read the event/worker code and IIUC:
>
> 1) min_spare_threads = threads_per_child * (num_buckets - 1) + num_buckets
> 2) max_spare_threads = min_spare_threads + threads_per_child * num_buckets
> 3) idle_spawn_rate controls the number of new children created in each
> perform_idle_server_maintenance run.
> 4) perform_idle_server_maintenance is called per bucket and calculates
> max|min_spare_threads accordingly.
>
> Rick's question in the email thread is about what happens when, with two
> buckets and two processes for example, httpd reaches 50% of idle threads. I
> made some calculations and an extra process should indeed be created, but
> then killed because of the max_spare_threads limit. Is it going to keep
> creating/destroying child processes, or does idle_spawn_rate prevent it?

Indeed.

StartServers = 1
ThreadsPerChild = 4
MinSpareThreads = 1
MaxSpareThreads = 1
ListenCoresBucketsRatio = 1 with 2 CPU cores => num_buckets = 2

1. Startup:
min_spare_threads = 4 * (2 - 1) + 2 = 6
max_spare_threads = 6 + 4 * 2 = 14
min_spare_threads_per_bucket = min_spare_threads / num_buckets = 3
max_spare_threads_per_bucket = max_spare_threads / num_buckets = 7
idle_thread_count = StartServers * ThreadsPerChild = 4
idle_thread_count_per_bucket = idle_thread_count / num_buckets = 4 / 2 = 2
=> idle_thread_count_per_bucket < min_spare_threads_per_bucket
=> create 1 child
=> idle_thread_count += ThreadsPerChild
=> idle_thread_count_per_bucket += ThreadsPerChild / num_buckets
=> idle_thread_count_per_bucket = 4
=> min_spare_threads_per_bucket <= idle_thread_count_per_bucket <=
max_spare_threads_per_bucket
=> 2 children (1 per bucket), fine
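
As a standalone sanity check of the arithmetic above (plain C, not the
MPM code; the variable names just mirror it):

#include <stdio.h>

int main(void)
{
    int threads_per_child = 4;  /* ThreadsPerChild */
    int start_servers     = 1;  /* StartServers */
    int num_buckets       = 2;  /* 2 cores, ListenCoresBucketsRatio 1 */

    /* effective lower bounds enforced at startup */
    int min_spare_threads = threads_per_child * (num_buckets - 1) + num_buckets;
    int max_spare_threads = min_spare_threads + threads_per_child * num_buckets;

    printf("min_spare_threads = %d (%d per bucket)\n",
           min_spare_threads, min_spare_threads / num_buckets);
    printf("max_spare_threads = %d (%d per bucket)\n",
           max_spare_threads, max_spare_threads / num_buckets);
    printf("idle threads at startup = %d (%d per bucket)\n",
           start_servers * threads_per_child,
           start_servers * threads_per_child / num_buckets);
    /* prints 6 (3), 14 (7) and 4 (2), matching the startup step */
    return 0;
}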

2. Two new connections (SO_REUSEPORT guarantees even distribution, hence
one connection per bucket/child):
=> idle_thread_count_per_bucket -= 2 / num_buckets
=> idle_thread_count_per_bucket = 4 - 1 = 3
=> min_spare_threads_per_bucket <= idle_thread_count_per_bucket <=
max_spare_threads_per_bucket
=> 2 children (1 per bucket), fine

3. Two new connections (total 4):
=> idle_thread_count_per_bucket -= 2 / num_buckets
=> idle_thread_count_per_bucket = 3 - 1 = 2
=> idle_thread_count_per_bucket < min_spare_threads_per_bucket
=> create 2 children (the above is true for the 2 existing children)
=> idle_thread_count += ThreadsPerChild * num_buckets
=> idle_thread_count_per_bucket += ThreadsPerChild * num_buckets / num_buckets
=> idle_thread_count_per_bucket = 4 + 4 = 8
=> 4 children (2 per bucket), fine

4. Next perform_idle_server_maintenance() round (still 4 connections)
=> idle_thread_count_per_bucket > max_spare_threads_per_bucket
=> kill 2 children (the above is true for the 2 existing children)
=> same state as step 3 with the same number of connections, not fine...
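
For reference, the decision that each per-bucket
perform_idle_server_maintenance() pass makes boils down to roughly the
following (a simplified sketch, not the actual event.c/worker.c code;
the names only approximate it):

void bucket_maintenance(int idle_thread_count, /* idle workers of this bucket */
                        int min_spare_threads, /* server-wide effective value */
                        int max_spare_threads, /* server-wide effective value */
                        int num_buckets)
{
    if (idle_thread_count > max_spare_threads / num_buckets) {
        /* kill one child of this bucket */
    }
    else if (idle_thread_count < min_spare_threads / num_buckets) {
        /* create new children for this bucket, rate limited by idle_spawn_rate */
    }
}

Per the walkthrough above, when max_spare_threads_per_bucket sits too
close to min_spare_threads_per_bucket, a spawn from the second branch
can push the bucket over the first branch's limit on the next round,
hence the create/kill loop.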

So I think Rick is right, we should set max_spare_threads to at least:
min_spare_threads + ThreadsPerChild * num_buckets + num_buckets.
For the example above, this would give max_spare_threads_per_bucket =
8, so step 3 would remain valid until the number of connections
changes.
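(That is 6 + 4 * 2 + 2 = 16 in total, i.e. 16 / 2 buckets = 8 per bucket.)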
Actually I first implemented this in r1737447 (for consistency with
min_spare_threads changes), but finally kept the original formula
(can't remember why, should have run the maths as above...).

Anyway, fixed in trunk (r1750218, with a comment). Thanks Luca and
Rick for the follow-up!

Regards,
Yann.
