Mailing-List: dev@httpd.apache.org
Date: Tue, 10 May 2016 23:36:23 -0500
Subject: Re: svn commit: r1742794 -
 /httpd/httpd/branches/2.4.x/STATUS
From: William A Rowe Jr
To: dev@httpd.apache.org

On Tue, May 10, 2016 at 7:26 PM, Yann Ylavic wrote:
> On Tue, May 10, 2016 at 5:23 PM, William A Rowe Jr wrote:
> > On Sun, May 8, 2016 at 7:29 AM, Yann Ylavic wrote:
> >>
> >> ISTM that the request line has already been clobbered (blanked) by the
> >> previous call to ap_update_child_status_from_conn(SERVER_BUSY_READ) in
> >> ap_process_http_[a]sync_connection(), where we already lost the
> >> previous request's line.
> >
> > That's FINE. In the previous call we are already on to another thread;
> > this thread did not perform work on the previous request (or if it had,
> > it had surrendered the thread and accidentally ended back up there.)
>
> Right, that's why/when the new thread clobbers the request-line.
> We are talking about MPM event only here; with sync MPMs we do
> SERVER_BUSY_READ for the first request on the connection only, and my
> goal is to have consistent reporting for all MPMs.

But that's not what the third suggested backport does: it toggles the request
state to "W"rite when there is actually no request received (either it was
actually not ready, or the client had spuriously disconnected).

> > The scoreboard reports what work this *thread* has performed. So
> > far, if it is a wakeup to read a new request, on the event MPM in 2.4
> > this is not request-based work.
>
> The scoreboard reports that the thread is now working on a new
> connection (possibly kept alive), but clears half of the data
> associated with that connection (i.e. the last request/vhost)
> altogether.

Of course it should, because the old request is still parked on the old
worker thread until it is reclaimed, and the newly activated thread for
this connection never processed work for the old request...
> >> That's why I proposed to save the previous request line and host in
> >> the conn_rec, and use them in the scoreboard until a new (real)
> >> request is available (it didn't work as expected because the
> >> scoreboard was clobbered in multiple places before Bill's changes, but
> >> it should work better/more easily now).
>
> > It isn't workable because every time we hack around this issue, we
> > mask the actual errors in the calls to the scoreboard API. In 2.4,
> > the API design is DSS.
>
> DSS?

Dirt simple stupid.

> > Complicating it is a really bad idea, and
> > misleads, if not actually breaks, existing consumers of the scoreboard.
>
> My proposed patch (not that complicated IMHO) shouldn't break anything
> (if working as expected); it is meant for consistency between event
> and other MPMs (now that Rainer fixed the blanks there due to
> keepalive/close), see below.

I still need to review further, but you toggle a writing state for a request
with no inbound request, which is clearly wrong.

> >> That would be: when no r is available but c is, use the saved
> >> request-line/host from c; when r is available, use the ones from r and
> >> update them in c.
> >>
> >> That way there would be no blank at all (IOW, the scoreboard entry
> >> would be left with the latest request handled on the connection, not a
> >> blank which inevitably ends every connection after keepalive timeout
> >> or close, and shouldn't be taken into account as an empty request
> >> IMHO).
>
> > Which is all wrong IMHO. If the request completed, that thread still
> > reports the previous request in an idle or keepalive state until that
> > thread is woken up to do new work.
> Right, for MPM event we can indeed consider that the reporting of the
> previous request for a thread is dead once this thread is given
> another connection, but when yet another thread takes care of the
> former connection, the previous request could show up in its reporting
> (that's how my patch works, faithfully reporting how each MPM works,
> without clobbering data unnecessarily).
>
> The case where this happens is keepalive/lingering-close handling.
> Suppose thread t1 handles request r1 on connection c1, and t1 is
> released after r1 (when c1 is kept alive).
> Same for t2 with r2 on c2.
>
> Now the following cases (not exhaustive):
> 1. r3 arrives on c1, t2 handles the event (read).
> 2. c2 is closed/reset remotely, t1 handles the event (read).
> 3. c1 is closed/reset remotely, t2 handles the event (read).
>
> Without my patch:
> 1. t2 reports r3 on c1, "losing" r2 on c2 (no longer reported by the
> scoreboard), while t1 still reports r1 on c1 (although "superseded" by
> r3).
> 2. t1 reads/closes c2 and clobbers r1 (blank).
> 3. t2 reads/closes c1 and clobbers r3 (blank).
> => t1's and t2's scoreboard request/vhost are blank.
>
> With my patch:
> 1. t2 reports r3 on c1, "losing" r2 on c2 (no longer reported by the
> scoreboard), while t1 still reports r1 on c1 (although "superseded" by
> r3).
> 2. t1 reads/closes c2 and restores r2.
> 3. t2 reads/closes c1 and restores r3.
> => t1's and t2's scoreboard request/vhost contain relevant (last) info.

Thanks for all of the additional details. It would be helpful, throughout
the rest of this discussion, if we summarized before-and-after examples of
the actual scoreboard lines we propose, so we make sure we aren't talking
past one another. I'll study these examples in the morning and check back
in after I've evaluated them, but I'm pretty sure that setting a "W" state
for a request with no request is an oxymoron.

> >> How about the attached patch?
> > This is the right direction for <trunk> - we will be waking up threads
> > to resume work on an already in-flight request, correct? That request
> > needs to be displayed because that is the work this thread is performing.
>
> This would indeed also work in full event mode (i.e. future/trunk),
> but it's legitimate in 2.4.x too, IMHO, since requests already cross
> threads at keepalive/lingering-close time (with MPM event).

No, we don't care.

The only state in 2.4 that we might care about is write-completion: we have
fully satisfied the request, the request was logged, but the data wasn't
fully transmitted (e.g. a sendfile completion state). In that case alone we
would want to report what that thread is continuing to write.

In any other case, keepalive/lingering-close completion, the previous
request did *not happen on our thread*, so the previous work for that
request is still noted (until superseded) on the actual **work thread**,
and this thread is only alive to shutter/continue the connection, with
zero interest in the request which preceded it.

My 2c, which seems to beat all of the other regressions introduced into
the 2.4.x maintenance branch, but I'm happy to be convinced of the value
of introducing other behaviors. I'm more likely to encourage us to shove
these at trunk as a new behavior for the next generation of httpd.