Subject: Re: 2.2 mod_http_proxy and "partial" pages
Date: Wed, 7 Dec 2005 10:21:10 +0100
From: Plüm, Rüdiger, VIS
To: dev@httpd.apache.org

> On December 7, 2005 2:00:19 AM +0100 Ruediger Pluem wrote:
>
>>> The patches to mod_proxy_http we identified here on list do indeed work
>>> and are in as r354628.
>>
>> Sorry for stepping in that late into the discussion, but wouldn't it be
>> better to fix that after the return from proxy_run_scheme_handler in
>> mod_proxy?
>
> The error has to be flagged inside the HTTP scheme before the error is
> lost. Without this patch, mod_proxy_http returns 'success'
> unconditionally. That is clearly wrong and that's what I changed.

Yes, of course the scheme handler must signal the proxy handler that the
backend broke. Just returning 'success' in this case is plain wrong.

>> I fear that mod_proxy_ajp is affected by the same problem that is now
>> fixed in mod_proxy_http. This means we put the burden of handling this
>> in a unified way on each proxy backend module. How about letting the
>> scheme handler simply return a specific value (DONE or whatever) to
>> signal that the backend broke in the middle of sending the response,
>> and let mod_proxy handle the dirty work.
>
> That's what it does right now. What would you like to change?

I would like to set c->aborted in mod_proxy's proxy_handler after the
run_scheme_handler call, for two reasons:

1. We can define a clear interface for the scheme handlers here: if the
   backend broke before you sent headers, just return BAD_GATEWAY and
   send nothing; if it broke afterwards, just return BROKEN_BACKEND (or
   whatever name should be defined for this case). The proxy handler
   would handle this BROKEN_BACKEND return code and do the 'right' thing
   (currently setting c->aborted).
   Thus we do not need to put the burden of the details on each scheme
   handler (why I regard it as a burden, see 2.).

2. I am not 100% happy with the c->aborted approach, as the original
   intention of c->aborted was a different one (the connection to the
   *client* broke, not the one to the *backend*). I admit that I do not
   see any other approach currently, so we should stick with this. But
   if we decide to change this later on and we follow 1., it is much
   easier to change, since we have this code in only *one* location and
   not in every scheme handler.

[..cut..]

> An error bucket is already sent down the chain when the specific
> connection error I hit with the chunked line occurs through HTTP_IN,
> but that accomplishes little because the HTTP filters which understand
> the error buckets have already gone, as the headers have been sent.
>
> FWIW, an error bucket, by itself, would not be enough; the connection
> close logic is only implemented well outside of the filter logic. At
> best, it has to be an error bucket combined with a returned status code
> that can be returned all the way up. -- justin

Ahh, ok. Thanks for the clarification.

Regards

Rüdiger