From: "Plüm, Rüdiger, VF-Group"
Reply-To: dev@httpd.apache.org
Subject: RE: Proxy regressions
Date: Wed, 10 Nov 2010 12:09:00 +0100
Message-ID: <99EA83DCDE961346AFA9B5EC33FEC08B051D3D62@VF-MBX11.internal.vodafone.com>
References: <201011032112.06345.sf@sfritsch.de> <201011032128.12775.sf@sfritsch.de> <6224351D-B10A-43C0-AC9C-5349C3BE4DDB@sharp.fm> <201011092154.19384.sf@sfritsch.de> <4CDA437E.1050804@apache.org> <99EA83DCDE961346AFA9B5EC33FEC08B051D3C8E@VF-MBX11.internal.vodafone.com>

> -----Original Message-----
> From: Graham Leggett
> Sent: Wednesday, 10 November 2010 11:47
> To: dev@httpd.apache.org
> Subject: Re: Proxy regressions
>
> On 10 Nov 2010, at 11:49 AM, Plüm, Rüdiger, VF-Group wrote:
>
> >> Have we not created a pool lifetime problem for ourselves here?
> >>
> >> In theory, any attempt to read from the backend connection should
> >> create buckets allocated from the r->connection->bucket_alloc
> >> allocator, which should be removed from the backend connection when
> >> the backend connection is returned to the pool.
> >
> > I guess we need a dedicated bucket allocator, at least in the
> > beginning, as we cannot guarantee that anyone in the
> > create_connection hook uses the bucket allocator to create an object
> > that should persist until the connrec of the backend connection dies.
> >
> > Exchanging the allocator later, each time we get the connection from
> > the conn pool, might create similar risks. But I admit that the
> > latter is only a gut feeling and I simply do not feel well with
> > exchanging the allocator. I have no real hard facts why this cannot
> > be done.
>
> The proxy currently creates the allocator in
> ap_proxy_connection_create(), and then passes the allocator to the
> various submodules via the ap_run_create_connection() hook, so it
> looks like we are just passing the wrong allocator.

The problem is that we keep the connrec structure in the conn pool: it
is not created each time we fetch a connection from the conn pool. This
is required to enable keepalives with SSL backends. As said, if we pass
the bucket allocator from the frontend connection, we possibly end up
with other pool lifetime issues, and SSL is the first case that comes to
mind.
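To illustrate what I mean by a dedicated allocator, roughly (an untested
sketch only; create_backend_conn() is a made-up name, the real code
lives in ap_proxy_connection_create()):

#include "httpd.h"
#include "http_connection.h"   /* ap_run_create_connection() */
#include "mod_proxy.h"         /* proxy_conn_rec */
#include "apr_buckets.h"       /* apr_bucket_alloc_create() */

/* Sketch: tie the backend conn_rec's bucket allocator to the pooled
 * connection's own pool instead of borrowing
 * r->connection->bucket_alloc from the frontend request. */
static conn_rec *create_backend_conn(proxy_conn_rec *backend, server_rec *s)
{
    /* The allocator lives as long as backend->pool, i.e. as long as
     * the pooled connection itself, not as long as one frontend
     * request. */
    apr_bucket_alloc_t *ba = apr_bucket_alloc_create(backend->pool);

    /* Whatever the create_connection hook allocates with this
     * allocator persists until the backend conn_rec dies. */
    return ap_run_create_connection(backend->pool, s, backend->sock,
                                    0 /* conn id */, NULL /* sbh */, ba);
}

The point being that buckets created against this allocator share the
lifetime of the pooled backend connection, not the lifetime of whatever
frontend request happens to be using it at the moment.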
>
> > So without trying to offend anyone, can we see the use case for the
> > asap returning again?
>
> Right now, we are holding backend connections open for as long as it
> takes for a frontend connection to acknowledge the request. A typical
> backend could be finished within milliseconds, while the connection to
> the frontend often takes hundreds, sometimes thousands of
> milliseconds. While the backend connection is being held open, that
> slot cannot be used by anyone else.

Used by whom? As said, if you put it back in the pool and your pool has
the same max size as the number of threads in the process, then there is
some chance that this connection will simply idle in the pool until the
same thread has sent the data to the client and fetches the connection
from the pool again. As said, I can only follow this if the max pool
size is configured to be smaller than the number of threads in the
process. Do you do this?
Another possibility would be that, depending on the request behaviour on
your frontend and the distribution between locally handled requests
(e.g. static content, cache) and backend content, the number of backend
connections actually in the pool does not increase that much (i.e. up to
its max size) if the connection is returned to the pool asap. Do you
intend to get this effect?

>
> In addition, when backend keepalives are kept short (as ours are), the
> time it takes to serve a frontend request can exceed the keepalive
> timeout, creating unnecessary errors.

Why does this create errors? The connection is released by the backend
because it has delivered all data to the frontend server and has not
received a new request within the keepalive timeframe. So the backend is
actually free to reuse these resources. And the frontend will notice
that the backend has disconnected the next time it fetches the
connection from the pool again and will establish a new connection.

>
> This issue is a regression that was introduced in httpd v2.2; httpd
> 2.0 released the connection as soon as it was done.

Because 2.0 had a completely different architecture, and the released
connection was not usable by anyone else but the same frontend
connection, because it was stored in the conn structure of the frontend
request. So the result with 2.0 is the same as with 2.2.

Regards

Rüdiger