Message-ID: <4A01E321.8090907@kippdata.de>
Date: Wed, 06 May 2009 21:21:05 +0200
From: Rainer Jung
To: dev@httpd.apache.org
Subject: Re: mod_proxy hooks for clustering and load balancing
References: <4239a4320905061126n6b851f1g8f61dbb6b0f84756@mail.gmail.com>
In-Reply-To: <4239a4320905061126n6b851f1g8f61dbb6b0f84756@mail.gmail.com>

On 06.05.2009 20:26,
Paul Querna wrote:
> There is lots of discussion about fixing mod_proxy and
> mod_proxy_balancer, to try to make it do things that the APIs are just
> broken for, and right now, it seems from the outside to be turning
> into a ball of mud.
>
> I think the right way to frame the discussion is: how should the API
> optimally be structured? Then change the existing one to be closer
> to that, rather than the barrage of incremental changes that seem to
> be creating lots of cruft and ending up with something that still
> doesn't do what we want.
>
> I think mod_proxy's decisions on what to proxy to, and where, should
> be designed as a series of hooks/providers, specifically:
>
> 1) Provider for a list of backends -- This provider does nothing with
> balancing, it just provides a list of Backend Definitions (preferably
> kept as plain apr_sockaddr_t?) that a Connection is able to use. --
> Backend status via multicast or other methods goes here.
>
> 2) Provider that _sorts_ the list of backends. Input is a list,
> output is a new ordered list. -- Sticky sessions go here, along with
> any load-based balancing.
>
> 3) Provider that, given a Backend Definition, returns a connection
> (pools connections, or opens a new one, whatever). -- Some of
> proxy_util and the massive worker objects go here.
>
> Using this structure, you can implement a dynamic load balancer
> without having to modify the core. I think the key is to _stop_
> passing around the gigantic monolithic proxy_worker structures, and
> go to having providers that do simple operations: get a list, sort
> the list, get me a connection.
>
> Thoughts?

Sounds good.
The provider in 2) needs a second function/API to feed back the results of a request:

- whether the backend was detected as being broken
- when using piggybacked load data, or (as we do today) locally generated load data, the updates must be made against provider 2) after the response has been received (or, in the case of busyness, once before forwarding the request and once after receiving the response)

These providers look similar to what I called the "topology manager" and "state manager", and you want to include the balancing/stickiness decision in the state manager. My remark above indicates that provider 2) needs to make the decision *and* update the data on which the decision is based. This data update could happen behind the scenes, but in most cases it will need an API driven by the request handling component.

Regards,

Rainer