Message-ID: <449AA0EE.7050606@netbauds.net>
Date: Thu, 22 Jun 2006 14:53:50 +0100
From: Darryl Miles
To: dev@httpd.apache.org
Subject: Re: mod_proxy_balancer/mod_proxy_ajp TODO
In-Reply-To: <6291fc850606220603x2c495008w9a1fae1b7f4c5d63@mail.gmail.com>

Henri Gomez wrote:
> The TomcatoMips indicator was just something to tell that it's not the
> raw CPU power which is important, but the estimated LOAD capacity of
> an instance.

But it's still Apache working out the TomcatoMips, and I think that approach is still flawed. I'm saying only the server end of the AJP connection knows the true situation. The current setup presumes that the running Apache instance has all the facts necessary to determine balancing, when all it really knows is the work it has given the backend and the rate at which it is getting it back.

I'm thinking both ends, Apache and Tomcat, should make load calculations based on what they know at hand. As far as I know there is no provision in AJP to announce a "willingness to serve". Both ends should feed their available information and configuration biases into their respective algorithms and come out with results that can be compared against each other. The worker would then announce that value down the connector to Apache as necessary (there may be a minimum % change threshold to dampen information flap).
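To make that a bit more concrete, here is a rough sketch of the kind of calculation I have in mind at the backend end. It is purely illustrative: the inputs, weights and names (willingness(), capacity_bias and so on) are made up for the example, and the real thing would live in the Tomcat/AJP connector rather than a standalone program.

/*
 * Purely illustrative: one way the backend end of the connector could
 * boil its local knowledge down to a single "willingness to serve"
 * percentage that is comparable between cluster members.  The inputs,
 * weights and field names are invented for the sake of the example.
 */
#include <stdio.h>

struct backend_stats {
    double load_avg;      /* 1-minute kernel load average           */
    int    cpus;          /* number of CPUs in this box             */
    int    busy_threads;  /* request threads currently busy         */
    int    max_threads;   /* configured thread pool size            */
    double capacity_bias; /* admin-tuned bias, 1.0 = neutral        */
};

/* Return 0..100: 100 = idle and eager, 0 = please back off. */
static int willingness(const struct backend_stats *s)
{
    double cpu_head    = 1.0 - (s->load_avg / s->cpus);     /* CPU headroom  */
    double thread_head = 1.0 - ((double)s->busy_threads
                                / s->max_threads);           /* pool headroom */
    double score;

    if (cpu_head    < 0.0) cpu_head    = 0.0;
    if (thread_head < 0.0) thread_head = 0.0;

    /* Weighted blend of the headroom figures, then the local bias. */
    score = (0.5 * cpu_head + 0.5 * thread_head) * s->capacity_bias * 100.0;

    if (score > 100.0) score = 100.0;
    return (int)score;
}

int main(void)
{
    struct backend_stats a = { 1.5, 4,  20, 200, 1.0 };  /* lightly loaded */
    struct backend_stats b = { 7.0, 2, 180, 200, 1.0 };  /* nearly swamped */

    printf("A announces %d%%, B announces %d%%\n",
           willingness(&a), willingness(&b));
    return 0;
}

The only property that matters is that every cluster member reduces what it knows to the same 0-100 scale, so the numbers are directly comparable when Apache sees them.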
There probably needs to be a random magic number and a wrapping sequence number in the packet to help the Apache end spot obvious problems. This would allow kernel load average/IO load (and anything else) to be periodically taken into account at the Tomcat end.

It would be expected that each member of the backend Tomcat cluster uses the same algorithm to announce willingness; otherwise you get disparity when Apache comes to make a decision.

So I suppose what I am calling for here is just the framework to allow an LB worker to announce its willingness to serve, not any specific algorithm; that issue can be toyed with until the end of time.

An initial implementation would need to experiment and work out:

* How that willingness value impacts/biases the existing Apache LB calculations (there is a rough sketch of what I mean at the end of this mail).
* Guidelines on how to configure the algorithm at each end based on known factors (like number of CPUs, average background workload, relative IO performance).

I'm thinking that with that you can hit the widest audience and make a usable default without giving much thought to configuration. It is the type of approach kernels take these days: you only have to tweak and think about configuration in extreme scenarios, but for the most part it works well out of the box.

Darryl
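P.S. To illustrate the first bullet above, and again only as a sketch rather than real mod_proxy_balancer code: a hypothetical announcement message carrying the magic/sequence sanity fields, and one way the Apache end might fold the announced willingness into the lbfactor it already balances on. The message layout and the effective_lbfactor() helper are inventions for the example.

/*
 * Sketch only, not mod_proxy_balancer code.  A hypothetical announcement
 * message from the backend, plus one way the Apache end could scale the
 * statically configured lbfactor by the latest announced willingness so
 * an unwilling backend shrinks its share without the admin touching the
 * configuration.
 */
#include <stdint.h>
#include <stdio.h>

struct willingness_msg {
    uint32_t magic;       /* random value fixed at backend start-up         */
    uint32_t seq;         /* wrapping sequence number, spots stale/dropped  */
    uint8_t  willingness; /* 0..100 as announced by the backend             */
};

static int effective_lbfactor(int configured_lbfactor,
                              const struct willingness_msg *latest)
{
    int f = configured_lbfactor * latest->willingness / 100;
    return f > 0 ? f : 1;   /* never starve a member completely */
}

int main(void)
{
    struct willingness_msg w1 = { 0xC0FFEE01u, 42u, 90 };  /* eager backend    */
    struct willingness_msg w2 = { 0xC0FFEE02u, 43u, 15 };  /* reluctant backend */

    printf("worker1 effective lbfactor: %d\n", effective_lbfactor(10, &w1));
    printf("worker2 effective lbfactor: %d\n", effective_lbfactor(10, &w2));
    return 0;
}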