Return-Path:
Delivered-To: new-httpd-archive@hyperreal.org
Received: (qmail 25590 invoked by uid 6000); 1 Mar 1999 12:15:15 -0000
Received: (qmail 25574 invoked from network); 1 Mar 1999 12:15:13 -0000
Received: from ecpport2.ecopost.com.au (HELO ecpport2.midcoast.com.au) (203.28.64.15)
  by taz.hyperreal.org with SMTP; 1 Mar 1999 12:15:13 -0000
Received: from midcoast.com.au (kmidc65-61.ecopost.com.au [203.28.65.61])
  by ecpport2.midcoast.com.au (8.9.1/8.8.5) with ESMTP id XAA14378
  for ; Mon, 1 Mar 1999 23:14:47 +1100
Message-ID: <36DA893D.31139DED@midcoast.com.au>
Date: Mon, 01 Mar 1999 12:34:05 +0000
From: "Michael H. Voase"
X-Mailer: Mozilla 4.06 [en] (X11; I; Linux 2.0.36 i486)
MIME-Version: 1.0
To: new-httpd@apache.org
Subject: Re: Apache 2 multiple pools feature request
References:
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Sender: new-httpd-owner@apache.org
Precedence: bulk
Reply-To: new-httpd@apache.org

Can I make a suggestion at this point in the discussion? The topic of
running multiple interpreters using separately configured http servers was
something that spurred me into kludging mod_cgi into mod_cgisock. When I
looked into the situation of running mod_perl with Apache, it struck me
that separating the interpreter from the http server would prevent the
interpreter from wandering around in Apache's memory space and possibly
causing problems. Also, if one is using a language that is capable of
implementing Unix domain sockets (Tcl and Java being notable exceptions),
one can write separate daemons to service http CGI requests. Before anyone
bites my head off, I am aware that cgisock places a lot of limitations on
what a CGI programmer can and cannot do with their script, but it does
provide an insulating layer between Apache and the interpreter, such that
it is difficult for a wayward script to cause Apache much grief.
I am currently working on a cgisock server wrapper for a Perl interpreter,
written in C, to hide the more gruesome details of cgisock services from
the CGI programmer if they want that; however, it is not a far stretch of
the imagination to implement a similar wrapper for Tcl, Java, PHP, BASIC
or whatever. The end result is that one can run an http server without the
need to configure dynamically loaded modules or separate builds of Apache,
and still retain a fairly modest memory footprint. (The other bonus is a
finer grain of control over how many threads/processes per URL you want,
independent of how many http servers one is running.)

While I have your ear for the moment, I am also looking for comments on
the concept of an interactive http connection that can serve multiple
sequential requests over a single http connection. It is an idea I am
still trying to determine whether it is worth pursuing or dropping. The
idea was raised a couple of weeks ago on apache-modules, and it is
something that I think can be implemented using code from mod_cgi,
mod_perl, mod_cgisock and/or http_core. The original suggestion was for an
interactive 'shell', and I am wary of having an http connection wait
around for user input; however, the idea of a browser polling several
requests down the same socket could produce interesting results.

If anyone has any comments on this topic, or wishes to tell me I am
're-inventing' the wheel again, all comments would be gratefully welcomed.

Just a couple of thoughts anyway..

Cheers Mik Voase.

Brian Behlendorf wrote:
>
> For what it's worth, the jserv developers have the same problem. Of
> course, they have an advantage, in that their model is to run a backend
> daemon for java requests, and it's not difficult to configure more than
> one daemon running as different UID's.
>
> Formalizing this as something that Apache does makes some sense to me;
> e.g., having a standard interface to back-end daemons for launching (as
> root, setting to a particular UID) / restarting / killing, distributing
> requests, etc. Maybe even a standard back-end protocol, like what jserv
> uses today.
>
> I'm sure the Perl folks would prefer to write their daemon in
> multithreaded Perl, and of course the jserv developers are already using a
> JVM for this, so I don't think we'd need to go so far as create daemon
> stub code or anything. This would be a real boon to those people being
> asked to run PHP *and* jserv *and* mod_perl on their systems (like me at
> taz :) - instead of a mod_perl, mod_php and mod_jserv, we just have
> mod_relay which relays a compiled version of the request (or subrequest)
> to the backend daemon based on request parameters, and awaits the
> response.
>
> Yes, you have the IPC overhead, but on modern architectures that's less
> significant. Or so I've been led to believe - the benchmarks out of
> mod_jserv are pretty nice.
>
> This is still relevant when we go multithreaded in 2.0, too. It'll be
> nice to insulate the raw HTTP engine from dangerous code in dynamic
> content engines which may be less robust.
>
> Rasmus, what do you think? I bet you could get started by writing
> something using the jserv protocol already :)
>
> Brian

--
----------------------------------------------------------------------------
 /~\     /~\        CASTLE INDUSTRIES PTY. LTD.
 | |_____| |        Incorporated 1969. in N.S.W., Australia
 |         |        Phone +612 6562 1345  Fax +612 6567 1449
 |  /~\    |        Web http://www.midcoast.com.au/~mvoase
 |  [ ]    |        Michael H. Voase.  Director.
 ~~~~~~~~~~~~~~     I wouldn't have Windoze for *nix .
----------------------------------------------------------------------------