Delivered-To: new-httpd-archive@hyperreal.org
Received: (qmail 29586 invoked by uid 6000); 8 Feb 2000 19:31:09 -0000
Received: (qmail 29482 invoked from network); 8 Feb 2000 19:31:04 -0000
Received: from red.csi.cam.ac.uk (exim@131.111.8.70)
  by taz.hyperreal.org with SMTP; 8 Feb 2000 19:31:04 -0000
Received: from dax.joh.cam.ac.uk ([131.111.237.83] ident=noone)
  by red.csi.cam.ac.uk with esmtp (Exim 3.13 #1)
  id 12IGLh-0002Eo-00
  for new-httpd@apache.org; Tue, 08 Feb 2000 19:30:57 +0000
Received: from localhost (noone@dax.joh.cam.ac.uk [127.0.0.1])
  by dax.joh.cam.ac.uk (8.9.3/8.9.3) with ESMTP id TAA19438
  for ; Tue, 8 Feb 2000 19:30:57 GMT
Date: Tue, 8 Feb 2000 19:30:57 +0000 (GMT)
From: James Sutherland
X-Sender: jas88@dax.joh.cam.ac.uk
To: new-httpd@apache.org
Subject: Re: mod_proxy: proposal for v2.0
In-Reply-To: <38A05F16.7C75E452@sharp.fm>
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: new-httpd-owner@apache.org
Precedence: bulk
Reply-To: new-httpd@apache.org
Status: O

On Tue, 8 Feb 2000, Graham Leggett wrote:

> James Sutherland wrote:
>
> > I must admit, it's not a solution I would consider ideal...
> >
> > What about using Squid as a frontend? It could (with a rewriter) handle
> > both the load-balancing and URL-based proxying easily enough, while
> > Apache would handle all the backend work. Squid can also produce full
> > logs.
> >
> > That's probably the way I would do it (depending, of course, on the
> > exact circumstances: how many servers? What bandwidth? etc.)
>
> When I last checked squid couldn't write split logfiles,

Split by vhost (one entry per file, in a different file depending on URL)
or by content (different entries in multiple files)? In either case, I'd
use a cron script to split up log files as needed, rather than imposing
the extra overhead directly. Or use the backend server logs, with a
suitable script to merge entries as needed.
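A rough sketch of the kind of cron-driven log splitter described above, assuming Squid's native access.log layout (the request URL is the seventh whitespace-separated field); the function name and the idea of keying on the URL's hostname are mine, not anything Squid ships:

```python
# Hypothetical splitter for Squid native access.log lines: group each
# entry by the host in the request URL, so a cron job can write one
# logfile per vhost after the fact instead of splitting in real time.
from collections import defaultdict
from urllib.parse import urlsplit

def split_by_vhost(lines):
    """Return a dict mapping request hostname -> list of matching log lines."""
    per_host = defaultdict(list)
    for line in lines:
        fields = line.split()
        if len(fields) < 7:
            continue  # skip malformed or truncated entries
        host = urlsplit(fields[6]).hostname or "unknown"
        per_host[host].append(line)
    return dict(per_host)
```

A cron wrapper would then append each bucket to its own per-vhost file every few minutes, which is exactly the trade-off discussed below: the split is periodic rather than real-time.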
> handle URL redirects,

It can pass different URLs to different servers in any way you care to
define (all URLs containing cgi-bin to one server, all .shtml files to
another...) easily enough using a rewriter, as I suggested.

> password protect arbitrary URLs

This is done by the backend server as normal.

> or support SSL (direct SSL, not SSL tunneling),

True; depending on requirements, I would either use one [group of]
servers in a round-robin serving secure content directly, or have SSL
front-end boxes running Apache+SSL.

> these are all webserver functions. Squid is a forward proxy first,
> with an http accelerator bolted on. Apache is a webserver first with a
> reverse_proxy bolted in. This to us makes a world of difference.

Depending on what "us" requires, I would be more inclined toward the
Squid solution; I would have thought the overhead of multiple extra
Apache layers handling every request would more than outweigh the benefit
of having the log files divided up in real time, rather than only every
few minutes...

James.
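The rewriter mentioned above could be sketched roughly as follows. Squid-era redirectors read one request per line on stdin ("URL client/fqdn ident method") and write back the possibly-rewritten URL; the routing rules and backend hostnames here are purely illustrative assumptions, not anything from the thread:

```python
# Minimal sketch of a Squid redirector that routes CGI requests and
# .shtml pages to dedicated backend servers. Backend names are
# hypothetical; adapt the rules to taste.
import sys
from urllib.parse import urlsplit

def rewrite(url):
    """Return the (possibly rewritten) URL for one request."""
    path = urlsplit(url).path
    if "/cgi-bin/" in path:
        return url.replace("www.example.com", "cgi.example.com", 1)
    if path.endswith(".shtml"):
        return url.replace("www.example.com", "ssi.example.com", 1)
    return url  # everything else stays on the default backend

def redirector_loop(stream_in, stream_out):
    # Squid sends "URL client/fqdn ident method" per line and expects
    # exactly one reply line per request, unbuffered.
    for line in stream_in:
        fields = line.split()
        stream_out.write((rewrite(fields[0]) if fields else "") + "\n")
        stream_out.flush()

if __name__ == "__main__":
    redirector_loop(sys.stdin, sys.stdout)
```

Because the redirector only sees URLs, this handles the URL-based routing and crude load balancing; authentication and SSL still belong to the backends, as argued above.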