httpd-dev mailing list archives

From Nathan Ollerenshaw <>
Subject Re: Advanced Mass Hosting Module
Date Fri, 14 Mar 2003 06:18:46 GMT
On Friday, March 14, 2003, at 09:55 AM, David Burry wrote:

> These are neat ideas.  At a few companies I've worked for we already do
> similar things but we have scripts that generate the httpd.conf files
> and distribute them out to the web servers and gracefully restart.
> Adding a new web server machine to the mix is as simple as adding the
> host name to the distribution script.

Yup. Not too dissimilar to what we use right now. We have a shared NFS 
filesystem mounted on all the Apache servers with a single-level tree 
of config files, one per domain. Apache just includes the whole tree 
from its base config.

This sucks, performance-wise. Convenience-wise, it's great.
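For reference, the setup looks roughly like this (the NFS mount point and file layout here are illustrative, not our actual paths):

```apache
# httpd.conf sketch: one config file per domain on the shared NFS mount,
# e.g. /nfs/vhosts/example.com.conf, each holding that domain's
# <VirtualHost> block. Wildcard Include pulls in the whole tree.
NameVirtualHost *:80
Include /nfs/vhosts/*.conf
```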

The NFS server is a High Availability setup, so that's cool. And even if 
I was worried about the NFS going away and the server not being able to 
read its configs, the point is moot: the NFS server also holds the site 
content, so there'd be nothing to serve anyway.
> What you're talking about doing sounds like a lot more complexity to
> achieve a similar thing, and more complexity means there's a lot more
> that can go wrong.  For instance, what are you going to do if the LDAP

Normally, I'd agree. But as was mentioned before, you have to 
load thousands, or if you're really lucky, tens of thousands of virtual 
hosts into your apache daemon. Eventually the apache 
daemon starts using an inordinate amount of RAM just to hold all those 
configurations in memory, and reloading takes an age.

At least with 1.3, I saw massive memory usage when loading 5,000 
virtualhosts in a test. I am not sure about 2.0.
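For what it's worth, when the per-domain configs are mostly identical, the stock mod_vhost_alias (in both 1.3 and 2.0) sidesteps the memory problem by mapping the hostname onto the filesystem instead of loading a <VirtualHost> block per domain; the trade-off is you lose per-site settings, like a PHP toggle. A sketch (the docroot layout is made up):

```apache
# mod_vhost_alias sketch: no per-domain <VirtualHost> blocks at all.
# The /home/vhosts layout is an assumption for illustration.
UseCanonicalName Off
# %0 is the whole requested hostname, so www.example.com is served
# from /home/vhosts/www.example.com/htdocs
VirtualDocumentRoot /home/vhosts/%0/htdocs
```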

Besides, I don't want to have to keep restarting my apache daemon 
*every time* someone wants to enable/disable php on their site. It 
ruins the uptime! ;)

> server is down, are many not-yet-cached virtual hosts just going to
> fail?  In our scenario it's solved simply and easily by the generation
> script simply failing and nothing being copied (but at least the web
> servers keep working fine with the last config revision, so not 
> many/any
> end user web surfers will notice the outage).

Have more than one LDAP server :) This is easy to do, since LDAP 
replication is designed for it, and as long as the client software is 
smart (it stops trying to use a borked LDAP server) you won't even 
notice the failure of a back-end LDAP slave.
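The "smart client" part can be as simple as walking the server list until one answers. A minimal sketch of the idea (the connect and query callables are stand-ins for whatever your actual LDAP library provides, not a real API):

```python
# Failover sketch: try each LDAP server in order, return the first
# successful result, and only give up when every server has failed.
def query_with_failover(servers, connect, query):
    last_error = None
    for server in servers:
        try:
            conn = connect(server)   # raises if this server is borked
            return query(conn)       # first healthy server wins
        except Exception as err:
            last_error = err         # remember why, keep trying
    raise RuntimeError("all LDAP servers failed") from last_error
```

A real client would also remember which servers recently failed and skip them for a while, so every lookup doesn't pay a connection timeout for a dead slave.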

Besides, LDAP is much-maligned, and unfairly so. I've been running LDAP 
in production systems for a long time now, and I've never had one just 
up and die on me.
The ability to store all your configuration data in one place outweighs 
the inconvenience of having to manage another set of servers.


Nathan Ollerenshaw - Systems Engineer - Shared Hosting
ValueCommerce Japan -

In the days, When we were swinging from the trees
I was a monkey, Stealing honey from a swarm of bees
I could taste, I could taste you even then
And I would chase you down the wind
