httpd-dev mailing list archives

From Thom May <>
Subject Re: Mass Vhosting SuExec (was Re: [PATCH] remove hardcoding of suexec log location)
Date Fri, 03 Jan 2003 14:27:58 GMT
* Colm MacCárthaigh (colmmacc@Redbrick.DCU.IE) wrote :
> On Wed, Jan 01, 2003 at 10:43:18PM +0000, Thom May wrote:
> > * Aaron Bannert () wrote :
> > > The log is generated from the suexec binary, not httpd, right?
> > > Then we can't use a directive to control it and it needs to be
> > > hardcoded for safety.
> >
> > The other issue for suexec is mass vhosting; this has somewhat different
> > needs, and mostly results in ISPs patching suexec to do what they need,
> > which seems like a bad thing unless the ISPs can successfully audit the
> > resulting codebase.
> It's a very bad thing, because in 99.99% of cases it's completely 
> unnecessary!
I'd specifically disagree with you there. Virtually every SA at any ISP I've
spoken to, plus numerous webhosts (including the one I work at), has patches
on suexec.

> > The real problem is that mass vhosting generates large numbers of document
> > roots; covering them all with one docroot compiled into suexec can result
> > in, eg, /home being set as the docroot.
> /home is a very bad docroot, and I'd question the reasons behind
> hosting virtual hosts in /home, the usual reason is for FTP/shell
> access, but that can be solved with symlinks, or just setting
> the homedir elsewhere. 
You're answering specific points, not solving the major problem. Whether or
not /home is a good docroot is not the question I was asking; I was using it
as an example.
> In general for sites with virtual hosts that need SuexecUserGroup
> I set docroot to $prefix/vhosts, and put them all in there, problem
> solved :)
Wonderful when setting up a new system from scratch, but not so useful for
legacy systems and places that already have multiple machines running this way.
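For the record, Colm's scheme above would look something like this in httpd.conf; the paths, names, and user/group are illustrative, not taken from the thread:

```apache
# Build suexec with one docroot covering every vhost, e.g.:
#   ./configure --enable-suexec --with-suexec-docroot=/usr/local/apache2/vhosts

<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /usr/local/apache2/vhosts/example.com
    SuexecUserGroup exampleuser examplegroup
</VirtualHost>
```

Because every DocumentRoot sits under the single compiled-in docroot, the stock suexec check passes without patching; the dispute is over systems that cannot adopt this layout.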

> > Compiling with a list of document roots sounds good in principle, but
> > we on average add a site an hour; recompiling suexec every hour
> > isn't particularly practical, and the
> > configure args would be several miles long :-)
> Every hour! Youch, but are you adding a VirtualHost and restarting
> apache every hour? If not, how are you mapping those URIs and
> how are you associating them with a username/group?

Adding a vhost and restarting apache.
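A "site an hour" workflow like this is usually scripted. A minimal sketch, assuming a conf.d-style vhost directory and the standard SuexecUserGroup directive; the `add_vhost` function, the paths, and the commented-out reload command are assumptions, not the poster's actual setup:

```shell
# Hypothetical helper: drop a per-vhost config file into $VHOST_DIR.
add_vhost() {
  domain=$1; user=$2; group=$3
  mkdir -p "$VHOST_DIR"
  cat > "$VHOST_DIR/$domain.conf" <<EOF
<VirtualHost *:80>
    ServerName $domain
    DocumentRoot /usr/local/apache2/vhosts/$domain
    SuexecUserGroup $user $group
</VirtualHost>
EOF
  # apachectl graceful   # assumption: a graceful reload picks up the new vhost
}
```

Note the step this script cannot automate: if the new DocumentRoot falls outside suexec's compiled-in docroot, no amount of config regeneration helps, which is the crux of the complaint above.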

> If you are, and if this is common, there is some limited justification
> for getting suexec to support such situations. But against that is
> the reality that in order to support it suexec would have to parse
> every single configuration file, determine which VirtualHost blocks
> have SuexecUserGroup directives and remember their Docroot, that's
It's already having to read SuexecUserGroup; why not use SuexecDocroot where
necessary as well? (Yes, it's duplication of data, but it wouldn't
necessarily be used for every vhost that sets a SuexecUserGroup.)

> an awful lot of work for something that's exec'd for every CGI
> and is security critical.
Agreed, but in a lot of cases suexec is being run with reduced security to
work around this problem anyway.
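To make the proposal above concrete: the SuexecDocroot directive being debated does not exist in httpd; it is only a sketch of what a per-vhost override might look like, roughly:

```apache
<VirtualHost *:80>
    ServerName legacy.example.net
    DocumentRoot /home/legacy/www
    SuexecUserGroup legacy legacy
    # Hypothetical directive from this thread: would override the
    # compiled-in docroot for this vhost only.
    SuexecDocroot /home/legacy/www
</VirtualHost>
```

The security cost Colm identifies is that suexec, not httpd, would then have to parse the configuration to recover these values on every CGI invocation.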
> > It seems to me that a different binary would be the best path;
> > suexec-mass-vhost or whatever. It needs to be able to work correctly with
> > mod_vhost_alias, and it potentially needs to be able to take docroot
> > arguments from httpd.conf.
> suexec will never "work correctly" with vhost_alias, or mod_rewrite :)
> How would you tell it what username/group to use ?
Tony points out one methodology for this, I'm sure there are others.
