httpd-dev mailing list archives

From dean gaudet <>
Subject Re: woah, "GET /" with autoindex
Date Sun, 07 Jan 2001 21:05:22 GMT
On Sun, 7 Jan 2001 wrote:

> I agree this is a showstopper for the release, but I believe it is fine to
> release a beta with this issue.  There are multiple attacks for solving
> this problem.

my main concern with calling it a showstopper is that i don't know
what the APIs look like yet... and maybe they'll need changing to fix
these bugs.  if you're cool with API changes during beta then it's not
a beta showstopper.

> #1, the sbrk's.  We need to keep a list of free buckets, and not allocate
> a new bucket each time we need one.  At some point, each process will
> create a maximum number of buckets, and we will stop allocating new ones.

are you allocating the buckets out of the request or connection pool?
or via malloc?  if it's malloc or otherwise shared across multiple
threads then we'll run into lock contention on the list/malloc (on
multi-CPU boxes only).

> This solves half the problem, but we are still allocating FAR too many
> buckets for autoindexes.  The problem is that autoindex was written for
> the old API, and we haven't tried to optimize that API yet.  The ap_r*
> functions should all buffer data, so that they don't create a bucket per
> char.

i don't really see ap_r* as the problem, i see ap_bucket_putstr/printf
as the problem.  fixing ap_r* just helps folks writing code within httpd,
it wouldn't help folks using buckets in other apps.

> We don't force zero-copy on anybody, but the core needs to figure out how
> to buffer the data, which is just going to take somebody putting the logic
> in.  It's not terribly complex, but nobody has had the time/inclination.

yeah, there's not much logic.  if a module uses ap_r{put,printf} then it
should be buffered (one-copied), if it uses ap_rwrite/sendmmap/sendfile
then it should be zero-copied.  that's essentially what 1.3 does... and
seems to work fine.

obviously someone could deliberately write code which does printfs of
massive data, but they deserve to break.

