apr-dev mailing list archives

From r..@covalent.net
Subject Re: buckets and rputs/rprintf
Date Sun, 07 Jan 2001 21:01:31 GMT

> as near as i can tell the correct fix to this problem is to fix the
> ap_brigade_putstr and _printf and related functions to do buffering a la
> apache-1.3.

Those functions aren't used anywhere, so modifying them won't help this
problem.  The solution is to modify the ap_r* functions in Apache to buffer
correctly.  The real problem is that Apache's mod_autoindex is using the old
ap_r* functions instead of using the buckets directly in a sane way.

> my suggestion is to allocate 4k buffers, and put them into the brigade
> only when they're full, and use a 1.3-style bprintf/bputs which write
> direct into these buffers.

Yes, but the buffer doesn't belong in the bucket code IMO, it belongs in
the Apache code.  The problem there is that once we start to buffer, you
can't use a combination of the bucket functions and the ap_r*
functions.  The second question is where the buffer would live.  In the
old BUFF code it was easy, because there was only one BUFF per
request.  Now, we have multiple buckets per request.  We could put the
buffer in the brigade itself, but that makes the brigade much more
heavy-weight than it is right now.  One of the very nice things about the
current brigades is that they are so light-weight.  We can drop them on
the ground and let the pool worry about them.  If we are allocating a 4k
buffer inside of them, we really have to take the brigade outside of the
pool and use malloc/free on it, which takes away some of the pool's safety.
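The 1.3-style staging-buffer idea can be sketched without any APR types at all.  This is a minimal model, not Apache code: `writer`, `bucket`, `buffered_puts`, and `buffered_flush` are all hypothetical names, and the linked list of length-tagged blocks stands in for a real apr_bucket_brigade of heap buckets.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define STAGE_SIZE 4096   /* 1.3-style 4k staging buffer */

/* Stand-in for a heap bucket: a length-tagged copy of the data. */
struct bucket {
    char *data;
    size_t len;
    struct bucket *next;
};

struct writer {
    char stage[STAGE_SIZE];       /* staging buffer for small writes */
    size_t used;                  /* bytes currently staged */
    struct bucket *head, *tail;   /* stand-in for the brigade */
    int nbuckets;
};

/* Copy `len` bytes into a new bucket at the brigade's tail. */
static void push_bucket(struct writer *w, const char *data, size_t len)
{
    struct bucket *b = malloc(sizeof(*b));
    b->data = malloc(len);
    memcpy(b->data, data, len);
    b->len = len;
    b->next = NULL;
    if (w->tail) w->tail->next = b; else w->head = b;
    w->tail = b;
    w->nbuckets++;
}

/* Accumulate small writes; emit a bucket only when the stage fills. */
static void buffered_puts(struct writer *w, const char *s)
{
    size_t len = strlen(s);
    while (len > 0) {
        size_t room = STAGE_SIZE - w->used;
        size_t n = len < room ? len : room;
        memcpy(w->stage + w->used, s, n);
        w->used += n;
        s += n;
        len -= n;
        if (w->used == STAGE_SIZE) {
            push_bucket(w, w->stage, w->used);
            w->used = 0;
        }
    }
}

/* Flush any partial stage, e.g. at the end of the request. */
static void buffered_flush(struct writer *w)
{
    if (w->used > 0) {
        push_bucket(w, w->stage, w->used);
        w->used = 0;
    }
}
```

Under this scheme a generator making thousands of tiny ap_rputs-style calls produces a handful of 4k buckets plus one partial bucket at flush time, instead of one bucket per call.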

When I originally wrote some of the bucket code, I had heap buckets always
allocate a 4k buffer, and if we were adding to the end of a brigade whose
last bucket was a heap bucket, we just appended the new data into that
bucket.  This turns out to be a very difficult thing to do well.
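That early design can be modeled the same way: append into the brigade's last heap bucket while it has room, and allocate a fresh 4k bucket only when it fills.  The `heap_bucket`, `brigade`, and `brigade_append` names below are illustrative, not the real bucket API; in real APR code, shared and split bucket storage is part of what makes this hard to do well.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define HEAP_SIZE 4096   /* every heap bucket owns a fixed 4k buffer */

/* Illustrative stand-ins only -- not the real APR bucket types. */
struct heap_bucket {
    char data[HEAP_SIZE];
    size_t len;                   /* bytes used so far */
    struct heap_bucket *next;
};

struct brigade {
    struct heap_bucket *head, *tail;
    int nbuckets;
};

/* Append data, filling the last heap bucket before allocating a new one. */
static void brigade_append(struct brigade *bb, const char *s, size_t len)
{
    while (len > 0) {
        struct heap_bucket *last = bb->tail;
        if (last == NULL || last->len == HEAP_SIZE) {
            /* Tail is missing or full: start a fresh 4k bucket. */
            last = calloc(1, sizeof(*last));
            if (bb->tail) bb->tail->next = last; else bb->head = last;
            bb->tail = last;
            bb->nbuckets++;
        }
        size_t n = HEAP_SIZE - last->len;
        if (n > len) n = len;
        memcpy(last->data + last->len, s, n);
        last->len += n;
        s += n;
        len -= n;
    }
}
```

Two five-byte appends end up in one bucket here, where naive bucket creation would have produced two.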

We have a coalesce filter in Apache that coalesces multiple buckets in
the same brigade into a single bucket.  This is one way to avoid writing
such small blocks of data.
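The coalescing pass itself is simple to model: concatenate a chain of small buckets into one big one.  The `bucket`, `make_bucket`, and `coalesce` names are hypothetical stand-ins; a real filter would read each bucket with the bucket API and emit the combined data as a single heap bucket.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative stand-in for a brigade of buckets. */
struct bucket {
    char *data;
    size_t len;
    struct bucket *next;
};

static struct bucket *make_bucket(const char *s, size_t len)
{
    struct bucket *b = malloc(sizeof(*b));
    b->data = malloc(len);
    memcpy(b->data, s, len);
    b->len = len;
    b->next = NULL;
    return b;
}

/* Merge the whole chain into one bucket, freeing the originals --
 * the same effect a coalesce filter has on a brigade of tiny buckets. */
static struct bucket *coalesce(struct bucket *head)
{
    size_t total = 0;
    for (struct bucket *b = head; b; b = b->next)
        total += b->len;

    struct bucket *out = malloc(sizeof(*out));
    out->data = malloc(total);
    out->len = total;
    out->next = NULL;

    size_t off = 0;
    while (head) {
        memcpy(out->data + off, head->data, head->len);
        off += head->len;
        struct bucket *next = head->next;
        free(head->data);
        free(head);
        head = next;
    }
    return out;
}
```

The trade-off is an extra copy of every byte, which is why buffering at the generator is still preferable to coalescing after the fact.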

I personally don't mind stupid generators performing badly.  I would much
rather just re-write them to work well than try to force the old API
to perform as well as the new one.

Ryan Bloom                        	rbb@apache.org
406 29th St.
San Francisco, CA 94131
