apr-dev mailing list archives

From r..@covalent.net
Subject Re: buckets and rputs/rprintf
Date Sun, 07 Jan 2001 21:59:24 GMT

> > The second question is where would the buffer live?  In the
> > old BUFF code, it was easy, because there was only one BUFF per
> > request.  Now, we have multiple buckets per request.  We could put the
> > buffer in the brigade itself, but that makes the brigade much more
> > heavy-weight than it is right now.  One of the very nice things about the
> > current brigades, is that they are so light-weight.  We can drop them on
> > the ground, and let the pool worry about them.  If we are allocating a 4k
> > buffer inside of them, we really have to take the brigade outside of the
> > pool and use malloc/free on it, which takes away some security.
> dude, buckets aren't "light-weight"... look at the malloc()s and the

No, brigades are light-weight; buckets really aren't.  We have tried to
make the buckets more light-weight, but obviously we haven't gotten it
right yet.  But you can't just put the buffer into a bucket, because
different buckets have different properties.
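To illustrate the point about different bucket properties, here is a minimal sketch (invented struct and function names, not the real APR bucket API): each bucket type stores its data in a different way, so there is no single place inside "a bucket" where a shared 4k write buffer could live, and appending in place only even makes sense for the heap variant.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch only -- not the real APR bucket structs.
 * Each bucket type stores its data differently, so there is no uniform
 * spot inside a bucket to hang a 4k write buffer. */
typedef enum { BUCKET_TRANSIENT, BUCKET_HEAP, BUCKET_FILE } bucket_type;

typedef struct {
    bucket_type type;
    union {
        const char *transient;   /* caller-owned memory; may vanish */
        struct { char *base; size_t used, alloc; } heap;  /* bucket-owned */
        struct { int fd; long offset, length; } file;     /* no memory at all */
    } u;
} bucket;

/* Appending in place is only possible for a heap bucket that owns
 * (and has room in) its own memory. */
static int bucket_try_append(bucket *b, const char *data, size_t len)
{
    if (b->type != BUCKET_HEAP)
        return 0;                       /* no owned buffer to append into */
    if (b->u.heap.used + len > b->u.heap.alloc)
        return 0;                       /* full: caller must start a new bucket */
    memcpy(b->u.heap.base + b->u.heap.used, data, len);
    b->u.heap.used += len;
    return 1;
}
```

A transient or file bucket simply has nothing to append into, which is why a filter mixing bucket types defeats any per-bucket buffering scheme.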

> writev() crud.  that's not light weight.  as far as you've explained to me
> so far you don't have a good solution for either of these problems.
> look at 2.0 ap_rputs:
>     bb = ap_brigade_create(r->pool);
>     b = ap_bucket_create_transient(str, len);
>     AP_BRIGADE_INSERT_TAIL(bb, b);
>     ap_pass_brigade(r->output_filters, bb);
> look at 1.3 ap_rputs:
>     rcode = ap_bputs(str, r->connection->client);
> and 1.3 ap_bputs:
>     int i, j = strlen(x);
>     i = ap_bwrite(fb, x, j);
> tada.  2.0 allocates a brigade, a bucket, a memory region, copies into the
> memory region, appends it to a list.

Which is completely wrong.  We should be buffering the data, and putting
the buffer into a heap bucket, so that everything is one-copy.  The
buffering goes into the ap_r* functions though, so it is do-able.
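A minimal sketch of the buffering described above, with invented names (`write_buf`, `buffered_puts` are hypothetical, not Apache API): the ap_r* layer accumulates small writes into a per-request 4k buffer, and only when the buffer fills (or on an explicit flush) is it sent down the filter chain as a single heap bucket -- one copy total, instead of a brigade, bucket, and copy per rputs call.

```c
#include <assert.h>
#include <string.h>

#define RBUF_SIZE 4096

/* Hypothetical per-request write buffer; in Apache this would hang off
 * the request, and flushing would wrap the data in a heap bucket and
 * call ap_pass_brigade(r->output_filters, bb). */
struct write_buf {
    char data[RBUF_SIZE];
    size_t used;
    size_t flushes;   /* stands in for passing a heap bucket downstream */
};

static void wb_flush(struct write_buf *wb)
{
    /* real code: build one heap bucket from wb->data and pass it down */
    wb->flushes++;
    wb->used = 0;
}

/* Buffered equivalent of ap_rputs: copy into the 4k buffer, and only
 * hand data to the filter chain when the buffer fills. */
static void buffered_puts(struct write_buf *wb, const char *str)
{
    size_t len = strlen(str);
    while (len > 0) {
        size_t room = RBUF_SIZE - wb->used;
        size_t n = len < room ? len : room;
        memcpy(wb->data + wb->used, str, n);
        wb->used += n;
        str += n;
        len -= n;
        if (wb->used == RBUF_SIZE)
            wb_flush(wb);
    }
}
```

With this shape, five hundred 10-byte rputs calls cost one flush down the filter chain rather than five hundred brigades.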

> > When I originally wrote some of the bucket code, I had heap buckets
> > always allocate a 4k buffer, and if we were adding to the end of a
> > brigade that has a heap at the end, we just appended the new data into
> > the last bucket.  This ends up being a very difficult thing to do
> > well.
> it's difficult, so?  look at the 1.3 BUFF and tell me it wasn't difficult
> to get right :)

But it's more than that.  Most filters aren't going to create just heap
buckets; they are going to create a bunch of different bucket types.  Take
a look at mod_include.  In that case, the 4k buffer just doesn't help.

> > I personally don't mind stupid generators performing badly.  I would much
> > rather just re-write them to work well, than to try to force the old API
> > to perform as well as the new API.
> i don't consider mod_autoindex "stupid".  are you aware that PHP uses
> rputs/etc. as well?  i don't consider PHP stupid either.
> the old API was convenient for the module programmer.  the new API is
> convenient for you.  that's the wrong solution.

The thing is, the new API is the only API for filter writers, so PHP is
strictly a generator right now.  The old API just doesn't interact
correctly with the new model.  That is a lack of time and nudging, not a
lack of ability.  There have been many people talking about making the
old API work much better with the underlying buckets, but nobody has
done it yet.  The buffer is the correct solution, but it belongs in
Apache's API, not the bucket API.  The bucket API just doesn't have a
place to put it that makes sense.
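To make the placement argument concrete, here is a sketch with invented names (`heap_bucket`, `brigade`, `flush_as_heap_bucket` are all hypothetical): the 4k buffer lives in Apache's per-request state, and flushing it performs the single copy into bucket-owned heap memory, so brigades and buckets themselves stay cheap enough to drop on the floor and let the pool clean up.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy stand-ins for a heap bucket and a brigade (a singly linked list). */
typedef struct heap_bucket {
    char *base;                 /* bucket owns this memory */
    size_t len;
    struct heap_bucket *next;
} heap_bucket;

typedef struct { heap_bucket *head, *tail; } brigade;

/* One copy: request-buffer contents -> bucket-owned heap memory.
 * The buffer itself never lives in the brigade or the bucket API. */
static void flush_as_heap_bucket(brigade *bb, const char *buf, size_t used)
{
    heap_bucket *b;
    if (used == 0)
        return;
    b = malloc(sizeof(*b));
    b->base = malloc(used);
    memcpy(b->base, buf, used);
    b->len = used;
    b->next = NULL;
    if (bb->tail)
        bb->tail->next = b;
    else
        bb->head = b;
    bb->tail = b;
}
```

The design choice this illustrates: the brigade only ever sees finished heap buckets, so it stays light-weight and pool-managed, while the mutable buffer is owned by the request layer that fills it.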


Ryan Bloom                        	rbb@apache.org
406 29th St.
San Francisco, CA 94131
