httpd-dev mailing list archives

From Graham Leggett <minf...@sharp.fm>
Subject Re: svn commit: r717867 - in /httpd/httpd/trunk: CHANGES docs/manual/mod/mod_buffer.xml docs/manual/mod/mod_buffer.xml.meta modules/filters/config.m4 modules/filters/mod_buffer.c
Date Sat, 15 Nov 2008 23:58:13 GMT
Ruediger Pluem wrote:

>> +    /* Do nothing if asked to filter nothing. */
>> +    if (APR_BRIGADE_EMPTY(bb)) {
>> +        return ap_pass_brigade(f->next, bb);
>> +    }
> 
> Hm. This changes the order in which we send the brigades (provided we have already buffered
> something).
> I am not sure if this causes any harm down the chain or not.
> Wouldn't it be more sane in this case to "add" the empty brigade to our buffer?

In theory no, because the brigade you are passing is empty. You can 
send empty brigades in any order; it will make no difference. Adding an 
empty brigade to our buffer is effectively a no-op.

Thinking about this, sending an empty brigade down the filter stack 
might be pointless as well; could we just return APR_SUCCESS? (Unless 
there is an expectation that an empty brigade should hit all the filters 
in the stack, in which case the code is correct as it is.)
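
For reference, the alternative I have in mind is simply the below (a 
sketch, and only valid if nothing downstream depends on seeing the 
empty brigade):

    /* alternative sketch: swallow empty brigades entirely rather
     * than passing them down the stack */
    if (APR_BRIGADE_EMPTY(bb)) {
        return APR_SUCCESS;
    }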

>> +            /* pass what we have down the chain */
>> +            rv = ap_pass_brigade(f->next, ctx->bb);
> 
> For sanity reasons I would add an apr_brigade_cleanup(ctx->bb) here.

Hmm... I am not sure. If the rest of the filter stack chose not to 
consume the whole brigade for whatever reason, the unconsumed content 
would remain in the buffer ready for the next pass, or would be cleaned 
up when the request is aborted.

I could find no precedent in any of the other filters for proactively 
emptying the brigade once it had been passed down the stack. In 99.9% of 
cases, you would be emptying an empty brigade, which is a waste.
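
For reference, the change being suggested would amount to this (sketch):

            /* pass what we have down the chain */
            rv = ap_pass_brigade(f->next, ctx->bb);

            /* the suggested addition: discard anything the rest of
             * the stack chose not to consume */
            apr_brigade_cleanup(ctx->bb);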

>> +            /* pass what we have down the chain */
>> +            rv = ap_pass_brigade(f->next, ctx->bb);
> 
> For sanity reasons I would add an apr_brigade_cleanup(ctx->bb) here.
> BTW: Couldn't we handle EOS and flush buckets in the same block?

I am still looking at the possibility of optionally adding an Etag when 
missing to requests that fit in the buffer, and when that happens, it 
would go in the EOS path, and not the flush path.

> I think this is suboptimal in many ways:
> 
> 1. We should only do this on heap buckets. For other buckets we don't even know
>    that we waste memory and in this case we create unneeded memory copy operations.
>    This part gets extremely bad when we have a file bucket. Yes, I understand the
>    intentions of this module and have knowledge about buckets and brigades, but the
>    normal user has not. So the user might think it is a good idea to use this filter
>    in case of static files. So at least a warning should be added to the documentation
>    that this is counter productive when serving files.

I disagree.

The purpose of this module was to handle a page that consists of many 
(tens to hundreds of) mod_include files, nested within one another. The 
output on the wire consists of lots of short (< 10 byte) file buckets, 
and lots of short file buckets that have been morphed into short heap 
buckets by mod_include (thank you, indentation). This ends up on the 
wire as lots of chunks, each a few bytes long, taking up 8k * (number 
of few-byte chunks) of RAM per request, and wasting lots of bandwidth 
on chunk headers.

If file buckets were passed as is, much of the optimisation would be 
lost, because you would have a short-and-inefficient heap bucket, 
followed by a short file bucket, followed by a short-and-inefficient 
heap bucket. Using the current implementation, all three buckets are 
collapsed down to just one.
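
To make the collapsing concrete, the output side boils down to a loop 
of roughly this shape (a simplified sketch from memory, not the actual 
code):

    apr_bucket *e;
    apr_status_t rv;

    /* read every data bucket (file, heap, whatever) and pack the
     * bytes into as few heap buckets as possible in ctx->bb */
    for (e = APR_BRIGADE_FIRST(bb);
         e != APR_BRIGADE_SENTINEL(bb);
         e = APR_BUCKET_NEXT(e)) {
        const char *data;
        apr_size_t size;

        /* metadata buckets (EOS, flush) are handled separately */
        if (APR_BUCKET_IS_METADATA(e)) {
            continue;
        }

        rv = apr_bucket_read(e, &data, &size, APR_BLOCK_READ);
        if (rv != APR_SUCCESS) {
            return rv;
        }

        /* apr_brigade_write appends to the heap bucket at the tail
         * of ctx->bb while there is space, so many tiny buckets end
         * up collapsed into one big one */
        rv = apr_brigade_write(ctx->bb, NULL, NULL, data, size);
        if (rv != APR_SUCCESS) {
            return rv;
        }
    }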

To be clear on what the filter does: it implements a buffer, and it 
works like a buffer, according to the principle of least astonishment. 
It is completely optional, and not enabled by default.

The module does not pretend to be relevant in every use case, and is 
particularly not relevant for cases that are efficient already (simple 
static files + sendfile, for example).

> 2. If we have a large heap bucket (>= APR_BUCKET_BUFF_SIZE) or one that doesn't fit into
>    the remaining buffer of the last heap bucket in ctx->bb, apr_brigade_write creates
>    a new heap bucket and thus causes the memory to be copied over from the old bucket to
>    the new bucket. We should set an intelligent flush function for apr_brigade_write
>    that prevents this and just moves over this bucket from bb to ctx->bb (the transient
>    bucket created by apr_brigade_write is not a big loss in this case).

Again, we are trying to second guess what the admin is trying to do.

If the response contained large heap buckets, then the correct course of 
action is for the admin to have not added the buffer filter in the first 
place.

The point behind the buffer is to convert whatever buckets are present 
into the smallest number of heap buckets as possible, and do this until 
the buffer is full, or the response is complete, whichever comes first.

Think of a potentially expensive bucket of known but long length - we 
certainly don't want to blindly move the expensive bucket over in this 
case, as the bucket will only get read at some point in the distant 
future when the slow client gets round to reading it, and the whole 
purpose of buffering would be lost.

The contract of mod_buffer is simple: pack the response (whatever it is) 
into as few heap buckets as possible to guarantee that expensive buckets 
are read as soon as possible, and that as few chunks appear on the wire 
as possible. The purpose of the buffer is to save RAM and network 
bandwidth, not save CPU time.

I think trying to second guess the optimisation already present in 
apr_brigade_write is a bad idea.

> 3. If ctx->bb is empty we do copy the memory of the first bucket into a new heap bucket.
>    This is unnecessary. If ctx->bb is empty we should just move over the bucket from bb
>    to ctx->bb.

Again, this only makes sense if the bucket is already a heap bucket. If 
it is anything else, then we want to read it in immediately.
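
If we did want to special case it, it would look something like this 
(sketch only, locals hypothetical):

    apr_bucket *e = APR_BRIGADE_FIRST(bb);

    if (APR_BRIGADE_EMPTY(ctx->bb) && APR_BUCKET_IS_HEAP(e)) {
        /* already a heap bucket and our buffer is empty: moving it
         * over costs nothing */
        APR_BUCKET_REMOVE(e);
        APR_BRIGADE_INSERT_TAIL(ctx->bb, e);
    }
    else {
        const char *data;
        apr_size_t size;

        /* anything else (file, pipe, CGI...) we want read in right
         * now, not lazily when the slow client finally catches up */
        rv = apr_bucket_read(e, &data, &size, APR_BLOCK_READ);
        if (rv == APR_SUCCESS) {
            rv = apr_brigade_write(ctx->bb, NULL, NULL, data, size);
        }
    }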

>> +    c = ap_get_module_config(f->r->per_dir_config, &buffer_module);
>> +
>> +    tmp = apr_brigade_create(f->r->pool, f->c->bucket_alloc);
> 
> This is a memory leak if we pass this input filter quite often.

This should go in a context, you're right. Will fix.
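
Roughly what I have in mind (a sketch; buffer_ctx and ctx->tmp are 
placeholders for whatever the context ends up looking like):

    buffer_ctx *ctx = f->ctx;

    if (!ctx) {
        f->ctx = ctx = apr_pcalloc(f->r->pool, sizeof(*ctx));

        /* create the temporary brigade once per filter instance
         * instead of on every invocation */
        ctx->tmp = apr_brigade_create(f->r->pool, f->c->bucket_alloc);
    }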

>> +
>> +    remaining = readbytes;
>> +    while (remaining > 0) {
>> +        const char *data;
>> +        apr_off_t len;
>> +        apr_size_t size;
>> +
>> +        rv = ap_get_brigade(f->next, tmp, mode, block, remaining);
>> +
>> +        /* if an error was received, bail out now */
>> +        if (rv != APR_SUCCESS) {
>> +            APR_BRIGADE_CONCAT(bb, tmp);
>> +            return rv;
>> +        }
>> +
>> +        apr_brigade_length(tmp, 1, &len);
> 
> This can lead to a blocking read even if we requested a non blocking mode.

This is a bug, will fix.
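
Something along these lines, I expect (sketch):

        /* don't force a read of unknown-length buckets just to get a
         * byte count; with read_all set to 0, len comes back as -1
         * if any bucket has an unknown length */
        apr_brigade_length(tmp, 0, &len);
        if (len == -1) {
            /* can't account for it precisely: stop trying to fill
             * readbytes and hand back what we already have */
            break;
        }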

>> +        remaining -= len;
> 
> The current approach can create an endless loop and a CPU spin in non blocking
> mode as some input filters do not return EAGAIN, but an empty brigade and
> APR_SUCCESS in the case when no data was available.

Will fix.
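
Probably with something like this inside the loop (sketch):

        /* some filters signal "no data yet" with an empty brigade
         * and APR_SUCCESS rather than EAGAIN; don't spin on that in
         * non-blocking mode */
        if (block == APR_NONBLOCK_READ && APR_BRIGADE_EMPTY(tmp)) {
            break;
        }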

> And even if they return EAGAIN we might do the wrong thing, because if a previous
> ap_get_brigade was successful, we would return EAGAIN as return code which is wrong.
> We should return APR_SUCCESS in this case as we have actually read some data.
> If we are in blocking mode we can get stuck here for quite a long time without
> giving any data back to the caller.

We are trying to be a buffer; if we just returned data as we read it, 
the buffer would be a pure passthrough and the point behind buffering 
would be lost.

The EAGAIN case isn't currently handled properly though, I will fix this.
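
Roughly like this (sketch, assuming anything already read has been 
concatenated onto bb by this point):

        rv = ap_get_brigade(f->next, tmp, mode, block, remaining);

        /* an EAGAIN after we have already buffered some data is not
         * an error for the caller: hand back what we have instead */
        if (APR_STATUS_IS_EAGAIN(rv) && !APR_BRIGADE_EMPTY(bb)) {
            return APR_SUCCESS;
        }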

Regards,
Graham
--
