httpd-dev mailing list archives

From Paul Querna <p...@querna.org>
Subject Re: Httpd 3.0 or something else
Date Mon, 09 Nov 2009 19:21:56 GMT
On Mon, Nov 9, 2009 at 11:06 AM, Greg Stein <gstein@gmail.com> wrote:
> On Mon, Nov 9, 2009 at 13:59, Graham Leggett <minfrin@sharp.fm> wrote:
>> Akins, Brian wrote:
>>
>>>>> It works really well for proxy.
>>>> Aka "static data" :)
>>>
>>> Nah, we proxy to fastcgi php stuff, http java stuff, some horrid HTTP perl
>>> stuff, etc (Full disclosure, I wrote the horrid perl stuff.)
>>
>> Doesn't matter, once httpd proxy gets hold of it, it's just shifting
>> static bits.
>>
>> Something I want to teach httpd to do is buffer up data for output, and
>> then forget about the output to focus on releasing the backend resources
>> ASAP, ready for the next request when it (eventually) comes. The fact
>> that network writes block makes this painful to achieve.
>>
>> Proxy had an optimisation that released proxied backend resources when
>> it detected EOS from the backend but before attempting to pass it to the
>> frontend, but someone refactored that away at some point. It would be
>> good if such an optimisation was available server wide.
>>
>> I want to be able to write something to the filter stack, and get an
>> EWOULDBLOCK (or similar) back if it isn't ready. I could then make
>> intelligent decisions based on this. For example, if I were a cache, I
>> would carry on reading from the backend and writing the data to the
>> cache, while the frontend was saying "not now, slow browser ahead". I
>> could have long since finished caching and closed the backend connection
>> and freed the resources, before the frontend returned "cool, ready for
>> you now", at which point I answer "no worries, have the cached content I
>> prepared earlier".
>
> These issues are already solved by moving to a Serf core. It is fully
> asynchronous.
>
> Backend handlers will no longer "push" bits towards the network. The
> core will "pull" them from a bucket. *Which* bucket is defined by a
> {URL,Headers}->Bucket mapping system.
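
To make what Graham describes above concrete, here is a rough sketch of a
filter-level write that can report "would block" instead of blocking, so a
cache could keep draining the backend while a slow client catches up. None
of this is existing httpd API -- ap_write_nonblocking() and the helpers are
made up for illustration.

#include <apr_buckets.h>
#include <apr_errno.h>

/* Hypothetical: a write into the output filter stack that may refuse to
 * block -- APR_SUCCESS if the brigade was accepted, APR_EAGAIN if the
 * client connection is not ready for more data right now. */
apr_status_t ap_write_nonblocking(void *frontend, apr_bucket_brigade *bb);

/* Hypothetical helpers standing in for proxy/cache internals. */
apr_bucket_brigade *backend_read(void *backend);            /* NULL at EOS */
void cache_store(void *cache_obj, apr_bucket_brigade *bb);

/* A cache could then keep draining the backend even while the client is
 * slow, and release the backend connection as soon as it sees EOS. */
static apr_status_t cache_pump(void *frontend, void *backend, void *cache_obj)
{
    apr_bucket_brigade *bb;
    apr_status_t rv;

    while ((bb = backend_read(backend)) != NULL) {
        cache_store(cache_obj, bb);

        rv = ap_write_nonblocking(frontend, bb);
        if (APR_STATUS_IS_EAGAIN(rv)) {
            /* "Not now, slow browser ahead": keep caching, and replay the
             * stored copy to the client later. */
            continue;
        }
        if (rv != APR_SUCCESS) {
            return rv;
        }
    }
    return APR_SUCCESS;   /* backend resources can be freed here */
}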

I was talking to Aaron about this at ApacheCon.

I agree in general, a serf-based core does give us a good start.
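
As a rough illustration of the pull model (and the {URL,Headers}->Bucket
mapping Greg mentions), something like the following. serf_bucket_read()
and the status macros are real serf/APR API; map_request_to_bucket() is a
made-up placeholder for the mapping layer.

#include <serf.h>
#include <apr_tables.h>
#include <apr_network_io.h>
#include <apr_errno.h>

/* Made up: the {URL,Headers}->Bucket mapping -- given a request, return
 * the bucket the core should pull the response from. */
serf_bucket_t *map_request_to_bucket(const char *url, apr_table_t *headers,
                                     serf_bucket_alloc_t *alloc);

/* The core's side of the pull model: drain the bucket and hand bytes to
 * the client connection. Handlers never push; the core reads. */
static apr_status_t core_pull(serf_bucket_t *response, apr_socket_t *client)
{
    for (;;) {
        const char *data;
        apr_size_t len;

        /* serf_bucket_read() never blocks: it returns what is available
         * now, APR_EAGAIN if nothing is ready, APR_EOF when finished. */
        apr_status_t rv = serf_bucket_read(response, SERF_READ_ALL_AVAIL,
                                           &data, &len);
        if (SERF_BUCKET_READ_ERROR(rv)) {
            return rv;
        }
        if (len > 0) {
            apr_size_t written = len;
            apr_socket_send(client, data, &written);  /* error handling elided */
        }
        if (APR_STATUS_IS_EOF(rv)) {
            return APR_SUCCESS;
        }
        if (APR_STATUS_IS_EAGAIN(rv)) {
            /* Hand control back to the event loop instead of spinning;
             * knowing *which* fd to poll here is the gap discussed below. */
            return APR_EAGAIN;
        }
    }
}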

But Serf buckets and the event loop definitely do need some more work
-- simple things, like: if the backend bucket is a socket, how do you
tell the event loop that a would-block return value maps to a file
descriptor talking to an origin server?  You don't want to just keep
looping over the bucket until it returns data; you want to poll on the
origin socket, and only try to read when data is available.
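
In other words, something has to remember which socket sits underneath the
bucket, because APR_EAGAIN by itself carries no file descriptor. Sketching
it with real APR pollset calls, and an invented struct for the
bucket/socket pairing that serf doesn't currently expose:

#include <serf.h>
#include <apr_poll.h>
#include <apr_network_io.h>
#include <apr_errno.h>
#include <string.h>

/* Invented for illustration: the association serf doesn't currently hand
 * you -- the origin socket hiding underneath a backend bucket. */
typedef struct {
    serf_bucket_t *bucket;   /* e.g. a socket bucket reading the origin */
    apr_socket_t  *origin;   /* the fd we actually need to poll on */
} backend_stream_t;

static apr_status_t drain_or_poll(backend_stream_t *bs, apr_pollset_t *ps)
{
    const char *data;
    apr_size_t len;
    apr_status_t rv;

    rv = serf_bucket_read(bs->bucket, SERF_READ_ALL_AVAIL, &data, &len);

    if (APR_STATUS_IS_EAGAIN(rv)) {
        /* Don't spin on the bucket until it happens to have data:
         * register the origin socket and come back when it's readable. */
        apr_pollfd_t pfd;

        memset(&pfd, 0, sizeof(pfd));
        pfd.desc_type   = APR_POLL_SOCKET;
        pfd.desc.s      = bs->origin;
        pfd.reqevents   = APR_POLLIN;
        pfd.client_data = bs;

        return apr_pollset_add(ps, &pfd);
    }

    /* ... otherwise consume data/len, or handle EOF/errors ... */
    return rv;
}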

I am also concerned about the patterns of sendfile() in the current
serf bucket architecture; making a whole pipeline do sendfile correctly
seems quite difficult.
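
Roughly, the tension is that sendfile() wants a file descriptor at the
moment of the network write, while a pull pipeline normally hands back
memory buffers, and any transforming bucket in between forces a copy. A
sketch of the special-casing that seems to be required --
pipeline_is_plain_file() and copy_through_memory() are made-up stand-ins;
only the apr_socket_* calls are real:

#include <serf.h>
#include <apr_network_io.h>
#include <apr_file_io.h>

/* Made up: true only if nothing in the pipeline has transformed the
 * response, i.e. it is still just a region of a plain file on disk. */
int pipeline_is_plain_file(serf_bucket_t *response, apr_file_t **file,
                           apr_off_t *offset, apr_size_t *remaining);

/* Made up: the ordinary read-into-memory-and-send fallback. */
apr_status_t copy_through_memory(serf_bucket_t *response, apr_socket_t *client);

static apr_status_t send_response(serf_bucket_t *response, apr_socket_t *client)
{
    apr_file_t *file;
    apr_off_t offset;
    apr_size_t remaining;

    if (pipeline_is_plain_file(response, &file, &offset, &remaining)) {
        /* Fast path: apr_socket_sendfile() copies file->socket in the
         * kernel, no buckets, no user-space buffers. */
        return apr_socket_sendfile(client, file, NULL, &offset,
                                   &remaining, 0);
    }

    /* Slow path: any bucket that touched the bytes forces a copy, and
     * every stage of the pipeline has to agree on which path applies. */
    return copy_through_memory(response, client);
}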

-Paul
