httpd-dev mailing list archives

From Greg Stein <gst...@lyra.org>
Subject Re: PLEASE READ: Filter I/O
Date Thu, 22 Jun 2000 12:59:24 GMT
On Wed, Jun 21, 2000 at 02:52:18PM -0700, rbb@covalent.net wrote:
> 
> I am including in this e-mail an up to date patch of the filtered I/O
> logic.  This is the same patch as was sent out last week, but it compiles
> and builds against the current HEAD.
> 
> These are the current issues with the patch, and my response to them.  I
> do not see any of the current outstanding issues as a reason this patch
> can not be committed.
> 
> 1)  complexity due to ioblock/ioqueue stuff
> A)  I believe that any filtering scheme requires something like
> this.

I will grant that the ioblock thing can be handy in either scheme. I won't
expand right here on how it could be used in the link-based scheme, but it
could be used for certain optimizations. It is only that, though: an
optimization technique that can be employed by advanced modules, or by
modules that have a very limited/specific form of content-processing. In the
hook-based scheme, the ioblock and ioqueue stuff is mandatory.
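
[ for anyone reading along without the patch handy: in spirit, and with
  names that are mine rather than the patch's, the structures amount to
  a chained buffer list, something like: ]

    /* illustrative only; not the patch's actual definitions */
    typedef struct ioblock ioblock;
    struct ioblock {
        const char *data;   /* the bytes this block carries       */
        size_t      len;    /* how many of them                   */
        ioblock    *next;   /* blocks chain together into a queue */
    };

    typedef struct {
        ioblock *head;      /* oldest pending block               */
        ioblock *tail;      /* newest block; appends happen here  */
    } ioqueue;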

> This scheme allows us to handle sendfile implementations cleanly.

I have previously demonstrated how this can be done in the link scheme, in a
very simple fashion, without the use of ioblock/ioqueue type structures.

[ review my prior posts; I'll be working on a link-scheme patch which will
  also show how this is done ]

> It also makes the buffering logic a part of the layer, which is
> good IMHO.

Buffering is handled by BUFF and its tight relationship to the network. The
ioqueue merely strings blocks together pending a flush to the lower layers;
it has no structured/designed buffering.

> Finally, having this logic lets the core maintain the state of
> the filters.

Filters have more state than "I haven't handled this data yet." There are
buffers, state machine states, open files, etc. The "unfiltered[]" concept
in this patch doesn't really handle filters' state.
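
Consider what even a modest filter (say, a gzip-style encoder) has to carry
across calls. A hypothetical context, with names of my own invention:

    /* hypothetical per-filter state; illustrative only */
    typedef struct {
        char   *held;      /* input held back, waiting for more bytes  */
        size_t  held_len;
        int     state;     /* position in the encoder's state machine  */
        FILE   *spool;     /* an open temp file, if we spilled to disk */
    } gzip_filter_ctx;

None of that fits into an array of pending, not-yet-filtered data.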

> As far as having a context maintained by the module, the
> current scheme allows for this, because each ioqueue has a pool which by
> definition has user_data.

The pool is not per-filter-instance. If the same filter is installed twice
and both instances drop state into user_data, then the two instances will
collide.

The link-based scheme stores this on a per-instance basis in the ap_layer_t
structure.
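
Schematically (this is the shape, not the exact definition):

    /* schematic; not the exact ap_layer_t definition */
    typedef struct ap_layer_t ap_layer_t;
    struct ap_layer_t {
        int       (*filter_io)(ap_layer_t *self, const char *buf,
                               size_t len);
        void       *ctx;   /* private to THIS instance of the filter */
        ap_layer_t *next;  /* the next layer down the chain          */
    };

Install the same filter twice and each instance hangs its state off its own
ctx pointer; there is no shared user_data namespace to collide in.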

> 2)  Sub requests and how they are joined
> A)  Sub requests in Apache do not return a stream of data to the original
> request, they write out directly to the same buff structure.  By
> definition, sub requests will share the iofilters, so this works the same
> way in all designs for iofilters.

There are two questions with sub-requests:

1) what filters does their content run through? where does that output go?

   A: I outlined this in a previous email. A sub-request becomes a
      content-generator for its own private set of content-processors.
      
      In addition, the output of the private content-processors needs to go
      to the *next* filter after the one that ran the sub-request. Let's say
      that your main request has processors P1 and P2, and encoders E1 and
      E2. The sub-request has processor P3. The main generates some content
      and shoves it into P1. That runs the sub-request, which generates some
      content. This new content goes through: P3 P2 E1 E2 (it does not go
      through P1). When the sub-request finishes, P1 generates some
      additional text which goes through P2 E1 E2.

      Note that it *shares* the recoding/encoding/digesting/etc layers with
      the main request (there is a small sketch of this splicing below).
      
      In the hook-based scheme, this would appear to require running *part*
      of a set of hooks. Or possibly, the hooks could be rebuilt in some
      fashion (on the subreq?)

2) when the sub-request is *run* in a filter, how is the output coordinated
   with the filter_io() that is occurring for the filter?

   A: beats me
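
To make question (1) concrete, here is a toy model of the splicing I am
describing (plain C, every name my own; this is not the proposed API):

    #include <stdio.h>

    /* toy layers: each one just names itself and forwards */
    typedef struct layer layer;
    struct layer {
        const char *name;
        layer      *next;
    };

    static void pass_down(layer *l, const char *data)
    {
        for (; l != NULL; l = l->next)
            printf("%s sees \"%s\"\n", l->name, data);
    }

    int main(void)
    {
        /* main request: P1 -> P2 -> E1 -> E2 */
        layer e2 = { "E2", NULL };
        layer e1 = { "E1", &e2 };
        layer p2 = { "P2", &e1 };
        layer p1 = { "P1", &p2 };

        /* the sub-request's private P3 is spliced to drain into the
           layer *after* P1, sharing the parent's encoders */
        layer p3 = { "P3", p1.next };

        pass_down(&p1, "main content");     /* P1 P2 E1 E2 */
        pass_down(&p3, "subreq content");   /* P3 P2 E1 E2, never P1 */
        pass_down(p1.next, "more from P1"); /* P2 E1 E2 */
        return 0;
    }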


> 3)  If I fetch 100MB from a database for insertion into the content
> stream, how does that work in this scheme
> A)  The 100MB is passed on to the next filter until it reaches the bottom
> filter, and then it is sent to the network.  The hook scheme can allow for
> a configuration directive that would throttle how much of the 100MB is
> passed to subsequent filters at any one time.

With or without the throttle, it sounds like the 100MB is loaded into
memory. That just isn't allowable (particularly because I have already
demonstrated a simple alternative that doesn't have a similar working set).

> Using sibling pools, this
> would allow for early cleanup of some of the memory (although as I will
> describe later I think this is a bad idea).  Regardless, both of the
> current schemes require the same working set size, because the 100MB is
> allocated on the heap and it isn't cleared until the request is
> done.

I had thought my demos were clear: the link-based scheme does not need to
allocate the whole darn thing. Please see the other thread related to Greg
Marr's response. If it is still not clear how the link-based scheme avoids
large working sets, then I'll write a more detailed post addressing this
single issue.
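
In the meantime, the core of it fits in a dozen lines. A toy version
(nothing here is the real layer API; it is just the shape of the loop):

    #include <stdio.h>

    #define CHUNK 8192  /* the working set: one block, not 100MB */

    /* stand-in for handing a block to the next layer down */
    static void pass_down_chain(const char *buf, size_t len)
    {
        fwrite(buf, 1, len, stdout);
    }

    /* pull from the large source one block at a time; each block is
       pushed all the way down and recycled before the next read */
    static void stream_source(FILE *src)
    {
        char buf[CHUNK];
        size_t n;

        while ((n = fread(buf, 1, sizeof buf, src)) > 0)
            pass_down_chain(buf, n);
    }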

> There is a minimal amount of data that can be allocated on the
> stack in a link based scheme, however as I will describe later, I think
> this is a bad idea for optimization reasons.
> 
> 4)  Flow control
> A)  Flow control can be controlled with configuration directives, by not
> allowing all of a chunk to be passed to subsequent filters.  Quite
> honestly, this is one place where this design does flounder a bit, but
> with all of the other optimizations that can be added on top of this
> design, I think this is ok.
> 
> 5)  Working set size
> A)  See answer 3

The other thread replies to these. I believe these two are still very
significant problems.

> Future Optimizations:
>...

I'm not sure about the viability of these optimizations, or whether they
are a good idea, but I have no problem accepting that they are possible.
Conversely, I believe it is quite possible to achieve similar functionality
with a link-based scheme. I'd say these are neither here nor there.

>...
> I realize that this patch isn't perfect, but it is a start.  The sooner we
> get real code into the server that people can play with, the sooner the
> code can be made perfect.
> 
> Are there are more outstanding technical arguments against this patch?

1) minor: per-instance state
2) Sub request handling
3) Working set
4) Network flow
5) Async-style requirement to solve (3) and (4)

There are some other nits in the patch (such as my prior note about
ap_rvputs and ap_bvputs not being equivalent), but they take a back seat to
the above, fundamental issues.

As you stated above, this is effectively the same as last week's patch. I
vetoed that for the above reasons, and still do not see any particular
resolution to these problems. The problems are pretty inherent in the
design, unfortunately. After further analysis based on Greg Marr's email, I
can see how (3) and (4) can be solved, but that brings in (5). I do not see
a resolution at all for (2). (and (1) is easily solved; not veto material)

Sorry, but I have to give a -1 again.

I realize how this is really holding up the filtering concept, so I feel a
big responsibility to provide real, working code for the alternative scheme.
I'm putting the mod_dav integration on hold until I code up a patch for the
link-based filtering. I feel it is a bit unfair to veto without coughing up
some code, so that'll be first priority for me.

Also, as I stated last week (before leaving for a few days of vacation), I'm
going to continue with some of the "common" items to simplify comparison and
review. I'll repost those items because I've got some questions/concerns for
people to comment on.

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/
