httpd-dev mailing list archives

From "William A. Rowe, Jr." <>
Subject Re: [PATCH] mod_ssl input filtering...
Date Sat, 06 Oct 2001 01:08:23 GMT
From: "Justin Erenkrantz" <>
Sent: Friday, October 05, 2001 7:47 PM

> On Fri, Oct 05, 2001 at 05:34:41PM -0500, William A. Rowe, Jr. wrote:
> > Aaron and I were chatting about this in stream-of-consciousness mode,
> > let me boil down our collective lightbulb.
> The more, the merrier.  =)  If you could pass a few lightbulbs down
> here, I'd appreciate it...
> > We actually have two numbers:
> > 
> > * minimum required bytes (blocking) to return
> >   [could be 0 - don't care]
> Wouldn't this lead to returning more than you may be able to handle?  
> How I see it is that you provide the maximum number of bytes you are 
> willing to accept.

Never, if you specify the next argument:

> > * maximum bytes (non-blocking) that can be accepted
> >   [could be 0 - no limit]
> Yup.  But, I think there are reasons that we shouldn't be using 0.
> What would be a case?

You will never ask for "less than 1" so 0 here means "give me everything
you have in memory."  Lots of filters might not want to do that.

Lots of them would prefer to do so.

> (Do you mean this as two length parameters given to ap_get_brigade?
> What about bogus values - 0,0?)

0,0 isn't bogus - don't block for anything, give me everything on hand.

If min > max, then I'd suggest we simply block for max, return max.
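As a sketch only (the names and struct here are hypothetical, not the httpd API), the (min, max) semantics above — min as the blocking threshold, max as the non-blocking cap, 0 meaning "don't care" / "no limit", and min > max clamped to max — could be resolved like this before a filter decides how much to hand back:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of the (min, max) read semantics discussed above.
 * min == 0: never block; max == 0: no upper limit on what is returned. */
typedef struct {
    size_t nbytes;   /* bytes the filter would hand back */
    int    blocks;   /* must we block to satisfy min first? */
} read_plan;

static read_plan plan_read(size_t available, size_t min, size_t max)
{
    read_plan p;
    /* If min > max, simply block for max and return max. */
    if (max != 0 && min > max)
        min = max;
    p.blocks = (min != 0 && available < min);
    if (p.blocks)
        available = min;   /* after blocking we have at least min */
    /* 0,0 falls through here: no blocking, everything on hand. */
    p.nbytes = (max != 0 && available > max) ? max : available;
    return p;
}
```

Note how the 0,0 case is naturally non-bogus in this model: it blocks for nothing and returns whatever is buffered.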

> > Now PLEASE understand that maxbytes 0 (originally, the -1 idea) doesn't
> > say 'read everything from this socket' --- it leaves the best-fit for
> > the underlying filter to decide.  If core wants to give back an even
> > number of IP frames, then fine.  If SSL wants to give back an even
> > number of decoded SSL packets, also fine.  It will not mean read until
> > EOS, ever.  It's up to the underlying filter to decide what is optimal
> > for max 0, without allocating a bunch of otherwise useless frame buffers.
> This would be a real good time to change *readbytes == 0 to be a
> specific mode rather than an implicit "give me a LF line."  =)
> The question is when do we stop?  I'm about to commit a patch
> (feel free to revert) that will make blocking reads only do one
> socket read and return at most *readbytes.  I would say that 
> maxbytes 0 should be "one socket read" - are there any other 
> alternatives here?  

No.  That's exactly what it means.

Except for SSL: SSL would read until it could decode at least one packet,
and then return whatever it has successfully decoded.
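That SSL behaviour — keep doing socket reads until at least one complete record can be decoded, then return every record decoded so far — can be sketched with a toy framing (the 1-byte length-prefixed "record" and both function names are illustrative assumptions, not TLS or mod_ssl code):

```c
#include <assert.h>
#include <stddef.h>

/* Toy framing: each "record" is a 1-byte length header followed by
 * that many payload bytes.  Counts complete records in buf[0..len). */
static size_t complete_records(const unsigned char *buf, size_t len)
{
    size_t n = 0, pos = 0;
    while (pos < len && pos + 1 + buf[pos] <= len) {
        pos += 1 + buf[pos];
        n++;
    }
    return n;
}

/* Simulated socket reads of `chunk` bytes at a time: keep reading
 * until at least one whole record is decodable, then stop.  Returns
 * how many reads that took. */
static size_t reads_until_decodable(const unsigned char *stream,
                                    size_t total, size_t chunk)
{
    size_t have = 0, reads = 0;
    while (have < total && complete_records(stream, have) == 0) {
        size_t take = (total - have < chunk) ? total - have : chunk;
        have += take;
        reads++;
    }
    return reads;
}
```

The point is the loop condition: the reader stops as soon as one record is whole, not at EOS and not at a fixed byte count.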

> (And, I'd say this is only valid connection<->connection filters 
> since request filters shouldn't be reading indeterminately like this.)

Why not?  They have HTTP_IN to stop them at the end.  We give them whatever
happens to be available.  If intermediate filters have accumulated more
than the original 8kb packet, e.g. charset translation from sbcs to mbcs,
then they might have 16kb ready.  Doesn't matter really, unlimited max
doesn't start consuming memory, it just clears out what has already 
consumed memory.

> > And the more that I look at this, the more we need a push-back model,
> > because the scope of 'this' filter doesn't live as long as the parent
> > filter (with request and connection scopes, respectively.)
> The reasoning that I would say that we don't need a push-back model
> is that we can't remove a connection input-filter once it has been 
> inserted.  Also, you should not be returning more data than is 
> asked for.  Less than what you ask for should be fine.  

That's another topic entirely that I will sleep on.

> There are also safeguards that should prevent the request scope from
> reading too much - HTTP_IN should ensure that we *never* read past
> the end of the request - it should be the bridge between request and
> connection scopes - it buffers no data.  And, we shouldn't worry about 
> connection filters *unless* we want to be able to remove them 
> mid-stream.  And, I think that will get awfully complicated.

You've got that right.  A multipart parser would have the same effect:
stop when it gets to the right place.
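A minimal sketch of that "stop at the right place" behaviour (the function name and boundary handling are hypothetical, not the real multipart code): consume bytes only up to the boundary marker and leave the rest untouched for the next consumer.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Returns how many bytes of buf[0..len) precede the boundary marker,
 * or len if no boundary has arrived yet (take it all and keep going). */
static size_t consume_until(const char *buf, size_t len,
                            const char *boundary, size_t blen)
{
    for (size_t i = 0; i + blen <= len; i++) {
        if (memcmp(buf + i, boundary, blen) == 0)
            return i;   /* stop here; the rest belongs downstream */
    }
    return len;
}
```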

> I believe at one point Roy may have mentioned that all filters 
> should be in the chain at all times.  If they are interested, they 
> intercept the data and handle it.  If at some point they are told 
> to ignore the request, they can just pass along any buffered data 
> that they may have previously read.  Would that resolve the push-back 
> concern?  Was there an implementation detail that prevented this 
> from working?  -- justin

No, because we break the request chains when jumping from request to
request, and I can see how other filters would break things up.

It's fair to say that a filter can only remove itself once it has
flushed out everything it has to offer.
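That rule — a filter may leave the chain only once its buffer is empty — could look like this in outline (struct and names are hypothetical, not util_filter.h):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical singly-linked filter chain. */
typedef struct filter filter;
struct filter {
    filter *next;
    size_t  buffered;   /* bytes this filter still has to offer */
};

/* Returns 1 on removal, 0 if the filter still holds unflushed data
 * (or isn't in this chain). */
static int try_remove(filter **chain, filter *f)
{
    if (f->buffered != 0)
        return 0;                       /* must flush everything first */
    for (filter **p = chain; *p != NULL; p = &(*p)->next) {
        if (*p == f) {
            *p = f->next;               /* unlink */
            return 1;
        }
    }
    return 0;
}
```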
