httpd-dev mailing list archives

From Justin Erenkrantz <>
Subject Re: Baffled by the negotiation fix...
Date Sun, 03 Mar 2002 00:22:20 GMT
On Sat, Mar 02, 2002 at 02:30:34PM -0800, Ryan Bloom wrote:
> I'm sorry, but this is BOGUS!  I want to see a 2.0 release, but adding
> code that is wrong just so that we can get a GA release is NOT the way
> to go about doing that.  The whole point of Open Source is that we don't
> have to cut corners just to meet release dates.  Do it right, it will
> take less time in the long run.
> I am really disappointed that the attitude on this list right now is
> "ship it soon regardless of quality or maintainability."

I believe that you are incorrectly trying to introduce stateful
protocols into an architecture that explicitly forbids it.  
Furthermore, I don't think you'd be addressing the real problem
here by adding this extra level of complexity.

Since our main goal is to produce an HTTP server, we should make
our code match best with the architecture of HTTP.  One of the
dictums (right or wrong) is that HTTP is stateless.  There is a
concept of a connection and a request.  That's it.  Introducing
anything else is contrary to the design of our main protocol.
Therefore, when faced with an architectural decision, we should
always favor HTTP-like protocols.  Other protocols are gravy, but
are on the whole, unimportant.

For the purposes of doing an internal redirect, I think the best
thing to do is to create a new request_rec independent of the
original request when we know that it is the correct request to
serve.  Ideally, when we see an internal redirect, we'd really like
to be sending a redirect to the client (302) with a pointer to the
new URL.

That's the ideal, but since skipping the round trip saves time and
network bandwidth, we can treat the new URL as if the client had
requested it originally.  Therefore, I think we should stop treating
these types of redirects as sub-requests - they are not - they are
separate requests that would be independent if we returned the 302
to the client.

However, you are already in the middle of serving the original
request.  And, in fact, you've already run several steps of its
processing.  Since it is arbitrary where and when we make the
determination to redirect (say, in a fixups hook), in my opinion,
the only safe mechanism is to start anew.

fast_redirect() merely tries to pick up the new request where we
left the last one off.  I believe that is incorrect and that we
miss calling certain hooks - such as ap_run_insert_filter, which
ensures the filters are set up properly.  (I was wrong earlier in
saying we need to add a hook for that - it is already there!)

We can't call internal_redirect() from any hooks because we
can't abort the processing of the current request from a
hook.  It'll serve the redirected request correctly, but once
we return from the hook after serving the real request, we'll
serve the original request too.  Oops.

By adding a protocol-level semantic, we will be able to save the
protocol filters (and essentially duplicate ap_run_insert_filter),
but we'd still lose the ability to call all of the hooks correctly
via fast_redirect().

I'd rather we introduced a way to stop handling the current
request from a hook and let the new request start from the
beginning via a call to internal_redirect().  This will also
magically solve our filters problem since the internal_redirect()
code path calls ap_run_insert_filter() to ensure that the
protocol has inserted its necessary filters.

> > multiple independent request_rec's created for one connection.
> > I believe the concept of fast_redirect is bogus and broken.  But,
> > you and OtherBill seem intent on keeping that.
> It has some serious performance implications to remove that function,
> although both Bill and I started by saying that the function was
> completely bogus.

I believe that the performance implications aren't that major.
Where are your numbers to prove that it is slower?  I believe that
the benefits of clearly separated and understandable code would be
worth it.  I'd rather the code be more reliable and understood
than faster and hackish.

Remember, if we were true to the protocol, we'd require 2
round-trips.  In actuality, we can do it in a bit more than 1
full request (with the extra overhead coming from the discarded
original request that was never served).  However, there is still
only one network transaction and therefore, I doubt that it will
have that much of an impact due to the network being the primary
bottleneck.  -- justin
