httpd-dev mailing list archives

From TOKI...@aol.com
Subject Re: chunked transfer-encoding (was: Re: PLEASE READ: Filter I/O)
Date Fri, 23 Jun 2000 12:38:49 GMT

In a message dated 00-06-23 06:34:33 EDT, Greg Stein writes:

> > * TRANSPORT LAYER EOD MISSING
> >
> > What is missing is a THIRD option and that is an
> > actual HTTP 'Transport' layer 'END OF DATA' signal
> > that can be sent WITHOUT having to close the port
> > and this does not currently exist.
>
> Sorry, but it DOES exist. It is the chunked transfer-encoding. End of story.

Maybe. Maybe not.

I guess I should have added 'Version-wide HTTP EOD missing', but
that's neither here nor there.

I hear you. What you seem to be saying is...

"Why be concerned about preserving Content-Length or
other response header fields during a multi-layered
filtering scheme because we can always just fall back
on the 'Transfer-encoding: chunked' delivery mechanism,
or just close the connection and abandon any attempt
at doing Keep-Alive if the user agent can't receive the
chunked stuff".

Well... someone called YAHOO has already tested those
waters for you. Let's see how they made out...

The following division of YAHOO is sending real-time
compressed data at all times to any browser that says
it can receive it ( even the ones that will blow up ).
It uses 'Transfer-Encoding: chunked' to do it and NEVER
supplies Content-Length ( just as you imagine it working
when coming out of an intra-server, multi-layered
filtering scheme )...

http://sports.yahoo.com 

Hit it yourself with a packet capture running and you will
see the whole story.
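For anyone who hasn't stared at one of those captures: a chunked
response carries no Content-Length at all. Each chunk is prefixed
with its size in hex, and a zero-length chunk marks end-of-data.
A minimal sketch of the encoding side ( plain Python for
illustration, not Yahoo's or Apache's actual code ):

```python
def chunk_body(body: bytes, chunk_size: int = 1024) -> bytes:
    """Frame a body as HTTP/1.1 'Transfer-Encoding: chunked'.

    Each chunk is '<hex-size>\\r\\n<data>\\r\\n'; the zero-size chunk
    ('0\\r\\n\\r\\n') is the in-band EOD signal, so no Content-Length
    header is ever needed.
    """
    out = []
    for i in range(0, len(body), chunk_size):
        piece = body[i:i + chunk_size]
        out.append(b"%x\r\n" % len(piece))   # chunk size in hex
        out.append(piece)
        out.append(b"\r\n")
    out.append(b"0\r\n\r\n")                 # terminal chunk: end of data
    return b"".join(out)

print(chunk_body(b"Hello, world!", chunk_size=5))
```

That terminal '0' chunk is exactly the 'Transport layer EOD' being
argued about: it works, but only if the receiver actually parses it.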

MOST of the time, it works. ( Keyword MOST ).

Here are some times when it doesn't...

1. When you hit it with a version of a browser that
doesn't fully support Transfer-Encoding and/or
Content-Encoding but 'says' it is HTTP/1.1
compliant. They are most assuredly 'out there'.

2. When you hit it with a robot that knows full
well it can't handle Transfer-Encoding but is
'pretending' to support HTTP/1.1 for other reasons.
Some ( popular ) search engines do this.

3. When you 'pull the plug' on the session, or intercept
the final buffers and withhold them to imitate a network
failure. This probably isn't a fair test, but what is
interesting in these cases is what it shows: while the
specification gives Transfer-Encoding a way to 'know' the
difference between a GOOD EOD and a BAD EOD, the reality is
that even the latest browsers can't seem to figure it out.
I've never seen the 'blue screen of death' in these particular
scenarios, but sometimes the result is definitely not pretty.
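To be clear about what the spec does give a receiver: a stream that
ends without the zero-length terminal chunk is, by definition, a BAD
( truncated ) EOD. A hypothetical decoder sketch showing the
distinction, assuming the whole received buffer is in hand:

```python
def decode_chunked(stream: bytes):
    """Decode a chunked body; return (data, clean_eod).

    clean_eod is True only when the zero-length terminal chunk was
    seen -- the GOOD EOD. A connection that dies mid-stream leaves
    clean_eod False ( the BAD EOD ), which is how a careful client
    could tell truncation apart from a normal end of data.
    """
    data, pos = [], 0
    while True:
        nl = stream.find(b"\r\n", pos)
        if nl < 0:
            return b"".join(data), False   # size line never arrived
        try:
            # size is hex; a ';' may introduce chunk extensions
            size = int(stream[pos:nl].split(b";")[0], 16)
        except ValueError:
            return b"".join(data), False   # garbled size line
        if size == 0:
            return b"".join(data), True    # terminal chunk: clean EOD
        start = nl + 2
        piece = stream[start:start + size]
        data.append(piece)
        if len(piece) < size:
            return b"".join(data), False   # chunk body truncated
        pos = start + size + 2             # step over trailing CRLF
```

The 'pull the plug' test above is exactly the case where clean_eod
comes back False -- the information is there on the wire, whether or
not the browsers bother to use it.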

Does YAHOO even care that a lot of robots or user-agents
can't access the pages? 

Nope. 

Should Apache care about similar situations when 'filtering' data?

Not for me to say.

I suppose I left out a big 'pre-condition' to all my
previous comments: I was ASSUMING that whatever filtering
scheme gets designed was not going to willy-nilly decide
that it only works 'well' when the user-agent is HTTP/1.1
or greater and correctly supports 'chunked', and that it
will never try to do 'Keep-Alive' for anything other than
those specific 'best client' user-agents.

If that is the case then go for it. If it isn't, then
is it worth some more thought?
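To pin down the pre-condition I mean, here is the delivery decision
I'm assuming a filter scheme has to make once Content-Length is gone.
This is a hypothetical sketch, not Apache code; the function and its
return values are mine:

```python
def delivery_strategy(http_version, body_length):
    """Pick body framing and whether Keep-Alive survives.

    Hypothetical decision logic: an HTTP/1.1 client can take
    chunked, so Keep-Alive survives even with no Content-Length.
    An older ( or merely 'pretending' ) client keeps the
    connection only if the length is still known; otherwise the
    only EOD signal left is closing the socket.
    Returns (framing, keep_alive).
    """
    if http_version == "HTTP/1.1":
        return ("chunked", True)
    if body_length is not None:
        return ("content-length", True)
    return ("close", False)         # Keep-Alive is abandoned
```

The whole question is how often the third branch fires once filters
start destroying Content-Length for every response.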

BTW: Most existing standard benchmarking software isn't
going to qualify in the 'best client' category, and maybe
that's something to consider as well. Your own benchmark
program won't even give you the best results against the
new scheme unless Keep-Alive still works.

There still isn't a single MAJOR standard benchmarking
suite that can accept Content-Encoding at all, and the
support for Transfer-Encoding in those same major
benchmarking suites is spotty at best.
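For scale: the Content-Encoding half of what those suites are
missing is small. A sketch of the decompression a benchmark client
would need, using Python's gzip/zlib modules here rather than the C
ZLIB library the original tools would have had to link ( which is
where the licensing static came from ):

```python
import gzip
import zlib

def decode_content(body: bytes, content_encoding: str) -> bytes:
    """Undo Content-Encoding the way a benchmark client must.

    Handles the two deflate-family encodings a server is likely
    to send; anything else is passed through untouched.
    """
    enc = content_encoding.strip().lower()
    if enc == "gzip":
        return gzip.decompress(body)
    if enc == "deflate":
        # Some servers send raw deflate, others zlib-wrapped;
        # try the wrapped form first, then fall back to raw.
        try:
            return zlib.decompress(body)
        except zlib.error:
            return zlib.decompress(body, -zlib.MAX_WBITS)
    return body

compressed = gzip.compress(b"benchmark payload")
print(decode_content(compressed, "gzip"))
```

Pair that with a chunked de-framer and a benchmark tool would qualify
as a 'best client' -- the work is not large, which makes the lack of
it all the more telling.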

I rewrote a number of them ( including ab.exe, WebStone,
ProxyCache, and others ), added full Content-Encoding and
Transfer-Encoding support, and tried to offer the changes
back to all the source sites, but the responses were either
non-existent or ugly. Usually the static centered around
the legalities of having to add ZLIB in order to support
Content-Encoding.

Should the lack of support for the latest HTTP schemes
in benchmarking software ever stop someone from adding
features to a Server that will be tested and reviewed
using those same pieces of software? Of course not...
but I was just passing the info along.

Yours...
Kevin Kiley
CTO, Remote Communications, Inc.
http://www.RemoteCommunications.com
http://www.rctp.com - Online Internet Content Compression Server.
  
