httpd-dev mailing list archives

From "Peter J. Cranstone" <Cranst...@worldnet.att.net>
Subject RE: Apache 164 percent speed increase
Date Wed, 13 Oct 1999 16:57:55 GMT
Marc...

Do you actually comprehend our statistics obtained using ab?

Your comments: <begin> From what I can see, you have just hacked up a
proxy to be able to compress data on the fly if the browser claims it can
support it.  Not mind-boggling; it isn't that difficult to make Apache do
that.  Unfortunately, what looks to be your current implementation violates
the HTTP/1.1 spec when it compresses the data, and will cause problems for
any scheme that attempts to provide integrity protection for the headers or
body of a response. <end>

This is an amazingly inaccurate response from someone of your caliber.

1. Hacked up a proxy to compress data on the fly if the browser can support
it. Wrong! It's an application which communicates with either a web server
OR a proxy server, detects the capability of the user's browser, and
accordingly delivers content-encoded data which conforms to both the HTTP
1.0 and 1.1 standards. I'm going to say this ONE more time: we follow ALL
RFCs and ALL HTTP standards.
2. It isn't difficult to make Apache do that... Correct. But what IS tough
is making Apache do it in real time without impacting performance.
3. If the browser can accept content-encoded data, we send it gzip-encoded
data; if it cannot, we send the data the old-fashioned way, uncompressed.
To detect the browser's capability we follow the relevant RFCs, checking
the User-Agent field (if present) and/or the Accept-Encoding field (if
present). A simplified sketch of this decision appears after this list.
4. We ran ab because we want to show, using Apache's very own benchmarking
tool, how performance is improved on any standard Apache web server. I
welcome you to run the same program on your PC and deliver comparable
results. If you turn on content encoding and ask a standard Apache web
server to compress content on the fly in response to a user's request, I
guarantee you will bring your server to its knees. A year ago this happened
to us. We have since perfected a NEW design which, as the stats show,
improves server performance.
5. It's really simple: we transmit fewer bytes. We spend CPU cycles to
compress because we are rewarded on the other side with far fewer cycles
(and far less time on the wire) to transmit. For example, a 50 KB HTML page
that gzips to roughly 10 KB needs about one fifth the transmission time
over a modem link. The smaller the number, the faster we go.
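
In rough terms, the decision described in points 3 and 5 looks like the
following. This is a simplified sketch expressed in Apache 1.3 module terms
for this list's benefit, NOT our production code; the zlib compression step
and the User-Agent fallback from point 3 are omitted:

    #include "httpd.h"      /* request_rec, ap_table_get, ap_table_setn */
    #include <string.h>

    /*
     * Does the client advertise gzip support?  Per RFC 2068/2616 the
     * acceptable codings arrive in the Accept-Encoding request header.
     * strstr() on "gzip" also matches "x-gzip"; a production check would
     * honor q-values too ("gzip;q=0" means gzip is NOT acceptable).
     */
    static int client_accepts_gzip(request_rec *r)
    {
        const char *ae = ap_table_get(r->headers_in, "Accept-Encoding");

        return ae != NULL && strstr(ae, "gzip") != NULL;
    }

    /*
     * Then, just before the body goes out:
     *
     *     if (client_accepts_gzip(r)) {
     *         ap_table_setn(r->headers_out, "Content-Encoding", "gzip");
     *         ... deflate the body with zlib before writing it ...
     *     } else {
     *         ... send the bytes untouched ...
     *     }
     */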

Marc... the debate is easy to quell.

All comments and BS aside, let's do a "bake off". We are willing to have an
outside expert benchmark our software using Apache, ab, and their own
equipment. Presently we are running on the following hardware:

HW:  Compaq ProLinea with 128 MB RAM and a 4 GB SCSI HD
OS:  Slackware Linux, kernel 2.0.35
SRV: Apache 1.3.3

This should be simple to do. Content-encoded, gzip-compressed HTML,
generated on the fly or sent pre-compressed from an Apache web server using
our technology, improves performance and saves bandwidth. End of story:
either it does or it doesn't. BTW, the test must follow the standard RFCs
and be HTTP/1.1 compliant.
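
For concreteness, a fair run might look like this (hypothetical host and
document; -n is ab's total request count, -c its concurrency level):

    ab -n 1000 -c 10 http://testhost/index.html

Run it once against a stock Apache 1.3.3 and once with our software in the
path, then compare the requests-per-second figures ab reports.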

If the tests prove it works, great: the debate is settled and people can
decide whether or not they want to use it.

Regards,

Peter J. Cranstone
Cranstone@RemoteCommunications.com
http://www.remotecommunications.com

-----Original Message-----
From: new-httpd-owner@apache.org [mailto:new-httpd-owner@apache.org] On
Behalf Of Marc Slemko
Sent: Wednesday, October 13, 1999 9:42 AM
To: new-httpd@apache.org
Subject: Re: Apache 164 percent speed increase

On Wed, 13 Oct 1999 TOKILEY@aol.com wrote:

>
> In a message dated 99-10-13 00:02:46 EDT, you write:
>
> >  > RCI's online compression servers work with ANY version of
> >  > ANY HTTP compliant server but when used in conjunction with
> >  > Apache there is a 164 ( ONE HUNDRED AND SIXTY FOUR ) percent
> >  > performance increase for Apache.
> >
> >  Whatever.  There has already been far too much attention given to
> >  your...
> >  odd views about how compression should be done.
>
> Show me one other product in the world that is compressing Internet
> documents 'on the fly'... GZIP or anything else, based on actual sensing
> of the user agent capabilities and conforms to RFC standards. You
> won't find one. Everyone talks about doing it but talk is cheap.
> It is here now... we have already done it. We welcome any news about
> ANY other server product that can do the same so we can test against it.
> Competition is good. It's lonely being the 'only one in the world'.
>
> >  Hmm, I notice that your solution still only works on IE running on
> >  Windows
>
> Sorry, wrong. Works on ALL platforms, ALL browsers... even LYNX.
> The benchmarking is for a true real-time online compression server.
> Runs on ANY Unix including Solaris on a Sun SPARCstation. Works with ANY
> HTTP compliant server and any browser that supports Content-encoding.

Well, this must be a new thing that you have dreamed up then.  My comments
were in reference to your previous odd concept of "C-HTML", which is
still what your website gushes about.  In fact, some of your documents
on "HyperSpace" and "C-HTML" say what a bad idea it would be for the server
to be compressing things.

I can't comment much on your new solution, since you don't actually
have any real information about it available.  From what I can see,
you have just hacked up a proxy to be able to compress data on the
fly if the browser claims it can support it.  Not mind-boggling;
it isn't that difficult to make Apache do that.  Unfortunately,
what looks to be your current implementation violates the HTTP/1.1
spec when it compresses the data, and will cause problems for any
scheme that attempts to provide integrity protection for the headers
or body of a response.

I can say:

        - sure, compression is a potentially useful thing
        - sure, some browsers have some level of support for accepting
          gzipped content.  Most (or all) Netscape versions, however, have
          difficulty when certain components of a page are sent with a
          content encoding.
        - sure, a bit of work could be usefully done to make Apache support
          it better.
        - the effects on a high traffic server are far from proven either
          way; you have demonstrated no benchmarks that show anything
          other than a very low traffic server, plus a large amount of
          content is dynamic which means it can't easily be precompressed.
        - sure, in some situations, sending pre-compressed content is
          better for the server since there is less on disk, less
          in memory, and less to send to the network.  If you have
          to have both versions though, for browsers that don't
          support compression, the benefit to the server drops a
          lot.  (A config sketch for this follows the list.)
        - any attempt to create a magic bullet non-standard solution will
          fail.
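
For the record, stock Apache can already serve pre-compressed variants
through mod_negotiation. A minimal sketch, assuming foo.html and
foo.html.gz both sit in a MultiViews-enabled directory:

    Options +MultiViews
    AddEncoding x-gzip .gz

A request for /foo then resolves to foo.html.gz when the client sends
"Accept-Encoding: gzip", and to plain foo.html otherwise.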

