httpd-dev mailing list archives

From TOKI...@aol.com
Subject Re: More thoughts on caching in v2.0 - filtering
Date Tue, 22 Aug 2000 13:16:53 GMT

In a message dated 00-08-22 12:30:27 EDT, Graham Leggett writes...

> Caching different variations of the same object at the same time will
> use more RAM but will also be a lot simpler and less processor intensive
> than trying to uncompress or compress something on the fly. Simple =
> less bugs, less headaches.

I didn't say it was going to be easy.
I said it was a good idea.

All you gotta do is store a CRC for the entity alongside the compressed
data, and then you can tell whether a requested object is already sitting
in the cache as a SINGLE COMPRESSED ENTITY. You don't need
2 copies of anything.
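
Something like this is all the bookkeeping it takes (just a rough sketch
in C, not actual Apache code -- the struct and function names are made
up): keep one compressed copy per entity, keyed by a CRC of the original,
and a cache hit is just a CRC match...

    #include <stddef.h>
    #include <stdint.h>

    typedef struct cache_entry {
        uint32_t crc;               /* CRC of the uncompressed entity     */
        size_t compressed_len;      /* size of the stored compressed body */
        unsigned char *compressed;  /* the ONE compressed copy we keep    */
        struct cache_entry *next;
    } cache_entry;

    /* Return the single compressed entity whose CRC matches, or NULL
     * if the object isn't cached yet. No second, uncompressed copy
     * is stored anywhere. */
    static cache_entry *cache_lookup(cache_entry *head, uint32_t crc)
    {
        for (cache_entry *e = head; e != NULL; e = e->next) {
            if (e->crc == crc)
                return e;
        }
        return NULL;
    }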

Which is less 'processor intensive'?....

1. Sending back CNN's home page pre-compressed in the cache down to 8,000
bytes (that's what their 90,000 byte home page reduces to, and it's
only about 2 sends at a 4k output buffer size).

2. Asking the Server and the TCP/IP subsystem to hang tough and
send all 90,000+ bytes back? You do the math on the number of sends
that would take and the amount of time the thread(s) are tied up doing it
(the arithmetic is spelled out below).
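
Rough arithmetic, assuming a 4k output buffer and one send() per
buffer-full:

    compressed:    8,000 bytes / 4,096  =  ~2 send() calls
    uncompressed: 90,000 bytes / 4,096  = ~22 send() calls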

Believe me... I get this argument all the time and it is absolutely
based on a false assumption. People think that when data has
been 'sent' by a Server loop it has somehow magically disappeared
and the CPU is now totally clean and totally available again.
Not so. The data has to be SENT by the TCP/IP subsystem.
It has not 'disappeared' (yet) just because the Server is done
firing some 'send()' calls. Less data to actually send = less load 
on the machine. End of story.
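
For anyone who hasn't watched it happen, the write side of a Server looks
something like this (a hypothetical blocking loop, not Apache's actual
output code) -- the thread sits in the loop, and the TCP/IP stack keeps
queueing and transmitting segments, until every byte of the body has
gone out...

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <stddef.h>

    /* Hypothetical blocking send loop -- NOT Apache's core output code.
     * The number of iterations here, and the amount of work the kernel
     * does afterwards, both scale with the number of bytes in the body,
     * compressed or not. */
    static int send_all(int sock, const unsigned char *buf, size_t len)
    {
        size_t off = 0;
        while (off < len) {
            ssize_t n = send(sock, buf + off, len - off, 0);
            if (n < 0)
                return -1;      /* caller checks errno */
            off += (size_t)n;
        }
        return 0;               /* every byte handed to the TCP/IP stack */
    }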

Yours...
Kevin Kiley
CTO, Remote Communications, Inc.
http://www.RemoteCommunications.com
