trafficserver-dev mailing list archives

From Alan Carroll <>
Subject Re: slow client triggered cache fill will slow down subsequent fast clients when read_while_writer is enabled
Date Thu, 01 Jun 2017 13:50:16 GMT
There's not really a workaround, other than flow control, at this time. Changing it would
be a tricky bit of work, because at the time the first transaction occurs there is no cache
producer to attach to. Once the second client arrives, the first transaction / HttpTunnel
would somehow need to be restructured, in a thread-safe manner, to use a different CacheVC.
That wouldn't be simple.
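[For readers finding this thread later: the flow-control knobs Alan refers to live in records.config. A minimal sketch follows; the variable names are from the Traffic Server documentation, but the water-mark values are illustrative and should be tuned per deployment.]

```
# records.config -- illustrative sketch; verify names and values against
# the documentation for your Traffic Server release.
CONFIG proxy.config.cache.enable_read_while_writer INT 1
# Transaction flow control: pause the origin-side producer when the amount
# of buffered, unconsumed body data exceeds high_water, and resume once it
# drains below low_water. This bounds memory use instead of letting the
# fill run arbitrarily far ahead of a slow consumer.
CONFIG proxy.config.http.flow_control.enabled INT 1
CONFIG proxy.config.http.flow_control.high_water INT 1048576
CONFIG proxy.config.http.flow_control.low_water INT 65536
```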
On Thursday, June 1, 2017, 1:54:01 AM CDT, ri lu <> wrote:

Hi Team,

For a cache-miss object (say 100 MB), if the first client is on a slow
network, say 100 kbps, then the cache fill is limited to 100 kbps as well,
and this restricts subsequent clients' download rate if *read_while_writer
is enabled*. I think (correct me if I'm wrong) this is because the first,
slow client is attached to the HttpTunnel's HTTP server producer instead of
the cache producer: if the producer filled the cache too fast it would
exhaust memory, but the side effect of this behavior is that the disk cache
consumer is paced to the speed of the slow client, which in turn limits
subsequent clients' download bitrate. I think this is really bad for a
caching proxy product, because the cache fill speed depends on the *first*
client's speed (note that read_while_writer has to be enabled in my case).
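[To make the pacing effect concrete, here is a toy model — plain Python, not ATS code — of the behavior described above, under the assumption that a read_while_writer reader can only consume bytes the cache fill has already written, so its throughput is capped by the fill rate.]

```python
# Toy model of read_while_writer pacing (illustrative, not ATS code).
# The cache fill is driven by the HttpTunnel's server producer, which is
# paced to its slowest consumer -- here, the first (slow) client. A later
# reader cannot outrun the fill, so it inherits that cap.

def effective_rate_kbps(fill_rate: int, client_rate: int) -> int:
    """A read-while-writer reader is limited by the cache fill rate."""
    return min(fill_rate, client_rate)

origin_rate = 10_000   # origin could deliver 10 Mbps
client1_rate = 100     # first (slow) client: 100 kbps
client2_rate = 5_000   # second (fast) client: 5 Mbps

# The fill is paced to client 1, not to what the origin could deliver.
fill_rate = min(origin_rate, client1_rate)

print(effective_rate_kbps(fill_rate, client2_rate))  # capped at 100 kbps
```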

My question is: is there any workaround (other than the flow control
feature) for this issue, and is there any plan to enhance this, e.g. by
attaching the first client to the disk cache producer?

