Thanks Sudheer, it was an error in my plugin config. I have corrected it,
but will "exclude Content-Length <1000" exclude objects smaller than 1000?
What is the unit of the 1000 value? Bytes?
What if I add "exclude Content-Length >100000000" (100 MB)?
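If the background_fetch plugin compares Content-Length in bytes (an assumption worth verifying against the plugin's documentation), a rules file combining both bounds with the includes below might look like this sketch, where the first line skips tiny objects and the second skips anything over roughly 100 MB:

```
exclude Content-Length <1000
exclude Content-Length >100000000
include Content-Type video/mp4
```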
--
Regards,
Faisal.
------ Original Message ------
From: "Sudheer Vinukonda" <sudheerv@yahoo-inc.com>
To: "users@trafficserver.apache.org" <users@trafficserver.apache.org>;
"Muhammad Faisal" <faisalusuf@yahoo.com>
Sent: 3/17/2016 1:28:44 AM
Subject: Re: High Upstream Utilization Background Fetch Plugin behavior
>Is there a typo in the below line?
>
>exclude Content -Length < 1000
>
>If there is, please try changing it as below:
>
>exclude Content-Length <1000
>
>Once you change the config, you may want to validate it functionally by
>sending a range request for a large (> 1000 bytes) object to see if
>it behaves correctly.
>
>
>"- Set Max object size to 100 or 200 MB"
>
>The max object size only affects the size of objects stored in the
>cache. It does not prevent ATS from downloading larger objects from your
>upstream, so it will not help save upstream bandwidth. If anything,
>it may make things worse, since ATS will download those large objects
>for every request (however long-tail they may be).
>
>Thanks,
>
>Sudheer
>
>
>On Wednesday, March 16, 2016 12:15 PM, Muhammad Faisal
><faisalusuf@yahoo.com> wrote:
>
>
>Hi,
>After more than two months of testing and great support from the entire
>ATS team, we have finally integrated ATS into a production scenario as a
>transparent cache. Currently we have put limited traffic through it,
>approximately 200 Mbps (1,000+ users). After putting actual load on the
>server, I'm observing that upstream utilization has increased. The last
>stats I viewed were 130 Mbps (upstream) and 41.6 Mbps (to clients), so
>caching is having a negative impact instead.
>
>
>It is worth mentioning that I have used the background_fetch plugin to
>cache range requests (to improve cache performance, especially for
>streaming). The max object size is currently set to zero. During
>testing I observed that, when a client downloads a large file, ATS
>starts fetching the full object at whatever upstream capacity is
>available. A large file such as a 600 MB ISO consumes a lot of
>bandwidth to fill the object while it is delivered to the user at the
>allocated speed; on the next hit, however, the object was served from
>the cache.
>
>In the production scenario, the above behavior is causing increased
>upstream utilization, while cache hits on larger objects are rare.
>
>I need expert opinion on how to improve traffic savings. What comes to
>my mind is:
>
>- Set the max object size to 100 or 200 MB
>- Keep using the background_fetch plugin and exclude the larger objects
>(max object size)
>- How can I exclude larger files (e.g., above 200 MB) from background
>fetch?
>
>Below is my background_fetch config:
>
>exclude Content -Length < 1000 (this is to exclude small objects less
>than 1000 bytes?)
>include Content-Type video/mp4
>exclude Content-Type text
>include Content-Type video/quicktime
>include Content-Type video/3gpp
>include Content-Type application/octet-stream
>
>
>--
>Regards,
>Faisal.
>
>
>