trafficserver-users mailing list archives

From Alan Carroll <>
Subject Re: Understanding ioBufAllocator behaviour
Date Wed, 24 May 2017 15:39:42 GMT
That can certainly happen and is a known problem, but it looked like Nick's scenario was a
constant load via a test application and he saw unbounded growth in a single iobuf bucket.
For threads, there is a single global pool and each thread keeps a smaller pool from the global
one (via the ProxyAllocator instances). The ProxyAllocator has a high and a low water mark:
when the number of items held by the thread exceeds the high water mark, items are released back
to the global pool until only the low-water-mark count remains. The values for these are in the
128-512 range, so not on the same scale as this memory growth.
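A minimal sketch of that high/low water mark drain, with illustrative stand-in types (these are not the actual ATS classes, and the 512/128 thresholds are assumed values in the range mentioned above):

```cpp
#include <cstddef>
#include <mutex>
#include <vector>

// Illustrative stand-in for the single global pool.
struct GlobalPool {
    std::mutex lock;
    std::vector<void*> items;
    void release(std::vector<void*>& batch) {
        std::lock_guard<std::mutex> g(lock);
        items.insert(items.end(), batch.begin(), batch.end());
        batch.clear();
    }
};

// Illustrative stand-in for a per-thread ProxyAllocator freelist.
struct ThreadPool {
    static constexpr std::size_t HIGH_WATER = 512;  // assumed threshold
    static constexpr std::size_t LOW_WATER  = 128;  // assumed threshold
    std::vector<void*> freelist;

    // Returning an item locally: once the count exceeds the high water
    // mark, drain back to the global pool until only LOW_WATER remain.
    void free_item(void* p, GlobalPool& global) {
        freelist.push_back(p);
        if (freelist.size() > HIGH_WATER) {
            std::vector<void*> overflow(freelist.begin() + LOW_WATER,
                                        freelist.end());
            freelist.resize(LOW_WATER);
            global.release(overflow);
        }
    }
};
```

The point of the thresholds is that a thread's private cache is bounded by a few hundred items, so the per-thread pools cannot account for multi-gigabyte growth.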
There's been lots of discussion about jemalloc. What we lack is production performance data
to see what the impact would be. We're working on that. As far as I understand it (Phil and
Leif know more) we would keep the ProxyAllocators but instead of releasing to a global pool
the memory would be released to jemalloc for re-use, thereby strongly bounding the amount
of memory in a particular iobuf bucket.

On Wednesday, May 24, 2017, 10:29:32 AM CDT, Kapil Sharma (kapsharm) <> wrote:
On plateauing - not necessarily; we do see memory consumption increasing continuously
in our deployments as well. It depends on the pattern of segment sizes over time.
ATS uses power-of-2 allocators for its memory pools - there are 15 of those, ranging from 128 bytes
to 2 MB if my memory serves me right - and these are per thread! ATS will choose the optimal
allocator for each segment.
As Alan mentioned, once chunks are allocated, they are never freed.
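The bucket choice can be sketched as rounding a request up to the next power of two; a toy version following the 128 B to 2 MB range described above (the real ATS internals differ in detail):

```cpp
#include <cstddef>

// Toy bucket selector: 15 power-of-two buckets, from 128 B (index 0)
// up to 2 MB (index 14), per the sizes described above.
constexpr std::size_t kMinBlock  = 128;
constexpr int         kNumBuckets = 15;

int bucket_index(std::size_t request) {
    std::size_t block = kMinBlock;
    int idx = 0;
    while (block < request && idx < kNumBuckets - 1) {
        block <<= 1;  // next power of two
        ++idx;
    }
    return idx;  // smallest bucket whose block size fits the request
}
```

Note that a request just over a boundary (say 129 bytes) lands in the next bucket up, which is why the traffic's segment-size distribution determines which buckets grow.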
Here is a totally artificial example just to make the point (please correct if my understanding
is flawed):* the traffic pattern was such that initially only 2M allocators were used then
ATS will keep allocating 2M chunks until RAM cache limit (lets say it is 64GB) is reached.*
Now traffic pattern changed (smaller fragment requests), and only 1M allocators are used,
ATS will now keep allocating 1M chunks, again capping at 64GB. But in the end ATS would have
allocated 128GB well over RAM cache size limit….
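That worst case can be modeled directly: each bucket's holdings only ever grow and can individually approach the RAM cache limit, so the totals add across buckets (a toy model with assumed numbers from the example, not ATS code):

```cpp
#include <cstdint>
#include <map>

// Toy model: per-bucket allocated bytes only grow; nothing is returned.
constexpr std::uint64_t kRamCacheLimit = 64ULL << 30;  // 64 GB, per the example

std::map<std::uint64_t, std::uint64_t> allocated;  // block size -> bytes held

// Serve traffic of one segment size until that bucket hits the limit.
void run_phase(std::uint64_t block_size) {
    while (allocated[block_size] + block_size <= kRamCacheLimit)
        allocated[block_size] += block_size;
}

std::uint64_t total_held() {
    std::uint64_t t = 0;
    for (auto& kv : allocated) t += kv.second;
    return t;  // sum across buckets; memory never moves between them
}
```

Running a 2 MB phase followed by a 1 MB phase leaves 64 GB held in each bucket: 128 GB total against a 64 GB cache limit, matching the example above.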

In the past there was a prototype of reclaimable buffer support in ATS, but I believe
it was removed in 7.0? Also there is recent discussion of adding jemalloc?

On May 24, 2017, at 11:01 AM, Alan Carroll <> wrote:
One issue is that memory never moves between the iobuf sizes. Once a chunk of memory is used
for a specific iobuf slot, it's there forever. But unless something is leaking, the total
size should eventually plateau, certainly within less than a day under a basically constant
load. There will be some growth due to blocks being kept in thread-local allocation pools,
but again that should level off in less time than you've run.

On Wednesday, May 24, 2017, 9:50:39 AM CDT, Dunkin, Nick <> wrote:
Hi Alan,

This is 7.0.0

I only see this behavior on ioBufAllocator[0], [4] and [5].  The other ioBufAllocators’
usage looks as I would expect (i.e. allocated goes up then flat), so I was thinking it was
more likely something to do with my configuration or use-case.

I’d also just like to understand, at a high level, how the ioBufAllocators are used.



From: Alan Carroll <>
Reply-To: "" <>
Date: Wednesday, May 24, 2017 at 10:33 AM
To: "" <>
Subject: Re: Understanding ioBufAllocator behaviour

Honestly it sounds like a leak. Can you specify which version of Traffic Server this is?

On Wednesday, May 24, 2017, 8:22:46 AM CDT, Dunkin, Nick <> wrote:


I have a load test that I’ve been running for a number of days now.  I’m using the memory
dump logging in traffic.out and I’m trying to understand how Traffic Server allocates and
reuses memory.  I’m still quite new to Traffic Server.

Nearly all of the memory traces look as I would expect, i.e. memory is allocated and reused
over the lifetime of the test.  However my readings from ioBufAllocator[0] show a continual
increase in allocated AND used.  I am attaching a graph.  (FYI – This graph covers approximately
3 days of continual load test.)

I would have expected to start seeing reuse in ioBufAllocator by now, like I do in the other
ioBufAllocators.  Can someone help me understand what I’m seeing?

Many thanks,

Nick Dunkin


Principal Engineer

o:   678.258.4071


4375 River Green Pkwy # 100, Duluth, GA 30096, USA
