apr-dev mailing list archives

From Stefan Fritsch <...@sfritsch.de>
Subject Re: [PATCH] APR pool scalability and efficiency
Date Sun, 11 May 2014 19:20:20 GMT
On Sunday 11 May 2014 16:07:32, Stefan Fuhrmann wrote:
> On Sun, May 11, 2014 at 10:28 AM, Stefan Fritsch <sf@sfritsch.de> wrote:
> > > Its root cause was a large data
> > > structure (many small allocations) being built up in a single
> > > pool. Multiplied across threads / concurrent requests, that
> > > exceeded the ~0.5GB limit (~64k regions x 8kB).
> > > [apr-pool-growth.*]
> > 
> > Interesting. From my experience, Linux seems to merge adjacent
> > anonymous mappings. How does the process map look when it goes
> > OOM?
> > Does subversion create lots of file-backed mappings that are
> > interspersed with anon mappings? Or does the OOM happen when one
> > thread unmaps its memory, causing a lot of fragmentation?
> 
> Nope. Although the original code contains a few temporary
> subpools, the following code already exhibits the problem
> when run in the SVN test suite:
> 
> static void test_pools(apr_pool_t *pool)
> {
>   int i;
>   /* 10 million small allocations from a single pool that is
>    * never cleared or destroyed */
>   for (i = 0; i < 10000000; ++i)
>     {
>       if (i % 100000 == 0)
>         printf("%d\n", i);
>       apr_palloc(pool, 100);
>     }
> }
> 
> Output:
> 
> ...
> 5000000
> 5100000
> libsvn: Out of memory - terminating application.
> *** Program received signal SIGABRT (Aborted) ***
> 
> So, it seems that the individual regions do not get combined.
> IIRC from earlier debug output, they are actually adjacent, though.

Strange. With 32bit userspace, this works for me with the upper bound 
raised up to 39490775, which is near the expected limit: apr_palloc 
rounds the 100 bytes up to 104, and 39490775 * 104 bytes is about 
3.8GB, i.e. close to the 4GB 32bit address space. With 64bit, higher 
values work, too.

$ cat /proc/sys/vm/max_map_count
65530
$ cat /proc/sys/kernel/randomize_va_space
2
$ uname -a
Linux k 3.14-1-amd64 #1 SMP Debian 3.14.2-1 (2014-04-28) x86_64 GNU/Linux

I only see one region in /proc/$pid/maps.
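
For reference, the merging can be demonstrated outside of APR with a 
toy program like the one below (just a sketch of what I tested, not 
code from the APR tree; the 8kB matches the default allocator region 
size):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

#define REGIONS     1000
#define REGION_SIZE (8 * 1024)

int main(void)
{
    char cmd[64];
    int i;

    /* Create many anonymous mappings with identical protection.
     * If the kernel places them adjacently, it merges them into a
     * single VMA, so /proc/self/maps stays short. */
    for (i = 0; i < REGIONS; i++) {
        void *p = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
    }

    /* Far fewer lines than REGIONS here means merging happened. */
    snprintf(cmd, sizeof(cmd), "wc -l /proc/%d/maps", (int)getpid());
    return system(cmd);
}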

OTOH, if I enable the shiny new --enable-allocator-guard-pages, I get 
lots of fragmentation and a failure at around 2500000 iterations, 
also as expected.
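
That fits: presumably each region then ends in a PROT_NONE guard 
page, the differing protection keeps neighbouring mappings from being 
merged, and with two map entries per 8kB region (~78 allocations of 
104 bytes each), ~32k regions hit the 65530 limit after roughly 2.5 
million iterations. A toy version of that effect (again my own 
sketch, not the actual guard-page code in APR):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

#define REGIONS     1000
#define REGION_SIZE (8 * 1024)

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    char cmd[64];
    int i;

    for (i = 0; i < REGIONS; i++) {
        char *p = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        /* Revoke access to the last page of each region. The
         * differing protection splits the VMA and keeps adjacent
         * regions from merging, so the number of map entries now
         * grows with the number of regions until vm.max_map_count
         * is hit. */
        if (mprotect(p + REGION_SIZE - page, page, PROT_NONE) != 0) {
            perror("mprotect");
            return 1;
        }
    }

    /* One line per VMA; expect roughly 2 * REGIONS now. */
    snprintf(cmd, sizeof(cmd), "wc -l /proc/%d/maps", (int)getpid());
    return system(cmd);
}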

Maybe your kernel has some randomization feature that my kernel lacks. 
Though as far as I could find out, randomize_va_space == 2 means that 
heap+mmap randomization is enabled.


> -- Stefan^2.
> 
> [Once again showing that numbering one's name resolves ambiguity.]

Though counting to two is not sufficient to make it unique ;)
