From: Stefan Fuhrmann <stefan.fuhrm...@wandisco.com>
Subject: Re: [PATCH] APR pool scalability and efficiency
Date: Tue, 13 May 2014 16:28:00 GMT
On Sun, May 11, 2014 at 9:20 PM, Stefan Fritsch <sf@sfritsch.de> wrote:

> On Sunday 11 May 2014 16:07:32, Stefan Fuhrmann wrote:
> > On Sun, May 11, 2014 at 10:28 AM, Stefan Fritsch <sf@sfritsch.de>
> wrote:
> > > > Its root cause was a large data
> > > > structure (many small allocations) being built up in a single
> > > > pool. Multiplied by multiple threads / concurrent requests,
> > > > that exceeded the ~0.5GB limit (~64k regions x 8kB).
> > > > [apr-pool-growth.*]
> > >
> > > Interesting. From my experience, Linux seems to merge adjacent
> > > anonymous mappings. How does the process map look when it goes
> > > OOM?
> > > Does subversion create lots of file-backed mappings that are
> > > interspersed with anon mappings? Or does the OOM happen when one
> > > thread unmaps its memory, causing a lot of fragmentation?
> >
> > Nope. Although the original code contains a few temporary
> > subpools, the following code already exhibits the problem
> > when run in the SVN test suite:
> >
> > static void test_pools(apr_pool_t *pool)
> > {
> >   int i;
> >   for (i = 0; i < 10000000; ++i)
> >     {
> >       if (i % 100000 == 0)
> >         printf("%d\n", i);
> >       apr_palloc(pool, 100);
> >     }
> > }
> >
> > Output:
> >
> > ...
> > 5000000
> > 5100000
> > libsvn: Out of memory - terminating application.
> > *** Program received signal SIGABRT (Aborted) ***
> >
> > So, it seems that the individual regions do not get combined.
> > IIRC from earlier debug output, they are actually adjacent, though.
>
> Strange. With 32-bit userspace, this works for me up to 39490775 for
> the upper bound, which is near the expected limit. With 64-bit,
> higher values work, too.
>
> $ cat /proc/sys/vm/max_map_count
> 65530
> $ cat /proc/sys/kernel/randomize_va_space
> 2
> $ uname -a
> Linux k 3.14-1-amd64 #1 SMP Debian 3.14.2-1 (2014-04-28) x86_64
> GNU/Linux
>

$ cat /proc/sys/vm/max_map_count
65530
$ cat /proc/sys/kernel/randomize_va_space
2
$ uname -a
Linux Maccie 3.13.0-5-generic #20-Ubuntu SMP Mon Jan 20 19:56:38 UTC 2014
x86_64 x86_64 x86_64 GNU/Linux

> I only see one region in /proc/$pid/maps
>

I see adjacent regions (one per mmap call) with the same
attributes, but they do not get combined. Playing around with
MAP_HUGETLB and other flags usually resulted in an early
failure instead of helping in any way.
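
A minimal standalone sketch (assuming Linux and procfs; not part of
the patch) makes that visible: it performs a number of anonymous 8kB
mmaps and counts the entries in /proc/self/maps before and after. If
the kernel merged adjacent anonymous mappings with identical
attributes, the count would barely change; otherwise it grows by
about one entry per call:

#include <stdio.h>
#include <sys/mman.h>

/* Number of distinct mappings = number of lines in /proc/self/maps. */
static int count_map_entries(void)
{
    FILE *f = fopen("/proc/self/maps", "r");
    int c, lines = 0;
    if (!f)
        return -1;
    while ((c = fgetc(f)) != EOF)
        if (c == '\n')
            ++lines;
    fclose(f);
    return lines;
}

int main(void)
{
    int i, before = count_map_entries();
    for (i = 0; i < 1000; ++i)
        mmap(NULL, 8192, PROT_READ | PROT_WRITE,
             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); /* error checks omitted */

    printf("mappings: %d before, %d after 1000 mmaps\n",
           before, count_map_entries());
    return 0;
}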


> OTOH, if I enable the shiny new --enable-allocator-guard-pages, I get
> lots of fragmentation and failure at around 2500000, also as expected.
>

The guard pages have not been enabled (the feature is not even
available in the older APR releases that I was using).


> Maybe your kernel has some randomization feature that my kernel lacks.
> Though as far as I could find out, randomize_va_space == 2 means that
> heap+mmap randomization is enabled.
>

I use the standard kernels that come with the respective
Ubuntu distribution. Sadly, neither my interwebs searches
nor a look at the kernel config turned up anything obvious.

In light of the new guard page feature, having fewer
mmap regions would be helpful independently of the Ubuntu
issue that I'm seeing. So, it seems that my patch would
be a good complement to that new feature.
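
(Back-of-the-envelope, assuming one mmap per 8kB pool block and APR's
8-byte alignment: a 100-byte apr_palloc takes 104 bytes, so each 8kB
block holds roughly (8192 - block header) / 104 ~ 78 allocations, and
the default max_map_count of 65530 would be exhausted after about
65530 * 78 ~ 5.1 million allocations, which matches where the test
loop above aborts.)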

-- Stefan^2.
