stdcxx-dev mailing list archives

From Stefan Teleman <stefan.tele...@gmail.com>
Subject Re: STDCXX-1056 [was: Re: STDCXX forks]
Date Wed, 12 Sep 2012 04:21:49 GMT
On Tue, Sep 11, 2012 at 10:18 PM, Liviu Nicoara <nikkoara@hates.ms> wrote:

> AFAICT, there are two cases to consider:
>
> 1. Using the STDCXX locale database initializes the __rw_punct_t data in the
> first, properly synchronized pass through __rw_get_numpunct. All subsequent
> calls use the __rw_punct_t data to construct the returned objects.
> 2. Using the C library locales does the same in the first pass, via
> setlocale and localeconv, but setlocale synchronization is via a per-process
> lock. The facet data, once initialized, is used just like above.
>
> I probably missed this in the previous conversation, but did you detect a
> race condition in the tests if the facets simply forward to the
> private virtual interface? I.e., did you detect that the facet
> initialization code is unsafe? I think the facet __rw_punct_t data is safely
> initialized in both cases; it's the caching that is done incorrectly.

I originally thought so too, but now I'm having doubts. :-) And I
haven't tracked it down with 100% accuracy yet. Today I saw this
comment in src/facet.cpp, line 358:

// a per-process array of facet pointers sufficiently large
// to hold (pointers to) all standard facets for 8 locales
static __rw_facet*  std_facet_buf [__rw_facet::_C_last_type * 8];

This leads me to suspect that there is an upper limit of 8 locales plus
their standard facets. If the locales (and their facets) are being
recycled in and out of this 8-slot cache, that would explain the
other thing I noticed (which also answers your question): yes, I
have gotten the dreaded strcmp(3C) 'Assertion failed' in
22.locale.numpunct.mt when I implemented 22.locale.numpunct.mt in
a way similar to your tests. In theory that shouldn't happen, but it
did, which means that there's something going on with
behind-the-scenes facet re-initialization that I haven't found yet.
That would also partially explain your observation that the MT tests
perform much worse with caching than without.

This is all investigative work for tomorrow. :-)

And I agree with Martin that breaking the ABI in a minor release is
really not an option. I'm trying to find the best way of making these
facets thread-safe while inflicting the least horrible performance hit.

I will run your tests tomorrow and let you know. :-)

--Stefan

-- 
Stefan Teleman
KDE e.V.
stefan.teleman@gmail.com
