directory-dev mailing list archives

From Emmanuel Lécharny <>
Subject Re: TXN WORK: advice needed on how to deal with logical caches
Date Sun, 22 Apr 2012 06:39:56 GMT
On 4/22/12 7:16 AM, Selcuk AYA wrote:
> Regarding the caches, I have a question about the access control
> and other admin point caches maintained by the admin point
> interceptor. It seems that during a modify operation, the add/remove of admin
> role attributes is processed on clones of the admin point caches, but the
> original caches are never modified. So this piece of code doesn't seem to
> be working. Can you guys confirm this, and let me know if turning on
> this piece of code would be safe?

The AdministrativePointInterceptor is currently being deeply reworked. 
This work started last year, but we were blocked by other problems, and 
it was put on hold until we get the needed fixes (namely, a decision 
about how we evaluate the subentries). I have no idea whether activating 
those caches will work, or not, at this point.

Right now, APs are used to manage CollectiveAttributes and ACIs. They are 
also intended to be used for subSchema and Triggers, but at the moment we 
only support one principal subschema associated with the rootDSE, and 
triggers are pending, with their tests ignored, until we get some time to 
review them too.

You have to consider that APs are not the simplest feature in the server, 
and they will need some love in the near future.

If we focus on the cache, without adding the complexity of the AP 
management to the full picture, suffice it to say that concurrent 
modifications on subentries are very unlikely to occur, and we could even 
forbid such an action. AdministrativePoints are supposed to be handled by 
an administrator, not by a user...

Hope it helps.

> thanks
> Selcuk
> On Thu, Apr 12, 2012 at 9:11 AM, Selcuk AYA<>  wrote:
>> On Thu, Apr 12, 2012 at 6:50 AM, Emmanuel Lécharny<>  wrote:
>>> A bit late, but still, some more thoughts about the entry cache... Let me
>>> add some comments in this mail to be sure I understood what you have in
>>> mind...
>>> On 4/8/12 9:16 PM, Selcuk AYA wrote:
>>>> I am about to revisit the logical caches issue. My plan is to do the
>>>> following to handle all these caches in a generic way:
>>>> - a single version number is kept for all caches.
>>> The latest, I guess.
>> yes.
>>>> - a thread starting a txn read locks an internal readwrite lock.
>>> fine.
>>>> - when a thread needs to modify a cache, it upgrades its lock to an
>>>> exclusive lock.
>>> It will block all the reads on the cache until the cache update is done,
>>> right?
>>>> If it detects a version change during this time, it
>>>> throws a conflict exception. If not, it bumps up the version number and
>>>> changes the cache.
>>> As the write lock will be exclusive, I assume that the cache modification
>>> will be done by a single thread. Now, there is one race condition that can
>>> occur if the thread modifying the cache has a revision number lower than the
>>> current revision number: that means the cache has been changed by another
>>> thread. The timeline for such a case would be:
>>> time arrow --->
>>> T(r1) o-------------[r1] modify cache
>>> T(r2)      o-----[r2] modify cache
>>> When T(r1) tries to modify the cache, the cache already has a higher revision
>>> in it (r2), even though the T(r1) thread was started before.
>>> In this case, we will throw a conflict exception on T(r1)
>>> Is that what you have in mind ?
>> yes this is correct.
>>>> - After committing, the thread releases the lock.
>>>> - If a thread aborts its txn, then it notifies the interceptors in its
>>>> interceptor chain of the abort. Any interceptor can then rebuild its
>>>> cache from what is on disk at this point. I am assuming this is
>>>> possible for all logical caches.
>>> What about aggregating all the cache updates we do in all the interceptors
>>> into a single CacheInterceptor, responsible for the update of all the caches?
>>> The idea would be to globally lock the caches a single time instead of
>>> doing so in many places. Accessing the caches would be done through a helper
>>> class masking the access to the internal caches, with proper locks shared by
>>> all the threads.
>>> Sounds good?
>> I would prefer to implement it as is today because I feel it is going
>> to be easier for me.
>>> --
>>> Regards,
>>> Cordialement,
>>> Emmanuel Lécharny

Emmanuel Lécharny
