db-derby-dev mailing list archives

From "Knut Anders Hatlen (JIRA)" <j...@apache.org>
Subject [jira] Commented: (DERBY-2911) Implement a buffer manager using java.util.concurrent classes
Date Wed, 19 Sep 2007 13:00:51 GMT

    [ https://issues.apache.org/jira/browse/DERBY-2911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12528750 ]

Knut Anders Hatlen commented on DERBY-2911:
-------------------------------------------

Some thoughts about replacement algorithm and synchronization:

In the old cache manager, one basically had a global synchronization lock that every user
of the cache needed to obtain before entering the manager. One advantage of that approach
is that there can't be deadlocks as long as there's only one lock. With the more fine-grained
synchronization in the new buffer manager, one needs to be careful not to introduce the risk
of deadlocks.
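
To make the contrast concrete, here is a minimal sketch of the two styles. The class and
field names (CacheEntry, GlobalLockCache, FineGrainedCache) are hypothetical, not the actual
Derby classes; the entry type just wraps a ReentrantLock so the later sketches can reuse it.

import java.util.HashMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical entry type used by the sketches in this comment.
class CacheEntry {
    private final ReentrantLock mutex = new ReentrantLock();
    Object cacheable;                 // the reusable Cacheable held by the entry
    void lock()   { mutex.lock(); }
    void unlock() { mutex.unlock(); }
}

// Old style: one global monitor guards every operation. Deadlock is impossible,
// but all threads serialize on the same lock.
class GlobalLockCache {
    private final HashMap<Object, CacheEntry> entries = new HashMap<Object, CacheEntry>();
    synchronized CacheEntry find(Object key) { return entries.get(key); }
}

// New style: lookups go through a concurrent map and each entry has its own
// lock, so the ordering of entry locks and other internal locks now matters.
class FineGrainedCache {
    private final ConcurrentHashMap<Object, CacheEntry> entries =
            new ConcurrentHashMap<Object, CacheEntry>();
    CacheEntry find(Object key) {
        CacheEntry e = entries.get(key);
        if (e != null) {
            e.lock();
            try { /* validate and use the entry */ } finally { e.unlock(); }
        }
        return e;
    }
}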

The current implementation of ConcurrentCache should not be deadlock prone, since a thread
never holds locks on more than one entry in the cache at a time. When the replacement algorithm
is implemented, the situation will be more complex, as there will be other data structures
that we need to lock.

Take, for instance, insertion of a new item into the cache. Then we need to synchronize on
at least (A) the entry to insert, (B) the clock data structure, and (C) the entry to evict
from the cache. In a previous comment, I mentioned that it is OK to lock two entries if the
first one is about to be inserted and the second one is already in the cache, so locking C
while holding the lock on A is OK. But allowing C to be locked while holding the locks on
both A and B would be problematic. If someone calls CacheManager.remove(), we first need to
lock the entry that is about to be removed, and then lock the clock data structure to actually
remove the entry. This means that remove() obtains the synchronization locks in the order
C->B, which could lead to deadlocks if the insertion of entries used the lock order A->B->C.
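
For illustration, a remove() path along those lines might look like the sketch below. The
names are again hypothetical (CacheEntry is the entry type from the sketch above, and the
clock is reduced to a plain list guarded by its own monitor); the point is only the C->B
lock order.

import java.util.ArrayList;
import java.util.concurrent.ConcurrentHashMap;

class RemoveSketch {
    private final ConcurrentHashMap<Object, CacheEntry> entries =
            new ConcurrentHashMap<Object, CacheEntry>();
    // Stand-in for the clock data structure (B), guarded by its own monitor.
    private final ArrayList<CacheEntry> clock = new ArrayList<CacheEntry>();

    void remove(Object key) {
        CacheEntry e = entries.get(key);
        if (e == null) return;
        e.lock();                    // C: the entry that is about to be removed
        try {
            synchronized (clock) {   // B: the clock data structure
                clock.remove(e);
                entries.remove(key);
            }
        } finally {
            e.unlock();              // overall lock order: C -> B
        }
    }
}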

To solve this, I think we would have to break up the A->B->C lock chain somehow. What comes
to mind is that we could unlock B before we lock C, and then reobtain the lock on B after we
have locked C. That would leave a short window in which others could modify the relationship
between B and C, so we would have to revalidate that what we thought we knew is still true.

So instead of doing this

lock A (entry to insert)
-->lock B (clock)
---->fetch C (entry to evict) from B
---->lock C
------>reuse Cacheable from C in A
------>remove C from B
------>unlock C
---->insert A into B
---->unlock B
-->unlock A
return

we could do this

lock A (entry to insert)
-->lock B (clock)
---->fetch C (entry to evict)
---->unlock B
-->lock C
---->validate that no one else evicted C after we unlocked B (otherwise retry the preceding
steps)
---->reuse Cacheable from C in A
---->lock B
------>remove C from B
------>insert A into B
------>unlock B
---->unlock C
-->unlock A
return

This way we break the A->B->C locking down into the sequences A->B and A->C->B, neither of
which conflicts with the C->B locking required by CacheManager.remove().
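
As a rough Java sketch of that revised insertion path (continuing the hypothetical
CacheEntry/clock names from the sketches above; chooseVictim() and stillInClock() are assumed
helpers, and this is not the actual patch):

import java.util.ArrayList;

class InsertSketch {
    // B: stand-in for the clock data structure, guarded by its own monitor.
    private final ArrayList<CacheEntry> clock = new ArrayList<CacheEntry>();

    void insert(CacheEntry a) {                 // A: the entry to insert
        a.lock();
        try {
            while (true) {
                CacheEntry c;
                synchronized (clock) {          // A->B: pick a victim, then release B
                    c = chooseVictim();
                }
                c.lock();                       // A->C
                try {
                    // Validate that no one evicted C while B was unlocked;
                    // otherwise retry the preceding steps.
                    if (!stillInClock(c)) {
                        continue;
                    }
                    a.cacheable = c.cacheable;  // reuse Cacheable from C in A
                    synchronized (clock) {      // A->C->B
                        clock.remove(c);        // remove C from B
                        clock.add(a);           // insert A into B
                    }
                    return;
                } finally {
                    c.unlock();
                }
            }
        } finally {
            a.unlock();
        }
    }

    // Eviction is only attempted when the cache is full; victim selection
    // (clock rotation, recently-used bits) is elided from this sketch.
    private CacheEntry chooseVictim() { return clock.get(0); }

    private boolean stillInClock(CacheEntry c) {
        synchronized (clock) { return clock.contains(c); }
    }
}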

> Implement a buffer manager using java.util.concurrent classes
> -------------------------------------------------------------
>
>                 Key: DERBY-2911
>                 URL: https://issues.apache.org/jira/browse/DERBY-2911
>             Project: Derby
>          Issue Type: Improvement
>          Components: Performance, Services
>    Affects Versions: 10.4.0.0
>            Reporter: Knut Anders Hatlen
>            Assignee: Knut Anders Hatlen
>            Priority: Minor
>         Attachments: d2911-1.diff, d2911-1.stat, d2911-2.diff, d2911-3.diff, d2911-4.diff,
d2911-entry-javadoc.diff, d2911-unused.diff, d2911-unused.stat, d2911perf.java
>
>
> There are indications that the buffer manager is a bottleneck for some types of multi-user
load. For instance, Anders Morken wrote this in a comment on DERBY-1704: "With a separate
table and index for each thread (to remove latch contention and lock waits from the equation)
we (...) found that org.apache.derby.impl.services.cache.Clock.find()/release() caused about
5 times more contention than the synchronization in LockSet.lockObject() and LockSet.unlock().
That might be an indicator of where to apply the next push".
> It would be interesting to see the scalability and performance of a buffer manager which
exploits the concurrency utilities added in Java SE 5.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

