db-derby-dev mailing list archives

From "Knut Anders Hatlen (JIRA)" <j...@apache.org>
Subject [jira] Commented: (DERBY-2911) Implement a buffer manager using java.util.concurrent classes
Date Tue, 28 Aug 2007 13:57:30 GMT

    [ https://issues.apache.org/jira/browse/DERBY-2911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12523242 ]

Knut Anders Hatlen commented on DERBY-2911:

I am interested in working on this issue. What I would like to
achieve is to make it possible for different threads to access the
buffer manager without blocking each other, as long as they request
different objects and the requested objects are present in the
cache. (That is, I think it is OK that accesses to the same object or
accesses that need to fetch an object into the cache might have to
wait for each other, as they do in the current buffer manager.)

There are two ways to achieve this:

  1) Rewrite the old buffer manager (Clock) so that it allows more
     concurrent access (possibly splitting the HashMap and changing
     the synchronization model for Clock/CachedItem)

  2) Write a new buffer manager which uses the concurrency utilities
     in newer Java versions (ConcurrentHashMap, ReentrantLock, and so
     on)

I like option 2 best myself, mostly because it allows us to reuse the
wheel (concurrency utils) rather than reinventing it. The downside is
that the old implementation must be kept in the code as long as we
support JVMs without the concurrency utilities (JDK 1.4 and
Foundation). Because of the clearly defined interface (CacheManager)
for the buffer manager, adding an alternative implementation should be
transparent to the rest of the Derby code, though.
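
To illustrate why option 2 is appealing, here is a minimal sketch of a
cache lookup built on ConcurrentHashMap with per-entry locking. The
class and method names are illustrative only (this is not Derby's
CacheManager API), and it is written against modern Java for brevity:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative sketch, not Derby code: ConcurrentHashMap handles the
// lookup without a global lock, so threads requesting different keys
// never block each other; only threads asking for the same key contend
// on that entry's monitor (e.g. while it is being faulted in).
class ConcurrentCacheSketch<K, V> {
    private static final class Entry<V> {
        V value;          // guarded by the entry's own monitor
        boolean valid;    // false until the value has been loaded
    }

    private final ConcurrentHashMap<K, Entry<V>> map =
            new ConcurrentHashMap<>();

    V find(K key, Function<K, V> loader) {
        Entry<V> entry = map.get(key);
        if (entry == null) {
            Entry<V> fresh = new Entry<>();
            // Atomic insert-if-missing; no need for a map-wide lock.
            entry = map.putIfAbsent(key, fresh);
            if (entry == null) {
                entry = fresh;  // we won the race to create the entry
            }
        }
        synchronized (entry) {
            if (!entry.valid) {
                entry.value = loader.apply(key);
                entry.valid = true;
            }
            return entry.value;
        }
    }
}
```

The point of the sketch is that the hard concurrency problems (lock
striping, safe publication) are already solved inside
ConcurrentHashMap, which is the "reuse the wheel" argument above.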

If we decide to go for option 2, I will try to implement it
incrementally with these steps:

  1) Implement a buffer manager with no replacement policy (that is,
     it ignores the maximum size and never throws data out). After
     this step, the buffer manager should allow concurrent access for
     all threads that request different objects.

  2) Implement the replacement policy. After this step, the buffer
     manager should be able to throw out objects that have not been
     used for some time, and thereby avoid growing bigger than the
     maximum size.

  3) Enable the new buffer manager by default for JDK 1.5 and higher.
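
One way to keep steps 1 and 2 cleanly separated is to make the
replacement policy pluggable, so step 1 simply plugs in a policy that
never evicts. A hypothetical sketch of that split (these interfaces are
my own illustration, not the actual CacheManager contract):

```java
// Hypothetical pluggable policy: step 1 uses NoEvictionPolicy, and
// step 2 can swap in a real policy (e.g. clock) without touching the
// concurrent lookup path.
interface ReplacementPolicy<K> {
    void recordAccess(K key);  // note a cache hit on this key
    K chooseVictim();          // key to evict, or null for "do not evict"
}

final class NoEvictionPolicy<K> implements ReplacementPolicy<K> {
    public void recordAccess(K key) { }      // nothing to track in step 1
    public K chooseVictim() { return null; } // never throws data out
}
```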

In step 2, I think I will stick to the clock algorithm that we
currently use. Last year, a Google Summer of Code student investigated
different replacement algorithms for Derby. Although changing the
replacement algorithm is out of the scope of this issue, he suggested
some changes that would make it easier to switch replacement
algorithms. I will
see if I can get some ideas from his work.
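
For reference, the core of a clock sweep is small; a sketch (this is my
simplification, not the existing Clock class, and the sweep is assumed
single-threaded here):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative clock sweep: each frame carries a "recently used" bit
// set on every hit; the hand clears bits as it advances and evicts the
// first frame whose bit was already clear.
class ClockSketch {
    static final class Frame {
        final AtomicBoolean recentlyUsed = new AtomicBoolean(false);
        Object page;  // payload, unused in this sketch
    }

    private final Frame[] frames;
    private int hand;  // assumed: only one thread sweeps at a time

    ClockSketch(int size) {
        frames = new Frame[size];
        for (int i = 0; i < size; i++) frames[i] = new Frame();
    }

    void touch(int i) {
        frames[i].recentlyUsed.set(true);  // called on every cache hit
    }

    // Returns the index of the frame to evict.
    int findVictim() {
        while (true) {
            int victim = hand;
            hand = (hand + 1) % frames.length;
            // Clear the bit; evict only if it was already clear.
            if (!frames[victim].recentlyUsed.getAndSet(false)) {
                return victim;
            }
        }
    }
}
```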

Comments on this plan would be appreciated.

> Implement a buffer manager using java.util.concurrent classes
> -------------------------------------------------------------
>                 Key: DERBY-2911
>                 URL: https://issues.apache.org/jira/browse/DERBY-2911
>             Project: Derby
>          Issue Type: Improvement
>          Components: Performance, Services
>    Affects Versions:
>            Reporter: Knut Anders Hatlen
>            Priority: Minor
> There are indications that the buffer manager is a bottleneck for some types of multi-user
load. For instance, Anders Morken wrote this in a comment on DERBY-1704: "With a separate
table and index for each thread (to remove latch contention and lock waits from the equation)
we (...) found that org.apache.derby.impl.services.cache.Clock.find()/release() caused about
5 times more contention than the synchronization in LockSet.lockObject() and LockSet.unlock().
That might be an indicator of where to apply the next push".
> It would be interesting to see the scalability and performance of a buffer manager which
exploits the concurrency utilities added in Java SE 5.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
