directory-dev mailing list archives

From Alex Karasulu <akaras...@apache.org>
Subject Re: [jira] [Updated] (DIRSERVER-1642) Unexpected behaviour in JdbmIndex
Date Mon, 29 Aug 2011 10:17:04 GMT
Thanks Selcuk for your efforts. This is a great summary and update of what
you've done. More inline ...


On Mon, Aug 29, 2011 at 12:17 PM, Selcuk AYA <ayaselcuk@gmail.com> wrote:

> Resending as the previous send didn't seem to make it...
>
> Hi I just attached my latest changes for the jdbm branch and wanted to
> give a status update and some technical details:
>
> 1) Summary
> *We now have a jdbm tree which treats find, insert, remove and browse
> as actions that execute in isolation. In particular, read actions will
> not be affected by ongoing structural changes to the tree and will
> only see data changes that completed before they started.
>
> *We allow one writer and multiple readers to execute concurrently.
> Synchronized operations are mostly removed.
>
> * Existing tests (except StoredProcedureIT) and the unit tests I
> added for the versioned tree pass (I ran mvn clean install
> -Dintegration). I think the problem with StoredProcedureIT is an
> existing one. There is a piece of code where I serialize and deserialize
> tuple values stored in the JDBM btree in order to do a deep copy. With
> StoredProcedureIT, the hello world stored procedure deserialization throws
> a UTFDataFormatException. On a clean branch, I added similar code to
> deserialize B+ tree page values right after they are serialized, and I
> hit the same issue. So I think this is an existing issue with stored
> procedure serialization/deserialization.
>
>
Yes, we need to get to the bottom of this. It must be something wrong
with the way the SP test works. If need be, let's create a JIRA issue for
this and come back to it later.



> 2) Changes above JDBM level
> * I added changes to call the newly added browser->close() interface
> when the cursors are closed or a cursor is repositioned.
> * I hit some existing issues where cursors are not closed. In
> particular, I had to change SubentryInterceptor.java to close the
> cursor after search operations, and change the JDBM container cursor to
> close the contained cursor when it is closed. If required, I can
> provide these changes as separate fixes.
>
>
Please do when you have a chance. It would be nice to get this into the M3
release to avoid consistency issues, especially where long-running operations
during replication updates are concerned.
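
For reference, a minimal sketch of the kind of cursor cleanup being described;
TupleBrowser and browseAll are placeholders, not the actual ADS/JDBM API:

    // Illustrative only: TupleBrowser stands in for the real browser/cursor
    // types; the point is the try/finally so close() is always reached.
    interface TupleBrowser
    {
        boolean getNext() throws Exception;

        void close(); // the newly added close() being discussed
    }

    final class CursorCleanupExample
    {
        static void browseAll( TupleBrowser browser ) throws Exception
        {
            try
            {
                while ( browser.getNext() )
                {
                    // consume the tuple ...
                }
            }
            finally
            {
                // Always release the browser so the versioned cache can advance
                // the minimum read version and reclaim old entries.
                browser.close();
            }
        }
    }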


> 3) Technical details at JDBM level:
>
> * The core functionality is in LRUCache.java. This implements a
> concurrent, versioned cache. There is a power-of-two number of hash
> buckets, with a lock covering each group of 8 buckets (lock striping).
> The number of hash buckets is x, where x is the closest power of two
> such that x < the maximum number of cache entries.
>
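
To make the bucket layout concrete, here is a rough sketch of the striping
scheme described above. The names are made up for illustration, and the
one-lock-per-8-buckets grouping is my reading of the description rather than
the actual LRUCache.java code:

    import java.util.concurrent.locks.ReentrantLock;

    final class StripedBuckets<K>
    {
        private final Object[] buckets;      // one chain head per hash bucket
        private final ReentrantLock[] locks; // one lock per group of 8 buckets

        StripedBuckets( int maxCacheEntries )
        {
            // closest power of two below the maximum number of cache entries
            int numBuckets = Integer.highestOneBit( Math.max( 2, maxCacheEntries - 1 ) );
            buckets = new Object[numBuckets];
            locks = new ReentrantLock[Math.max( 1, numBuckets / 8 )];

            for ( int i = 0; i < locks.length; i++ )
            {
                locks[i] = new ReentrantLock();
            }
        }

        ReentrantLock latchFor( K key )
        {
            int bucket = ( key.hashCode() & 0x7fffffff ) & ( buckets.length - 1 );
            return locks[bucket / 8]; // all buckets in a stripe share one lock
        }
    }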
> * The cache replacement policy is LRU. There are 16 LRU lists and each
> cache entry is assigned to one of them. Each LRU list is protected
> by a separate lock, and LRU replacement is supposed to be fast. Threads
> choose an LRU list based on a randomizer. Since replacement is fast and
> each thread randomly chooses an LRU list to replace from, LRU operations
> should not be a bottleneck.
>
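
A rough sketch of the randomized LRU selection, again with illustrative names
rather than the real LRUCache internals:

    import java.util.ArrayDeque;
    import java.util.concurrent.ThreadLocalRandom;
    import java.util.concurrent.locks.ReentrantLock;

    final class StripedLru<E>
    {
        private static final int NUM_LRUS = 16;

        private final ReentrantLock[] lruLocks = new ReentrantLock[NUM_LRUS];
        @SuppressWarnings("unchecked")
        private final ArrayDeque<E>[] lrus = new ArrayDeque[NUM_LRUS];

        StripedLru()
        {
            for ( int i = 0; i < NUM_LRUS; i++ )
            {
                lruLocks[i] = new ReentrantLock();
                lrus[i] = new ArrayDeque<E>();
            }
        }

        /** Add a new entry to a randomly chosen LRU list. */
        void add( E entry )
        {
            int idx = ThreadLocalRandom.current().nextInt( NUM_LRUS );
            lruLocks[idx].lock();
            try
            {
                lrus[idx].addFirst( entry ); // head = most recently used
            }
            finally
            {
                lruLocks[idx].unlock();
            }
        }

        /** Pick a victim from a randomly chosen LRU list so threads rarely contend. */
        E replaceOne()
        {
            int idx = ThreadLocalRandom.current().nextInt( NUM_LRUS );
            lruLocks[idx].lock();
            try
            {
                return lrus[idx].pollLast(); // least recently used end of this list
            }
            finally
            {
                lruLocks[idx].unlock();
            }
        }
    }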
> * Each cache entry has a [startVersion, endVersion) range during which it
> is valid. At any time, a hash bucket chain looks like this:
>
>  (key1, startVersion11, endVersion11) <-> (key2, startVersion21, endVersion21) <-> ...
>             |                                          |
>  (key1, startVersion12, endVersion12)      (key2, startVersion22, endVersion22)
>             |                                          |
>            ...                                        ...
>
> That is, there is a primary chain where entries for different keys are
> chained, and then there is a subchain where the different versions of a
> given key are held. So when readers search for a (key, version), they
> first walk the primary chain and then walk the subchain to find their
> entry.
>
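
A sketch of that lookup, using made-up field names for the primary chain and
version subchain:

    final class VersionedEntry<K, V>
    {
        final K key;
        final long startVersion;
        final long endVersion;             // Long.MAX_VALUE for the current version
        final V value;

        VersionedEntry<K, V> nextKey;      // primary chain: next (different) key
        VersionedEntry<K, V> olderVersion; // subchain: previous version of this key

        VersionedEntry( K key, long startVersion, long endVersion, V value )
        {
            this.key = key;
            this.startVersion = startVersion;
            this.endVersion = endVersion;
            this.value = value;
        }

        /** Find the value of 'key' as of 'readVersion', or null if not cached. */
        static <K, V> V find( VersionedEntry<K, V> bucketHead, K key, long readVersion )
        {
            // walk the primary chain by key
            for ( VersionedEntry<K, V> e = bucketHead; e != null; e = e.nextKey )
            {
                if ( !e.key.equals( key ) )
                {
                    continue;
                }

                // walk the version subchain for this key
                for ( VersionedEntry<K, V> v = e; v != null; v = v.olderVersion )
                {
                    if ( v.startVersion <= readVersion && readVersion < v.endVersion )
                    {
                        return v.value;
                    }
                }
                return null; // key cached, but no version covers this reader
            }
            return null;
        }
    }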
> * As writes create previous versions of entries, they use part of the
> cache to store them. The rule is that such an entry cannot be replaced
> as long as there might be a reader that could still read it. We keep track
> of the minimum read action version to decide when such entries become
> replaceable.
>
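
A sketch of that replaceability rule, assuming endVersion is open-ended
(Long.MAX_VALUE) for the current version; the real code may express this
differently:

    final class ReplacementRule
    {
        /**
         * @param entryEndVersion end of the entry's [startVersion, endVersion) range
         * @param minReadVersion  smallest version any active read action is using
         */
        static boolean isReplaceable( long entryEndVersion, long minReadVersion )
        {
            // Current versions (endVersion == Long.MAX_VALUE) fall under normal
            // LRU replacement; old versions must wait until no in-flight reader
            // could still need them, i.e. endVersion <= minReadVersion.
            return entryEndVersion != Long.MAX_VALUE && entryEndVersion <= minReadVersion;
        }
    }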
> * If there are long browse operations and quite a few updates going on
> at the same time, we might run into a case where most of the cache
> entries are used to store previous versions. We might even have a case
> where all entries store previous versions and none of them can be
> replaced (because of the rule above). In this case, writers wait for a
> freeable cache entry. When a reader cannot find a replaceable entry, it
> reads from disk while holding the bucket latch (and thus blocking any
> writer on the same location) and returns the entry to the user without
> populating the cache, and thus without looking for a replaceable cache
> entry. Since readers always make progress, the minimum read version will
> eventually advance and writers will make progress too. Normally, when
> readers or writers do IO, they release the hash latch.
>
> * There are some helper classes that the LRUCache needs to work. Maybe
> the most interesting one is ActionVersioning, which uses AtomicInteger
> and AtomicReference and is optimized for the read-mostly case. We also
> have ExplicitList, where remove operations are O(1) given an element
> (in contrast to the O(n) remove you get with a reference to an element
> in Java's lists). Such fast removes are needed for the LRU algorithm.
>
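
A sketch of the ExplicitList idea, with elements carrying their own links so
removal needs no scan (names are illustrative, not the real ExplicitList API):

    final class ExplicitListSketch<E>
    {
        static final class Link<E>
        {
            final E element;
            Link<E> prev;
            Link<E> next;

            Link( E element )
            {
                this.element = element;
            }
        }

        private final Link<E> head = new Link<E>( null ); // sentinel node

        ExplicitListSketch()
        {
            head.prev = head;
            head.next = head;
        }

        /** Add at the head (most recently used end). */
        void addFirst( Link<E> link )
        {
            link.next = head.next;
            link.prev = head;
            head.next.prev = link;
            head.next = link;
        }

        /** O(1) removal: the caller already holds the link, no scan needed. */
        void remove( Link<E> link )
        {
            link.prev.next = link.next;
            link.next.prev = link.prev;
            link.prev = null;
            link.next = null;
        }
    }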
> * When (key, value) pairs are added to the Btree or retrieved from it,
> the Btree does a deep copy of the value (through serialization and
> deserialization). This is needed so that the Btree can store previous
> versions of values. I assumed keys stored in Btrees are not changed; if
> they were, even the CacheRecordManager currently in use wouldn't work.
>
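
A sketch of a deep copy done through serialization and deserialization, here
with plain java.io object serialization; JDBM uses its own serializers, so
this is illustrative rather than the actual code path:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;

    final class DeepCopy
    {
        /** The value must be Serializable for this plain-java variant to work. */
        @SuppressWarnings("unchecked")
        static <T> T deepCopy( T value ) throws IOException, ClassNotFoundException
        {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream( bos );
            out.writeObject( value );
            out.flush();

            // Deserializing the bytes yields an independent copy, so later writes
            // cannot mutate the version handed out to readers.
            ObjectInputStream in =
                new ObjectInputStream( new ByteArrayInputStream( bos.toByteArray() ) );
            return ( T ) in.readObject();
        }
    }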
> 4) Possible improvements:
> * If most of the cache entries are used to store previous versions,
> cache effectiveness will decrease. A solution is to start spilling
> previous versions to disk when that happens. The subchain we
> talked about above would have to be spilled to disk. However, this is
> only a performance problem, and a corner case at that if it is true
> that LDAP is read-mostly.
>
>
I think we should not jump the gun on this, as you suggest. Let's see how
some performance metrics turn out first and take a more empirical approach.
We still need to turn on transactions and make sure the upper layers use what
you've done properly to avoid corruption.


> * Currently, when a write action is executing and there is an IO
> exception, the action is aborted and I do not advance the read version,
> so readers do not see the effects of the aborted action. However, it
> seems that the upper layers do not do enough cleanup in this case,

We need a review specifically to make sure the upper layers properly handle
these cases. Again let's leverage JIRA and make sure we get on this. It's
going to be critical to get out a solid ADS 2.0.0-M3 with replication.


> they
> continue using the jdbm stores, and this will lead to inconsistency. A
> good thing would be to roll back all the dirty changes. Also, jdbm
> txns are not enabled currently, so a crash in the middle of syncing
> might leave the store inconsistent.
>
> 5) TODO:
> * Add some more test cases for the versioned btree to cover corner cases.
> * I am not very willing to implement disk spilling since it is only a
> performance improvement needed in corner cases if stores are mostly
> read-only. But if you guys think this is really necessary, I might
> look into it as well.
>

Thanks again Selcuk this is great work.

-- 
Best Regards,
-- Alex
