directory-dev mailing list archives

From Emmanuel Lecharny <elecha...@gmail.com>
Subject Thoughts about DIRSERVER-1663
Date Mon, 03 Oct 2011 09:18:55 GMT
Hi guys,

this error is a pretty annoying one. We had a conversation with Selcuk last 
Friday about it, which is summed up here.

Basically, what happens is that when we have multiple threads doing a 
search while some others are adding/deleting entries which are 
potentially part of the returned results, we get NPEs. This is due 
to the fact that we use a cursor on an index which holds IDs of entries 
that may already have been removed by the time we try to read them.
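To make the race concrete, here is a contrived Java sketch; the names are 
made up for the example, it's not the real partition/cursor API :

import java.util.Iterator;
import java.util.concurrent.ConcurrentHashMap;

class NaiveSearch
{
    // Master table: entry ID -> entry. An index cursor only gives us the IDs.
    static final ConcurrentHashMap<Long, String> master = new ConcurrentHashMap<>();

    static void search( Iterator<Long> idsFromIndex )
    {
        while ( idsFromIndex.hasNext() )
        {
            Long id = idsFromIndex.next();

            // A concurrent delete may have removed the entry after the index
            // gave us its ID...
            String entry = master.get( id );

            // ... in which case 'entry' is null and this line throws the NPE.
            System.out.println( entry.toUpperCase() );
        }
    }
}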

The discussion we had led to the conclusion that we need to implement a 
transaction system to protect the client from such problems. This can 
probably be implemented on top of what we have, even if it hurts 
performance.

OTOH, at some point, what we really need is to implement an MVCC 
(Multi-Version Concurrency Control) system on top of the backend.

MVCC is a system which keeps old versions of elements until they are no 
longer needed. For instance, when we do a search, we browse some 
entries using their IDs, provided by an index. When we start the search, 
we select the best possible index to browse the entries, and we get back 
a set of IDs. If we associate this operation with a unique transaction 
ID, we must guarantee that all the IDs from the set will remain present 
until the cursor is fully read (or the search is cancelled). If a 
modification is done on one of the entries associated with those 
IDs, then we should still be able to access the previous version of the 
entry. Such a modification must create a copy of the entry itself, but 
also of all the tuples in the indexes, associated with a revision number. 
The incoming transaction will use this revision number to get an immutable 
set of IDs.
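To give an idea, here is a minimal Java sketch of how such a revision-based 
read path could look, assuming each entry keeps a per-revision history. The 
class and method names are purely illustrative, not an existing ApacheDS API :

import java.util.Map;
import java.util.NavigableMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicLong;

class MvccStore
{
    private static final String TOMBSTONE = "__deleted__"; // marks a deletion at a given revision

    private final AtomicLong revisionCounter = new AtomicLong( 0 );

    // entry ID -> ( revision -> entry value at that revision )
    private final Map<Long, NavigableMap<Long, String>> versions = new ConcurrentHashMap<>();

    // A search pins the current revision when it starts.
    long pinRevision()
    {
        return revisionCounter.get();
    }

    // A write never overwrites: it adds a new version tagged with a new revision.
    void write( long id, String entry )
    {
        long rev = revisionCounter.incrementAndGet();
        versions.computeIfAbsent( id, k -> new ConcurrentSkipListMap<>() ).put( rev, entry );
    }

    void delete( long id )
    {
        write( id, TOMBSTONE );
    }

    // Reads the entry as it existed at the pinned revision, ignoring newer writes.
    String readAt( long id, long pinnedRev )
    {
        NavigableMap<Long, String> history = versions.get( id );

        if ( history == null )
        {
            return null;
        }

        Map.Entry<Long, String> version = history.floorEntry( pinnedRev );

        return ( version == null || TOMBSTONE.equals( version.getValue() ) ) ? null : version.getValue();
    }
}

With something like this, a search would call pinRevision() once, then 
readAt() for every ID in its set, and would never hit a null caused by a 
concurrent delete.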

Now, at some point, that will create a hell of a lot of new entries and 
tuples in the index tables. We must implement a system to clean up those 
duplicates once they are no longer in use (a quick sketch of the first 
approach is given below the list). There are two ways to handle such a 
cleanup :
- keep all the duplicates in the backend, removing them when no 
operation is associated with the old revision
- or create a rollback table, where the old elements are stored, with a 
limited size
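
Just to illustrate the first option, here is a rough Java sketch, assuming 
every running search registers the revision it has pinned; any version older 
than the oldest pinned revision (except the newest one that reader can still 
see) is unreachable and can be purged. All names are hypothetical :

import java.util.Map;
import java.util.NavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

class RevisionCleaner
{
    // revision -> number of searches currently pinned to it
    private final ConcurrentSkipListMap<Long, Integer> pinned = new ConcurrentSkipListMap<>();

    void registerSearch( long rev )
    {
        pinned.merge( rev, 1, Integer::sum );
    }

    void unregisterSearch( long rev )
    {
        pinned.compute( rev, ( k, n ) -> ( n == null || n <= 1 ) ? null : n - 1 );
    }

    // Purges the versions of one entry's history that no active search can still reach.
    // This would typically be called by a dedicated cleanup thread.
    void cleanup( NavigableMap<Long, String> history, long currentRevision )
    {
        Map.Entry<Long, Integer> oldest = pinned.firstEntry();
        long oldestPinned = ( oldest == null ) ? currentRevision : oldest.getKey();

        // The newest version at or below the oldest pinned revision must be kept...
        Long keep = history.floorKey( oldestPinned );

        if ( keep != null )
        {
            // ... but everything strictly older than it is invisible to every reader.
            history.headMap( keep, false ).clear();
        }
    }
}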

The second solution is what Oracle uses. It's efficient, as you don't 
have to update the main database, except when you have to grab old 
revisions. All the old elements are simply pushed into this rollback 
table (a rollback segment), and remain available as long as they have 
not been pushed out by newer elements (the table has a limited size).
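A toy Java illustration of that rollback-segment idea, with invented names: 
old versions are pushed into a bounded structure and can be evicted, so a 
reader asking for a very old revision may find it already gone :

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

class RollbackTable
{
    private final Map<String, String> oldVersions;

    RollbackTable( int capacity )
    {
        // Insertion-ordered map that evicts its eldest element once full,
        // mimicking the limited size of a rollback segment.
        this.oldVersions = new LinkedHashMap<String, String>()
        {
            @Override
            protected boolean removeEldestEntry( Map.Entry<String, String> eldest )
            {
                return size() > capacity;
            }
        };
    }

    // Called by a write: the previous version of the entry goes into the table.
    void push( long id, long revision, String oldEntry )
    {
        oldVersions.put( id + "@" + revision, oldEntry );
    }

    // An empty result means the old version was already pushed out by newer elements.
    Optional<String> fetch( long id, long revision )
    {
        return Optional.ofNullable( oldVersions.get( id + "@" + revision ) );
    }
}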

PostgreSQL has implemented the first solution. The biggest advantage is 
that an old revision can never be missing, but the database may grow 
huge. You also need a background thread to do the cleanup.

In any case, I just wanted to initiate a discussion about this problem 
and the potential solutions, so feel free to add your vision and 
knowledge in your response. It would be valuable to define a roadmap for 
such an implementation, and to discuss the different steps before diving 
into the code...

Thanks !

-- 
Regards,
Cordialement,
Emmanuel Lécharny
www.iktek.com

