incubator-clerezza-dev mailing list archives

From Reto Bachmann-Gmuer <reto.bachm...@trialox.org>
Subject Re: filter() and iterator() of LockableMGraphWrapper
Date Thu, 11 Nov 2010 10:52:43 GMT
Hi Manuel

It would be consistent to throw a ConcurrentModificationException in
the LockableMGraphWrapper if a write operation took place since the
last read from the iterator; currently we leave that to the underlying
implementation. So you're right in so far as the current wrapping does
not do what one might expect: it doesn't guarantee that all reads will
succeed. However, and I think this is the justification for the locks
in the iterator, it does guarantee that no read happens during a write
operation. Even though it's unlikely, it cannot be excluded that such a
read could compromise the write, so I think the current implementation
is both necessary and sufficient to guarantee no data corruption due to
concurrent access (the assumption is that a write between calls of
iterator.next() can corrupt the iterator but not the graph).
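
To make that concrete, here is a minimal sketch of how a per-call
locking iterator could additionally fail fast on concurrent writes.
The class name and the ModificationCounter mechanism are assumptions
for illustration only, not the actual Clerezza code:

import java.util.ConcurrentModificationException;
import java.util.Iterator;
import java.util.concurrent.locks.ReadWriteLock;

// Hypothetical sketch: a per-call locking iterator that also throws
// ConcurrentModificationException when a write happened since the last call.
class LockingFailFastIterator<T> implements Iterator<T> {

    private final Iterator<T> base;            // iterator of the wrapped graph
    private final ReadWriteLock lock;          // lock shared with the wrapper
    private final ModificationCounter counter; // assumed to be bumped by every write
    private long expectedModCount;

    LockingFailFastIterator(Iterator<T> base, ReadWriteLock lock,
                            ModificationCounter counter) {
        this.base = base;
        this.lock = lock;
        this.counter = counter;
        this.expectedModCount = counter.get();
    }

    @Override
    public boolean hasNext() {
        lock.readLock().lock();        // no write can run during this call
        try {
            checkForComodification();
            return base.hasNext();
        } finally {
            lock.readLock().unlock();  // released between calls: writes may interleave
        }
    }

    @Override
    public T next() {
        lock.readLock().lock();
        try {
            checkForComodification();
            return base.next();
        } finally {
            lock.readLock().unlock();
        }
    }

    private void checkForComodification() {
        if (counter.get() != expectedModCount) {
            throw new ConcurrentModificationException(
                "graph was modified since the last call on this iterator");
        }
    }

    /** Minimal counter interface, assumed to be incremented by the wrapper's write methods. */
    interface ModificationCounter {
        long get();
    }
}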

Cheers,
Reto


On Wed, Nov 10, 2010 at 11:57 AM, Manuel Innerhofer <manuel@trialox.org> wrote:
> Hi Reto,
>
> I had a closer look at filter() and iterator() of LockableMGraphWrapper. It
> seems to me that the read locks taken in these methods and in the
> LockableIterator are mostly unnecessary and impair performance. The
> LockableIterator locks the graph for every call of next() and hasNext(),
> but between calls the graph is not read-locked, so a write operation can
> occur. Because of this, a caller of one of these methods has to read-lock
> the graph while iterating anyway. I propose to no longer use
> LockableIterator in filter() and iterator().
> What do you think?
>
> Regards,
> Manuel
>
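
For reference, the caller-side pattern Manuel describes (holding the
read lock for the whole iteration, which makes the per-call locks inside
the iterator redundant for that caller) would look roughly like this;
LockableGraph and getReadLock() are placeholder names, not the actual
wrapper API:

import java.util.Iterator;
import java.util.concurrent.locks.Lock;

// Hypothetical sketch of the caller-side read lock held across the iteration.
class FilterAndIterate {

    static <T> void consumeAll(LockableGraph<T> graph) {
        Lock readLock = graph.getReadLock();
        readLock.lock();                       // no writes until unlock()
        try {
            Iterator<T> it = graph.iterator(); // safe: lock held across next()/hasNext()
            while (it.hasNext()) {
                T triple = it.next();
                // process triple ...
            }
        } finally {
            readLock.unlock();
        }
    }

    /** Stand-in for the real wrapper API; assumed for illustration only. */
    interface LockableGraph<T> extends Iterable<T> {
        Lock getReadLock();
    }
}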
