jackrabbit-users mailing list archives

From "Jukka Zitting" <jukka.zitt...@gmail.com>
Subject Re: Database PersistenceManagers (was "Results of a JR Oracle test that we conducted")
Date Mon, 12 Mar 2007 15:23:16 GMT

On 3/12/07, Marcel Reutegger <marcel.reutegger@gmx.net> wrote:
> Jukka Zitting wrote:
> >> further note that write operations must occur within a single jdbc
> >> transaction, i.e. you can't get a new connection for every store/destroy
> >> operation.
> >
> > I think this is a design flaw. On the other hand we require
> > persistence managers to be "dumb" components, but then we rely on them
> > for complex features like transactions.
> I'd say those components are 'simple' rather than 'dumb' or 'complex'. The
> requirements are therefore also relatively simple: operations must have A(C)ID
> properties.
> A) a change log must be persisted as a whole or not at all
> I) while a change log is persisted a read must not see partially stored content

These could both be achieved with connection pooling: just acquire a
connection at the beginning of PersistenceManager.store() and commit
the changes at the end of the method, before releasing the connection
back to the pool.
A similar pattern would also work for the load() and exists() methods,
avoiding the need to synchronize on shared prepared statements.
> D) durability, well you get the idea...

Obviously. :-)

> C) this is actually handled by the upper level

ACK, the key is the write lock on SharedItemStateManager. In fact, do
we even need the database persistence managers to be transactional
over multiple method calls? And following on from that, could we
already remove the synchronization of read operations, given that
consistency is achieved at a higher level?
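The upper-level locking being referred to could be sketched with a read-write lock like this. ItemStateManagerSketch is a made-up name and this is not SharedItemStateManager's actual implementation; it only illustrates why, if all writes are serialized under a write lock above the persistence manager, the persistence manager's own reads need no extra synchronization.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of upper-layer locking; not the real SharedItemStateManager.
class ItemStateManagerSketch {

    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final Map<String, String> states = new HashMap<>();

    // A whole change log is applied under the write lock, so readers never
    // observe a partially stored change log (the I in A(C)ID).
    void store(Map<String, String> changeLog) {
        lock.writeLock().lock();
        try {
            states.putAll(changeLog);
        } finally {
            lock.writeLock().unlock();
        }
    }

    // Reads take only the shared read lock; many can run concurrently.
    String load(String id) {
        lock.readLock().lock();
        try {
            return states.get(id);
        } finally {
            lock.readLock().unlock();
        }
    }
}
```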


Jukka Zitting
