jackrabbit-dev mailing list archives

From "Thomas Mueller" <thomas.tom.muel...@gmail.com>
Subject Re: DbDataStore implementation
Date Wed, 28 Nov 2007 16:58:37 GMT

> * doesn't synchronizing the addRecord() method, and using only one
> connection, defeat one of the purposes of the data store of allowing
> maximum concurrency?

Yes, that's true. Using the data store itself already improves
concurrency, since simple (non-blob) repository operations are not
blocked by operations that involve blobs. Using multiple connections
could improve concurrency further, and could even speed up writes (if
the database writes to multiple hard drives). So far I have not
thought about that. The question is: how important is this feature?
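
If multiple connections turn out to be worth it, one option is to
borrow from a small pool instead of synchronizing every addRecord() on
a single connection. This is a minimal sketch under that assumption,
not the current DbDataStore code; Conn stands in for
java.sql.Connection:

```java
import java.util.Collection;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch: a tiny fixed-size pool so several addRecord()
// calls can write blobs in parallel instead of serializing on one
// synchronized connection.
public class ConnectionPool<Conn> {
    private final BlockingQueue<Conn> idle;

    public ConnectionPool(Collection<Conn> connections) {
        // Pre-fill the queue with all available connections.
        this.idle = new ArrayBlockingQueue<>(connections.size(), false, connections);
    }

    // Blocks until a connection is free; callers no longer contend
    // on a single synchronized method.
    public Conn borrow() {
        try {
            return idle.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted while waiting", e);
        }
    }

    // Return the connection so another writer can use it.
    public void release(Conn c) {
        idle.add(c);
    }
}
```

The pool size would bound how many blob writes run concurrently
against the database.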

> * making the SQL strings private, and not initializing them in a method
> of their own, really hinders extending the implementation

Sorry, I committed the properties files to the wrong folder at first!
I have fixed it now. The SQL statements can be overridden in the
<databaseType>.properties file in
src/main/resources/org/apache/jackrabbit/core/data/db. Currently they
are not overridden, but maybe they will need to be. I have only tested
Derby and H2 so far. initDatabaseType() loads the properties file.
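
For illustration, such a <databaseType>.properties file might look
roughly like this; the key names and placeholder syntax below are
assumptions for the sketch, not necessarily the keys DbDataStore
actually reads:

```properties
# Hypothetical keys -- check the shipped derby.properties / h2.properties
# for the real names.
table = DATASTORE
createTable = CREATE TABLE ${table} (ID VARCHAR(255) PRIMARY KEY, LENGTH BIGINT, DATA BLOB)
updateData = UPDATE ${table} SET DATA = ? WHERE ID = ?
```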

> (in any case, the SQL strings should be written as "UPDATE " + tableSQL

Both the table name and the SQL strings can be overridden (in the
properties file), so building the SQL statements in code is not
required in my view.
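
To illustrate the point, a subclass only needs to ship a different
properties file, and the code can resolve the table name into the
statement once. The key names and the ${table} placeholder below are
assumptions for this sketch, not the actual DbDataStore keys:

```java
import java.util.Properties;

// Hedged sketch: build an UPDATE statement from a per-database
// properties file, so extensions override the file rather than
// rebuild SQL strings in code.
public class SqlFromProperties {
    public static String updateStatement(Properties props) {
        // Both the table name and the statement template can be overridden.
        String table = props.getProperty("table", "DATASTORE");
        String sql = props.getProperty("updateData",
                "UPDATE ${table} SET DATA = ? WHERE ID = ?");
        return sql.replace("${table}", table);
    }
}
```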

> * during a Session.save() there are various calls to DbDataStore.getRecord()
> and DbDataRecord.getStream(), for storing the blob in the blobStore. Why is
> this necessary if the binary content is already in the data store? It seems
> that this copy is overwritten every time, but I don't see the reason for all
> these calls to the DB, and file copies.

That's not good; I would like to solve this problem. Does this occur
when simply storing a node with a large object? If not, do you have a
simple test case?
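
A test case could make the redundant reads visible by counting how
often a record's stream is opened. This is an illustrative sketch, not
Jackrabbit code; CountingRecord is a hypothetical stand-in for a data
record:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: a record that counts how often its stream is
// opened, to expose extra getStream() calls like the ones described
// during Session.save().
public class CountingRecord {
    private final byte[] data;
    public final AtomicInteger opens = new AtomicInteger();

    public CountingRecord(byte[] data) {
        this.data = data;
    }

    public InputStream getStream() {
        opens.incrementAndGet();   // each call re-reads the blob
        return new ByteArrayInputStream(data);
    }
}
```

In a real test the counter would wrap the data store record; a save of
one node should ideally open the stream at most once.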

