jackrabbit-dev mailing list archives

From Stefan Guggisberg <stefan.guggisb...@gmail.com>
Subject Re: [jr3] Tree model
Date Wed, 29 Feb 2012 14:54:10 GMT
On Wed, Feb 29, 2012 at 8:55 AM, Thomas Mueller <mueller@adobe.com> wrote:
> Hi,
> I think we all have more or less the same basic idea. Well, I would call
> it "Node" and not "Tree" because the term node more closely matches what
> we do. But first about what we seem to agree:
> * A persistent and immutable data structure has many benefits
>  (for example concurrency).
> * Mutable data structures also have benefits:
>  they avoid creating lots of garbage and a new root node
>  for each and every modification.
> I think we should use a structure that is immutable when reading, but
> mutable when writing. Write methods should return a new object (for
> example cloneAndAddChildNode), but only when necessary, that is, if the
> revision doesn't match. When the revision matches, the internal state is
> changed instead, and the cloneAndAddChildNode method returns 'this'. A
> first implementation is org.apache.jackrabbit.mk.simple.NodeImpl.
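A minimal sketch of this revision-checked copy-on-write idea could look as follows. All names here are illustrative, not the actual API of org.apache.jackrabbit.mk.simple.NodeImpl: a write method mutates in place when the node already belongs to the current revision, and clones otherwise.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: immutable across revisions, mutable within the
// revision that created the node (not the actual NodeImpl code).
final class RevNode {
    private final long revision;
    private final Map<String, RevNode> children;

    RevNode(long revision) {
        this(revision, new HashMap<String, RevNode>());
    }

    private RevNode(long revision, Map<String, RevNode> children) {
        this.revision = revision;
        this.children = children;
    }

    // Returns 'this' if the node already belongs to the current
    // revision; otherwise clones it into the new revision.
    RevNode cloneAndAddChildNode(String name, long currentRevision) {
        if (revision == currentRevision) {
            children.put(name, new RevNode(currentRevision));
            return this;
        }
        Map<String, RevNode> copy = new HashMap<String, RevNode>(children);
        copy.put(name, new RevNode(currentRevision));
        return new RevNode(currentRevision, copy);
    }

    boolean hasChild(String name) {
        return children.containsKey(name);
    }
}
```

A caller within the same revision keeps getting the same object back, so no garbage and no new root node are created until the revision advances.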
>> everyone has their own
>> MicroKernel implementation.

i won't comment on why we suddenly ended up with 2 independent
and supposedly competing implementations of the MicroKernel API
within the same sandbox project. that's the subject of a private discussion
i am having with thomas.

apart from that, some comments/clarifications follow inline...

> The reason we have two implementations is that Stefan and I could not
> agree on a few implementation details. My view is:
> * Stefan assumes it's important to use the content hash as the node id,
> while I think this hurts performance too much (as the randomly distributed
> node ids hurt storage performance in Jackrabbit 2). With the copy-on-write
> model it is an even bigger performance problem because each time a node is
> changed it gets a new node id (I didn't think about this first). That
> means even small repositories are problematic if there are many changes.

when i started the MicroKernel sandbox project about a year ago
i thought (and still think so) that a microkernel should support
clustering ootb. furthermore, mvcc seemed to be a promising
approach to address the concurrency issues we're suffering from
in jackrabbit v1-2.*.

there are not that many hierarchical mvcc-based repositories around
to learn from. the obvious candidates IMO were svn and git. while svn and git
do use a very similar internal versioning model, one important difference is
the choice of identifiers: svn uses a numeric counter, while git is
based on a content-addressed (hash) model; the content-hash identifiers
are a key aspect of git's architecture.

svn is client-server based while git uses a decentralized distributed model.

my assumption was that our clustering implementation could leverage the
content-addressed model since it should e.g. make it easier to sync
remote (sub)trees based on the content hash.

so far, WRT a clustering implementation, we're still on square 1.

as long as we don't have a clear idea of how to support clustering i am rather
reluctant to already give up on the content-addressable model.
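to illustrate what the content-addressed model buys you, here is a rough sketch (names and serialization scheme are made up for illustration, this is not the sandbox code): a node's id is the hash of its canonical content, so two identical subtrees anywhere in the repository, or on two cluster nodes, get the same id, and comparing two root hashes is enough to decide whether the subtrees need syncing at all.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

// Hypothetical sketch of a content-addressed node id: the id is the
// SHA-256 hash of a canonical serialization of the node's children.
final class HashedNode {
    // child name -> child's content hash (hex); sorted for a canonical order
    private final SortedMap<String, String> children =
            new TreeMap<String, String>();

    void addChild(String name, String childHash) {
        children.put(name, childHash);
    }

    // Content-addressed id: equal content always yields an equal id,
    // regardless of insertion order.
    String contentHash() {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            for (Map.Entry<String, String> e : children.entrySet()) {
                md.update(e.getKey().getBytes(StandardCharsets.UTF_8));
                md.update(e.getValue().getBytes(StandardCharsets.UTF_8));
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest()) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException ex) {
            throw new IllegalStateException(ex);
        }
    }
}
```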

> * Stefan implemented a different way to represent large child node lists.
> He used a h-tree (the hash code of the child node name), which means child
> nodes are pseudo-randomly distributed. I think this hurts performance too
> much (same reason as above: a stored index on randomly distributed keys is
> required).

the h-tree approach was my first take at supporting flat hierarchies.
it's intentionally simple and does have a few advantages (e.g.
efficient diffing of 2 large child node entry collections), but
it's probably not the final answer to the problem of flat hierarchies.
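the core of the h-tree idea can be sketched roughly like this (bucket count and names are illustrative, not the sandbox implementation): child entries are spread over fixed buckets keyed by the hash of the child name, so no single list grows unbounded, at the cost of the pseudo-random placement thomas mentions.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of hash-bucketed child node entries for flat
// hierarchies: lookup is O(1) per level, but children are not stored
// in name order.
final class HTreeChildren {
    private static final int BUCKETS = 16; // illustrative; real trees split dynamically
    private final List<Map<String, String>> buckets =
            new ArrayList<Map<String, String>>();

    HTreeChildren() {
        for (int i = 0; i < BUCKETS; i++) {
            buckets.add(new HashMap<String, String>());
        }
    }

    private int bucketOf(String name) {
        // pseudo-random placement based on the child name's hash
        return Math.floorMod(name.hashCode(), BUCKETS);
    }

    void add(String name, String nodeId) {
        buckets.get(bucketOf(name)).put(name, nodeId);
    }

    String get(String name) {
        return buckets.get(bucketOf(name)).get(name);
    }
}
```

when two such structures are compared, unchanged buckets can be skipped wholesale, which is where the cheap diffing of large child node collections comes from.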

since we can't assume that child node names do follow a specific
pattern (e.g. n1-n99999999) i don't follow your performance-related
argument.
> * Stefan believes concurrent write operations are very important while I
> believe this will not increase write throughput as much as it complicates
> the implementation (Stefan's implementation merges changes while my
> implementation does not).
> * Stefan's implementation reconstructs the journal by diffing the node
> tree. I believe this is troublesome because it is not always possible to
> reconstruct the journal with regard to re-order and move operations.
> Also I believe this adds more complexity than the possible benefit justifies
> (because the journal doesn't need to be stored). In my implementation the
> journal is stored.

i've considered storing the diff of a commit in the revision a while ago.
while it would be relatively easy to implement i currently don't see an
immediate benefit compared to diffing, which OTOH is very efficient
thanks to the content-addressable model.

i imagine that in a clustering scenario there's a need to compute
the changes between 2 non-consecutive revisions, potentially
spanning a large number of intermediate revisions. just diffing
2 revisions is IMO probably more efficient than reconstructing
the changes from a large number of revisions.
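the reason diffing 2 arbitrary revisions stays cheap under a content-addressed model can be sketched as follows (a toy over name-to-hash maps, not the sandbox code): equal subtree hashes mean equal subtrees, so the diff never descends into unchanged parts, no matter how many intermediate revisions lie between the two.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: diff two revisions of a node given each
// revision's map of child name -> subtree content hash.
final class RevisionDiff {
    static List<String> diff(Map<String, String> oldChildren,
                             Map<String, String> newChildren) {
        List<String> changes = new ArrayList<String>();
        for (Map.Entry<String, String> e : newChildren.entrySet()) {
            String oldHash = oldChildren.get(e.getKey());
            if (oldHash == null) {
                changes.add("+ " + e.getKey()); // added child
            } else if (!oldHash.equals(e.getValue())) {
                changes.add("~ " + e.getKey()); // changed: descend here only
            }
            // equal hash: whole subtree skipped, regardless of its size
        }
        for (String name : oldChildren.keySet()) {
            if (!newChildren.containsKey(name)) {
                changes.add("- " + name); // removed child
            }
        }
        return changes;
    }
}
```

the cost is proportional to the amount of actual change between the two revisions, not to the number of revisions in between.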


> There is quite a lot of common code, for example the data store, parsing
> and constructing JSON / JSOP. My implementation consists of the classes in
> org.apache.jackrabbit.mk.simple. The classes NodeImpl, NodeList, and
> NodeListSmall are reused in multiple modules (for example the index and
> the wrapper implementations). So what is really unique in my
> "storage" implementation is the 3 NodeMap implementations, NodeListTrie to
> support large child node lists, the Revision class, and SimpleKernelImpl.
> Regards,
> Thomas
