couchdb-dev mailing list archives

From Antony Blakey <>
Subject Re: reiterating transactions vs. replication
Date Mon, 25 May 2009 06:05:24 GMT

On 25/05/2009, at 3:01 PM, Scott Shumaker wrote:

> 'A' maintains an ordered list of 'B' elements, where the order is
> user-defined - imagine allowing a user to re-order photos in a
> slideshow.  You want to store the photos separately from the
> slideshow, because they can be used in multiple slideshows.  Whenever
> you add or remove a photo, you need to update the order list as well.
> I've seen some people suggest a gross solution where you store
> floating point order ids inside the B elements and change those to
> wedge an item in between two others (averaging the two new
> siblings' orders), but this is incredibly brittle and breaks down
> with enough re-ordering.

There is another solution to this problem, which I posted to this list
around 16 March 2009. In short:

The general solution is to treat (in the abstract) the ordering of
items as the in-order traversal of a binary tree. In brief: record in
each item its path from the root as a sequence of two-bit codes, e.g.
10 = termination, 01 = left, 11 = right. You then map that bit
sequence, padded with 0s, to an encoded form that preserves the
ordering. Keeping the keys bounded in length requires a balanced tree,
which in turn requires transactional support. The benefit is a low
number of documents touched per update (in an amortized sense).
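The scheme above can be sketched in Python. This is my own minimal
illustration, not code from the original post: the names LEFT, TERM,
RIGHT and the key_between helper are assumptions. Keys are bit strings
of two-bit codes ending in the terminator; because no key's bit string
is a prefix of another's, plain lexicographic comparison already
preserves the in-order traversal order.

```python
# Two-bit path codes. Every pair contains a 1, so a key is
# self-terminating once padded with 0s.
LEFT, TERM, RIGHT = "01", "10", "11"

def root_key():
    """Key of the first item ever inserted (the tree root)."""
    return TERM

def key_between(a, b):
    """Return a key sorting strictly between adjacent keys a and b.

    Pass a=None to insert before the first item, b=None to insert
    after the last. Since a and b are in-order neighbours, one of them
    is an ancestor of the other, so either a's right child or b's left
    child is guaranteed to be free.
    """
    if a is None and b is None:
        return root_key()
    if a is None:                      # insert before the first item:
        return b[:-2] + LEFT + TERM    #   b's left child is free
    if b is None:                      # insert after the last item:
        return a[:-2] + RIGHT + TERM   #   a's right child is free
    path_a, path_b = a[:-2], b[:-2]    # strip the terminators
    if len(path_b) > len(path_a):      # b lies in a's right subtree,
        return path_b + LEFT + TERM    #   so b's left child is free
    return path_a + RIGHT + TERM       # otherwise a descends from b
```

Note that only the new document is written: the neighbouring keys are
read to compute the insertion key, but their documents are untouched.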

By using 01 = left, 10 = termination, 11 = right, the length of the
bit string becomes implicit (self-terminating), i.e. every pair
contains a 1, so trailing zero padding is unambiguous, and a function
to compute an intermediate value given two *byte* sequences is easy to
write. In practice you need to know the two adjacent values in order
to avoid collisions, but you don't need to write to those documents.
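As a sketch of the byte mapping described above (again my own
illustration, with assumed names): pack the bit string into bytes,
padding with 0s. Because every code pair contains a 1, the padding can
be stripped off again unambiguously, and unsigned lexicographic byte
comparison preserves the key order.

```python
def to_bytes(key):
    """Pack a self-terminating bit-pair key into bytes, 0-padded."""
    padded = key.ljust(-(-len(key) // 8) * 8, "0")
    return bytes(int(padded[i:i + 8], 2)
                 for i in range(0, len(padded), 8))

def from_bytes(data):
    """Recover the key by stripping the trailing all-zero pairs."""
    bits = "".join(f"{byte:08b}" for byte in data)
    while bits.endswith("00"):
        bits = bits[:-2]
    return bits
```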

This isn't brittle and won't break down, but, as I say, without
rebalancing the insertion keys are of unbounded length.


I maintain a CouchDB fork with transactional bulk doc commits, but
they are really only useful to avoid requiring a merge interface for
local operations; CouchDB replication still generates conflicts. If,
however, you use a single write master, then transactional bulk doc
commits can eliminate all conflicts.

Antony Blakey
CTO, Linkuistics Pty Ltd
Ph: 0438 840 787

It's amazing that the one side of the conversation that survived is "I  
don't know art, but I know what I like". The reply from the artist was  
"Madam, so does a cow".
   -- Carl Kirkendall
