Dear Wiki user,
You have subscribed to a wiki page or wiki category on "Subversion Wiki" for change notification.
The "StarDelta" page has been changed by StefanFuhrmann:
http://wiki.apache.org/subversion/StarDelta
Comment:
WIP. first part
New page:
= Star Deltas =
== Introduction ==
FSFS currently uses xdelta to store different versions of the same node efficiently.
Basically, we represent node x_i as

 x_i = x_{i-1} o \delta(x_i, x_{i-1})
 x_{i-1} = x_{i-2} o \delta(x_{i-1}, x_{i-2})
 ...
 x_0 = x_0
and store x_0 plus the incremental \delta information. x_i gets reconstructed by
starting with x_0 and iteratively applying all deltas. Assuming that size(x_i) is
roughly proportional to i and that the deltas average around some constant size,
this approach has the following properties:
 storage size(N) = size(x_0) + \sum_{i=1}^{N} size(\delta(x_i, x_{i-1})) = O(N)
 reconstruction time(N) ~ size(x_0) + \sum_{i=1}^{N} size(x_i) = O(N^2)
with N being either the size of the node or the number of revisions to it.
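The chained scheme above can be sketched in a few lines. This is a toy model with an invented delta format of (offset, count, new_bytes) copy/insert ops, not the xdelta wire format; `apply_delta` and `reconstruct` are illustrative names.

```python
def apply_delta(base, ops):
    """Rebuild a fulltext from BASE and a list of (offset, count, new_bytes) ops."""
    out = bytearray()
    for offset, count, new_bytes in ops:
        out += base[offset:offset + count]  # copy a range from the base text
        out += new_bytes                    # then insert new data
    return bytes(out)

def reconstruct(x0, deltas):
    """Start with x_0 and apply every delta in turn.

    Each step materializes a fulltext of size O(i), so reconstructing
    revision N this way costs O(N^2) in total -- the quadratic behavior
    described above."""
    text = x0
    for ops in deltas:
        text = apply_delta(text, ops)
    return text

x0 = b"aaaa"
deltas = [
    [(0, 4, b"bb")],   # x_1: copy all of x_0, append "bb"
    [(0, 6, b"cc")],   # x_2: copy all of x_1, append "cc"
]
print(reconstruct(x0, deltas))  # b'aaaabbcc'
```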
To mitigate the quadratic runtime behavior, we use skip deltas:

 x_i = x_{base(i)} o \delta(x_i, x_{base(i)})

with base(i) being the next "rounder" binary number, i.e. i with its lowest set bit cleared.
Since we skip intermediate representations, we will repeat the respective change
information (approx .5 log N times). Storage size and reconstruction time are now
 storage size(N) = size(x_0) + \sum_{i=1}^{N} size(\delta(x_i, x_{base(i)})) = O(N log N)
 reconstruction time(N) ~ size(x_0) + \sum_{k=1}^{O(log N)} size(x_{base_k}) = O(N log N)
Please note that the actual implementation uses a hybrid scheme.
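The skip-delta base selection above can be sketched as follows. This is an illustrative model, not the actual FSFS code; `skip_base` and `delta_chain` are hypothetical names, and the real implementation's hybrid scheme chooses bases differently in places.

```python
def skip_base(i):
    """Clear the lowest set bit of i: the next 'rounder' binary number."""
    return i & (i - 1)

def delta_chain(i):
    """Revision numbers visited when reconstructing x_i via skip deltas.

    The chain has at most popcount(i) + 1 entries, i.e. O(log i),
    which is what bounds reconstruction time to O(N log N)."""
    chain = [i]
    while i > 0:
        i = skip_base(i)
        chain.append(i)
    return chain

# 11 = 0b1011 -> 0b1010 (10) -> 0b1000 (8) -> 0
print(delta_chain(11))  # [11, 10, 8, 0]
```

Reconstructing revision 11 thus touches only 3 deltas instead of 11 with a plain delta chain.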
'''Observation.''' This approach does not cover forked node histories as they
are common with branches and merges. The same change can be merged into many
branches but may not result in the same node content (i.e. rep caching may not
kick in). However, those changes themselves are very similar and often even
identical.
A potential solution might be to deltify deltas, i.e. to use a second-order
deltification scheme. Once that is done, even higher orders might be used.
It is unclear how this could be implemented in a meaningful way. Also, its
effectiveness depends heavily on the order in which branched deltas get
processed.
== Basic Goals ==
We would like to be able to deltify everything against everything else. \delta* no
longer considers individual pairs of texts but rather deltifies each text against
all previously processed texts. Expected storage size, runtime memory usage, and
write and reconstruction speed should all be O(N).
'''Note.''' Deltification is only a means to storage size reduction. We do not
attach any semantics to the deltification result. It is neither minimal nor
related to "svn delta" and friends.
== Data structure ==
The core data structure consists of two elements: the text buffer and the
instructions array. The text buffer is a single byte array containing the
various string fragments that will be combined to fulltexts according to
the instructions. The latter is a single array of (offset, count) pairs.
The offset may be prefixed (see below) but in its simplest form, an instruction
copies COUNT bytes from the text buffer starting at text buffer offset OFFSET.
Since we want to store more than a single fulltext ("string"), another
array will map the string index to the first instruction in the instructions
array:
{{attachment:stardeltacore.png|core data structure}}
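A minimal model of this layout, with illustrative names (`TEXT`, `INSTRUCTIONS`, `STRING_INDEX`, `extract`) that are not taken from the actual implementation: one shared text buffer, one flat array of (offset, count) instructions, and a per-string index pointing at each string's first instruction.

```python
# Shared text buffer holding all string fragments.
TEXT = b"hello worldsvn "

# (offset, count): copy COUNT bytes from TEXT starting at OFFSET.
INSTRUCTIONS = [
    (0, 11),   # "hello world"
    (11, 4),   # "svn "
    (0, 5),    # "hello"   (reuses the start of fragment 0)
]

# String s is produced by instructions STRING_INDEX[s] .. STRING_INDEX[s+1]-1.
STRING_INDEX = [0, 1]

def extract(s):
    """Rebuild fulltext s by concatenating its instruction copies."""
    first = STRING_INDEX[s]
    last = STRING_INDEX[s + 1] if s + 1 < len(STRING_INDEX) else len(INSTRUCTIONS)
    return b"".join(TEXT[off:off + cnt] for off, cnt in INSTRUCTIONS[first:last])

print(extract(0))  # b'hello world'
print(extract(1))  # b'svn hello'
```

Note how string 1 reuses bytes already stored for string 0; that sharing is where the \delta* storage savings come from.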
== Preliminary results ==
Test data set: first revisions of fs_fs.c
 * 6xx files
 * 15x MB total
 * 38x kB max. file size
Data sizes:
 * 780 kB text body
 * 270k instructions (1.x MB) without common sequence optimization
 * 3xx kB on disk (using quick_pack)
Execution times:
 * .? s creation (xx MB/s)
 * .? s optimization (xx MB/s)
 * .? s compression
 * .? s total write time (xx MB/s data in, xx MB/s out to disk)
 * .? s load and extract (xx MB/s)
 * .? s extract (xx MB/s)
 * .? s total read time (xx MB/s data out, xx MB/s in from disk)
When imported into an empty 1.8 repository:
1.2 MB revs
1.5 s for svnadmin verify
