subversion-dev mailing list archives

From Greg Stein <>
Subject Re: eliminating sequential bottlenecks for huge commit and merge ops
Date Thu, 05 Jan 2012 00:08:41 GMT
On Jan 4, 2012 1:34 PM, "Joe Schaefer" <> wrote:
> As Daniel mentioned to me on irc, subversion doesn't use threading
> internally, so things like client side commit processing and merge
> operations are done one file at a time IIUC.
> Over in the openoffice podling we have a use-case for a 9GB working copy
> that regularly sees churn on each file in the tree.  commit and merge
> operations for such changes take upwards of 20min, and I'm wondering
> if there's anything we could do here to reduce that processing time
> by 2x or better by threading the per-dir processing somehow.
> Thoughts?

We've always taken the position that the amount of effort or size of
delta/data is proportional to the size of the change. If you change all of
a 9GB working copy, then you should expect svn to take a good chunk of time
and space.

IOW, stop doing that :-)

That said, even if we were desirous of "fixing" this(*), we would have a
hard time doing it using threads. The Subversion client is pretty solidly
single-threaded. We take no precautions for operation in a multi-threaded
environment.


(*) I'd be interested in what they are doing. Is this a use case we might
see elsewhere? Or is this something silly they are doing, that would not be
seen elsewhere?
