incubator-couchdb-user mailing list archives

From Robert Newson <rnew...@apache.org>
Subject Re: Speed up Replication with large Database
Date Thu, 30 May 2013 11:01:18 GMT
Hi,

The replication process reads through your database's _changes feed. Copying
the file to the target immediately gives you a redundant copy, but it does
nothing to speed up the replication itself, because the replicator has no way
to detect that you copied the file. What it is doing now is asking the target
whether it already has each document present on the source. Since you copied
the file, the answer is always "yes", but it has to ask anyway. It will go
faster than if you hadn't copied the file, as it won't need to transfer
document or attachment bodies, but it will still take time.

A 600 GB .couch file is very unwieldy, though. Have you ever compacted
that database?
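If not, compaction can be triggered with a single request; a sketch, assuming
admin credentials and a database named "data" on localhost (adjust to your
setup):

    # Trigger compaction; it then runs in the background.
    import requests

    resp = requests.post(
        "http://localhost:5984/data/_compact",
        auth=("admin", "password"),                  # hypothetical credentials
        headers={"Content-Type": "application/json"},
    )
    print(resp.json())  # {'ok': True}

    # Watch progress via the database info document.
    print(requests.get("http://localhost:5984/data").json().get("compact_running"))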

B.


On 29 May 2013 16:24, Tilmann Sittig <sittig@prime-research.com> wrote:
> Hello all,
>
> I have a question concerning replication that I have found nothing about so far.
>
> I was asked to change a single CouchDB server into a load-balanced 2-node setup.
> The setup with haproxy went smoothly, but when I got to replication, I was
> confronted with a large 600 GB data.couch file on the original server and limited
> bandwidth between the servers.
> So I sent a hard disk to the hosting provider, copied the data.couch file, and
> installed it on the new 2nd node.
>
> When I configured a continuous replication after that transfer, I expected a much
> faster replication/sync, but it is still running.
>
> Any ideas how to speed that up?
>
> Thanks for your time,
>
> T.Sittig
>
>
