incubator-couchdb-user mailing list archives

From Paul Hirst <>
Subject Re: Copying view file between replicated servers
Date Mon, 24 Jan 2011 13:33:18 GMT
On Mon, 2011-01-24 at 11:26 +0000, Randall Leeds wrote:
> On Mon, Jan 24, 2011 at 01:01, Paul Hirst <> wrote:

> > I just created a new design document with a new view and I just queried
> > the new view on the backup server in order to trigger an index build.
> > It's going to take a few hours to build. I don't really want that load
> > on the live server since it will slow it down too much. I was wondering
> > if, when the index build finishes, I can copy the view file from the
> > backup server to the live server. Will that work or are the view files
> > in some sense server/database specific?
> >
> As a consequence of the strategy mentioned above, this approach would
> work if your update sequence were *identical* on the two databases. I
> *WOULD NOT RECOMMEND* doing this, though. It would certainly be
> unsupported behavior even if you got lucky.

Thanks, this is really useful to know! I shall not try my luck by
attempting this.

> I have two ideas if you need an alternative, but it depends on what
> you're trying to avoid.
> If you cannot deal with waiting for the new index to generate before
> querying it, create the new views in a separate design doc. Query that
> and wait for it to build. Once it has finished, rename the design
> document (update the old one) and your views should be "pre-indexed".

This is actually what I did on the backup server anyway because it's
replicated to the live server.
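For reference, the design-document swap described above might look roughly like this with curl. The host, database, design-document, and view names here are all made up; adjust them to your setup, and note this assumes (as Randall's advice implies) that CouchDB keys view index files by the view definitions rather than the design-document name:

```shell
# Hypothetical database URL -- substitute your own host and db name.
DB=http://localhost:5984/mydb

# 1. Put the new view in a *separate* design document.
curl -X PUT "$DB/_design/new-views" \
     -d '{"views": {"by_date": {"map": "function(doc) { emit(doc.date, null); }"}}}'

# 2. Query it once to trigger the index build (this is the slow part).
curl "$DB/_design/new-views/_view/by_date?limit=1"

# 3. Once the build finishes, copy the same view definitions into the
#    old design document. Because the index is keyed by the view code,
#    the freshly built index should be picked up ("pre-indexed").
```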

Which does bring me to another question: if you accidentally trigger an
index rebuild, is there any way to stop it short of restarting CouchDB?

> If you cannot deal with the load generated by indexing itself, you
> could create a remote query server. Be sure that the CouchDB user can
> SSH without password and add ssh to the beginning of your query server
> command.
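As a sketch, the remote query server suggested above could be configured in local.ini along these lines. The hostname and paths are guesses; check where couchjs and main.js actually live on your system, and make sure the couchdb user can SSH to the remote box without a password:

```ini
; Hypothetical example: run the JavaScript query server on another
; machine over SSH, so the couchjs CPU load lands there instead.
[query_servers]
javascript = ssh couchdb@worker-box /usr/bin/couchjs /usr/share/couchdb/server/main.js
```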


> If all of this makes perfect sense, you can go ahead and give it a
> shot. If it sounds terrifying, let's talk about it or catch me on IRC
> (tilgovi). This is the first time I've recommended anything like this
> be tried, so it probably deserves some close inspection before blindly
> listening to a word :).

This all makes sense but I'm worried it won't solve the problem. The CPU
load from the couchjs process doesn't seem particularly significant in
my case. When I have rebuilt indexes on the live server before, it
seemed it was the disk I/O that slowed everything down. My database
currently stands at 22 million documents and 528 GB, and I guess that's
a lot of disk seeks when reading the documents and writing out the new
index file. So pushing the JavaScript execution over the network onto
another box presumably won't help with that. However, I'm still a bit of
a newbie, so if I've misunderstood I'd love to be put right.

I think what I shall do in this case is fail over to my backup server,
compact what was the live server, and then trigger an index build. Then
I can fail back again. I already do this for compaction purposes, and it
seems I have a similar sort of problem here.
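The failover routine described above might translate to something like the following. The URLs and design-document name are hypothetical; note that database compaction and view compaction are separate endpoints:

```shell
# The node that has just been failed away from and is now idle.
DB=http://old-live-box:5984/mydb

# Compact the database file itself.
curl -X POST -H "Content-Type: application/json" "$DB/_compact"

# Compact the view index for a specific design doc (name is made up).
curl -X POST -H "Content-Type: application/json" "$DB/_compact/my-ddoc"

# Trigger the index build for the new view before failing back.
curl "$DB/_design/my-ddoc/_view/by_date?limit=1"
```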


Sophos Limited, The Pentagon, Abingdon Science Park, Abingdon, OX14 3YP, United Kingdom.
Company Reg No 2096520. VAT Reg No GB 991 2418 08.
