incubator-couchdb-user mailing list archives

From Rob Pettefar <>
Subject Re: View update speed improvements
Date Wed, 02 Mar 2011 14:30:08 GMT
  On 02/03/2011 13:05, Bruno Rohée wrote:
> On Wed, Mar 2, 2011 at 12:33 PM, Rob Pettefar
> <>  wrote:
>>   Hi guys
>> I've got a question about improving the speed at which views are updated in
>> our system:
>> Currently we use a set of database documents to make up whole files after
>> they have been requested out of the system. When a file is submitted back
>> into the database, the old docs that held its data are deleted and new
>> docs are created in their place. This was done for simplicity of design.
>> However, when a large file is submitted into the system, this involves
>> deleting and creating a large number of docs (we are looking at around
>> 4,000 deletes and 4,000 new docs).
>> The views then take some time to update after this has happened.
>> If we were instead to modify the contents of the 4,000 documents (perhaps
>> with some deletions and creations), would this reduce the number of updates
>> the system would have to push through the views and thus reduce the time
>> needed to update the views?
> I think it's pretty dependent on your data: whether your new documents
> are mostly identical to or mostly different from the old ones. If it's
> the former, the process can be sped up quite a bit, as the map function
> will only be called on the changed documents; if it's the latter, not
> much speed gain is to be expected IMHO.
This would probably involve writing over the content of each document, 
even with the same data as before, incurring a new revision number. I 
guess that this would cause the map functions to be run over it again. 
However, I think the key thing here is the question of how mass deletions 
are treated by the view updater.
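To make the trade-off concrete, here is a minimal sketch in pure Python (the function names and the client-side diffing are illustrative assumptions, not CouchDB API calls) counting how many documents the view updater would have to re-process under each strategy. It assumes the client compares bodies itself and skips writes for unchanged docs, so their _rev is never bumped:

```python
def docs_touched_delete_recreate(old_docs, new_docs):
    """Delete everything and write everything fresh: the view updater
    sees every deletion plus every newly created document."""
    return len(old_docs) + len(new_docs)


def docs_touched_update_in_place(old_docs, new_docs):
    """Overwrite an existing id only when its body actually differs;
    unchanged docs keep their _rev, so the map function is not re-run
    on them. Removed ids still count, as deletions also reach the
    view updater."""
    changed_or_created = sum(
        1 for doc_id, body in new_docs.items()
        if old_docs.get(doc_id) != body)
    deleted = sum(1 for doc_id in old_docs if doc_id not in new_docs)
    return changed_or_created + deleted


# Example: 4,000 chunk docs where only 100 bodies actually changed.
old = {f"doc-{i}": {"chunk": i, "data": "x"} for i in range(4000)}
new = dict(old)
for i in range(100):
    new[f"doc-{i}"] = {"chunk": i, "data": "y"}

print(docs_touched_delete_recreate(old, new))   # 8000 docs re-processed
print(docs_touched_update_in_place(old, new))   # 100 docs re-processed
```

This matches Bruno's point: the gain depends entirely on how many bodies actually differ, and it only materialises if unchanged documents are not rewritten (a blind overwrite with identical data would still create a new revision).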
