incubator-couchdb-user mailing list archives

From Robert Newson <robert.new...@gmail.com>
Subject Re: couchdb-lucene reindexes when restarted
Date Fri, 16 Apr 2010 08:21:40 GMT
That's more interesting. IIRC, Lucene's commit() method only writes to disk
if there have been document changes. So if your index function doesn't index
anything at all (say it returns null for every document), then the stored
update_seq is never advanced, and indexing will start over from scratch each
time couchdb-lucene restarts.
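
For example (just a sketch, based on the index function shape from the
couchdb-lucene README), a function like this returns null for every document,
so Lucene never has anything to commit and the checkpointed sequence never
moves:

    function(doc) {
        // this condition never matches, so no Document is ever built
        if (doc.type === "no_such_type") {
            var ret = new Document();
            ret.add(doc.subject);
            return ret;
        }
        // returning null tells couchdb-lucene to skip the doc entirely
        return null;
    }

As soon as the function returns a Document for at least one doc there is a
real commit, and the update_seq should be persisted along with it.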

B.

On Fri, Apr 16, 2010 at 9:05 AM, Manokaran K <manokaran@gmail.com> wrote:
> On Thu, Apr 15, 2010 at 10:30 PM, Robert Newson <robert.newson@gmail.com> wrote:
>
>> I can't reproduce this. My setup always picks up where I left off, so
>> there must be some step I'm not doing to trigger this.
>>
>> Can you delete the target/indexes and reproduce this from scratch? If
>> so, could you list all the steps?
>>
>
> I get this problem only when I load couchdb with demo data for my
> application - a set of school-related documents, all generated with random
> data by a ruby script. When I tried another ruby script that generated a
> bland set of docs, the problem vanished. So it has to do with the docs I
> generate.
>
> But there are no errors in the couchdb logs when I load the data! The only
> symptom is that the update_seq (in the couchdb-lucene logs) does not get
> bumped up to the highest number; it gets stuck at a lower one. Is there some
> way I can query couchdb for the doc that produced this update_seq? Perhaps
> there will be some clue there!
>
> thanks,
> mano
>
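
(Re the update_seq question above: assuming a reasonably recent CouchDB, the
_changes feed should let you find the offending doc, e.g.

    GET /yourdb/_changes?since=N-1&limit=1

where N is the sequence number couchdb-lucene is stuck at and "yourdb" stands
in for the database being indexed; the returned row gives the doc id, which
you can then fetch directly and inspect.)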
