lucene-dev mailing list archives

From "Adar, Eytan" <ey...@exch.hpl.hp.com>
Subject indexing race condition?
Date Fri, 07 Dec 2001 19:16:17 GMT
I have a piece of code that indexes online: it watches a set of files,
indexing new ones and removing from the index the ones that get deleted.

The problem I'm encountering is that newly added documents aren't actually
visible until I flush/close the index.  This means that if my user adds a
file and then immediately deletes it, the delete misses and the text still
ends up in the index.

I've tried calling optimize(), but that doesn't seem to do it.  It seems
that I actually need to close the writer and reopen it, and I don't want to
do that after every new document.

In other words:

add(d1) -> delete(d1) -> get(d1) = d1  (not what I want)
add(d1) -> close index -> delete(d1) -> get(d1) = null (what I want, but
inefficient)
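For what it's worth, the two sequences above can be modeled outside Lucene
with a tiny toy "index" in which adds sit in a buffer and only become
searchable after close(), so a delete issued before the close has nothing to
match.  This is just a sketch of the visibility problem, not Lucene code;
the class and method names are all made up:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a writer whose adds are only visible after close().
class BufferedIndex {
    private final Map<String, String> committed = new HashMap<>(); // searchable
    private final Map<String, String> pending = new HashMap<>();   // buffered adds

    void add(String id, String text) { pending.put(id, text); }

    // Deletes only see committed documents, like deleting via a reader
    // opened before the writer was closed.
    void delete(String id) { committed.remove(id); }

    // Flush buffered adds into the searchable view.
    void close() { committed.putAll(pending); pending.clear(); }

    String get(String id) { return committed.get(id); }
}
```

With this model, add(d1) then delete(d1) then close() leaves d1 in the
index (the delete ran while d1 was still buffered), while add(d1), close(),
delete(d1) leaves it gone — matching the two sequences above.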

I could just queue up all the delete requests and execute them (once in a
while) after I close the index.  The problem is that some of my delete
operations are actually part of a "replace" procedure (delete, then add).
Deferring the delete until after the re-add would mean it wipes the
replacement document from the index as well (not what I wanted).

I could start doing weird things with timestamping so that I only delete the
first added copy of the document, etc., but that seems like a headache.
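In case it helps anyone thinking along the same lines, here is roughly what
the timestamping idea could look like: stamp each add with a monotonically
increasing generation, queue each delete with the generation current when it
was issued, and at flush time only apply a delete to a document added at or
before that generation, so the re-add half of a replace survives.  Again a
toy sketch with invented names, not Lucene code — and the in-memory map here
overwrites by id, which glosses over the fact that a real index keeps both
copies until the delete actually runs:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Sketch of deferred deletes guarded by generation stamps, so a delete
// queued before a replace's re-add doesn't wipe the new copy.
class StampedIndex {
    private long generation = 0;

    private static final class Doc {
        final String text; final long addedAt;
        Doc(String text, long addedAt) { this.text = text; this.addedAt = addedAt; }
    }
    private static final class PendingDelete {
        final String id; final long issuedAt;
        PendingDelete(String id, long issuedAt) { this.id = id; this.issuedAt = issuedAt; }
    }

    private final Map<String, Doc> docs = new HashMap<>();
    private final Deque<PendingDelete> deletes = new ArrayDeque<>();

    void add(String id, String text) { docs.put(id, new Doc(text, ++generation)); }

    void delete(String id) { deletes.add(new PendingDelete(id, ++generation)); }

    void replace(String id, String text) { delete(id); add(id, text); }

    // Apply queued deletes, skipping documents added after the delete was issued.
    void flushDeletes() {
        while (!deletes.isEmpty()) {
            PendingDelete d = deletes.poll();
            Doc doc = docs.get(d.id);
            if (doc != null && doc.addedAt <= d.issuedAt) docs.remove(d.id);
        }
    }

    String get(String id) { Doc d = docs.get(id); return d == null ? null : d.text; }
}
```

The key check is doc.addedAt <= d.issuedAt: a replace's re-add carries a
newer generation than its queued delete, so the delete skips it, while a
plain delete still removes the older copy.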

Hopefully this makes some sense, and if anyone has a suggestion/solution I'd
love to hear it.

Thanks,

Eytan


--
To unsubscribe, e-mail:   <mailto:lucene-dev-unsubscribe@jakarta.apache.org>
For additional commands, e-mail: <mailto:lucene-dev-help@jakarta.apache.org>

