lucene-dev mailing list archives

From Doron Cohen <DOR...@il.ibm.com>
Subject Re: Lock-less commits
Date Fri, 25 Aug 2006 19:55:21 GMT
I am convinced to take back this version-file proposal - at first I thought
it achieved the same result with fewer changes, but thanks to the responses
here I understand it does not.

I added more comments below...

Michael McCandless <lucene@mikemccandless.com> wrote on 25/08/2006
04:16:22:

>
> >> If I'm understanding this suggestion correctly, the main change in
> >> observable behavior will be that actions performed by a "reader" will
> >> never block or invalidate actions performed by a "writer" -- writers on
> >> the other hand can still block each other.
> >>
> >
> > Yes this is true: here readers do not block writers (nor readers), a
> > writer blocks readers, and a writer blocks other writers.
> >
> >> This seems like it might be the opposite of what most people would want:
> >> that opening "reader" threads for doing searches need to be fast, and if
> >> a writer thread has to wait a half second that's okay.
> >
> > Right... this is an important point that I missed - in the numbered-files
> > approach a reader never has to wait, while in this suggestion readers may
> > need to wait for a writer that commits just now.
>
> Yes ideally a reader should never have to wait.
>
> In my local changes (using numbered files) for lock-less commits, I've
> implemented Yonik's suggestion of opening segments in reverse order,
> and this has definitely reduced the number of "retries" that the
> searchers hit on opening the index.  Even in highly interactive
> searching (open searcher, do one search, close searcher, repeat) the
> retry rate is low.
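The open-in-reverse-order idea above can be sketched as follows. This is a
simplified simulation, not Lucene's actual code: commit points are modeled as
files named `segments_<N>` (decimal generations, whereas real Lucene uses a
different on-disk format), each listing the segment files it references. The
reader tries the newest generation first and falls back to older ones, which
is what keeps the retry rate low.

```python
import os

def open_newest_segments(index_dir, max_retries=3):
    """Sketch of a lock-less reader open: try commit points newest-first.

    A commit point 'segments_<N>' lists one referenced file name per line;
    all referenced files must be visible to the reader, otherwise we fall
    back to the previous generation. If every candidate fails, we retry
    with a fresh directory listing.
    """
    for _attempt in range(max_retries):
        gens = sorted(
            (int(name.rsplit("_", 1)[1])
             for name in os.listdir(index_dir)
             if name.startswith("segments_")),
            reverse=True)  # newest commit point first
        for gen in gens:
            path = os.path.join(index_dir, "segments_%d" % gen)
            try:
                with open(path) as f:
                    referenced = f.read().split()
            except FileNotFoundError:
                continue  # writer removed this commit point; try an older one
            # verify every referenced segment file is visible to us
            if all(os.path.exists(os.path.join(index_dir, r))
                   for r in referenced):
                return gen, referenced
    raise IOError("could not open a consistent commit point")
```

A reader that hits a half-committed or partially visible generation simply
steps back one generation instead of waiting on a lock.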

In this highly interactive search scenario, does every opened searcher need
a directory listing? If so, this could be a performance hit for the
searchers, similar to the one discussed in this thread for writers - and we
should worry more about searchers. In that case, how about maintaining a
separate version-file (as discussed) that lets new searchers/readers easily
detect the most current stable version to use, without needing a directory
listing? I believe that, NFS-wise, it should be as safe to use as a
directory listing (more on NFS below).
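The version-file idea could look roughly like this. A minimal sketch, with
an assumed file name (`index.version` is hypothetical, not anything Lucene
defines): the writer records the newest stable generation via a temp-file
write plus an atomic rename, and a reader reads that single file instead of
listing the directory.

```python
import os

VERSION_FILE = "index.version"  # hypothetical name, for illustration only

def publish_version(index_dir, generation):
    """Writer side: record the newest stable commit generation.

    Written to a temp file and then renamed into place, so a reader never
    observes a partially written version file (rename atomically replaces
    the target on POSIX filesystems).
    """
    tmp = os.path.join(index_dir, VERSION_FILE + ".tmp")
    with open(tmp, "w") as f:
        f.write(str(generation))
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, os.path.join(index_dir, VERSION_FILE))

def read_current_generation(index_dir):
    """Reader side: find the current commit without a directory listing."""
    with open(os.path.join(index_dir, VERSION_FILE)) as f:
        return int(f.read().strip())
```

Whether the single updated-in-place file is as safe as a directory listing
under NFS client caching is exactly the open question in this thread.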

>
> And if necessary we could further reduce retries by adding some small
> [settable] pause into IndexWriter and/or not removing old segment files
> until some time has passed (at the expense of increased temporary disk
> usage).  I'm currently not planning on doing either of these unless in
> benchmarking I see performance regressions.
>
> > Still it is interesting to notice that the way Lucene works today,
> > readers initialization also block one another, so they initialize
> > serially - each reader needs to obtain a commit lock, initialize, and
> > release the lock. In this suggestion all readers initialize in parallel,
> > and perhaps re-initialize if a writer happens to commit just now.
>
> I think this is one of the big improvements of switching to the
> lock-less approach: readers will never wait on other readers, as they do
> now.
>
> > Also, the way that writers do their work - most work is done out of the
> > "commit-window" - so the commit-window is both short and "relatively
> > rare".
>
> Agreed.  This is nice because it already reduces the chance of retry (in
> numbered files approach) or pause (in current Lucene or this proposal).
>
> >> I also don't believe this would "solve" the NFS issues with regards to
> >> the commit lock -- as I recall, the problem stems from NFS not being
> >> able to guarantee transactional order of file operations (ie: I open
> >> the commit lock file, I modify and close segments, I close/delete the
> >> commit file -- a remote NFS client might still see the original
> >> segments file after the commit file is deleted.  Your version file
> >> might suffer the same fate (with reader clients seeing V1==V2 because
> >> the whole file is a second stale)
> >
> > I thought that the (cooperative) lock-file related problems with NFS
> > stem from deleteFile() that may return failure code due to timeout
> > although it actually succeeded, possibly causing the lock-releasing
> > party to retry deleting, but now erroneously deleting a lock file just
> > obtained by another process.
> >
> > The RFC for NFS version 2 (http://tools.ietf.org/html/rfc1094) says:
> > "All of the procedures in the NFS protocol are assumed to be
> > synchronous. When a procedure returns to the client, the client can
> > assume that the operation has completed and any data associated with
> > the request is now on stable storage."
> >
> > So if a writer did actions { a1 , a2 } in this order and they completed,
> > it seems that a reader "seeing" the result of action a2 must also "feel"
> > the result of action a1. (This would prevent errors with the proposed
> > version number.) But I am no expert in NFS and may be wrong here.
>
> Operations are indeed synchronous to the server, though NFS V3 does add
> some support for asynchronous writes, eg see http://nfs.sourceforge.net.
>
> The big problem is the client's caching.  I've seen cases in my own
> testing where the NFS cache on one machine remains stale for quite some
> time (seconds) before "seeing" changes to a file on a server.  I think
> instead relying on a newly created file with the numbered approach (ie
> never before used file name) will avoid the risk that a client-side
> cache is presenting stale (or delayed) contents of a file.

I can't see why relying on newly created files is safer than relying on
files that were updated. In other words, I think we are OK with stale views
- e.g. if the writer wrote 5 new files and then wrote a 6th new file (the
appropriate segments file), the numbered-files approach also counts on the
first 5 files being visible to any client once the 6th file is visible;
otherwise readers might not find the files they are looking for (the
segment-infos just read from the 6th file). I understand "staleness" as:
although the 6th file was already written, a reader might not yet see it,
and would only see the first 5 files. But this is not a problem, because
the reader would attempt to use the previous version. If that failed, the
reader would retry and now probably get the newer version.
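The write ordering this argument depends on can be made explicit. A sketch
of the writer side, under the same assumptions as above (illustrative
`segments_<N>` naming and file contents, not Lucene's real format): every
segment file is written and synced before the commit point that references
it, so under an in-order visibility model a client that can see the 6th
file can also see the first 5.

```python
import os

def commit(index_dir, generation, segment_data):
    """Writer-side ordering sketch for the numbered-files approach.

    All segment files are written (and fsynced) BEFORE the segments_<N>
    file that references them. The commit point itself gets a
    never-before-used name, so a reader that sees it can trust it to be
    complete, provided operations become visible in order.
    """
    names = []
    for name, data in segment_data.items():
        path = os.path.join(index_dir, name)
        with open(path, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        names.append(name)
    # the commit point is written last, under a fresh name
    seg_path = os.path.join(index_dir, "segments_%d" % generation)
    with open(seg_path, "w") as f:
        f.write("\n".join(names))
        f.flush()
        os.fsync(f.fileno())
```

The point of the paragraph above is that the version-file scheme relies on
no stronger an ordering assumption than this.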

So it seems that the numbered-files scheme also relies on a proper order of
operations, or I might be misunderstanding something here. Under this same
"visibility order assumption", I think the version-file approach should be
safe to use on NFS.

Doron


---------------------------------------------------------------------
To unsubscribe, e-mail: java-dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-dev-help@lucene.apache.org

