lucene-dev mailing list archives

From Dave Kor <>
Subject Re: CachingDirectory contribution
Date Mon, 08 Oct 2001 04:31:28 GMT
> A while back I wrote a CachingDirectory
> implementation for Lucene which
> allows for caching an index on a local machine other
> than the "root"
> machine. This can be very useful for handling heavy
> load (such as David
> Snyder's 13 GB index :-))

13GB is considered a light load for Lucene. I am
currently running a Lucene demo on my old but trusty
Pentium 120 MHz laptop with a 9GB index. It takes
Lucene about 20 seconds to handle the very first
query, probably because it is loading the index into
memory. All subsequent queries are instantaneous.

Anyway, I'm very curious as to how your directory
caching code works. What does it cache? Files?
Previously read data? Have you measured the
performance improvement gained by your caching
system? What index size did you use to measure it?
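For reference, here is one way such a caching scheme might work: a minimal sketch, not the contributor's actual code. It assumes the cache simply copies whole index files from the shared "root" location to local disk on first access and serves all later reads locally; the class and method names are hypothetical.

```java
import java.io.*;
import java.nio.file.*;

// Hypothetical sketch: copy files from a shared "root" index directory to a
// local cache the first time they are requested, then serve reads locally.
public class LocalFileCache {
    private final Path root;   // the shared "root" index location
    private final Path cache;  // the local cache directory

    public LocalFileCache(Path root, Path cache) {
        this.root = root;
        this.cache = cache;
    }

    // Returns a local copy of the named index file, copying it on first access.
    public Path open(String name) throws IOException {
        Path local = cache.resolve(name);
        if (!Files.exists(local)) {
            Files.createDirectories(cache);
            Files.copy(root.resolve(name), local); // one-time network/disk hit
        }
        return local; // subsequent calls return the cached copy directly
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("root");
        Path cache = Files.createTempDirectory("cache");
        Files.write(root.resolve("seg.frq"), "index data".getBytes());

        LocalFileCache c = new LocalFileCache(root, cache);
        Path first = c.open("seg.frq");   // copied from root
        Path second = c.open("seg.frq");  // served from the local cache
        System.out.println(new String(Files.readAllBytes(second)));
    }
}
```

A real implementation would also have to invalidate cached files when the root index is updated, which is where most of the complexity presumably lives.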

If it does improve performance for huge indexes, I'll
certainly be interested in trying it out.

This leads me to yet another of my burning questions:
has anyone pushed Lucene to its limits yet? If so,
what are they? What happens when Lucene hits its limit?
Does it throw an exception? Core dump?

> In addition to that I could also provide my
> OracleDirectory
> implementation which stores all index files into an
> Oracle database
> instead of a file system structure. I haven't done a
> SQLServerDirectory,
> but I'm willing to implement it as well :)

I assume you're using BLOBs to store the index files? 
What are the advantages of using the Oracle directory
over just using the file system?
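If the index files are indeed stored as BLOBs, the directory contract reduces to a named byte-store. A minimal sketch of that idea, using an in-memory map in place of an Oracle table (assumed schema: a name column plus a BLOB data column), since a live database is out of scope here; all names below are hypothetical, not the contributor's API.

```java
import java.util.*;

// Hypothetical sketch of a database-backed directory: each index file is a
// named blob. A HashMap stands in for the assumed table (name, data BLOB);
// a real implementation would issue JDBC inserts/selects instead.
public class BlobStore {
    private final Map<String, byte[]> table = new HashMap<>();

    public void write(String name, byte[] data) { table.put(name, data.clone()); }
    public byte[] read(String name)             { return table.get(name).clone(); }
    public boolean exists(String name)          { return table.containsKey(name); }
    public void delete(String name)             { table.remove(name); }
    public Set<String> list()                   { return new TreeSet<>(table.keySet()); }

    public static void main(String[] args) {
        BlobStore store = new BlobStore();
        store.write("segments", "seg info".getBytes());
        System.out.println(store.list());                 // the stored file names
        System.out.println(new String(store.read("segments")));
    }
}
```

One plausible advantage over the file system is that the index then shares the database's backup, replication, and transaction machinery; the cost is that every read goes through the database rather than the OS page cache.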

