lucene-dev mailing list archives

From Karel Tejnora <ka...@tejnora.cz>
Subject Lucene Index backboned by DB
Date Tue, 15 Nov 2005 22:23:24 GMT
Hi all,
    we are testing an application that uses Lucene 1.4.3. Thank you guys for 
the great job.
We have an index of around 12 GiB in a single (merged) file. Retrieving hits 
takes a nicely small amount of time, but reading the stored fields takes 
10-100 times longer. I think this is because all of a document's fields are 
read at once.
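
Roughly what we do now to read the stored fields (Lucene 1.4.3 API; the 
index path and field names are only examples):

import org.apache.lucene.document.Document;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class FieldLoadingCost {
    public static void main(String[] args) throws Exception {
        IndexSearcher searcher = new IndexSearcher("/path/to/index");
        Query query = new TermQuery(new Term("contents", "lucene"));

        Hits hits = searcher.search(query);   // fast: only the postings are touched
        for (int i = 0; i < hits.length(); i++) {
            // slow: doc(i) reads every stored field of the document from the
            // .fdt file, even though we only need one of them afterwards
            Document doc = hits.doc(i);
            String title = doc.get("title");
        }
        searcher.close();
    }
}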
I would like to try implementing the Lucene index files as tables in a 
database, with some lazy loading of fields. Searching the web I have only 
found implementations of store.Directory (bdb), but those just hold the data 
as binary streams. That technique would not help much, because BLOB 
operations do not perform well. On the other hand I would lose some of the 
freedom of variable fields per document, but I could avoid a lot of the 
skipping and the many open files. IndexWriter could also get document/term 
locking granularity.
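
What I have in mind is roughly one row per (document, field), so a single 
field can be read without touching the others. Just a sketch; the table 
layout and class name are made up:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Assumed table (not an existing Lucene schema):
//   CREATE TABLE stored_field (
//     doc_id  INTEGER      NOT NULL,
//     name    VARCHAR(64)  NOT NULL,
//     value   CLOB,
//     PRIMARY KEY (doc_id, name)
//   );
public class DbStoredFields {
    private final Connection conn;

    public DbStoredFields(Connection conn) {
        this.conn = conn;
    }

    // Lazy field loading: read only the requested field of a document.
    public String get(int docId, String fieldName) throws Exception {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT value FROM stored_field WHERE doc_id = ? AND name = ?");
        try {
            ps.setInt(1, docId);
            ps.setString(2, fieldName);
            ResultSet rs = ps.executeQuery();
            return rs.next() ? rs.getString(1) : null;
        } finally {
            ps.close();
        }
    }
}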
So I think the way to do this is to extend IndexWriter / IndexReader and 
provide my own implementation of the index.Segment* classes. Is that the 
best way, or am I missing something about how to achieve this? (A rough 
sketch of one possible first step follows below.)
If it is a bad idea, I will be happy to hear about other possibilities.
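
For example, as a first step I imagine wrapping an existing reader with 
FilterIndexReader and only overriding document(int), so the postings stay in 
the normal index files and just the stored fields come from the database. A 
rough sketch, reusing the hypothetical DbStoredFields helper above ("title" 
is only an example field):

import java.io.IOException;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.FilterIndexReader;
import org.apache.lucene.index.IndexReader;

public class DbFieldsIndexReader extends FilterIndexReader {
    private final DbStoredFields fields;

    public DbFieldsIndexReader(IndexReader in, DbStoredFields fields) {
        super(in);
        this.fields = fields;
    }

    // Rebuild the Document from the database instead of the .fdx/.fdt files,
    // loading only the fields the application actually asks for.
    public Document document(int n) throws IOException {
        Document doc = new Document();
        try {
            String title = fields.get(n, "title");
            if (title != null) {
                doc.add(Field.UnIndexed("title", title));
            }
        } catch (Exception e) {
            throw new IOException(e.toString());
        }
        return doc;
    }
}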

I would also like to join the development of Lucene. Are there some pointers 
on how to start?

Thanks for reading this,
sorry if I made some mistakes

Karel


---------------------------------------------------------------------
To unsubscribe, e-mail: java-dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-dev-help@lucene.apache.org

