lucene-java-user mailing list archives

From "Rob Staveley (Tom)" <rstave...@seseit.com>
Subject RE: Compound / non-compound index files and SIGKILL
Date Mon, 05 Jun 2006 17:31:48 GMT
Thanks for the advice, Otis. It makes sense that they are not to be trusted
after the brutality of SIGKILL (the untrappable signal). Alas, SIGTERM
wasn't breaking out of my hang. I'll see what's snagging it in my
ShutdownHooks, because I need to be able to handle hangs one way or another:
there are so many content handlers in the project now, and regenerating
indexes isn't much fun.

-----Original Message-----
From: Otis Gospodnetic [mailto:otis_gospodnetic@yahoo.com] 
Sent: 05 June 2006 17:58
To: java-user@lucene.apache.org
Subject: Re: Compound / non-compound index files and SIGKILL

Rob,
My guess is that those are orphans that you cannot save and add to your
index.  Who knows what state things are in if you killed the JVM.  I think
you could try editing your segments file and manually adding all segments to
it (e.g. _15djq from your example), and then try optimizing the index, but
if I had to bet, I'd bet this would just blow up in your face during
optimization.  The solution is to figure out why things hang and fix the
culprit.
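
If you do attempt it, the optimize step itself would look something like the
sketch below (assuming the 1.9/2.0-era API; the index path is a placeholder,
and the directory should be backed up first, since a damaged index may well
make this throw):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;

public class TryOptimize {
    public static void main(String[] args) throws Exception {
        // false = open the existing index rather than creating a fresh one
        IndexWriter writer = new IndexWriter("/path/to/index", new StandardAnalyzer(), false);
        // optimize() merges every segment listed in the segments file into
        // one; it will throw if a listed segment references missing or
        // corrupt files.
        writer.optimize();
        writer.close();
    }
}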

Another possibility might be to add ShutdownHooks to your application and
try to stop indexing gracefully.  However, with SIGKILL I'm afraid shutdown
hooks don't get to run (double-check that, I'm not certain).
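
A minimal sketch of such a hook, assuming a single IndexWriter to close
(SIGTERM and normal exits trigger it; the JVM cannot intercept SIGKILL, so
the hook never runs in that case):

import java.io.IOException;

import org.apache.lucene.index.IndexWriter;

public class GracefulIndexer {
    // Register a hook that closes the writer on SIGTERM/SIGINT or a normal
    // exit. The JVM cannot trap SIGKILL (kill -9), so it won't run then.
    public static void registerCloseHook(final IndexWriter writer) {
        Runtime.getRuntime().addShutdownHook(new Thread() {
            public void run() {
                try {
                    writer.close(); // flushes buffered docs, releases the write lock
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        });
    }
}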

Otis

----- Original Message ----
From: Rob Staveley (Tom) <rstaveley@seseit.com>
To: java-user@lucene.apache.org
Sent: Monday, June 5, 2006 6:17:11 AM
Subject: Compound / non-compound index files and SIGKILL

I've been indexing live data into a compound index from an MTA. I'm
resolving a bunch of problems unrelated to Lucene (disparate hangs in my
content handlers). When I get a hang, I typically need to kill my daemon,
alas more often than not using kill -9 (SIGKILL).

However, these SIGKILLs are leaving large temporary(?) files, which I guess
are non-compound index files written transiently before being packed into
the working .cfs files:

-rw-r--r--  1  373138432  Jun  2 13:42  _18hup.fdt
-rw-r--r--  1    5054464  Jun  2 13:42  _18hup.fdx
-rw-r--r--  1        426  Jun  2 13:42  _18hup.fnm

-rw-r--r--  1  457253888  Jun  2 09:22  _15djq.fdt
-rw-r--r--  1    6205440  Jun  2 09:22  _15djq.fdx
-rw-r--r--  1        426  Jun  2 09:21  _15djq.fnm

They are left intact after restarting my daemon. Presumably they are not
treated as being part of the compound index. I see no corresponding .cfs
file for them. 
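
As I understand it, whether those per-segment files get packed into a .cfs
at all is a writer setting; a sketch of what I mean (1.9/2.0-era API, with a
placeholder path):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;

public class CompoundFormat {
    public static void main(String[] args) throws Exception {
        IndexWriter writer = new IndexWriter("/path/to/index", new StandardAnalyzer(), false);
        // With compound format on (the default), files such as .fdt/.fdx/.fnm
        // exist only briefly before being packed into a .cfs; a SIGKILL in
        // that window can strand them with no .cfs and no segments entry
        // pointing at them.
        writer.setUseCompoundFile(true);
        writer.close();
    }
}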

As a consequence of these files, I suspect, I am getting a very large overall
disk requirement for my index, presumably because of replicated field data.
My guess is that the field data in the orphaned .fdt files needs to be
regenerated.

In another index directory from a previous test run (again with SIGKILLs), I
have 98 GB of index files, with only 12 GB devoted to compound files for the
field index (.cfs). The rest of the disk space is used by orphaned
uncompounded index files; I see 51 GB devoted to uncompounded field data
(.fdt), 13 GB devoted to term positions (.prx) and 13 GB devoted to term
frequencies (.frq).

Here's my question:

How can I attempt to merge these orphaned files into the compound index
using IndexWriter.addIndexes(), or would I be foolish to attempt this?
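
The kind of call I have in mind is sketched below (paths are placeholders;
it presumes the orphans could be presented as a complete index in their own
directory, which may be exactly what's impossible):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class MergeOrphans {
    public static void main(String[] args) throws Exception {
        IndexWriter writer = new IndexWriter("/path/to/main-index", new StandardAnalyzer(), false);
        // addIndexes() merges complete indexes, each with its own segments
        // file; a directory of loose .fdt/.fdx/.fnm orphans presumably would
        // not qualify.
        Directory[] others = { FSDirectory.getDirectory("/path/to/orphans", false) };
        writer.addIndexes(others);
        writer.close();
    }
}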



