From: Koji Sekiguchi
Date: Tue, 13 Sep 2011 09:42:40 +0900
To: java-user@lucene.apache.org
Subject: Re: background merge hit exception
Message-ID: <4E6EA700.4020902@r.email.ne.jp>
I've got some follow-up from the user.

> Is it possible the disk filled up? Though I'd expect an IOE during write
> or close in that case.
>
> In this case nothing should be lost in the index: the merge simply
> refused to commit itself, since it detected something went wrong. But
> I believe we also have the same check during flush... have they hit an
> exception during flush?

They couldn't find any errors, including disk full, in their Solr log, Tomcat log, or syslog, other than the exception in the subject.

> Also: what java version are they running? We added this check
> originally as a workaround for a JRE bug... but usually when that bug
> strikes the file size is very close (like off by just 1 byte or 8
> bytes or something).

They are using JDK 6u15. If you think of anything that could be the cause of this problem, please let me know!

koji
--
Check out "Query Log Visualizer" for Apache Solr
http://www.rondhuit-demo.com/loganalyzer/loganalyzer.html
http://www.rondhuit.com/en/

(11/09/09 21:36), Michael McCandless wrote:
> Interesting...
>
> This wouldn't be caused by the "NFS happily deletes open files"
> problem (= Stale NFS file handle error).
>
> But this could in theory be caused by the NFS client somehow being
> wrong about the file's metadata (file length). It's sort of odd,
> because since the client wrote the file, I wouldn't expect
> any stale client-side cache problems.
>
> What happened is SegmentMerger just merged all the stored docs, and as
> a check at the end it verifies that the fdx file size is exactly 4 +
> numDocs*8 bytes in length, but in your case it wasn't -- it was 10572
> bytes short, and so it aborts the merge.
>
> Is it possible the disk filled up? Though I'd expect an IOE during write
> or close in that case.
>
> In this case nothing should be lost in the index: the merge simply
> refused to commit itself, since it detected something went wrong. But
> I believe we also have the same check during flush... have they hit an
> exception during flush?
>
> Also: what java version are they running? We added this check
> originally as a workaround for a JRE bug... but usually when that bug
> strikes the file size is very close (like off by just 1 byte or 8
> bytes or something).
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> 2011/9/9 Koji Sekiguchi:
>> A user here hit the exception in the subject while optimizing. They're using Solr 1.4
>> (Lucene 2.9) running on a server that mounts the index over NFS.
>>
>> I know about the famous "Stale NFS File Handle IOException" problem, but I believe that
>> causes FileNotFoundException. Is there any chance of hitting the exception in the subject
>> due to NFS? If so, what is the mechanism?
>>
>> The full stack trace is:
>>
>> 2011/09/07 9:40:00 org.apache.solr.update.DirectUpdateHandler2 commit
>> INFO: start commit(optimize=true,waitFlush=true,waitSearcher=true,expungeDeletes=false)
>>
>> :
>>
>> 2011/09/07 9:40:52 org.apache.solr.update.processor.LogUpdateProcessor finish
>> INFO: {} 0 52334
>> 2011/09/07 9:40:52 org.apache.solr.common.SolrException log
>> FATAL: java.io.IOException: background merge hit exception: _73ie:C290089 _73if:C34 _73ig:C31
>> _73ir:C356 into _73is [optimize] [mergeDocStores]
>>         at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2908)
>>         at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2829)
>>         at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:403)
>>         at org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:85)
>>         at org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:169)
>>         at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:69)
>>         at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:54)
>>         at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
>>         at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
>>         at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
>>         at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
>>         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>>         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>>         at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>>         at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>>         at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:525)
>>         at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
>>         at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>>         at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:568)
>>         at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>>         at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286)
>>         at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845)
>>         at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
>>         at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
>>         at java.lang.Thread.run(Thread.java:619)
>> Caused by: java.lang.RuntimeException: mergeFields produced an invalid result: docCount is 290089
>> but fdx file size is 2310144 file=_73is.fdx file exists?=true; now aborting this merge to prevent
>> index corruption
>>         at org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:369)
>>         at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:153)
>>         at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:5112)
>>         at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4675)
>>         at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:235)
>>         at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:291)
>>
>> koji
>> --
>> Check out "Query Log Visualizer" for Apache Solr
>> http://www.rondhuit-demo.com/loganalyzer/loganalyzer.html
>> http://www.rondhuit.com/en/
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
>> For additional commands, e-mail: java-user-help@lucene.apache.org
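For anyone reading this in the archive: the size check Mike describes (fdx must be exactly 4 + numDocs*8 bytes) lines up with the numbers in the stack trace. A minimal sketch of that arithmetic, using the docCount and file size from the exception message (the class name here is just for illustration):

```java
// Sketch: verify the fdx size check described in the thread against the
// numbers reported in the exception. In Lucene 2.9 the .fdx (stored-fields
// index) file is expected to hold a 4-byte header plus one 8-byte pointer
// per document, hence the 4 + numDocs*8 check in SegmentMerger.
public class FdxSizeCheck {
    public static void main(String[] args) {
        long docCount = 290089L;      // docCount from the exception message
        long actualSize = 2310144L;   // fdx file size from the exception message

        long expectedSize = 4L + docCount * 8L;
        long shortfall = expectedSize - actualSize;

        System.out.println("expected fdx size: " + expectedSize); // 2320716
        System.out.println("bytes short:       " + shortfall);    // 10572
    }
}
```

The 10572-byte shortfall matches Mike's remark exactly, and is far larger than the 1- or 8-byte discrepancies the old JRE-bug workaround was targeting, which points at a file-length problem (e.g. NFS metadata) rather than that bug.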