Subject: svn commit: r881077 [1/2] - in /incubator/lucene.net/trunk/C#/src: ./ Demo/DeleteFiles/ Demo/DemoLib/ Demo/IndexFiles/ Demo/IndexHtml/ Demo/SearchFiles/ Lucene.Net/ Lucene.Net/Analysis/ Lucene.Net/Analysis/Standard/ Lucene.Net/Index/ Lucene.Net/QueryPa...
Date: Tue, 17 Nov 2009 01:13:58 -0000
To: lucene-net-commits@incubator.apache.org
From: aroush@apache.org
X-Mailer: svnmailer-1.0.8
Message-Id: <20091117011359.367EE23888D8@eris.apache.org>

Author: aroush
Date: Tue Nov 17 01:13:56 2009
New Revision: 881077

URL: http://svn.apache.org/viewvc?rev=881077&view=rev
Log:
Apache Lucene.Net 2.9.1 build 001 "Beta" (port of Java Lucene 2.9.1 to Lucene.Net)

Added:
    incubator/lucene.net/trunk/C#/src/Test/Search/TestPrefixInBooleanQuery.cs
Modified:
    incubator/lucene.net/trunk/C#/src/CHANGES.txt
    incubator/lucene.net/trunk/C#/src/Demo/DeleteFiles/AssemblyInfo.cs
    incubator/lucene.net/trunk/C#/src/Demo/DemoLib/AssemblyInfo.cs
    incubator/lucene.net/trunk/C#/src/Demo/IndexFiles/AssemblyInfo.cs
    incubator/lucene.net/trunk/C#/src/Demo/IndexHtml/AssemblyInfo.cs
    incubator/lucene.net/trunk/C#/src/Demo/SearchFiles/AssemblyInfo.cs
    incubator/lucene.net/trunk/C#/src/HISTORY.txt
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Standard/StandardAnalyzer.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Standard/StandardTokenizer.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/StopAnalyzer.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/StopFilter.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TeeSinkTokenFilter.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Token.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TokenFilter.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TokenStream.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Tokenizer.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/AssemblyInfo.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/DirectoryReader.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/DocumentsWriter.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/IndexReader.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/IndexWriter.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/TermsHashPerField.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Lucene.Net.csproj
    incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/MultiFieldQueryParser.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParser.JJ
    incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParser.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParserTokenManager.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/FuzzyQuery.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/Hits.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/IndexSearcher.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/NumericRangeQuery.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/Payloads/PayloadNearQuery.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/Scorer.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/Searchable.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/Searcher.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Util/Constants.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Util/Version.cs
    incubator/lucene.net/trunk/C#/src/Test/Analysis/BaseTokenStreamTestCase.cs
    incubator/lucene.net/trunk/C#/src/Test/Analysis/TestStandardAnalyzer.cs
    incubator/lucene.net/trunk/C#/src/Test/Analysis/TestTokenStreamBWComp.cs
    incubator/lucene.net/trunk/C#/src/Test/AssemblyInfo.cs
    incubator/lucene.net/trunk/C#/src/Test/Index/TestBackwardsCompatibility.cs
    incubator/lucene.net/trunk/C#/src/Test/Index/TestIndexWriter.cs
    incubator/lucene.net/trunk/C#/src/Test/Index/TestIndexWriterReader.cs
    incubator/lucene.net/trunk/C#/src/Test/QueryParser/TestQueryParser.cs
    incubator/lucene.net/trunk/C#/src/Test/Search/Payloads/TestPayloadNearQuery.cs
    incubator/lucene.net/trunk/C#/src/Test/Search/TestFuzzyQuery.cs
    incubator/lucene.net/trunk/C#/src/Test/Search/TestNumericRangeQuery32.cs
    incubator/lucene.net/trunk/C#/src/Test/Search/TestNumericRangeQuery64.cs
    incubator/lucene.net/trunk/C#/src/Test/Test.csproj

Modified: incubator/lucene.net/trunk/C#/src/CHANGES.txt
URL:
http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/CHANGES.txt?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/CHANGES.txt (original)
+++ incubator/lucene.net/trunk/C#/src/CHANGES.txt Tue Nov 17 01:13:56 2009
@@ -1,5 +1,72 @@
 Lucene Change Log
-$Id: CHANGES.txt 817268 2009-09-21 14:23:44Z markrmiller $
+$Id: CHANGES.txt 832363 2009-11-03 09:37:36Z mikemccand $
+
+======================= Release 2.9.1 2009-11-06 =======================
+
+Changes in backwards compatibility policy
+
+ * LUCENE-2002: Add required Version matchVersion argument when
+   constructing QueryParser or MultiFieldQueryParser and, default (as
+   of 2.9) enablePositionIncrements to true to match
+   StandardAnalyzer's 2.9 default (Uwe Schindler, Mike McCandless)
+
+Bug fixes
+
+ * LUCENE-1974: Fixed nasty bug in BooleanQuery (when it used
+   BooleanScorer for scoring), whereby some matching documents fail to
+   be collected. (Fulin Tang via Mike McCandless)
+
+ * LUCENE-1124: Make sure FuzzyQuery always matches the precise term.
+   (stefatwork@gmail.com via Mike McCandless)
+
+ * LUCENE-1976: Fix IndexReader.isCurrent() to return the right thing
+   when the reader is a near real-time reader. (Jake Mannix via Mike
+   McCandless)
+
+ * LUCENE-1986: Fix NPE when scoring PayloadNearQuery (Peter Keegan,
+   Mark Miller via Mike McCandless)
+
+ * LUCENE-1992: Fix thread hazard if a merge is committing just as an
+   exception occurs during sync (Uwe Schindler, Mike McCandless)
+
+ * LUCENE-1995: Note in javadocs that IndexWriter.setRAMBufferSizeMB
+   cannot exceed 2048 MB, and throw IllegalArgumentException if it
+   does. (Aaron McKee, Yonik Seeley, Mike McCandless)
+
+ * LUCENE-2004: Fix Constants.LUCENE_MAIN_VERSION to not be inlined
+   by client code. (Uwe Schindler)
+
+ * LUCENE-2016: Replace illegal U+FFFF character with the replacement
+   char (U+FFFD) during indexing, to prevent silent index corruption.
+   (Peter Keegan, Mike McCandless)
+
+API Changes
+
+ * Un-deprecate search(Weight weight, Filter filter, int n) from
+   Searchable interface (deprecated by accident). (Uwe Schindler)
+
+ * Un-deprecate o.a.l.util.Version constants. (Mike McCandless)
+
+ * LUCENE-1987: Un-deprecate some ctors of Token, as they will not
+   be removed in 3.0 and are still useful. Also add some missing
+   o.a.l.util.Version constants for enabling invalid acronym
+   settings in StandardAnalyzer to be compatible with the coming
+   Lucene 3.0. (Uwe Schindler)
+
+ * LUCENE-1973: Un-deprecate IndexSearcher.setDefaultFieldSortScoring,
+   to allow controlling per-IndexSearcher whether scores are computed
+   when sorting by field. (Uwe Schindler, Mike McCandless)
+
+Documentation
+
+ * LUCENE-1955: Fix Hits deprecation notice to point users in right
+   direction. (Mike McCandless, Mark Miller)
+
+ * Fix javadoc about score tracking done by search methods in Searcher
+   and IndexSearcher. (Mike McCandless)
+
+ * LUCENE-2008: Javadoc improvements for TokenStream/Tokenizer/Token
+   (Luke Nezda via Mike McCandless)
 
 ======================= Release 2.9.0 2009-09-23 =======================
 
@@ -99,6 +166,11 @@
    abstract rather than an interface) back compat break if you have overridden
    Query.creatWeight, so we have taken the opportunity to make this
    change. (Tim Smith, Shai Erera via Mark Miller)
+
+ * LUCENE-1708 - IndexReader.document() no longer checks if the document is
+   deleted. You can call IndexReader.isDeleted(n) prior to calling document(n).
+   (Shai Erera via Mike McCandless)
+
 
 Changes in runtime behavior
 
@@ -149,9 +221,6 @@
    rely on this behavior by the 3.0 release of Lucene. (Jonathan Mamou,
    Mark Miller via Mike McCandless)
 
- * LUCENE-1708 - IndexReader.document() no longer checks if the document is
-   deleted. You can call IndexReader.isDeleted(n) prior to calling document(n).
-   (Shai Erera via Mike McCandless)
 
  * LUCENE-1715: Finalizers have been removed from the 4 core classes
    that still had them, since they will cause GC to take longer, thus
@@ -793,7 +862,7 @@
    using CloseableThreadLocal internally. (Jason Rutherglen via Mike
    McCandless).
 
- * LUCENE-1224: Short circuit FuzzyQuery.rewrite when input token length
+ * LUCENE-1124: Short circuit FuzzyQuery.rewrite when input token length
    is small compared to minSimilarity. (Timo Nentwig, Mark Miller)
 
  * LUCENE-1316: MatchAllDocsQuery now avoids the synchronized

Modified: incubator/lucene.net/trunk/C#/src/Demo/DeleteFiles/AssemblyInfo.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Demo/DeleteFiles/AssemblyInfo.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Demo/DeleteFiles/AssemblyInfo.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Demo/DeleteFiles/AssemblyInfo.cs Tue Nov 17 01:13:56 2009
@@ -33,7 +33,7 @@
 [assembly: AssemblyDefaultAlias("Lucene.Net")]
 [assembly: AssemblyCulture("")]
 
-[assembly: AssemblyInformationalVersionAttribute("2.9.0")]
+[assembly: AssemblyInformationalVersionAttribute("2.9.1")]
 
 //
 // Version information for an assembly consists of the following four values:
@@ -46,7 +46,7 @@
 // You can specify all the values or you can default the Revision and Build Numbers
 // by using the '*' as shown below:
 
-[assembly: AssemblyVersion("2.9.0.001")]
+[assembly: AssemblyVersion("2.9.1.001")]
 
 //
 // In order to sign your assembly you must specify a key to use.
Refer to the

Modified: incubator/lucene.net/trunk/C#/src/Demo/DemoLib/AssemblyInfo.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Demo/DemoLib/AssemblyInfo.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Demo/DemoLib/AssemblyInfo.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Demo/DemoLib/AssemblyInfo.cs Tue Nov 17 01:13:56 2009
@@ -33,7 +33,7 @@
 [assembly: AssemblyDefaultAlias("Lucene.Net")]
 [assembly: AssemblyCulture("")]
 
-[assembly: AssemblyInformationalVersionAttribute("2.9.0")]
+[assembly: AssemblyInformationalVersionAttribute("2.9.1")]
 
 //
 // Version information for an assembly consists of the following four values:
@@ -46,7 +46,7 @@
 // You can specify all the values or you can default the Revision and Build Numbers
 // by using the '*' as shown below:
 
-[assembly: AssemblyVersion("2.9.0.001")]
+[assembly: AssemblyVersion("2.9.1.001")]
 
 //
 // In order to sign your assembly you must specify a key to use.
Refer to the

Modified: incubator/lucene.net/trunk/C#/src/Demo/IndexFiles/AssemblyInfo.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Demo/IndexFiles/AssemblyInfo.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Demo/IndexFiles/AssemblyInfo.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Demo/IndexFiles/AssemblyInfo.cs Tue Nov 17 01:13:56 2009
@@ -33,7 +33,7 @@
 [assembly: AssemblyDefaultAlias("Lucene.Net")]
 [assembly: AssemblyCulture("")]
 
-[assembly: AssemblyInformationalVersionAttribute("2.9.0")]
+[assembly: AssemblyInformationalVersionAttribute("2.9.1")]
 
 //
 // Version information for an assembly consists of the following four values:
@@ -46,7 +46,7 @@
 // You can specify all the values or you can default the Revision and Build Numbers
 // by using the '*' as shown below:
 
-[assembly: AssemblyVersion("2.9.0.001")]
+[assembly: AssemblyVersion("2.9.1.001")]
 
 //
 // In order to sign your assembly you must specify a key to use.
Refer to the

Modified: incubator/lucene.net/trunk/C#/src/Demo/IndexHtml/AssemblyInfo.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Demo/IndexHtml/AssemblyInfo.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Demo/IndexHtml/AssemblyInfo.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Demo/IndexHtml/AssemblyInfo.cs Tue Nov 17 01:13:56 2009
@@ -33,7 +33,7 @@
 [assembly: AssemblyDefaultAlias("Lucene.Net")]
 [assembly: AssemblyCulture("")]
 
-[assembly: AssemblyInformationalVersionAttribute("2.9.0")]
+[assembly: AssemblyInformationalVersionAttribute("2.9.1")]
 
 //
 // Version information for an assembly consists of the following four values:
@@ -46,7 +46,7 @@
 // You can specify all the values or you can default the Revision and Build Numbers
 // by using the '*' as shown below:
 
-[assembly: AssemblyVersion("2.9.0.001")]
+[assembly: AssemblyVersion("2.9.1.001")]
 
 //
 // In order to sign your assembly you must specify a key to use.
Refer to the

Modified: incubator/lucene.net/trunk/C#/src/Demo/SearchFiles/AssemblyInfo.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Demo/SearchFiles/AssemblyInfo.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Demo/SearchFiles/AssemblyInfo.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Demo/SearchFiles/AssemblyInfo.cs Tue Nov 17 01:13:56 2009
@@ -33,7 +33,7 @@
 [assembly: AssemblyDefaultAlias("Lucene.Net")]
 [assembly: AssemblyCulture("")]
 
-[assembly: AssemblyInformationalVersionAttribute("2.9.0")]
+[assembly: AssemblyInformationalVersionAttribute("2.9.1")]
 
 //
 // Version information for an assembly consists of the following four values:
@@ -46,7 +46,7 @@
 // You can specify all the values or you can default the Revision and Build Numbers
 // by using the '*' as shown below:
 
-[assembly: AssemblyVersion("2.9.0.001")]
+[assembly: AssemblyVersion("2.9.1.001")]
 
 //
 // In order to sign your assembly you must specify a key to use.
Refer to the

Modified: incubator/lucene.net/trunk/C#/src/HISTORY.txt
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/HISTORY.txt?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/HISTORY.txt (original)
+++ incubator/lucene.net/trunk/C#/src/HISTORY.txt Tue Nov 17 01:13:56 2009
@@ -2,6 +2,10 @@
 -------------------------
 
+16Nov09:
+   - Release: Apache Lucene.Net 2.9.1 build 001 "Beta"
+
+
 03Nov09:
    - Release: Apache Lucene.Net 2.9.0 build 001 "Alpha"
    - Port: Test code

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Standard/StandardAnalyzer.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Analysis/Standard/StandardAnalyzer.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Standard/StandardAnalyzer.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Standard/StandardAnalyzer.cs Tue Nov 17 01:13:56 2009
@@ -23,22 +23,22 @@
 
 namespace Lucene.Net.Analysis.Standard
 {
 
-	/// Filters {@link StandardTokenizer} with {@link StandardFilter}, {@link
-	/// LowerCaseFilter} and {@link StopFilter}, using a list of
-	/// English stop words.
+	/// Filters {@link StandardTokenizer} with {@link StandardFilter},
+	/// {@link LowerCaseFilter} and {@link StopFilter}, using a list of English stop
+	/// words.
 	///
 	///
-	///
-	/// You must specify the required {@link Version}
-	/// compatibility when creating StandardAnalyzer:
+	///
+	/// You must specify the required {@link Version} compatibility when creating
+	/// StandardAnalyzer:
 	///
 	///
 	///
-	/// $Id: StandardAnalyzer.java 811070 2009-09-03 18:31:41Z hossman $
+	/// $Id: StandardAnalyzer.java 829134 2009-10-23 17:18:53Z mikemccand $
 	///
 	public class StandardAnalyzer : Analyzer
 	{
@@ -280,6 +280,14 @@
 		{
 			useDefaultStopPositionIncrements = true;
 		}
+		if (matchVersion.OnOrAfter(Version.LUCENE_24))
+		{
+			replaceInvalidAcronym = defaultReplaceInvalidAcronym;
+		}
+		else
+		{
+			replaceInvalidAcronym = false;
+		}
 	}
 
 	/// Constructs a {@link StandardTokenizer} filtered by a {@link

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Standard/StandardTokenizer.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Analysis/Standard/StandardTokenizer.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Standard/StandardTokenizer.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Standard/StandardTokenizer.cs Tue Nov 17 01:13:56 2009
@@ -25,6 +25,7 @@
 using TermAttribute = Lucene.Net.Analysis.Tokenattributes.TermAttribute;
 using TypeAttribute = Lucene.Net.Analysis.Tokenattributes.TypeAttribute;
 using AttributeSource = Lucene.Net.Util.AttributeSource;
+using Version = Lucene.Net.Util.Version;
 
 namespace Lucene.Net.Analysis.Standard
 {
@@ -44,6 +45,15 @@
 	///

 	/// Many applications have specific tokenizer needs. If this tokenizer does
 	/// not suit your application, please consider copying this source code
 	/// directory to your project and maintaining your own grammar-based tokenizer.
+	///
+	///
+	///
+	/// You must specify the required {@link Version} compatibility when creating
+	/// StandardAnalyzer:
+	///
 	///
 	///
 	public class StandardTokenizer:Tokenizer
 	{
@@ -107,7 +117,9 @@
 	/// Creates a new instance of the {@link StandardTokenizer}. Attaches the
 	/// input to a newly created JFlex scanner.
 	/// 
-	public StandardTokenizer(System.IO.TextReader input):this(input, false)
+	/// Use {@link #StandardTokenizer(Version, Reader)} instead
+	/// 
+	public StandardTokenizer(System.IO.TextReader input):this(Version.LUCENE_24, input)
 	{
 	}
 
@@ -121,6 +133,8 @@
 	/// 
 	/// See http://issues.apache.org/jira/browse/LUCENE-1068
 	/// 
+	/// Use {@link #StandardTokenizer(Version, Reader)} instead
+	/// 
 	public StandardTokenizer(System.IO.TextReader input, bool replaceInvalidAcronym):base()
 	{
 		InitBlock();
@@ -128,7 +142,27 @@
 		Init(input, replaceInvalidAcronym);
 	}
 
+	/// Creates a new instance of the
+	/// {@link org.apache.lucene.analysis.standard.StandardTokenizer}. Attaches
+	/// the input to the newly created JFlex scanner.
+	/// 
+	/// 
+	/// The input reader
+	/// 
+	/// See http://issues.apache.org/jira/browse/LUCENE-1068
+	/// 
+	public StandardTokenizer(Version matchVersion, System.IO.TextReader input):base()
+	{
+		InitBlock();
+		this.scanner = new StandardTokenizerImpl(input);
+		Init(input, matchVersion);
+	}
+
 	/// Creates a new StandardTokenizer with a given {@link AttributeSource}.
+	/// Use
+	/// {@link #StandardTokenizer(Version, AttributeSource, Reader)}
+	/// instead
+	/// 
 	public StandardTokenizer(AttributeSource source, System.IO.TextReader input, bool replaceInvalidAcronym):base(source)
 	{
 		InitBlock();
@@ -136,7 +170,19 @@
 		Init(input, replaceInvalidAcronym);
 	}
 
+	/// Creates a new StandardTokenizer with a given {@link AttributeSource}.
+	public StandardTokenizer(Version matchVersion, AttributeSource source, System.IO.TextReader input):base(source)
+	{
+		InitBlock();
+		this.scanner = new StandardTokenizerImpl(input);
+		Init(input, matchVersion);
+	}
+
 	/// Creates a new StandardTokenizer with a given {@link Lucene.Net.Util.AttributeSource.AttributeFactory}
+	/// Use
+	/// {@link #StandardTokenizer(Version, org.apache.lucene.util.AttributeSource.AttributeFactory, Reader)}
+	/// instead
+	/// 
 	public StandardTokenizer(AttributeFactory factory, System.IO.TextReader input, bool replaceInvalidAcronym):base(factory)
 	{
 		InitBlock();
@@ -144,6 +190,16 @@
 		Init(input, replaceInvalidAcronym);
 	}
 
+	/// Creates a new StandardTokenizer with a given
+	/// {@link org.apache.lucene.util.AttributeSource.AttributeFactory}
+	/// 
+	public StandardTokenizer(Version matchVersion, AttributeFactory factory, System.IO.TextReader input):base(factory)
+	{
+		InitBlock();
+		this.scanner = new StandardTokenizerImpl(input);
+		Init(input, matchVersion);
+	}
+
 	private void Init(System.IO.TextReader input, bool replaceInvalidAcronym)
 	{
 		this.replaceInvalidAcronym = replaceInvalidAcronym;
@@ -154,6 +210,18 @@
 		typeAtt = (TypeAttribute) AddAttribute(typeof(TypeAttribute));
 	}
 
+	private void Init(System.IO.TextReader input, Version matchVersion)
+	{
+		if (matchVersion.OnOrAfter(Version.LUCENE_24))
+		{
+			Init(input, true);
+		}
+		else
+		{
+			Init(input, false);
+		}
+	}
+
 	// this tokenizer generates three attributes:
 	// offset, positionIncrement and type
 	private TermAttribute termAtt;

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/StopAnalyzer.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Analysis/StopAnalyzer.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/StopAnalyzer.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/StopAnalyzer.cs Tue Nov 17 01:13:56 2009
@@ -1,4 +1,3 @@
-
 /*
  * Licensed to the Apache Software Foundation (ASF) under one or more
  * contributor license agreements. See the NOTICE file distributed with
@@ -18,10 +17,22 @@
 
 using System;
 
+using Version = Lucene.Net.Util.Version;
+
 namespace Lucene.Net.Analysis
 {
 
-	/// Filters {@link LetterTokenizer} with {@link LowerCaseFilter} and {@link StopFilter}.
+	/// Filters {@link LetterTokenizer} with {@link LowerCaseFilter} and
+	/// {@link StopFilter}.
+	///
+	///
+	///
+	/// You must specify the required {@link Version} compatibility when creating
+	/// StopAnalyzer:
+	///
+	///   • As of 2.9, position increments are preserved
+	///
+	///
 	public sealed class StopAnalyzer:Analyzer
 	{
@@ -45,7 +56,7 @@
 	/// Builds an analyzer which removes words in
 	/// ENGLISH_STOP_WORDS.
 	/// 
-	/// Use {@link #StopAnalyzer(boolean)} instead
+	/// Use {@link #StopAnalyzer(Version)} instead
 	/// 
 	public StopAnalyzer()
 	{
@@ -54,12 +65,22 @@
 		enablePositionIncrements = false;
 	}
 
+	/// Builds an analyzer which removes words in ENGLISH_STOP_WORDS.
+	public StopAnalyzer(Version matchVersion)
+	{
+		stopWords = ENGLISH_STOP_WORDS_SET;
+		useDefaultStopPositionIncrement = false;
+		enablePositionIncrements = StopFilter.GetEnablePositionIncrementsVersionDefault(matchVersion);
+	}
+
 	/// Builds an analyzer which removes words in
 	/// ENGLISH_STOP_WORDS.
 	/// 
-	/// See {@link
-	/// StopFilter#setEnablePositionIncrements}
+	/// 
+	/// See {@link StopFilter#SetEnablePositionIncrements}
 	/// 
+	/// Use {@link #StopAnalyzer(Version)} instead
+	/// 
 	public StopAnalyzer(bool enablePositionIncrements)
 	{
 		stopWords = ENGLISH_STOP_WORDS_SET;
@@ -68,7 +89,7 @@
 	}
 
 	/// Builds an analyzer with the stop words from the given set.
-	/// Use {@link #StopAnalyzer(Set, boolean)} instead
+	/// Use {@link #StopAnalyzer(Version, Set)} instead
 	/// 
 	public StopAnalyzer(System.Collections.Hashtable stopWords)
 	{
@@ -78,11 +99,21 @@
 	}
 
 	/// Builds an analyzer with the stop words from the given set.
+	public StopAnalyzer(Version matchVersion, System.Collections.Hashtable stopWords)
+	{
+		this.stopWords = stopWords;
+		useDefaultStopPositionIncrement = false;
+		enablePositionIncrements = StopFilter.GetEnablePositionIncrementsVersionDefault(matchVersion);
+	}
+
+	/// Builds an analyzer with the stop words from the given set.
 	/// Set of stop words
 	/// 
-	/// See {@link
-	/// StopFilter#setEnablePositionIncrements}
+	/// 
+	/// See {@link StopFilter#SetEnablePositionIncrements}
 	/// 
+	/// Use {@link #StopAnalyzer(Version, Set)} instead
+	/// 
 	public StopAnalyzer(System.Collections.Hashtable stopWords, bool enablePositionIncrements)
 	{
 		this.stopWords = stopWords;
@@ -93,6 +124,8 @@
 
 	/// Builds an analyzer which removes words in the provided array.
 	/// Use {@link #StopAnalyzer(Set, boolean)} instead
 	/// 
+	/// Use {@link #StopAnalyzer(Version, Set)} instead
+	/// 
 	public StopAnalyzer(System.String[] stopWords)
 	{
 		this.stopWords = StopFilter.MakeStopSet(stopWords);
@@ -103,10 +136,10 @@
 	/// Builds an analyzer which removes words in the provided array.
 	/// Array of stop words
 	/// 
-	/// See {@link
-	/// StopFilter#setEnablePositionIncrements}
+	/// 
+	/// See {@link StopFilter#SetEnablePositionIncrements}
 	/// 
-	/// Use {@link #StopAnalyzer(Set, boolean)} instead
+	/// Use {@link #StopAnalyzer(Version, Set)} instead
 	/// 
 	public StopAnalyzer(System.String[] stopWords, bool enablePositionIncrements)
 	{
@@ -118,7 +151,7 @@
 	/// Builds an analyzer with the stop words from the given file.
 	/// 
 	/// 
-	/// Use {@link #StopAnalyzer(File, boolean)} instead
+	/// Use {@link #StopAnalyzer(Version, File)} instead
 	/// 
 	public StopAnalyzer(System.IO.FileInfo stopwordsFile)
 	{
@@ -132,9 +165,11 @@
 	/// 
 	/// File to load stop words from
 	/// 
-	/// See {@link
-	/// StopFilter#setEnablePositionIncrements}
+	/// 
+	/// See {@link StopFilter#SetEnablePositionIncrements}
 	/// 
+	/// Use {@link #StopAnalyzer(Version, File)} instead
+	/// 
 	public StopAnalyzer(System.IO.FileInfo stopwordsFile, bool enablePositionIncrements)
 	{
 		stopWords = WordlistLoader.GetWordSet(stopwordsFile);
@@ -142,10 +177,26 @@
 		useDefaultStopPositionIncrement = false;
 	}
 
+	/// Builds an analyzer with the stop words from the given file.
+	/// 
+	/// 
+	/// See above
+	/// 
+	/// File to load stop words from
+	/// 
+	public StopAnalyzer(Version matchVersion, System.IO.FileInfo stopwordsFile)
+	{
+		stopWords = WordlistLoader.GetWordSet(stopwordsFile);
+		this.enablePositionIncrements = StopFilter.GetEnablePositionIncrementsVersionDefault(matchVersion);
+		useDefaultStopPositionIncrement = false;
+	}
+
 	/// Builds an analyzer with the stop words from the given reader.
 	/// 
 	/// 
-	/// Use {@link #StopAnalyzer(Reader, boolean)} instead
+	/// Use {@link #StopAnalyzer(Version, Reader)} instead
 	/// 
 	public StopAnalyzer(System.IO.TextReader stopwords)
 	{
@@ -159,17 +210,33 @@
 	/// 
 	/// Reader to load stop words from
 	/// 
-	/// See {@link
-	/// StopFilter#setEnablePositionIncrements}
+	/// 
	/// See {@link StopFilter#SetEnablePositionIncrements}
 	/// 
+	/// Use {@link #StopAnalyzer(Version, Reader)} instead
+	/// 
 	public StopAnalyzer(System.IO.TextReader stopwords, bool enablePositionIncrements)
 	{
 		stopWords = WordlistLoader.GetWordSet(stopwords);
 		this.enablePositionIncrements = enablePositionIncrements;
 		useDefaultStopPositionIncrement = false;
 	}
-
-	/// Filters LowerCaseTokenizer with StopFilter.
+
+	/// Builds an analyzer with the stop words from the given reader.
+	/// 
+	/// 
+	/// See above
+	/// 
+	/// Reader to load stop words from
+	/// 
+	public StopAnalyzer(Version matchVersion, System.IO.TextReader stopwords)
+	{
+		stopWords = WordlistLoader.GetWordSet(stopwords);
+		this.enablePositionIncrements = StopFilter.GetEnablePositionIncrementsVersionDefault(matchVersion);
+		useDefaultStopPositionIncrement = false;
+	}
+
+	/// Filters LowerCaseTokenizer with StopFilter.
 	public override TokenStream TokenStream(System.String fieldName, System.IO.TextReader reader)
 	{
 		if (useDefaultStopPositionIncrement)

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/StopFilter.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Analysis/StopFilter.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/StopFilter.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/StopFilter.cs Tue Nov 17 01:13:56 2009
@@ -20,6 +20,7 @@
 using PositionIncrementAttribute = Lucene.Net.Analysis.Tokenattributes.PositionIncrementAttribute;
 using TermAttribute = Lucene.Net.Analysis.Tokenattributes.TermAttribute;
 using QueryParser = Lucene.Net.QueryParsers.QueryParser;
+using Version = Lucene.Net.Util.Version;
 
 namespace Lucene.Net.Analysis
 {
@@ -263,14 +264,34 @@
 		return ENABLE_POSITION_INCREMENTS_DEFAULT;
 	}
 
-	/// Set the default position increments behavior of every StopFilter created from now on.
+	/// Returns version-dependent default for enablePositionIncrements. Analyzers
+	/// that embed StopFilter use this method when creating the StopFilter. Prior
+	/// to 2.9, this returns {@link #getEnablePositionIncrementsDefault}. On 2.9
+	/// or later, it returns true.
+	/// 
+	public static bool GetEnablePositionIncrementsVersionDefault(Version matchVersion)
+	{
+		if (matchVersion.OnOrAfter(Version.LUCENE_29))
+		{
+			return true;
+		}
+		else
+		{
+			return ENABLE_POSITION_INCREMENTS_DEFAULT;
+		}
+	}
+
+	/// Set the default position increments behavior of every StopFilter created
+	/// from now on.
 	///

-	/// Note: behavior of a single StopFilter instance can be modified
-	/// with {@link #SetEnablePositionIncrements(boolean)}.
-	/// This static method allows control over behavior of classes using StopFilters internally,
-	/// for example {@link Lucene.Net.Analysis.Standard.StandardAnalyzer StandardAnalyzer}.
+	/// Note: behavior of a single StopFilter instance can be modified with
+	/// {@link #SetEnablePositionIncrements(boolean)}. This static method allows
+	/// control over behavior of classes using StopFilters internally, for
+	/// example {@link Lucene.Net.Analysis.Standard.StandardAnalyzer
+	/// StandardAnalyzer} if used with the no-arg ctor.
 	///
 	/// Default : false.
+	///
 	///
 	///

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TeeSinkTokenFilter.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Analysis/TeeSinkTokenFilter.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TeeSinkTokenFilter.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TeeSinkTokenFilter.cs Tue Nov 17 01:13:56 2009
@@ -46,7 +46,7 @@
 	/// d.add(new Field("f3", final3));
 	/// d.add(new Field("f4", final4));
 	/// 
-	/// In this example, sink1 and sink2 will both get tokens from both
+	/// In this example, sink1 and sink2 will both get tokens from both
 	/// reader1 and reader2 after whitespace tokenizer
 	/// and now we can further wrap any of these in extra analysis, and more "sources" can be inserted if desired.
 	/// It is important, that tees are consumed before sinks (in the above example, the field names must be

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Token.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Analysis/Token.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Token.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Token.cs Tue Nov 17 01:13:56 2009
@@ -38,8 +38,8 @@
 	///

/// The start and end offsets permit applications to re-associate a token with /// its source text, e.g., to display highlighted query terms in a document - /// browser, or to show matching text fragments in a KWIC (KeyWord In Context) - /// display, etc. + /// browser, or to show matching text fragments in a KWIC display, etc. ///

/// The type is a string, assigned by a lexical analyzer /// (a.k.a. tokenizer), naming the lexical or syntactic class that the token @@ -71,9 +71,9 @@ /// associated performance cost has been added (below). The /// {@link #TermText()} method has been deprecated.

///
- ///

Tokenizers and filters should try to re-use a Token - /// instance when possible for best performance, by - /// implementing the {@link TokenStream#Next(Token)} API. + ///

Tokenizers and TokenFilters should try to re-use a Token instance when + /// possible for best performance, by implementing the + /// {@link TokenStream#IncrementToken()} API. /// Failing that, to create a new Token you should first use /// one of the constructors that starts with null text. To load /// the token from a char[] use {@link #SetTermBuffer(char[], int, int)}. @@ -87,30 +87,35 @@ /// set the length of the term text. See LUCENE-969 /// for details.

- ///

Typical reuse patterns: + ///

Typical Token reuse patterns: ///

    - ///
  • Copying text from a string (type is reset to #DEFAULT_TYPE if not specified):
    + ///
  • Copying text from a string (type is reset to {@link #DEFAULT_TYPE} if not + /// specified):
    ///
     	/// return reusableToken.reinit(string, startOffset, endOffset[, type]);
     	/// 
    ///
  • - ///
  • Copying some text from a string (type is reset to #DEFAULT_TYPE if not specified):
    + ///
  • Copying some text from a string (type is reset to {@link #DEFAULT_TYPE} + /// if not specified):
    ///
     	/// return reusableToken.reinit(string, 0, string.length(), startOffset, endOffset[, type]);
     	/// 
    ///
  • /// - ///
  • Copying text from char[] buffer (type is reset to #DEFAULT_TYPE if not specified):
    + ///
  • Copying text from char[] buffer (type is reset to {@link #DEFAULT_TYPE} + /// if not specified):
    ///
     	/// return reusableToken.reinit(buffer, 0, buffer.length, startOffset, endOffset[, type]);
     	/// 
    ///
  • - ///
  • Copying some text from a char[] buffer (type is reset to #DEFAULT_TYPE if not specified):
    + ///
  • Copying some text from a char[] buffer (type is reset to + /// {@link #DEFAULT_TYPE} if not specified):
    ///
     	/// return reusableToken.reinit(buffer, start, end - start, startOffset, endOffset[, type]);
     	/// 
    ///
  • - ///
  • Copying from one one Token to another (type is reset to #DEFAULT_TYPE if not specified):
    + ///
• Copying from one Token to another (type is reset to + /// {@link #DEFAULT_TYPE} if not specified):
    ///
     	/// return reusableToken.reinit(source.termBuffer(), 0, source.termLength(), source.startOffset(), source.endOffset()[, source.type()]);
     	/// 
    @@ -120,7 +125,8 @@ ///
      ///
    • clear() initializes all of the fields to default values. This was changed in contrast to Lucene 2.4, but should affect no one.
    • ///
    • Because TokenStreams can be chained, one cannot assume that the Token's current type is correct.
    • - ///
    • The startOffset and endOffset represent the start and offset in the source text. So be careful in adjusting them.
    • + ///
• The startOffset and endOffset represent the start and end offset in the + /// source text, so be careful in adjusting them.
    • ///
    • When caching a reusable token, clone it. When injecting a cached token into a stream that can be reset, clone it again.
    • ///
    ///

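The reuse patterns listed above all share one contract: mutate the existing instance and return it, rather than allocating a new Token per call. A minimal Python sketch of that contract follows — the class and reinit signature here are illustrative stand-ins, not the real Lucene.Net API:

```python
class Token:
    """Toy stand-in for Lucene's Token, showing the reuse contract."""
    DEFAULT_TYPE = "word"

    def __init__(self):
        self.term, self.start, self.end = "", 0, 0
        self.type = Token.DEFAULT_TYPE

    def reinit(self, text, start, end, typ=None):
        # Overwrite every field so no state leaks from the previous token,
        # then return self so callers can write: return reusable.reinit(...)
        self.term, self.start, self.end = text, start, end
        self.type = typ if typ is not None else Token.DEFAULT_TYPE
        return self

reusable = Token()
t1 = reusable.reinit("quick", 4, 9, "adjective")
t2 = reusable.reinit("fox", 16, 19)          # type resets to DEFAULT_TYPE
assert t1 is t2 is reusable                  # one instance, reused
assert reusable.type == Token.DEFAULT_TYPE
```

This is why the docs above warn that a cached token must be cloned: the producer will keep overwriting the shared instance.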
    @@ -247,8 +253,6 @@ /// /// end offset /// - /// Use {@link #Token(char[], int, int, int, int)} instead. - /// public Token(System.String text, int start, int end) { termText = text; @@ -269,8 +273,6 @@ /// /// token type /// - /// Use {@link #Token(char[], int, int, int, int)} and {@link #SetType(String)} instead. - /// public Token(System.String text, int start, int end, System.String typ) { termText = text; @@ -292,8 +294,6 @@ /// /// token type bits /// - /// Use {@link #Token(char[], int, int, int, int)} and {@link #SetFlags(int)} instead. - /// public Token(System.String text, int start, int end, int flags) { termText = text; Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TokenFilter.cs URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Analysis/TokenFilter.cs?rev=881077&r1=881076&r2=881077&view=diff ============================================================================== --- incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TokenFilter.cs (original) +++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TokenFilter.cs Tue Nov 17 01:13:56 2009 @@ -20,16 +20,13 @@ namespace Lucene.Net.Analysis { - /// A TokenFilter is a TokenStream whose input is another token stream. + /// A TokenFilter is a TokenStream whose input is another TokenStream. ///

    - /// This is an abstract class. - /// NOTE: subclasses must override - /// {@link #IncrementToken()} if the new TokenStream API is used - /// and {@link #Next(Token)} or {@link #Next()} if the old - /// TokenStream API is used. - ///

    - /// See {@link TokenStream} + /// This is an abstract class; subclasses must override {@link #IncrementToken()}. + /// ///

    + /// + /// public abstract class TokenFilter:TokenStream { /// The source of tokens for this filter. Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TokenStream.cs URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Analysis/TokenStream.cs?rev=881077&r1=881076&r2=881077&view=diff ============================================================================== --- incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TokenStream.cs (original) +++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TokenStream.cs Tue Nov 17 01:13:56 2009 @@ -59,7 +59,7 @@ ///
  • Instantiation of TokenStream/{@link TokenFilter}s which add/get /// attributes to/from the {@link AttributeSource}. ///
  • The consumer calls {@link TokenStream#Reset()}. - ///
  • the consumer retrieves attributes from the stream and stores local + ///
  • The consumer retrieves attributes from the stream and stores local /// references to all attributes it wants to access ///
  • The consumer calls {@link #IncrementToken()} until it returns false and /// consumes the attributes after each call. @@ -317,10 +317,15 @@ return onlyUseNewAPI; } - /// Consumers (ie {@link IndexWriter}) use this method to advance the stream to + /// Consumers (i.e., {@link IndexWriter}) use this method to advance the stream to /// the next token. Implementing classes must implement this method and update /// the appropriate {@link AttributeImpl}s with the attributes of the next /// token. + ///

+ /// The producer must make no assumptions about the attributes after the + /// method has returned: the caller may arbitrarily change them. If the + /// producer needs to preserve the state for subsequent calls, it can use + /// {@link #captureState} to create a copy of the current attribute state. ///

    /// This method is called for every token of a document, so an efficient /// implementation is crucial for good performance. To avoid calls to Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Tokenizer.cs URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Analysis/Tokenizer.cs?rev=881077&r1=881076&r2=881077&view=diff ============================================================================== --- incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Tokenizer.cs (original) +++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Tokenizer.cs Tue Nov 17 01:13:56 2009 @@ -22,20 +22,14 @@ namespace Lucene.Net.Analysis { - ///

    A Tokenizer is a TokenStream whose input is a Reader. + /// A Tokenizer is a TokenStream whose input is a Reader. ///

    - /// This is an abstract class. + /// This is an abstract class; subclasses must override {@link #IncrementToken()} ///

    - /// NOTE: subclasses must override - /// {@link #IncrementToken()} if the new TokenStream API is used - /// and {@link #Next(Token)} or {@link #Next()} if the old - /// TokenStream API is used. - ///

    - /// NOTE: Subclasses overriding {@link #IncrementToken()} must - /// call {@link AttributeSource#ClearAttributes()} before - /// setting attributes. - /// Subclasses overriding {@link #Next(Token)} must call - /// {@link Token#Clear()} before setting Token attributes. + /// NOTE: Subclasses overriding {@link #IncrementToken()} must call + /// {@link AttributeSource#ClearAttributes()} before setting attributes. + /// Subclasses overriding {@link #IncrementToken()} must call + /// {@link Token#Clear()} before setting Token attributes. ///

    public abstract class Tokenizer:TokenStream Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/AssemblyInfo.cs URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/AssemblyInfo.cs?rev=881077&r1=881076&r2=881077&view=diff ============================================================================== --- incubator/lucene.net/trunk/C#/src/Lucene.Net/AssemblyInfo.cs (original) +++ incubator/lucene.net/trunk/C#/src/Lucene.Net/AssemblyInfo.cs Tue Nov 17 01:13:56 2009 @@ -33,7 +33,7 @@ [assembly: AssemblyDefaultAlias("Lucene.Net")] [assembly: AssemblyCulture("")] -[assembly: AssemblyInformationalVersionAttribute("2.9.0")] +[assembly: AssemblyInformationalVersionAttribute("2.9.1")] // @@ -47,7 +47,7 @@ // You can specify all the values or you can default the Revision and Build Numbers // by using the '*' as shown below: -[assembly: AssemblyVersion("2.9.0.001")] +[assembly: AssemblyVersion("2.9.1.001")] // Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/DirectoryReader.cs URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/DirectoryReader.cs?rev=881077&r1=881076&r2=881077&view=diff ============================================================================== --- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/DirectoryReader.cs (original) +++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/DirectoryReader.cs Tue Nov 17 01:13:56 2009 @@ -93,6 +93,7 @@ private System.Collections.Generic.Dictionary synced = new System.Collections.Generic.Dictionary(); private Lock writeLock; private SegmentInfos segmentInfos; + private SegmentInfos segmentInfosStart; private bool stale; private int termInfosIndexDivisor; @@ -170,6 +171,7 @@ this.directory = writer.GetDirectory(); this.readOnly = true; this.segmentInfos = infos; + segmentInfosStart = (SegmentInfos) infos.Clone(); this.termInfosIndexDivisor = termInfosIndexDivisor; if (!readOnly) { @@ -997,19 +999,18 @@ return segmentInfos.GetUserData(); } 
- /// Check whether this IndexReader is still using the current (i.e., most recently committed) version of the index. If - /// a writer has committed any changes to the index since this reader was opened, this will return false, - /// in which case you must open a new IndexReader in order to see the changes. See the description of the autoCommit flag which controls when the {@link IndexWriter} - /// actually commits changes to the index. - /// - /// - /// CorruptIndexException if the index is corrupt - /// IOException if there is a low-level IO error public override bool IsCurrent() { EnsureOpen(); - return SegmentInfos.ReadCurrentVersion(directory) == segmentInfos.GetVersion(); + if (writer == null || writer.IsClosed()) + { + // we loaded SegmentInfos from the directory + return SegmentInfos.ReadCurrentVersion(directory) == segmentInfos.GetVersion(); + } + else + { + return writer.NrtIsCurrent(segmentInfosStart); + } } protected internal override void DoClose() Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/DocumentsWriter.cs URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/DocumentsWriter.cs?rev=881077&r1=881076&r2=881077&view=diff ============================================================================== --- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/DocumentsWriter.cs (original) +++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/DocumentsWriter.cs Tue Nov 17 01:13:56 2009 @@ -683,6 +683,14 @@ } } + internal bool AnyChanges() + { + lock (this) + { + return numDocsInRAM != 0 || deletesInRAM.numTerms != 0 || deletesInRAM.docIDs.Count != 0 || deletesInRAM.queries.Count != 0; + } + } + private void InitFlushState(bool onlyDocStore) { lock (this) Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/IndexReader.cs URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/IndexReader.cs?rev=881077&r1=881076&r2=881077&view=diff 
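The rewritten IsCurrent() above branches on whether the reader is tied to a live IndexWriter. A hedged Python sketch of that decision logic — every name below is an illustrative stand-in for the C# fields and helpers (writer.IsClosed(), writer.NrtIsCurrent(), DocumentsWriter.AnyChanges()), not a real API:

```python
def is_current(directory_version, reader_version,
               writer=None, segment_infos_at_open=None):
    """Toy model of DirectoryReader.IsCurrent() after this commit."""
    if writer is None or writer["closed"]:
        # Directory-backed reader: current only if no newer commit exists.
        return directory_version == reader_version
    # Near-real-time reader: stale if the segments changed structurally, or
    # if the writer still buffers uncommitted docs/deletes in RAM.
    return (writer["segment_infos"] == segment_infos_at_open
            and not writer["any_changes"])

assert is_current(7, 7)                     # no writer, same version
assert not is_current(8, 7)                 # a newer commit exists on disk
nrt = {"closed": False, "segment_infos": "s1", "any_changes": True}
assert not is_current(7, 7, nrt, "s1")      # buffered changes -> stale
```

The cloned segmentInfosStart added in this commit is what plays the role of segment_infos_at_open here: it freezes the segment state the reader saw when it was opened.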
============================================================================== --- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/IndexReader.cs (original) +++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/IndexReader.cs Tue Nov 17 01:13:56 2009 @@ -64,7 +64,7 @@ /// IndexReader instance; use your own /// (non-Lucene) objects instead. ///
    - /// $Id: IndexReader.java 807735 2009-08-25 18:02:39Z markrmiller $ + /// $Id: IndexReader.java 826049 2009-10-16 19:28:55Z mikemccand $ /// public abstract class IndexReader : System.ICloneable { @@ -836,8 +836,31 @@ return SegmentInfos.ReadCurrentUserData(directory); } - /// Version number when this IndexReader was opened. Not implemented in the IndexReader base class. - /// UnsupportedOperationException unless overridden in subclass + /// Version number when this IndexReader was opened. Not implemented in the + /// IndexReader base class. + /// + ///

    + /// If this reader is based on a Directory (ie, was created by calling + /// {@link #Open}, or {@link #Reopen} on a reader based on a Directory), then + /// this method returns the version recorded in the commit that the reader + /// opened. This version is advanced every time {@link IndexWriter#Commit} is + /// called. + ///

    + /// + ///

+ /// If instead this reader is a near real-time reader (ie, obtained by a call + /// to {@link IndexWriter#GetReader}, or by calling {@link #Reopen} on a near + /// real-time reader), then this method returns the version of the last + /// commit done by the writer. Note that even as further changes are made + /// with the writer, the version will not change until a commit is + /// completed. Thus, you should not rely on this method to determine when a + /// near real-time reader should be opened. Use {@link #IsCurrent} instead. + ///

    + /// + ///
    + /// UnsupportedOperationException + /// unless overridden in subclass + /// public virtual long GetVersion() { throw new System.NotSupportedException("This reader does not support this method."); @@ -890,19 +913,30 @@ throw new System.NotSupportedException("This reader does not support this method."); } - /// Check whether this IndexReader is still using the - /// current (i.e., most recently committed) version of the - /// index. If a writer has committed any changes to the - /// index since this reader was opened, this will return - /// false, in which case you must open a new - /// IndexReader in order to see the changes. See the - /// description of the autoCommit - /// flag which controls when the {@link IndexWriter} - /// actually commits changes to the index. + /// Check whether any new changes have occurred to the index since this + /// reader was opened. + /// + ///

+ /// If this reader is based on a Directory (ie, was created by calling + /// {@link #open}, or {@link #reopen} on a reader based on a Directory), then + /// this method checks if any further commits (see {@link IndexWriter#commit}) + /// have occurred in that directory. +

    /// ///

- /// Not implemented in the IndexReader base class. + /// If instead this reader is a near real-time reader (ie, obtained by a call + /// to {@link IndexWriter#getReader}, or by calling {@link #reopen} on a near + /// real-time reader), then this method checks if either a new commit has + /// occurred, or any new uncommitted changes have taken place via the writer. + /// Note that even if the writer has only performed merging, this method will + /// still return false. ///

    + /// + ///

    + /// In any event, if this returns false, you should call {@link #reopen} to + /// get a new reader that sees the changes. + ///

    + /// ///
    /// CorruptIndexException if the index is corrupt /// IOException if there is a low-level IO error Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/IndexWriter.cs URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/IndexWriter.cs?rev=881077&r1=881076&r2=881077&view=diff ============================================================================== --- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/IndexWriter.cs (original) +++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/IndexWriter.cs Tue Nov 17 01:13:56 2009 @@ -350,13 +350,21 @@ // readers. private volatile bool poolReaders; - /// Expert: returns a readonly reader containing all - /// current updates. Flush is called automatically. This - /// provides "near real-time" searching, in that changes - /// made during an IndexWriter session can be made - /// available for searching without closing the writer. + /// Expert: returns a readonly reader, covering all committed as well as + /// un-committed changes to the index. This provides "near real-time" + /// searching, in that changes made during an IndexWriter session can be + /// quickly made available for searching without closing the writer nor + /// calling {@link #commit}. /// - ///

    It's near real-time because there is no hard + ///

+ /// Note that this is functionally equivalent to calling {@link #commit} and then + /// using {@link IndexReader#open} to open a new reader. But the turnaround + /// time of this method should be faster since it avoids the potentially + /// costly {@link #commit}. + ///

    + /// + ///

    + /// It's near real-time because there is no hard /// guarantee on how quickly you can get a new reader after /// making changes with IndexWriter. You'll have to /// experiment in your situation to determine if it's @@ -2137,6 +2145,14 @@ /// instead of RAM usage (each buffered delete Query counts /// as one). /// + ///

    + /// NOTE: because IndexWriter uses ints when managing its + /// internal storage, the absolute maximum value for this setting is somewhat + /// less than 2048 MB. The precise limit depends on various factors, such as + /// how large your documents are, how many fields have norms, etc., so it's + /// best to set this value comfortably under 2048. + ///

    + /// ///

    The default value is {@link #DEFAULT_RAM_BUFFER_SIZE_MB}.

    /// ///
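The NOTE above caps the RAM buffer comfortably under 2048 MB because IndexWriter tracks its internal storage with ints. A hedged Python sketch of the argument checks the setter performs (constant value and error messages are illustrative; the real code consults GetMaxBufferedDocs()):

```python
DISABLE_AUTO_FLUSH = -1.0  # stand-in for IndexWriter.DISABLE_AUTO_FLUSH

def set_ram_buffer_size_mb(mb, max_buffered_docs=10):
    """Toy model of the validation SetRAMBufferSizeMB performs."""
    if mb > 2048.0:
        # int-based internal storage: the hard ceiling is just under 2 GB
        raise ValueError("ramBufferSize %s is too large; should be "
                         "comfortably less than 2048" % mb)
    if mb != DISABLE_AUTO_FLUSH and mb <= 0.0:
        raise ValueError("ramBufferSize should be > 0.0 MB when enabled")
    if mb == DISABLE_AUTO_FLUSH and max_buffered_docs == DISABLE_AUTO_FLUSH:
        raise ValueError("at least one of ramBufferSize and "
                         "maxBufferedDocs must be enabled")
    return mb
```

Only the 2048 MB check is new in this commit; the other two checks already existed.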
    @@ -2146,6 +2162,10 @@ ///
    public virtual void SetRAMBufferSizeMB(double mb) { + if (mb > 2048.0) + { + throw new System.ArgumentException("ramBufferSize " + mb + " is too large; should be comfortably less than 2048"); + } if (mb != DISABLE_AUTO_FLUSH && mb <= 0.0) throw new System.ArgumentException("ramBufferSize should be > 0.0 MB when enabled"); if (mb == DISABLE_AUTO_FLUSH && GetMaxBufferedDocs() == DISABLE_AUTO_FLUSH) @@ -5237,7 +5257,7 @@ // Must note the change to segmentInfos so any commits // in-flight don't lose it: - changeCount++; + Checkpoint(); // If the merged segments had pending changes, clear // them so that they don't bother writing them to @@ -6644,6 +6664,31 @@ { return true; } + + internal virtual bool NrtIsCurrent(SegmentInfos infos) + { + lock (this) + { + if (!infos.Equals(segmentInfos)) + { + // if any structural changes (new segments), we are + // stale + return false; + } + else + { + return !docWriter.AnyChanges(); + } + } + } + + internal virtual bool IsClosed() + { + lock (this) + { + return closed; + } + } static IndexWriter() { DEFAULT_MERGE_FACTOR = LogMergePolicy.DEFAULT_MERGE_FACTOR; Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/TermsHashPerField.cs URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/TermsHashPerField.cs?rev=881077&r1=881076&r2=881077&view=diff ============================================================================== --- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/TermsHashPerField.cs (original) +++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/TermsHashPerField.cs Tue Nov 17 01:13:56 2009 @@ -436,9 +436,11 @@ } } } - else if (ch >= UnicodeUtil.UNI_SUR_HIGH_START && ch <= UnicodeUtil.UNI_SUR_HIGH_END) - // Unpaired + else if (ch >= UnicodeUtil.UNI_SUR_HIGH_START && (ch <= UnicodeUtil.UNI_SUR_HIGH_END || ch == 0xffff)) + { + // Unpaired or 0xffff ch = tokenText[downto] = (char) (UnicodeUtil.UNI_REPLACEMENT_CHAR); + } code = (code * 31) + ch; } Modified: 
incubator/lucene.net/trunk/C#/src/Lucene.Net/Lucene.Net.csproj URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Lucene.Net.csproj?rev=881077&r1=881076&r2=881077&view=diff ============================================================================== --- incubator/lucene.net/trunk/C#/src/Lucene.Net/Lucene.Net.csproj (original) +++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Lucene.Net.csproj Tue Nov 17 01:13:56 2009 @@ -75,7 +75,7 @@ False - .\ICSharpCode.SharpZipLib.dll + ..\..\..\..\..\Lucene-2.9\SharpZipLib\netcf-20\ICSharpCode.SharpZipLib.dll System Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/MultiFieldQueryParser.cs URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/QueryParser/MultiFieldQueryParser.cs?rev=881077&r1=881076&r2=881077&view=diff ============================================================================== --- incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/MultiFieldQueryParser.cs (original) +++ incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/MultiFieldQueryParser.cs Tue Nov 17 01:13:56 2009 @@ -23,6 +23,7 @@ using MultiPhraseQuery = Lucene.Net.Search.MultiPhraseQuery; using PhraseQuery = Lucene.Net.Search.PhraseQuery; using Query = Lucene.Net.Search.Query; +using Version = Lucene.Net.Util.Version; namespace Lucene.Net.QueryParsers { @@ -30,64 +31,156 @@ /// A QueryParser which constructs queries to search multiple fields. /// /// - /// $Revision: 804016 $ + /// $Revision: 829134 $ /// public class MultiFieldQueryParser:QueryParser { protected internal System.String[] fields; protected internal System.Collections.IDictionary boosts; - /// Creates a MultiFieldQueryParser. - /// Allows passing of a map with term to Boost, and the boost to apply to each term. + /// Creates a MultiFieldQueryParser. Allows passing of a map with term to + /// Boost, and the boost to apply to each term. /// - ///

    It will, when parse(String query) - /// is called, construct a query like this (assuming the query consists of - /// two terms and you specify the two fields title and body):

    + ///

    + /// It will, when parse(String query) is called, construct a query like this + /// (assuming the query consists of two terms and you specify the two fields + /// title and body): + ///

    + /// + /// + /// (title:term1 body:term1) (title:term2 body:term2) + /// + /// + ///

    + /// When setDefaultOperator(AND_OPERATOR) is set, the result will be: + ///

    + /// + /// + /// +(title:term1 body:term1) +(title:term2 body:term2) + /// + /// + ///

    + /// When you pass a boost (title=>5 body=>10) you can get + ///

    + /// + /// + /// +(title:term1^5.0 body:term1^10.0) +(title:term2^5.0 body:term2^10.0) + /// + /// + ///

    + /// In other words, all the query's terms must appear, but it doesn't matter + /// in what fields they appear. + ///

    + /// + ///
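The expansion described above (one group per term, one clause per field, optional per-field boosts, and a '+' prefix under AND_OPERATOR) can be sketched as pure string manipulation. This is an illustrative Python model of the query strings MultiFieldQueryParser builds, not the parser itself; all names are invented:

```python
def multi_field_expand(terms, fields, boosts=None, require_each_term=False):
    """String-level sketch of MultiFieldQueryParser's query expansion."""
    boosts = boosts or {}
    groups = []
    for term in terms:
        parts = []
        for field in fields:
            clause = "%s:%s" % (field, term)
            if field in boosts:
                clause += "^%.1f" % boosts[field]  # per-field boost
            parts.append(clause)
        prefix = "+" if require_each_term else ""  # AND_OPERATOR adds '+'
        groups.append(prefix + "(" + " ".join(parts) + ")")
    return " ".join(groups)

print(multi_field_expand(["term1", "term2"], ["title", "body"]))
```

With require_each_term set and boosts {"title": 5.0, "body": 10.0}, this reproduces the boosted "+(...)" form shown in the javadoc above.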
    + /// Please use + /// {@link #MultiFieldQueryParser(Version, String[], Analyzer, Map)} + /// instead + /// + public MultiFieldQueryParser(System.String[] fields, Analyzer analyzer, System.Collections.IDictionary boosts):this(Version.LUCENE_24, fields, analyzer) + { + this.boosts = boosts; + } + + /// Creates a MultiFieldQueryParser. Allows passing of a map with term to + /// Boost, and the boost to apply to each term. + /// + ///

    + /// It will, when parse(String query) is called, construct a query like this + /// (assuming the query consists of two terms and you specify the two fields + /// title and body): + ///

    /// /// /// (title:term1 body:term1) (title:term2 body:term2) /// /// - ///

    When setDefaultOperator(AND_OPERATOR) is set, the result will be:

    + ///

    + /// When setDefaultOperator(AND_OPERATOR) is set, the result will be: + ///

    /// /// /// +(title:term1 body:term1) +(title:term2 body:term2) /// /// - ///

    When you pass a boost (title=>5 body=>10) you can get

    + ///

    + /// When you pass a boost (title=>5 body=>10) you can get + ///

    /// /// /// +(title:term1^5.0 body:term1^10.0) +(title:term2^5.0 body:term2^10.0) /// /// - ///

    In other words, all the query's terms must appear, but it doesn't matter in - /// what fields they appear.

    + ///

    + /// In other words, all the query's terms must appear, but it doesn't matter + /// in what fields they appear. + ///

    ///
    - public MultiFieldQueryParser(System.String[] fields, Analyzer analyzer, System.Collections.IDictionary boosts):this(fields, analyzer) + public MultiFieldQueryParser(Version matchVersion, System.String[] fields, Analyzer analyzer, System.Collections.IDictionary boosts):this(matchVersion, fields, analyzer) { this.boosts = boosts; } /// Creates a MultiFieldQueryParser. /// - ///

    It will, when parse(String query) - /// is called, construct a query like this (assuming the query consists of - /// two terms and you specify the two fields title and body):

    + ///

    + /// It will, when parse(String query) is called, construct a query like this + /// (assuming the query consists of two terms and you specify the two fields + /// title and body): + ///

    /// /// /// (title:term1 body:term1) (title:term2 body:term2) /// /// - ///

    When setDefaultOperator(AND_OPERATOR) is set, the result will be:

    + ///

    + /// When setDefaultOperator(AND_OPERATOR) is set, the result will be: + ///

    /// /// /// +(title:term1 body:term1) +(title:term2 body:term2) /// /// - ///

    In other words, all the query's terms must appear, but it doesn't matter in - /// what fields they appear.

    + ///

    + /// In other words, all the query's terms must appear, but it doesn't matter + /// in what fields they appear. + ///

    + /// ///
    - public MultiFieldQueryParser(System.String[] fields, Analyzer analyzer):base(null, analyzer) + /// Please use + /// {@link #MultiFieldQueryParser(Version, String[], Analyzer)} + /// instead + /// + public MultiFieldQueryParser(System.String[] fields, Analyzer analyzer):this(Version.LUCENE_24, fields, analyzer) + { + } + + /// Creates a MultiFieldQueryParser. + /// + ///

    + /// It will, when parse(String query) is called, construct a query like this + /// (assuming the query consists of two terms and you specify the two fields + /// title and body): + ///

    + /// + /// + /// (title:term1 body:term1) (title:term2 body:term2) + /// + /// + ///

    + /// When setDefaultOperator(AND_OPERATOR) is set, the result will be: + ///

    + /// + /// + /// +(title:term1 body:term1) +(title:term2 body:term2) + /// + /// + ///

    + /// In other words, all the query's terms must appear, but it doesn't matter + /// in what fields they appear. + ///

    + ///
    + public MultiFieldQueryParser(Version matchVersion, System.String[] fields, Analyzer analyzer):base(matchVersion, null, analyzer) { this.fields = fields; } @@ -205,11 +298,13 @@ /// Parses a query which searches on the fields specified. ///

    /// If x fields are specified, this effectively constructs: + /// ///

    -		/// 
    +		/// <code>
     		/// (field1:query1) (field2:query2) (field3:query3)...(fieldx:queryx)
    -		/// 
    +		/// </code>
     		/// 
    + /// ///
    /// Queries strings to parse /// @@ -217,18 +312,56 @@ /// /// Analyzer to use /// - /// ParseException if query parsing fails - /// IllegalArgumentException if the length of the queries array differs - /// from the length of the fields array + /// ParseException + /// if query parsing fails + /// + /// IllegalArgumentException + /// if the length of the queries array differs from the length of + /// the fields array /// + /// Use {@link #Parse(Version,String[],String[],Analyzer)} + /// instead + /// public static Query Parse(System.String[] queries, System.String[] fields, Analyzer analyzer) { + return Parse(Version.LUCENE_24, queries, fields, analyzer); + } + + /// Parses a query which searches on the fields specified. + ///

    + /// If x fields are specified, this effectively constructs: + /// + ///

    +		/// <code>
    +		/// (field1:query1) (field2:query2) (field3:query3)...(fieldx:queryx)
    +		/// </code>
    +		/// 
    + /// + ///
    + /// Lucene version to match; this is passed through to + /// QueryParser. + /// + /// Queries strings to parse + /// + /// Fields to search on + /// + /// Analyzer to use + /// + /// ParseException + /// if query parsing fails + /// + /// IllegalArgumentException + /// if the length of the queries array differs from the length of + /// the fields array + /// + public static Query Parse(Version matchVersion, System.String[] queries, System.String[] fields, Analyzer analyzer) + { if (queries.Length != fields.Length) throw new System.ArgumentException("queries.length != fields.length"); BooleanQuery bQuery = new BooleanQuery(); for (int i = 0; i < fields.Length; i++) { - QueryParser qp = new QueryParser(fields[i], analyzer); + QueryParser qp = new QueryParser(matchVersion, fields[i], analyzer); Query q = qp.Parse(queries[i]); if (q != null && (!(q is BooleanQuery) || ((BooleanQuery) q).GetClauses().Length > 0)) { @@ -272,14 +405,65 @@ /// IllegalArgumentException if the length of the fields array differs /// from the length of the flags array /// + /// Use + /// {@link #Parse(Version, String, String[], BooleanClause.Occur[], Analyzer)} + /// instead + /// public static Query Parse(System.String query, System.String[] fields, BooleanClause.Occur[] flags, Analyzer analyzer) { + return Parse(Version.LUCENE_24, query, fields, flags, analyzer); + } + + /// Parses a query, searching on the fields specified. Use this if you need + /// to specify certain fields as required, and others as prohibited. + ///

    + /// + ///

    +		/// Usage:
    +		/// <code>
    +		/// String[] fields = {"filename", "contents", "description"};
    +		/// BooleanClause.Occur[] flags = {BooleanClause.Occur.SHOULD,
    +		/// BooleanClause.Occur.MUST,
    +		/// BooleanClause.Occur.MUST_NOT};
    +		/// MultiFieldQueryParser.parse("query", fields, flags, analyzer);
    +		/// </code>
    +		/// 
    + ///

    + /// The code above would construct a query: + /// + ///

    +		/// <code>
    +		/// (filename:query) +(contents:query) -(description:query)
    +		/// </code>
    +		/// 
    + /// + ///
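The SHOULD/MUST/MUST_NOT mapping in the usage example above can likewise be sketched at the string level. A hedged Python illustration — the flag names are kept as strings and the function is invented for the example; the real method builds a BooleanQuery from per-field QueryParser results:

```python
def parse_with_flags(query, fields, flags):
    """String-level sketch of MultiFieldQueryParser.parse with occur flags:
    SHOULD -> optional, MUST -> '+', MUST_NOT -> '-'."""
    prefix = {"SHOULD": "", "MUST": "+", "MUST_NOT": "-"}
    return " ".join(prefix[flag] + "(" + field + ":" + query + ")"
                    for field, flag in zip(fields, flags))

print(parse_with_flags("query",
                       ["filename", "contents", "description"],
                       ["SHOULD", "MUST", "MUST_NOT"]))
```

This mirrors the "(filename:query) +(contents:query) -(description:query)" result quoted in the javadoc above.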
    + /// Lucene version to match; this is passed through to + /// QueryParser. + /// + /// Query string to parse + /// + /// Fields to search on + /// + /// Flags describing the fields + /// + /// Analyzer to use + /// + /// ParseException + /// if query parsing fails + /// + /// IllegalArgumentException + /// if the length of the fields array differs from the length of + /// the flags array + /// + public static Query Parse(Version matchVersion, System.String query, System.String[] fields, BooleanClause.Occur[] flags, Analyzer analyzer) + { if (fields.Length != flags.Length) throw new System.ArgumentException("fields.length != flags.length"); BooleanQuery bQuery = new BooleanQuery(); for (int i = 0; i < fields.Length; i++) { - QueryParser qp = new QueryParser(fields[i], analyzer); + QueryParser qp = new QueryParser(matchVersion, fields[i], analyzer); Query q = qp.Parse(query); if (q != null && (!(q is BooleanQuery) || ((BooleanQuery) q).GetClauses().Length > 0)) { @@ -324,14 +508,65 @@ /// IllegalArgumentException if the length of the queries, fields, /// and flags array differ /// + /// Used + /// {@link #Parse(Version, String[], String[], BooleanClause.Occur[], Analyzer)} + /// instead + /// public static Query Parse(System.String[] queries, System.String[] fields, BooleanClause.Occur[] flags, Analyzer analyzer) { + return Parse(Version.LUCENE_24, queries, fields, flags, analyzer); + } + + /// Parses a query, searching on the fields specified. Use this if you need + /// to specify certain fields as required, and others as prohibited. + ///

    + ///
    + ///

    +		/// Usage:
    +		/// <code>
    +		/// String[] query = {"query1", "query2", "query3"};
    +		/// String[] fields = {"filename", "contents", "description"};
    +		/// BooleanClause.Occur[] flags = {BooleanClause.Occur.SHOULD,
    +		/// BooleanClause.Occur.MUST,
    +		/// BooleanClause.Occur.MUST_NOT};
    +		/// MultiFieldQueryParser.parse(query, fields, flags, analyzer);
    +		/// </code>
    +		/// 
    + ///

    + /// The code above would construct a query:
    + ///
    + ///

    +		/// <code>
    +		/// (filename:query1) +(contents:query2) -(description:query3)
    +		/// </code>
    +		/// 
    + ///
    + ///
    + /// <param name="matchVersion">Lucene version to match; this is passed through to
    + /// QueryParser.
    + /// </param>
    + /// <param name="queries">Queries string to parse
    + /// </param>
    + /// <param name="fields">Fields to search on
    + /// </param>
    + /// <param name="flags">Flags describing the fields
    + /// </param>
    + /// <param name="analyzer">Analyzer to use
    + /// </param>
    + /// <throws>ParseException
    + /// if query parsing fails
    + /// </throws>
    + /// <throws>IllegalArgumentException
    + /// if the length of the queries, fields, and flags array differ
    + /// </throws>
    + public static Query Parse(Version matchVersion, System.String[] queries, System.String[] fields, BooleanClause.Occur[] flags, Analyzer analyzer)
    + {
     		if (!(queries.Length == fields.Length && queries.Length == flags.Length))
     			throw new System.ArgumentException("queries, fields, and flags array have have different length");
     		BooleanQuery bQuery = new BooleanQuery();
     		for (int i = 0; i < fields.Length; i++)
     		{
    -			QueryParser qp = new QueryParser(fields[i], analyzer);
    +			QueryParser qp = new QueryParser(matchVersion, fields[i], analyzer);
     			Query q = qp.Parse(queries[i]);
     			if (q != null && (!(q is BooleanQuery) || ((BooleanQuery) q).GetClauses().Length > 0))
     			{

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParser.JJ
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/QueryParser/QueryParser.JJ?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParser.JJ (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParser.JJ Tue Nov 17 01:13:56 2009
@@ -59,6 +59,7 @@
 import org.apache.lucene.search.TermQuery;
 import org.apache.lucene.search.WildcardQuery;
 import org.apache.lucene.util.Parameter;
+import org.apache.lucene.util.Version;
 
 /**
  * This class is generated by JavaCC.  The most important method is
@@ -125,6 +126,14 @@
 *

 * NOTE: there is a new QueryParser in contrib, which matches
 * the same syntax as this class, but is more modular,
 * enabling substantial customization to how a query is created.
+ *
+ *
+ *

+ * NOTE: You must specify the required {@link Version}
+ * compatibility when creating QueryParser:
+ *

 */
 
public class QueryParser {
 
@@ -149,7 +158,7 @@
   boolean lowercaseExpandedTerms = true;
   MultiTermQuery.RewriteMethod multiTermRewriteMethod = MultiTermQuery.CONSTANT_SCORE_AUTO_REWRITE_DEFAULT;
   boolean allowLeadingWildcard = false;
-  boolean enablePositionIncrements = false;
+  boolean enablePositionIncrements = true;
 
   Analyzer analyzer;
   String field;
@@ -182,11 +191,26 @@
   /** Constructs a query parser.
    *  @param f  the default field for query terms.
    *  @param a  used to find terms in the query text.
+   *  @deprecated Use {@link #QueryParser(Version, String, Analyzer)} instead
    */
   public QueryParser(String f, Analyzer a) {
+    this(Version.LUCENE_24, f, a);
+  }
+
+  /** Constructs a query parser.
+   *  @param matchVersion  Lucene version to match. See {@link above)
+   *  @param f  the default field for query terms.
+   *  @param a  used to find terms in the query text.
+   */
+  public QueryParser(Version matchVersion, String f, Analyzer a) {
     this(new FastCharStream(new StringReader("")));
     analyzer = a;
     field = f;
+    if (matchVersion.onOrAfter(Version.LUCENE_29)) {
+      enablePositionIncrements = true;
+    } else {
+      enablePositionIncrements = false;
+    }
   }
 
   /** Parses a query string, returning a {@link org.apache.lucene.search.Query}.
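The matchVersion gate in the new constructor above is easy to sketch in isolation. This hypothetical miniature (not the real org.apache.lucene.util.Version) shows the intended defaulting: position increments are enabled by default only when the match version is 2.9 or later:

```java
// Hypothetical miniature of the version gate used by the new constructor;
// enum ordinal order stands in for release order, as with Lucene's Version.
public class VersionGateSketch {
    public enum Version {
        LUCENE_20, LUCENE_21, LUCENE_22, LUCENE_23, LUCENE_24, LUCENE_29, LUCENE_CURRENT;

        // Mirrors Version.onOrAfter: true when this release is >= the argument.
        public boolean onOrAfter(Version other) {
            return compareTo(other) >= 0;
        }
    }

    // Mirrors the constructor branch: enablePositionIncrements defaults to
    // true from LUCENE_29 onward, false for older match versions.
    public static boolean defaultEnablePositionIncrements(Version matchVersion) {
        return matchVersion.onOrAfter(Version.LUCENE_29);
    }
}
```

Callers passing LUCENE_24 keep the old default (false), which is why the deprecated one-argument constructors delegate with that version.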
@@ -1179,7 +1203,7 @@
       System.out.println("Usage: java org.apache.lucene.queryParser.QueryParser ");
       System.exit(0);
     }
-    QueryParser qp = new QueryParser("field",
+    QueryParser qp = new QueryParser(Version.LUCENE_CURRENT, "field",
                            new org.apache.lucene.analysis.SimpleAnalyzer());
     Query q = qp.parse(args[0]);
     System.out.println(q.toString("field"));

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParser.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/QueryParser/QueryParser.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParser.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParser.cs Tue Nov 17 01:13:56 2009
@@ -40,6 +40,7 @@
 using TermQuery = Lucene.Net.Search.TermQuery;
 using TermRangeQuery = Lucene.Net.Search.TermRangeQuery;
 using WildcardQuery = Lucene.Net.Search.WildcardQuery;
+using Version = Lucene.Net.Util.Version;
 
 namespace Lucene.Net.QueryParsers
 {
@@ -109,6 +110,17 @@
 	/// the same syntax as this class, but is more modular,
 	/// enabling substantial customization to how a query is created.
 	///
    + ///
    + ///
    + /// NOTE: there is a new QueryParser in contrib, which matches
    + /// the same syntax as this class, but is more modular,
    + /// enabling substantial customization to how a query is created.
    + ///
    + /// NOTE: You must specify the required {@link Version} compatibility when
    + /// creating QueryParser:
    + /// <ul>
    + /// <li>As of 2.9, {@link #SetEnablePositionIncrements} is true by default.</li>
    + /// </ul>
    + ///
 	public class QueryParser : QueryParserConstants
 	{
 		private void InitBlock()
@@ -141,7 +153,7 @@
 		internal bool lowercaseExpandedTerms = true;
 		internal MultiTermQuery.RewriteMethod multiTermRewriteMethod;
 		internal bool allowLeadingWildcard = false;
-		internal bool enablePositionIncrements = false;
+		internal bool enablePositionIncrements = true;
 		
 		internal Analyzer analyzer;
 		internal System.String field;
@@ -178,10 +190,33 @@
 		/// </param>
 		/// <param name="a">used to find terms in the query text.
 		/// </param>
-		public QueryParser(System.String f, Analyzer a):this(new FastCharStream(new System.IO.StringReader("")))
+		/// <deprecated> Use {@link #QueryParser(Version, String, Analyzer)} instead
+		/// </deprecated>
+		public QueryParser(System.String f, Analyzer a):this(Version.LUCENE_24, f, a)
+		{
+		}
+		
+		/// Constructs a query parser.
+		///
+		/// <param name="matchVersion">Lucene version to match. See above)
+		/// </param>
+		/// <param name="f">the default field for query terms.
+		/// </param>
+		/// <param name="a">used to find terms in the query text.
+		/// </param>
+		public QueryParser(Version matchVersion, System.String f, Analyzer a):this(new FastCharStream(new System.IO.StringReader("")))
 		{
 			analyzer = a;
 			field = f;
+			if (matchVersion.OnOrAfter(Version.LUCENE_29))
+			{
+				enablePositionIncrements = true;
+			}
+			else
+			{
+				enablePositionIncrements = false;
+			}
 		}
 		
 		/// Parses a query string, returning a {@link Lucene.Net.Search.Query}.
@@ -867,7 +902,7 @@
 			if (resolution == null)
 			{
 				// no default or field specific date resolution has been set,
-				// use deprecated DateField to maintain compatibilty with
+				// use deprecated DateField to maintain compatibility with
 				// pre-1.9 Lucene versions.
 				part1 = DateField.DateToString(d1);
 				part2 = DateField.DateToString(d2);
@@ -1333,7 +1368,7 @@
 				System.Console.Out.WriteLine("Usage: java Lucene.Net.QueryParsers.QueryParser ");
 				System.Environment.Exit(0);
 			}
-			QueryParser qp = new QueryParser("field", new Lucene.Net.Analysis.SimpleAnalyzer());
+			QueryParser qp = new QueryParser(Version.LUCENE_CURRENT, "field", new Lucene.Net.Analysis.SimpleAnalyzer());
 			Query q = qp.Parse(args[0]);
 			System.Console.Out.WriteLine(q.ToString("field"));
 		}
@@ -1962,6 +1997,15 @@
 			}
 		}
 		
+		private bool Jj_3R_2()
+		{
+			if (Jj_scan_token(Lucene.Net.QueryParsers.QueryParserConstants.TERM))
+				return true;
+			if (Jj_scan_token(Lucene.Net.QueryParsers.QueryParserConstants.COLON))
+				return true;
+			return false;
+		}
+		
 		private bool Jj_3_1()
 		{
 			Token xsp;
@@ -1984,15 +2028,6 @@
 			return false;
 		}
 		
-		private bool Jj_3R_2()
-		{
-			if (Jj_scan_token(Lucene.Net.QueryParsers.QueryParserConstants.TERM))
-				return true;
-			if (Jj_scan_token(Lucene.Net.QueryParsers.QueryParserConstants.COLON))
-				return true;
-			return false;
-		}
-		
 		/// Generated Token Manager.
 		public QueryParserTokenManager token_source;
 		/// Current token.
@@ -2019,7 +2054,7 @@
 		private int jj_gc = 0;
 		
 		/// Constructor with user supplied CharStream.
-		public QueryParser(CharStream stream)
+		protected internal QueryParser(CharStream stream)
 		{
 			InitBlock();
 			token_source = new QueryParserTokenManager(stream);
@@ -2046,7 +2081,7 @@
 		}
 		
 		/// Constructor with generated Token Manager.
-		public QueryParser(QueryParserTokenManager tm)
+		protected internal QueryParser(QueryParserTokenManager tm)
 		{
 			InitBlock();
 			token_source = tm;

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParserTokenManager.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/QueryParser/QueryParserTokenManager.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParserTokenManager.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParserTokenManager.cs Tue Nov 17 01:13:56 2009
@@ -40,6 +40,7 @@
 using TermQuery = Lucene.Net.Search.TermQuery;
 using TermRangeQuery = Lucene.Net.Search.TermRangeQuery;
 using WildcardQuery = Lucene.Net.Search.WildcardQuery;
+using Version = Lucene.Net.Util.Version;
 
 namespace Lucene.Net.QueryParsers
 {

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/FuzzyQuery.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Search/FuzzyQuery.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/FuzzyQuery.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/FuzzyQuery.cs Tue Nov 17 01:13:56 2009
@@ -133,8 +133,8 @@
 		{
 			if (!termLongEnough)
 			{
-				// can't match
-				return new BooleanQuery();
+				// can only match if it's exact
+				return new TermQuery(term);
 			}
 			
 			FilteredTermEnum enumerator = GetEnum(reader);

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/Hits.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Search/Hits.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/Hits.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/Hits.cs Tue Nov 17 01:13:56 2009
@@ -23,30 +23,33 @@
 namespace Lucene.Net.Search
 {
 	
-	/// A ranked list of documents, used to hold search results.
+	/// A ranked list of documents, used to hold search results.
 	///

-	/// Caution: Iterate only over the hits needed.  Iterating over all
-	/// hits is generally not desirable and may be the source of
-	/// performance issues. If you need to iterate over many or all hits, consider
-	/// using the search method that takes a {@link HitCollector}.
+	/// Caution: Iterate only over the hits needed. Iterating over all hits is
+	/// generally not desirable and may be the source of performance issues. If you
+	/// need to iterate over many or all hits, consider using the search method that
+	/// takes a {@link HitCollector}.
 	///
-	///
-	/// Note: Deleting matching documents concurrently with traversing
-	/// the hits, might, when deleting hits that were not yet retrieved, decrease
-	/// {@link #Length()}. In such case,
-	/// {@link java.util.ConcurrentModificationException ConcurrentModificationException}
-	/// is thrown when accessing hit n ≥ current_{@link #Length()}
-	/// (but n < {@link #Length()}_at_start).
+	///
+	/// Note: Deleting matching documents concurrently with traversing the
+	/// hits, might, when deleting hits that were not yet retrieved, decrease
+	/// {@link #Length()}. In such case,
+	/// {@link java.util.ConcurrentModificationException
+	/// ConcurrentModificationException} is thrown when accessing hit n
+	/// ≥ current_{@link #Length()} (but n < {@link #Length()}
+	/// _at_start).
 	///
 	///
-	///
-	/// see {@link TopScoreDocCollector} and {@link TopDocs}:
+	/// see {@link Searcher#Search(Query, int)},
+	/// {@link Searcher#Search(Query, Filter, int)} and
+	/// {@link Searcher#Search(Query, Filter, int, Sort)}:
+	///
 	///
    -	/// TopScoreDocCollector collector = new TopScoreDocCollector(hitsPerPage);
    -	/// searcher.search(query, collector);
    -	/// ScoreDoc[] hits = collector.topDocs().scoreDocs;
    -	/// for (int i = 0; i < hits.length; i++) {
    +	/// TopDocs topDocs = searcher.Search(query, numHits);
    +	/// ScoreDoc[] hits = topDocs.scoreDocs;
    +	/// for (int i = 0; i < hits.Length; i++) {
     	/// int docId = hits[i].doc;
    -	/// Document d = searcher.doc(docId);
    +	/// Document d = searcher.Doc(docId);
     	/// // do something with current hit
     	/// ...
     	///
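The Hits doc-comment migration above replaces Hits iteration with a TopDocs/ScoreDoc loop. A standalone sketch of that loop, using hypothetical stand-ins for Lucene's result carriers rather than the real classes:

```java
// Hypothetical stand-ins for Lucene's TopDocs/ScoreDoc result carriers,
// just enough to demonstrate the replacement iteration pattern.
public class TopDocsSketch {
    public static class ScoreDoc {
        public final int doc;
        public final float score;
        public ScoreDoc(int doc, float score) { this.doc = doc; this.score = score; }
    }

    public static class TopDocs {
        public final ScoreDoc[] scoreDocs;
        public TopDocs(ScoreDoc[] scoreDocs) { this.scoreDocs = scoreDocs; }
    }

    // Mirrors the loop in the doc comment: walk scoreDocs and read each hit's
    // doc id (real code would then fetch the Document via searcher.Doc(docId)).
    public static int[] collectDocIds(TopDocs topDocs) {
        ScoreDoc[] hits = topDocs.scoreDocs;
        int[] ids = new int[hits.length];
        for (int i = 0; i < hits.length; i++) {
            ids[i] = hits[i].doc;
        }
        return ids;
    }
}
```

Unlike Hits, the ScoreDoc array is a fixed snapshot, so concurrent deletes cannot shrink it mid-iteration, which is the problem the old Caution/Note paragraphs describe.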