lucenenet-commits mailing list archives

From: aro...@apache.org
Subject: svn commit: r881077 [1/2] - in /incubator/lucene.net/trunk/C#/src: ./ Demo/DeleteFiles/ Demo/DemoLib/ Demo/IndexFiles/ Demo/IndexHtml/ Demo/SearchFiles/ Lucene.Net/ Lucene.Net/Analysis/ Lucene.Net/Analysis/Standard/ Lucene.Net/Index/ Lucene.Net/QueryPa...
Date: Tue, 17 Nov 2009 01:13:58 GMT
Author: aroush
Date: Tue Nov 17 01:13:56 2009
New Revision: 881077

URL: http://svn.apache.org/viewvc?rev=881077&view=rev
Log:
Apache Lucene.Net 2.9.1 build 001 "Beta" (port of Java Lucene 2.9.1 to Lucene.Net)

Added:
    incubator/lucene.net/trunk/C#/src/Test/Search/TestPrefixInBooleanQuery.cs
Modified:
    incubator/lucene.net/trunk/C#/src/CHANGES.txt
    incubator/lucene.net/trunk/C#/src/Demo/DeleteFiles/AssemblyInfo.cs
    incubator/lucene.net/trunk/C#/src/Demo/DemoLib/AssemblyInfo.cs
    incubator/lucene.net/trunk/C#/src/Demo/IndexFiles/AssemblyInfo.cs
    incubator/lucene.net/trunk/C#/src/Demo/IndexHtml/AssemblyInfo.cs
    incubator/lucene.net/trunk/C#/src/Demo/SearchFiles/AssemblyInfo.cs
    incubator/lucene.net/trunk/C#/src/HISTORY.txt
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Standard/StandardAnalyzer.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Standard/StandardTokenizer.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/StopAnalyzer.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/StopFilter.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TeeSinkTokenFilter.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Token.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TokenFilter.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TokenStream.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Tokenizer.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/AssemblyInfo.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/DirectoryReader.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/DocumentsWriter.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/IndexReader.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/IndexWriter.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/TermsHashPerField.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Lucene.Net.csproj
    incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/MultiFieldQueryParser.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParser.JJ
    incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParser.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParserTokenManager.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/FuzzyQuery.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/Hits.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/IndexSearcher.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/NumericRangeQuery.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/Payloads/PayloadNearQuery.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/Scorer.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/Searchable.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/Searcher.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Util/Constants.cs
    incubator/lucene.net/trunk/C#/src/Lucene.Net/Util/Version.cs
    incubator/lucene.net/trunk/C#/src/Test/Analysis/BaseTokenStreamTestCase.cs
    incubator/lucene.net/trunk/C#/src/Test/Analysis/TestStandardAnalyzer.cs
    incubator/lucene.net/trunk/C#/src/Test/Analysis/TestTokenStreamBWComp.cs
    incubator/lucene.net/trunk/C#/src/Test/AssemblyInfo.cs
    incubator/lucene.net/trunk/C#/src/Test/Index/TestBackwardsCompatibility.cs
    incubator/lucene.net/trunk/C#/src/Test/Index/TestIndexWriter.cs
    incubator/lucene.net/trunk/C#/src/Test/Index/TestIndexWriterReader.cs
    incubator/lucene.net/trunk/C#/src/Test/QueryParser/TestQueryParser.cs
    incubator/lucene.net/trunk/C#/src/Test/Search/Payloads/TestPayloadNearQuery.cs
    incubator/lucene.net/trunk/C#/src/Test/Search/TestFuzzyQuery.cs
    incubator/lucene.net/trunk/C#/src/Test/Search/TestNumericRangeQuery32.cs
    incubator/lucene.net/trunk/C#/src/Test/Search/TestNumericRangeQuery64.cs
    incubator/lucene.net/trunk/C#/src/Test/Test.csproj

Modified: incubator/lucene.net/trunk/C#/src/CHANGES.txt
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/CHANGES.txt?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/CHANGES.txt (original)
+++ incubator/lucene.net/trunk/C#/src/CHANGES.txt Tue Nov 17 01:13:56 2009
@@ -1,5 +1,72 @@
 Lucene Change Log
-$Id: CHANGES.txt 817268 2009-09-21 14:23:44Z markrmiller $
+$Id: CHANGES.txt 832363 2009-11-03 09:37:36Z mikemccand $
+
+======================= Release 2.9.1 2009-11-06 =======================
+
+Changes in backwards compatibility policy
+
+ * LUCENE-2002: Add required Version matchVersion argument when
+   constructing QueryParser or MultiFieldQueryParser and, default (as
+   of 2.9) enablePositionIncrements to true to match
+   StandardAnalyzer's 2.9 default (Uwe Schindler, Mike McCandless)
+
+Bug fixes
+
+ * LUCENE-1974: Fixed nasty bug in BooleanQuery (when it used
+   BooleanScorer for scoring), whereby some matching documents fail to
+   be collected.  (Fulin Tang via Mike McCandless)
+
+ * LUCENE-1124: Make sure FuzzyQuery always matches the precise term.
+   (stefatwork@gmail.com via Mike McCandless)
+
+ * LUCENE-1976: Fix IndexReader.isCurrent() to return the right thing
+   when the reader is a near real-time reader.  (Jake Mannix via Mike
+   McCandless)
+
+ * LUCENE-1986: Fix NPE when scoring PayloadNearQuery (Peter Keegan,
+   Mark Miller via Mike McCandless)
+
+ * LUCENE-1992: Fix thread hazard if a merge is committing just as an
+   exception occurs during sync (Uwe Schindler, Mike McCandless)
+
+ * LUCENE-1995: Note in javadocs that IndexWriter.setRAMBufferSizeMB
+   cannot exceed 2048 MB, and throw IllegalArgumentException if it
+   does.  (Aaron McKee, Yonik Seeley, Mike McCandless)
+
+ * LUCENE-2004: Fix Constants.LUCENE_MAIN_VERSION to not be inlined
+   by client code.  (Uwe Schindler)
+
+ * LUCENE-2016: Replace illegal U+FFFF character with the replacement
+   char (U+FFFD) during indexing, to prevent silent index corruption.
+   (Peter Keegan, Mike McCandless)
+
+API Changes
+
+ * Un-deprecate search(Weight weight, Filter filter, int n) from
+   Searchable interface (deprecated by accident).  (Uwe Schindler)
+
+ * Un-deprecate o.a.l.util.Version constants.  (Mike McCandless)
+
+ * LUCENE-1987: Un-deprecate some ctors of Token, as they will not
+   be removed in 3.0 and are still useful. Also add some missing
+   o.a.l.util.Version constants for enabling invalid acronym
+   settings in StandardAnalyzer to be compatible with the coming
+   Lucene 3.0.  (Uwe Schindler)
+
+ * LUCENE-1973: Un-deprecate IndexSearcher.setDefaultFieldSortScoring,
+   to allow controlling per-IndexSearcher whether scores are computed
+   when sorting by field.  (Uwe Schindler, Mike McCandless)
+   
+Documentation
+
+ * LUCENE-1955: Fix Hits deprecation notice to point users in right
+   direction. (Mike McCandless, Mark Miller)
+   
+ * Fix javadoc about score tracking done by search methods in Searcher 
+   and IndexSearcher.  (Mike McCandless)
+
+ * LUCENE-2008: Javadoc improvements for TokenStream/Tokenizer/Token
+   (Luke Nezda via Mike McCandless)
 
 ======================= Release 2.9.0 2009-09-23 =======================
 
@@ -99,6 +166,11 @@
     abstract rather than an interface) back compat break if you have overridden 
     Query.creatWeight, so we have taken the opportunity to make this change.
     (Tim Smith, Shai Erera via Mark Miller)
+
+ * LUCENE-1708 - IndexReader.document() no longer checks if the document is 
+    deleted. You can call IndexReader.isDeleted(n) prior to calling document(n).
+    (Shai Erera via Mike McCandless)
+
  
 Changes in runtime behavior
 
@@ -149,9 +221,6 @@
     rely on this behavior by the 3.0 release of Lucene. (Jonathan
     Mamou, Mark Miller via Mike McCandless)
 
- * LUCENE-1708 - IndexReader.document() no longer checks if the document is 
-    deleted. You can call IndexReader.isDeleted(n) prior to calling document(n).
-    (Shai Erera via Mike McCandless)
 
  * LUCENE-1715: Finalizers have been removed from the 4 core classes
     that still had them, since they will cause GC to take longer, thus
@@ -793,7 +862,7 @@
     using CloseableThreadLocal internally.  (Jason Rutherglen via Mike
     McCandless).
     
- * LUCENE-1224: Short circuit FuzzyQuery.rewrite when input token length 
+ * LUCENE-1124: Short circuit FuzzyQuery.rewrite when input token length 
     is small compared to minSimilarity. (Timo Nentwig, Mark Miller)
 
  * LUCENE-1316: MatchAllDocsQuery now avoids the synchronized
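
For context, a minimal C# sketch of the Version-aware QueryParser construction described in the LUCENE-2002 entry above; the field name and query text are illustrative only, and assume the Lucene.Net 2.9.1 API as ported in this commit:

    using Lucene.Net.Analysis.Standard;
    using Lucene.Net.QueryParsers;
    using Lucene.Net.Search;
    using Version = Lucene.Net.Util.Version;

    // Passing the same matchVersion to the analyzer and the parser keeps their
    // enablePositionIncrements defaults in sync (true as of 2.9).
    StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_29);
    QueryParser parser = new QueryParser(Version.LUCENE_29, "contents", analyzer);
    Query query = parser.Parse("apache AND lucene");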

Modified: incubator/lucene.net/trunk/C#/src/Demo/DeleteFiles/AssemblyInfo.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Demo/DeleteFiles/AssemblyInfo.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Demo/DeleteFiles/AssemblyInfo.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Demo/DeleteFiles/AssemblyInfo.cs Tue Nov 17 01:13:56 2009
@@ -33,7 +33,7 @@
 [assembly: AssemblyDefaultAlias("Lucene.Net")]
 [assembly: AssemblyCulture("")]
 
-[assembly: AssemblyInformationalVersionAttribute("2.9.0")]
+[assembly: AssemblyInformationalVersionAttribute("2.9.1")]
 
 //
 // Version information for an assembly consists of the following four values:
@@ -46,7 +46,7 @@
 // You can specify all the values or you can default the Revision and Build Numbers 
 // by using the '*' as shown below:
 
-[assembly: AssemblyVersion("2.9.0.001")]
+[assembly: AssemblyVersion("2.9.1.001")]
 
 //
 // In order to sign your assembly you must specify a key to use. Refer to the 

Modified: incubator/lucene.net/trunk/C#/src/Demo/DemoLib/AssemblyInfo.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Demo/DemoLib/AssemblyInfo.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Demo/DemoLib/AssemblyInfo.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Demo/DemoLib/AssemblyInfo.cs Tue Nov 17 01:13:56 2009
@@ -33,7 +33,7 @@
 [assembly: AssemblyDefaultAlias("Lucene.Net")]
 [assembly: AssemblyCulture("")]
 
-[assembly: AssemblyInformationalVersionAttribute("2.9.0")]
+[assembly: AssemblyInformationalVersionAttribute("2.9.1")]
 
 //
 // Version information for an assembly consists of the following four values:
@@ -46,7 +46,7 @@
 // You can specify all the values or you can default the Revision and Build Numbers 
 // by using the '*' as shown below:
 
-[assembly: AssemblyVersion("2.9.0.001")]
+[assembly: AssemblyVersion("2.9.1.001")]
 
 //
 // In order to sign your assembly you must specify a key to use. Refer to the 

Modified: incubator/lucene.net/trunk/C#/src/Demo/IndexFiles/AssemblyInfo.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Demo/IndexFiles/AssemblyInfo.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Demo/IndexFiles/AssemblyInfo.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Demo/IndexFiles/AssemblyInfo.cs Tue Nov 17 01:13:56 2009
@@ -33,7 +33,7 @@
 [assembly: AssemblyDefaultAlias("Lucene.Net")]
 [assembly: AssemblyCulture("")]
 
-[assembly: AssemblyInformationalVersionAttribute("2.9.0")]
+[assembly: AssemblyInformationalVersionAttribute("2.9.1")]
 
 //
 // Version information for an assembly consists of the following four values:
@@ -46,7 +46,7 @@
 // You can specify all the values or you can default the Revision and Build Numbers 
 // by using the '*' as shown below:
 
-[assembly: AssemblyVersion("2.9.0.001")]
+[assembly: AssemblyVersion("2.9.1.001")]
 
 //
 // In order to sign your assembly you must specify a key to use. Refer to the 

Modified: incubator/lucene.net/trunk/C#/src/Demo/IndexHtml/AssemblyInfo.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Demo/IndexHtml/AssemblyInfo.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Demo/IndexHtml/AssemblyInfo.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Demo/IndexHtml/AssemblyInfo.cs Tue Nov 17 01:13:56 2009
@@ -33,7 +33,7 @@
 [assembly: AssemblyDefaultAlias("Lucene.Net")]
 [assembly: AssemblyCulture("")]
 
-[assembly: AssemblyInformationalVersionAttribute("2.9.0")]
+[assembly: AssemblyInformationalVersionAttribute("2.9.1")]
 
 //
 // Version information for an assembly consists of the following four values:
@@ -46,7 +46,7 @@
 // You can specify all the values or you can default the Revision and Build Numbers 
 // by using the '*' as shown below:
 
-[assembly: AssemblyVersion("2.9.0.001")]
+[assembly: AssemblyVersion("2.9.1.001")]
 
 //
 // In order to sign your assembly you must specify a key to use. Refer to the 

Modified: incubator/lucene.net/trunk/C#/src/Demo/SearchFiles/AssemblyInfo.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Demo/SearchFiles/AssemblyInfo.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Demo/SearchFiles/AssemblyInfo.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Demo/SearchFiles/AssemblyInfo.cs Tue Nov 17 01:13:56 2009
@@ -33,7 +33,7 @@
 [assembly: AssemblyDefaultAlias("Lucene.Net")]
 [assembly: AssemblyCulture("")]
 
-[assembly: AssemblyInformationalVersionAttribute("2.9.0")]
+[assembly: AssemblyInformationalVersionAttribute("2.9.1")]
 
 //
 // Version information for an assembly consists of the following four values:
@@ -46,7 +46,7 @@
 // You can specify all the values or you can default the Revision and Build Numbers 
 // by using the '*' as shown below:
 
-[assembly: AssemblyVersion("2.9.0.001")]
+[assembly: AssemblyVersion("2.9.1.001")]
 
 //
 // In order to sign your assembly you must specify a key to use. Refer to the 

Modified: incubator/lucene.net/trunk/C#/src/HISTORY.txt
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/HISTORY.txt?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/HISTORY.txt (original)
+++ incubator/lucene.net/trunk/C#/src/HISTORY.txt Tue Nov 17 01:13:56 2009
@@ -2,6 +2,10 @@
 -------------------------
 
 
+16Nov09:
+	- Release: Apache Lucene.Net 2.9.1 build 001 "Beta"
+
+
 03Nov09:
 	- Release: Apache Lucene.Net 2.9.0 build 001 "Alpha"
 	- Port: Test code

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Standard/StandardAnalyzer.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Analysis/Standard/StandardAnalyzer.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Standard/StandardAnalyzer.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Standard/StandardAnalyzer.cs Tue Nov 17 01:13:56 2009
@@ -23,22 +23,22 @@
 namespace Lucene.Net.Analysis.Standard
 {
 	
-	/// <summary> Filters {@link StandardTokenizer} with {@link StandardFilter}, {@link
-	/// LowerCaseFilter} and {@link StopFilter}, using a list of
-	/// English stop words.
+	/// <summary> Filters {@link StandardTokenizer} with {@link StandardFilter},
+	/// {@link LowerCaseFilter} and {@link StopFilter}, using a list of English stop
+	/// words.
 	/// 
 	/// <a name="version"/>
-	/// <p>You must specify the required {@link Version}
-	/// compatibility when creating StandardAnalyzer:
+	/// <p>
+	/// You must specify the required {@link Version} compatibility when creating
+	/// StandardAnalyzer:
 	/// <ul>
-	/// <li> As of 2.9, StopFilter preserves position
-	/// increments by default
-	/// <li> As of 2.9, Tokens incorrectly identified as acronyms
-	/// are corrected (see <a href="https://issues.apache.org/jira/browse/LUCENE-1068">LUCENE-1608</a>
+	/// <li>As of 2.9, StopFilter preserves position increments
+	/// <li>As of 2.4, Tokens incorrectly identified as acronyms are corrected (see
+	/// <a href="https://issues.apache.org/jira/browse/LUCENE-1068">LUCENE-1068</a>
 	/// </ul>
 	/// 
 	/// </summary>
-	/// <version>  $Id: StandardAnalyzer.java 811070 2009-09-03 18:31:41Z hossman $
+	/// <version>  $Id: StandardAnalyzer.java 829134 2009-10-23 17:18:53Z mikemccand $
 	/// </version>
 	public class StandardAnalyzer : Analyzer
 	{
@@ -280,6 +280,14 @@
 			{
 				useDefaultStopPositionIncrements = true;
 			}
+			if (matchVersion.OnOrAfter(Version.LUCENE_24))
+			{
+				replaceInvalidAcronym = defaultReplaceInvalidAcronym;
+			}
+			else
+			{
+				replaceInvalidAcronym = false;
+			}
 		}
 		
 		/// <summary>Constructs a {@link StandardTokenizer} filtered by a {@link
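
The hunk above wires replaceInvalidAcronym to the matchVersion passed in. A hedged sketch of the effect from the caller's side (constructor and constants assumed from the ported 2.9.1 API):

    using Lucene.Net.Analysis.Standard;
    using Version = Lucene.Net.Util.Version;

    // With LUCENE_24 or later, tokens mis-identified as acronyms are corrected
    // (LUCENE-1068); an earlier matchVersion would keep the old behavior.
    StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_29);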

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Standard/StandardTokenizer.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Analysis/Standard/StandardTokenizer.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Standard/StandardTokenizer.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Standard/StandardTokenizer.cs Tue Nov 17 01:13:56 2009
@@ -25,6 +25,7 @@
 using TermAttribute = Lucene.Net.Analysis.Tokenattributes.TermAttribute;
 using TypeAttribute = Lucene.Net.Analysis.Tokenattributes.TypeAttribute;
 using AttributeSource = Lucene.Net.Util.AttributeSource;
+using Version = Lucene.Net.Util.Version;
 
 namespace Lucene.Net.Analysis.Standard
 {
@@ -44,6 +45,15 @@
 	/// <p>Many applications have specific tokenizer needs.  If this tokenizer does
 	/// not suit your application, please consider copying this source code
 	/// directory to your project and maintaining your own grammar-based tokenizer.
+	/// 
+	/// <a name="version"/>
+	/// <p>
+	/// You must specify the required {@link Version} compatibility when creating
+	/// StandardTokenizer:
+	/// <ul>
+	/// <li>As of 2.4, Tokens incorrectly identified as acronyms are corrected (see
+	/// <a href="https://issues.apache.org/jira/browse/LUCENE-1068">LUCENE-1068</a>
+	/// </ul>
 	/// </summary>
 	
 	public class StandardTokenizer:Tokenizer
@@ -107,7 +117,9 @@
 		/// <summary> Creates a new instance of the {@link StandardTokenizer}. Attaches the
 		/// <code>input</code> to a newly created JFlex scanner.
 		/// </summary>
-		public StandardTokenizer(System.IO.TextReader input):this(input, false)
+		/// <deprecated> Use {@link #StandardTokenizer(Version, Reader)} instead
+		/// </deprecated>
+		public StandardTokenizer(System.IO.TextReader input):this(Version.LUCENE_24, input)
 		{
 		}
 		
@@ -121,6 +133,8 @@
 		/// 
 		/// See http://issues.apache.org/jira/browse/LUCENE-1068
 		/// </param>
+		/// <deprecated> Use {@link #StandardTokenizer(Version, Reader)} instead
+		/// </deprecated>
 		public StandardTokenizer(System.IO.TextReader input, bool replaceInvalidAcronym):base()
 		{
 			InitBlock();
@@ -128,7 +142,27 @@
 			Init(input, replaceInvalidAcronym);
 		}
 		
+		/// <summary> Creates a new instance of the
+		/// {@link org.apache.lucene.analysis.standard.StandardTokenizer}. Attaches
+		/// the <code>input</code> to the newly created JFlex scanner.
+		/// 
+		/// </summary>
+		/// <param name="input">The input reader
+		/// 
+		/// See http://issues.apache.org/jira/browse/LUCENE-1068
+		/// </param>
+		public StandardTokenizer(Version matchVersion, System.IO.TextReader input):base()
+		{
+			InitBlock();
+			this.scanner = new StandardTokenizerImpl(input);
+			Init(input, matchVersion);
+		}
+		
 		/// <summary> Creates a new StandardTokenizer with a given {@link AttributeSource}. </summary>
+		/// <deprecated> Use
+		/// {@link #StandardTokenizer(Version, AttributeSource, Reader)}
+		/// instead
+		/// </deprecated>
 		public StandardTokenizer(AttributeSource source, System.IO.TextReader input, bool replaceInvalidAcronym):base(source)
 		{
 			InitBlock();
@@ -136,7 +170,19 @@
 			Init(input, replaceInvalidAcronym);
 		}
 		
+		/// <summary> Creates a new StandardTokenizer with a given {@link AttributeSource}.</summary>
+		public StandardTokenizer(Version matchVersion, AttributeSource source, System.IO.TextReader input):base(source)
+		{
+			InitBlock();
+			this.scanner = new StandardTokenizerImpl(input);
+			Init(input, matchVersion);
+		}
+		
 		/// <summary> Creates a new StandardTokenizer with a given {@link Lucene.Net.Util.AttributeSource.AttributeFactory} </summary>
+		/// <deprecated> Use
+		/// {@link #StandardTokenizer(Version, org.apache.lucene.util.AttributeSource.AttributeFactory, Reader)}
+		/// instead
+		/// </deprecated>
 		public StandardTokenizer(AttributeFactory factory, System.IO.TextReader input, bool replaceInvalidAcronym):base(factory)
 		{
 			InitBlock();
@@ -144,6 +190,16 @@
 			Init(input, replaceInvalidAcronym);
 		}
 		
+		/// <summary> Creates a new StandardTokenizer with a given
+		/// {@link org.apache.lucene.util.AttributeSource.AttributeFactory}
+		/// </summary>
+		public StandardTokenizer(Version matchVersion, AttributeFactory factory, System.IO.TextReader input):base(factory)
+		{
+			InitBlock();
+			this.scanner = new StandardTokenizerImpl(input);
+			Init(input, matchVersion);
+		}
+		
 		private void  Init(System.IO.TextReader input, bool replaceInvalidAcronym)
 		{
 			this.replaceInvalidAcronym = replaceInvalidAcronym;
@@ -154,6 +210,18 @@
 			typeAtt = (TypeAttribute) AddAttribute(typeof(TypeAttribute));
 		}
 		
+		private void  Init(System.IO.TextReader input, Version matchVersion)
+		{
+			if (matchVersion.OnOrAfter(Version.LUCENE_24))
+			{
+				Init(input, true);
+			}
+			else
+			{
+				Init(input, false);
+			}
+		}
+		
 		// this tokenizer generates three attributes:
 		// offset, positionIncrement and type
 		private TermAttribute termAtt;
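
The new Version-taking constructors above simply select the acronym handling via Init(input, matchVersion). A small sketch of the intended call pattern (the sample text is illustrative):

    using System.IO;
    using Lucene.Net.Analysis.Standard;
    using Version = Lucene.Net.Util.Version;

    TextReader reader = new StringReader("I.B.M. wasn't mentioned");
    // Preferred form: LUCENE_24 or later corrects tokens mis-identified as acronyms.
    StandardTokenizer tokenizer = new StandardTokenizer(Version.LUCENE_29, reader);

    // The old constructor remains, now deprecated, and behaves like LUCENE_24:
    // StandardTokenizer oldStyle = new StandardTokenizer(reader);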

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/StopAnalyzer.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Analysis/StopAnalyzer.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/StopAnalyzer.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/StopAnalyzer.cs Tue Nov 17 01:13:56 2009
@@ -1,4 +1,3 @@
-
 /* 
  * Licensed to the Apache Software Foundation (ASF) under one or more
  * contributor license agreements.  See the NOTICE file distributed with
@@ -18,10 +17,22 @@
 
 using System;
 
+using Version = Lucene.Net.Util.Version;
+
 namespace Lucene.Net.Analysis
 {
 	
-	/// <summary>Filters {@link LetterTokenizer} with {@link LowerCaseFilter} and {@link StopFilter}. </summary>
+	/// <summary> Filters {@link LetterTokenizer} with {@link LowerCaseFilter} and
+	/// {@link StopFilter}.
+	/// 
+	/// <a name="version"/>
+	/// <p>
+	/// You must specify the required {@link Version} compatibility when creating
+	/// StopAnalyzer:
+	/// <ul>
+	/// <li>As of 2.9, position increments are preserved
+	/// </ul>
+	/// </summary>
 	
 	public sealed class StopAnalyzer:Analyzer
 	{
@@ -45,7 +56,7 @@
 		/// <summary>Builds an analyzer which removes words in
 		/// ENGLISH_STOP_WORDS.
 		/// </summary>
-		/// <deprecated> Use {@link #StopAnalyzer(boolean)} instead 
+		/// <deprecated> Use {@link #StopAnalyzer(Version)} instead
 		/// </deprecated>
 		public StopAnalyzer()
 		{
@@ -54,12 +65,22 @@
 			enablePositionIncrements = false;
 		}
 		
+		/// <summary> Builds an analyzer which removes words in ENGLISH_STOP_WORDS.</summary>
+		public StopAnalyzer(Version matchVersion)
+		{
+			stopWords = ENGLISH_STOP_WORDS_SET;
+			useDefaultStopPositionIncrement = false;
+			enablePositionIncrements = StopFilter.GetEnablePositionIncrementsVersionDefault(matchVersion);
+		}
+		
 		/// <summary>Builds an analyzer which removes words in
 		/// ENGLISH_STOP_WORDS.
 		/// </summary>
-		/// <param name="enablePositionIncrements">See {@link
-		/// StopFilter#setEnablePositionIncrements} 
+		/// <param name="enablePositionIncrements">
+		/// See {@link StopFilter#SetEnablePositionIncrements}
 		/// </param>
+		/// <deprecated> Use {@link #StopAnalyzer(Version)} instead
+		/// </deprecated>
 		public StopAnalyzer(bool enablePositionIncrements)
 		{
 			stopWords = ENGLISH_STOP_WORDS_SET;
@@ -68,7 +89,7 @@
 		}
 		
 		/// <summary>Builds an analyzer with the stop words from the given set.</summary>
-		/// <deprecated> Use {@link #StopAnalyzer(Set, boolean)} instead 
+		/// <deprecated> Use {@link #StopAnalyzer(Version, Set)} instead
 		/// </deprecated>
 		public StopAnalyzer(System.Collections.Hashtable stopWords)
 		{
@@ -78,11 +99,21 @@
 		}
 		
 		/// <summary>Builds an analyzer with the stop words from the given set.</summary>
+		public StopAnalyzer(Version matchVersion, System.Collections.Hashtable stopWords)
+		{
+			this.stopWords = stopWords;
+			useDefaultStopPositionIncrement = false;
+			enablePositionIncrements = StopFilter.GetEnablePositionIncrementsVersionDefault(matchVersion);
+		}
+		
+		/// <summary>Builds an analyzer with the stop words from the given set.</summary>
 		/// <param name="stopWords">Set of stop words
 		/// </param>
-		/// <param name="enablePositionIncrements">See {@link
-		/// StopFilter#setEnablePositionIncrements} 
+		/// <param name="enablePositionIncrements">
+		/// See {@link StopFilter#SetEnablePositionIncrements}
 		/// </param>
+		/// <deprecated> Use {@link #StopAnalyzer(Version, Set)} instead
+		/// </deprecated>
 		public StopAnalyzer(System.Collections.Hashtable stopWords, bool enablePositionIncrements)
 		{
 			this.stopWords = stopWords;
@@ -93,6 +124,8 @@
 		/// <summary>Builds an analyzer which removes words in the provided array.</summary>
 		/// <deprecated> Use {@link #StopAnalyzer(Set, boolean)} instead 
 		/// </deprecated>
+		/// <deprecated> Use {@link #StopAnalyzer(Version, Set)} instead
+		/// </deprecated>
 		public StopAnalyzer(System.String[] stopWords)
 		{
 			this.stopWords = StopFilter.MakeStopSet(stopWords);
@@ -103,10 +136,10 @@
 		/// <summary>Builds an analyzer which removes words in the provided array.</summary>
 		/// <param name="stopWords">Array of stop words
 		/// </param>
-		/// <param name="enablePositionIncrements">See {@link
-		/// StopFilter#setEnablePositionIncrements} 
+		/// <param name="enablePositionIncrements">
+		/// See {@link StopFilter#SetEnablePositionIncrements}
 		/// </param>
-		/// <deprecated> Use {@link #StopAnalyzer(Set, boolean)} instead
+		/// <deprecated> Use {@link #StopAnalyzer(Version, Set)} instead
 		/// </deprecated>
 		public StopAnalyzer(System.String[] stopWords, bool enablePositionIncrements)
 		{
@@ -118,7 +151,7 @@
 		/// <summary>Builds an analyzer with the stop words from the given file.</summary>
 		/// <seealso cref="WordlistLoader.GetWordSet(File)">
 		/// </seealso>
-		/// <deprecated> Use {@link #StopAnalyzer(File, boolean)} instead 
+		/// <deprecated> Use {@link #StopAnalyzer(Version, File)} instead
 		/// </deprecated>
 		public StopAnalyzer(System.IO.FileInfo stopwordsFile)
 		{
@@ -132,9 +165,11 @@
 		/// </seealso>
 		/// <param name="stopwordsFile">File to load stop words from
 		/// </param>
-		/// <param name="enablePositionIncrements">See {@link
-		/// StopFilter#setEnablePositionIncrements} 
+		/// <param name="enablePositionIncrements">
+		/// See {@link StopFilter#SetEnablePositionIncrements}
 		/// </param>
+		/// <deprecated> Use {@link #StopAnalyzer(Version, File)} instead
+		/// </deprecated>
 		public StopAnalyzer(System.IO.FileInfo stopwordsFile, bool enablePositionIncrements)
 		{
 			stopWords = WordlistLoader.GetWordSet(stopwordsFile);
@@ -142,10 +177,26 @@
 			useDefaultStopPositionIncrement = false;
 		}
 		
+		/// <summary> Builds an analyzer with the stop words from the given file.
+		/// 
+		/// </summary>
+		/// <seealso cref="WordlistLoader.getWordSet(File)">
+		/// </seealso>
+		/// <param name="matchVersion">See <a href="#version">above</a>
+		/// </param>
+		/// <param name="stopwordsFile">File to load stop words from
+		/// </param>
+		public StopAnalyzer(Version matchVersion, System.IO.FileInfo stopwordsFile)
+		{
+			stopWords = WordlistLoader.GetWordSet(stopwordsFile);
+			this.enablePositionIncrements = StopFilter.GetEnablePositionIncrementsVersionDefault(matchVersion);
+			useDefaultStopPositionIncrement = false;
+		}
+		
 		/// <summary>Builds an analyzer with the stop words from the given reader.</summary>
 		/// <seealso cref="WordlistLoader.GetWordSet(Reader)">
 		/// </seealso>
-		/// <deprecated> Use {@link #StopAnalyzer(Reader, boolean)} instead
+		/// <deprecated> Use {@link #StopAnalyzer(Version, Reader)} instead
 		/// </deprecated>
 		public StopAnalyzer(System.IO.TextReader stopwords)
 		{
@@ -159,17 +210,33 @@
 		/// </seealso>
 		/// <param name="stopwords">Reader to load stop words from
 		/// </param>
-		/// <param name="enablePositionIncrements">See {@link
-		/// StopFilter#setEnablePositionIncrements} 
+		/// <param name="enablePositionIncrements">
+		/// See {@link StopFilter#SetEnablePositionIncrements}
 		/// </param>
+		/// <deprecated> Use {@link #StopAnalyzer(Version, Reader)} instead
+		/// </deprecated>
 		public StopAnalyzer(System.IO.TextReader stopwords, bool enablePositionIncrements)
 		{
 			stopWords = WordlistLoader.GetWordSet(stopwords);
 			this.enablePositionIncrements = enablePositionIncrements;
 			useDefaultStopPositionIncrement = false;
 		}
-		
-		/// <summary>Filters LowerCaseTokenizer with StopFilter. </summary>
+
+        /// <summary>Builds an analyzer with the stop words from the given reader. </summary>
+        /// <seealso cref="WordlistLoader.GetWordSet(Reader)">
+        /// </seealso>
+        /// <param name="matchVersion">See <a href="#Version">above</a>
+        /// </param>
+        /// <param name="stopwords">Reader to load stop words from
+        /// </param>
+        public StopAnalyzer(Version matchVersion, System.IO.TextReader stopwords)
+        {
+            stopWords = WordlistLoader.GetWordSet(stopwords);
+            this.enablePositionIncrements = StopFilter.GetEnablePositionIncrementsVersionDefault(matchVersion);
+            useDefaultStopPositionIncrement = false;
+        }
+
+        /// <summary>Filters LowerCaseTokenizer with StopFilter. </summary>
 		public override TokenStream TokenStream(System.String fieldName, System.IO.TextReader reader)
 		{
 			if (useDefaultStopPositionIncrement)
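
A brief sketch of the new Version-based StopAnalyzer constructors added above; with LUCENE_29 the position-increment default comes from StopFilter.GetEnablePositionIncrementsVersionDefault (constants assumed from the ported API):

    using Lucene.Net.Analysis;
    using Version = Lucene.Net.Util.Version;

    // 2.9 semantics: position increments are preserved across removed stop words.
    StopAnalyzer modern = new StopAnalyzer(Version.LUCENE_29);

    // Earlier match versions fall back to the pre-2.9 static default.
    StopAnalyzer legacy = new StopAnalyzer(Version.LUCENE_24);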

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/StopFilter.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Analysis/StopFilter.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/StopFilter.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/StopFilter.cs Tue Nov 17 01:13:56 2009
@@ -20,6 +20,7 @@
 using PositionIncrementAttribute = Lucene.Net.Analysis.Tokenattributes.PositionIncrementAttribute;
 using TermAttribute = Lucene.Net.Analysis.Tokenattributes.TermAttribute;
 using QueryParser = Lucene.Net.QueryParsers.QueryParser;
+using Version = Lucene.Net.Util.Version;
 
 namespace Lucene.Net.Analysis
 {
@@ -263,14 +264,34 @@
 			return ENABLE_POSITION_INCREMENTS_DEFAULT;
 		}
 		
-		/// <summary> Set the default position increments behavior of every StopFilter created from now on.
+		/// <summary> Returns version-dependent default for enablePositionIncrements. Analyzers
+		/// that embed StopFilter use this method when creating the StopFilter. Prior
+		/// to 2.9, this returns {@link #getEnablePositionIncrementsDefault}. On 2.9
+		/// or later, it returns true.
+		/// </summary>
+		public static bool GetEnablePositionIncrementsVersionDefault(Version matchVersion)
+		{
+			if (matchVersion.OnOrAfter(Version.LUCENE_29))
+			{
+				return true;
+			}
+			else
+			{
+				return ENABLE_POSITION_INCREMENTS_DEFAULT;
+			}
+		}
+		
+		/// <summary> Set the default position increments behavior of every StopFilter created
+		/// from now on.
 		/// <p>
-		/// Note: behavior of a single StopFilter instance can be modified 
-		/// with {@link #SetEnablePositionIncrements(boolean)}.
-		/// This static method allows control over behavior of classes using StopFilters internally, 
-		/// for example {@link Lucene.Net.Analysis.Standard.StandardAnalyzer StandardAnalyzer}. 
+		/// Note: behavior of a single StopFilter instance can be modified with
+		/// {@link #SetEnablePositionIncrements(boolean)}. This static method allows
+		/// control over behavior of classes using StopFilters internally, for
+		/// example {@link Lucene.Net.Analysis.Standard.StandardAnalyzer
+		/// StandardAnalyzer} if used with the no-arg ctor.
 		/// <p>
 		/// Default : false.
+		/// 
 		/// </summary>
 		/// <seealso cref="setEnablePositionIncrements(boolean).">
 		/// </seealso>
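
The new helper above is what the Version-aware analyzers call; its behavior can be checked directly:

    using Lucene.Net.Analysis;
    using Version = Lucene.Net.Util.Version;

    bool v29 = StopFilter.GetEnablePositionIncrementsVersionDefault(Version.LUCENE_29); // true
    bool v24 = StopFilter.GetEnablePositionIncrementsVersionDefault(Version.LUCENE_24); // static default (false unless changed)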

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TeeSinkTokenFilter.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Analysis/TeeSinkTokenFilter.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TeeSinkTokenFilter.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TeeSinkTokenFilter.cs Tue Nov 17 01:13:56 2009
@@ -46,7 +46,7 @@
 	/// d.add(new Field("f3", final3));
 	/// d.add(new Field("f4", final4));
 	/// </pre>
-	/// In this example, <code>sink1</code> and <code>sink2<code> will both get tokens from both
+	/// In this example, <code>sink1</code> and <code>sink2</code> will both get tokens from both
 	/// <code>reader1</code> and <code>reader2</code> after whitespace tokenizer
 	/// and now we can further wrap any of these in extra analysis, and more "sources" can be inserted if desired.
 	/// It is important, that tees are consumed before sinks (in the above example, the field names must be

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Token.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Analysis/Token.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Token.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Token.cs Tue Nov 17 01:13:56 2009
@@ -38,8 +38,8 @@
 	/// <p>
 	/// The start and end offsets permit applications to re-associate a token with
 	/// its source text, e.g., to display highlighted query terms in a document
-	/// browser, or to show matching text fragments in a KWIC (KeyWord In Context)
-	/// display, etc.
+	/// browser, or to show matching text fragments in a <abbr
+	/// title="KeyWord In Context">KWIC</abbr> display, etc.
 	/// <p>
 	/// The type is a string, assigned by a lexical analyzer
 	/// (a.k.a. tokenizer), naming the lexical or syntactic class that the token
@@ -71,9 +71,9 @@
 	/// associated performance cost has been added (below).  The
 	/// {@link #TermText()} method has been deprecated.</p>
 	/// </summary>
-	/// <summary><p>Tokenizers and filters should try to re-use a Token
-	/// instance when possible for best performance, by
-	/// implementing the {@link TokenStream#Next(Token)} API.
+	/// <summary><p>Tokenizers and TokenFilters should try to re-use a Token instance when
+	/// possible for best performance, by implementing the
+	/// {@link TokenStream#IncrementToken()} API.
 	/// Failing that, to create a new Token you should first use
 	/// one of the constructors that starts with null text.  To load
 	/// the token from a char[] use {@link #SetTermBuffer(char[], int, int)}.
@@ -87,30 +87,35 @@
 	/// set the length of the term text.  See <a target="_top"
 	/// href="https://issues.apache.org/jira/browse/LUCENE-969">LUCENE-969</a>
 	/// for details.</p>
-	/// <p>Typical reuse patterns:
+	/// <p>Typical Token reuse patterns:
 	/// <ul>
-	/// <li> Copying text from a string (type is reset to #DEFAULT_TYPE if not specified):<br/>
+	/// <li> Copying text from a string (type is reset to {@link #DEFAULT_TYPE} if not
+	/// specified):<br/>
 	/// <pre>
 	/// return reusableToken.reinit(string, startOffset, endOffset[, type]);
 	/// </pre>
 	/// </li>
-	/// <li> Copying some text from a string (type is reset to #DEFAULT_TYPE if not specified):<br/>
+	/// <li> Copying some text from a string (type is reset to {@link #DEFAULT_TYPE}
+	/// if not specified):<br/>
 	/// <pre>
 	/// return reusableToken.reinit(string, 0, string.length(), startOffset, endOffset[, type]);
 	/// </pre>
 	/// </li>
 	/// </li>
-	/// <li> Copying text from char[] buffer (type is reset to #DEFAULT_TYPE if not specified):<br/>
+	/// <li> Copying text from char[] buffer (type is reset to {@link #DEFAULT_TYPE}
+	/// if not specified):<br/>
 	/// <pre>
 	/// return reusableToken.reinit(buffer, 0, buffer.length, startOffset, endOffset[, type]);
 	/// </pre>
 	/// </li>
-	/// <li> Copying some text from a char[] buffer (type is reset to #DEFAULT_TYPE if not specified):<br/>
+	/// <li> Copying some text from a char[] buffer (type is reset to
+	/// {@link #DEFAULT_TYPE} if not specified):<br/>
 	/// <pre>
 	/// return reusableToken.reinit(buffer, start, end - start, startOffset, endOffset[, type]);
 	/// </pre>
 	/// </li>
-	/// <li> Copying from one one Token to another (type is reset to #DEFAULT_TYPE if not specified):<br/>
+	/// <li> Copying from one Token to another (type is reset to
+	/// {@link #DEFAULT_TYPE} if not specified):<br/>
 	/// <pre>
 	/// return reusableToken.reinit(source.termBuffer(), 0, source.termLength(), source.startOffset(), source.endOffset()[, source.type()]);
 	/// </pre>
@@ -120,7 +125,8 @@
 	/// <ul>
 	/// <li>clear() initializes all of the fields to default values. This was changed in contrast to Lucene 2.4, but should affect no one.</li>
 	/// <li>Because <code>TokenStreams</code> can be chained, one cannot assume that the <code>Token's</code> current type is correct.</li>
-	/// <li>The startOffset and endOffset represent the start and offset in the source text. So be careful in adjusting them.</li>
+	/// <li>The startOffset and endOffset represent the start and offset in the
+	/// source text, so be careful in adjusting them.</li>
 	/// <li>When caching a reusable token, clone it. When injecting a cached token into a stream that can be reset, clone it again.</li>
 	/// </ul>
 	/// </p>
@@ -247,8 +253,6 @@
 		/// </param>
 		/// <param name="end">end offset
 		/// </param>
-		/// <deprecated> Use {@link #Token(char[], int, int, int, int)} instead.
-		/// </deprecated>
 		public Token(System.String text, int start, int end)
 		{
 			termText = text;
@@ -269,8 +273,6 @@
 		/// </param>
 		/// <param name="typ">token type
 		/// </param>
-		/// <deprecated> Use {@link #Token(char[], int, int, int, int)} and {@link #SetType(String)} instead.
-		/// </deprecated>
 		public Token(System.String text, int start, int end, System.String typ)
 		{
 			termText = text;
@@ -292,8 +294,6 @@
 		/// </param>
 		/// <param name="flags">token type bits
 		/// </param>
-		/// <deprecated> Use {@link #Token(char[], int, int, int, int)} and {@link #SetFlags(int)} instead.
-		/// </deprecated>
 		public Token(System.String text, int start, int end, int flags)
 		{
 			termText = text;

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TokenFilter.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Analysis/TokenFilter.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TokenFilter.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TokenFilter.cs Tue Nov 17 01:13:56 2009
@@ -20,16 +20,13 @@
 namespace Lucene.Net.Analysis
 {
 	
-	/// <summary>A TokenFilter is a TokenStream whose input is another token stream.
+	/// <summary> A TokenFilter is a TokenStream whose input is another TokenStream.
 	/// <p>
-	/// This is an abstract class.
-	/// NOTE: subclasses must override 
-	/// {@link #IncrementToken()} if the new TokenStream API is used
-	/// and {@link #Next(Token)} or {@link #Next()} if the old
-	/// TokenStream API is used.
-	/// <p>
-	/// See {@link TokenStream}
+	/// This is an abstract class; subclasses must override {@link #IncrementToken()}.
+	/// 
 	/// </summary>
+	/// <seealso cref="TokenStream">
+	/// </seealso>
 	public abstract class TokenFilter:TokenStream
 	{
 		/// <summary>The source of tokens for this filter. </summary>

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TokenStream.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Analysis/TokenStream.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TokenStream.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/TokenStream.cs Tue Nov 17 01:13:56 2009
@@ -59,7 +59,7 @@
 	/// <li>Instantiation of <code>TokenStream</code>/{@link TokenFilter}s which add/get
 	/// attributes to/from the {@link AttributeSource}.
 	/// <li>The consumer calls {@link TokenStream#Reset()}.
-	/// <li>the consumer retrieves attributes from the stream and stores local
+	/// <li>The consumer retrieves attributes from the stream and stores local
 	/// references to all attributes it wants to access
 	/// <li>The consumer calls {@link #IncrementToken()} until it returns false and
 	/// consumes the attributes after each call.
@@ -317,10 +317,15 @@
 			return onlyUseNewAPI;
 		}
 		
-		/// <summary> Consumers (ie {@link IndexWriter}) use this method to advance the stream to
+		/// <summary> Consumers (i.e., {@link IndexWriter}) use this method to advance the stream to
 		/// the next token. Implementing classes must implement this method and update
 		/// the appropriate {@link AttributeImpl}s with the attributes of the next
 		/// token.
+		/// <P>
+		/// The producer must make no assumptions about the attributes after the
+		/// method has returned: the caller may arbitrarily change them. If the
+		/// producer needs to preserve the state for subsequent calls, it can use
+		/// {@link #captureState} to create a copy of the current attribute state.
 		/// <p>
 		/// This method is called for every token of a document, so an efficient
 		/// implementation is crucial for good performance. To avoid calls to

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Tokenizer.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Analysis/Tokenizer.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Tokenizer.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Analysis/Tokenizer.cs Tue Nov 17 01:13:56 2009
@@ -22,20 +22,14 @@
 namespace Lucene.Net.Analysis
 {
 	
-	/// <summary>A Tokenizer is a TokenStream whose input is a Reader.
+	/// <summary> A Tokenizer is a TokenStream whose input is a Reader.
 	/// <p>
-	/// This is an abstract class.
+	/// This is an abstract class; subclasses must override {@link #IncrementToken()}
 	/// <p>
-	/// NOTE: subclasses must override 
-	/// {@link #IncrementToken()} if the new TokenStream API is used
-	/// and {@link #Next(Token)} or {@link #Next()} if the old
-	/// TokenStream API is used.
-	/// <p>
-	/// NOTE: Subclasses overriding {@link #IncrementToken()} must
-	/// call {@link AttributeSource#ClearAttributes()} before
-	/// setting attributes.
-	/// Subclasses overriding {@link #Next(Token)} must call
-	/// {@link Token#Clear()} before setting Token attributes. 
+	/// NOTE: Subclasses overriding {@link #IncrementToken()} must call
+	/// {@link AttributeSource#ClearAttributes()} before setting attributes.
+	/// Subclasses overriding {@link #IncrementToken()} must call
+	/// {@link Token#Clear()} before setting Token attributes.
 	/// </summary>
 	
 	public abstract class Tokenizer:TokenStream

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/AssemblyInfo.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/AssemblyInfo.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/AssemblyInfo.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/AssemblyInfo.cs Tue Nov 17 01:13:56 2009
@@ -33,7 +33,7 @@
 [assembly: AssemblyDefaultAlias("Lucene.Net")]
 [assembly: AssemblyCulture("")]
 
-[assembly: AssemblyInformationalVersionAttribute("2.9.0")]
+[assembly: AssemblyInformationalVersionAttribute("2.9.1")]
 
 
 //
@@ -47,7 +47,7 @@
 // You can specify all the values or you can default the Revision and Build Numbers 
 // by using the '*' as shown below:
 
-[assembly: AssemblyVersion("2.9.0.001")]
+[assembly: AssemblyVersion("2.9.1.001")]
 
 
 //

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/DirectoryReader.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/DirectoryReader.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/DirectoryReader.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/DirectoryReader.cs Tue Nov 17 01:13:56 2009
@@ -93,6 +93,7 @@
         private System.Collections.Generic.Dictionary<string, string> synced = new System.Collections.Generic.Dictionary<string, string>();
 		private Lock writeLock;
 		private SegmentInfos segmentInfos;
+		private SegmentInfos segmentInfosStart;
 		private bool stale;
 		private int termInfosIndexDivisor;
 		
@@ -170,6 +171,7 @@
 			this.directory = writer.GetDirectory();
 			this.readOnly = true;
 			this.segmentInfos = infos;
+			segmentInfosStart = (SegmentInfos) infos.Clone();
 			this.termInfosIndexDivisor = termInfosIndexDivisor;
 			if (!readOnly)
 			{
@@ -997,19 +999,18 @@
 			return segmentInfos.GetUserData();
 		}
 		
-		/// <summary> Check whether this IndexReader is still using the current (i.e., most recently committed) version of the index.  If
-		/// a writer has committed any changes to the index since this reader was opened, this will return <code>false</code>,
-		/// in which case you must open a new IndexReader in order to see the changes.  See the description of the <a
-		/// href="IndexWriter.html#autoCommit"><code>autoCommit</code></a> flag which controls when the {@link IndexWriter}
-		/// actually commits changes to the index.
-		/// 
-		/// </summary>
-		/// <throws>  CorruptIndexException if the index is corrupt </throws>
-		/// <throws>  IOException           if there is a low-level IO error </throws>
 		public override bool IsCurrent()
 		{
 			EnsureOpen();
-			return SegmentInfos.ReadCurrentVersion(directory) == segmentInfos.GetVersion();
+			if (writer == null || writer.IsClosed())
+			{
+				// we loaded SegmentInfos from the directory
+				return SegmentInfos.ReadCurrentVersion(directory) == segmentInfos.GetVersion();
+			}
+			else
+			{
+				return writer.NrtIsCurrent(segmentInfosStart);
+			}
 		}
 		
 		protected internal override void  DoClose()

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/DocumentsWriter.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/DocumentsWriter.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/DocumentsWriter.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/DocumentsWriter.cs Tue Nov 17 01:13:56 2009
@@ -683,6 +683,14 @@
 			}
 		}
 		
+		internal bool AnyChanges()
+		{
+			lock (this)
+			{
+				return numDocsInRAM != 0 || deletesInRAM.numTerms != 0 || deletesInRAM.docIDs.Count != 0 || deletesInRAM.queries.Count != 0;
+			}
+		}
+		
 		private void  InitFlushState(bool onlyDocStore)
 		{
 			lock (this)

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/IndexReader.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/IndexReader.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/IndexReader.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/IndexReader.cs Tue Nov 17 01:13:56 2009
@@ -64,7 +64,7 @@
 	/// <code>IndexReader</code> instance; use your own
 	/// (non-Lucene) objects instead.
 	/// </summary>
-	/// <version>  $Id: IndexReader.java 807735 2009-08-25 18:02:39Z markrmiller $
+	/// <version>  $Id: IndexReader.java 826049 2009-10-16 19:28:55Z mikemccand $
 	/// </version>
 	public abstract class IndexReader : System.ICloneable
 	{
@@ -836,8 +836,31 @@
 			return SegmentInfos.ReadCurrentUserData(directory);
 		}
 		
-		/// <summary> Version number when this IndexReader was opened. Not implemented in the IndexReader base class.</summary>
-		/// <throws>  UnsupportedOperationException unless overridden in subclass </throws>
+		/// <summary> Version number when this IndexReader was opened. Not implemented in the
+		/// IndexReader base class.
+		/// 
+		/// <p>
+		/// If this reader is based on a Directory (ie, was created by calling
+		/// {@link #Open}, or {@link #Reopen} on a reader based on a Directory), then
+		/// this method returns the version recorded in the commit that the reader
+		/// opened. This version is advanced every time {@link IndexWriter#Commit} is
+		/// called.
+		/// </p>
+		/// 
+		/// <p>
+		/// If instead this reader is a near real-time reader (ie, obtained by a call
+		/// to {@link IndexWriter#GetReader}, or by calling {@link #Reopen} on a near
+		/// real-time reader), then this method returns the version of the last
+		/// commit done by the writer. Note that even as further changes are made
+		/// with the writer, the version will not change until a commit is
+		/// completed. Thus, you should not rely on this method to determine when a
+		/// near real-time reader should be opened. Use {@link #IsCurrent} instead.
+		/// </p>
+		/// 
+		/// </summary>
+		/// <throws>  UnsupportedOperationException </throws>
+		/// <summary>             unless overridden in subclass
+		/// </summary>
 		public virtual long GetVersion()
 		{
 			throw new System.NotSupportedException("This reader does not support this method.");
@@ -890,19 +913,30 @@
 			throw new System.NotSupportedException("This reader does not support this method.");
 		}
 		
-		/// <summary> Check whether this IndexReader is still using the
-		/// current (i.e., most recently committed) version of the
-		/// index.  If a writer has committed any changes to the
-		/// index since this reader was opened, this will return
-		/// <code>false</code>, in which case you must open a new
-		/// IndexReader in order to see the changes.  See the
-		/// description of the <a href="IndexWriter.html#autoCommit"><code>autoCommit</code></a>
-		/// flag which controls when the {@link IndexWriter}
-		/// actually commits changes to the index.
+		/// <summary> Check whether any new changes have occurred to the index since this
+		/// reader was opened.
+		/// 
+		/// <p>
+		/// If this reader is based on a Directory (ie, was created by calling
+		/// {@link #open}, or {@link #reopen} on a reader based on a Directory), then
+		/// this method checks if any further commits (see {@link IndexWriter#commit})
+		/// have occurred in that directory.
+		/// </p>
 		/// 
 		/// <p>
-		/// Not implemented in the IndexReader base class.
+		/// If instead this reader is a near real-time reader (ie, obtained by a call
+		/// to {@link IndexWriter#getReader}, or by calling {@link #reopen} on a near
+		/// real-time reader), then this method checks if either a new commit has
+		/// occurred, or any new uncommitted changes have taken place via the writer.
+		/// Note that even if the writer has only performed merging, this method will
+		/// still return false.
 		/// </p>
+		/// 
+		/// <p>
+		/// In any event, if this returns false, you should call {@link #reopen} to
+		/// get a new reader that sees the changes.
+		/// </p>
+		/// 
 		/// </summary>
 		/// <throws>  CorruptIndexException if the index is corrupt </throws>
 		/// <throws>  IOException if there is a low-level IO error </throws>
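
A hedged sketch of the refresh pattern the revised IsCurrent() javadoc implies, for either a Directory-based or a near real-time reader ('reader' is an already-open IndexReader; names are illustrative):

    using Lucene.Net.Index;

    if (!reader.IsCurrent())
    {
        // Reopen returns a reader that sees the changes; close the old one
        // only if a different instance actually came back.
        IndexReader refreshed = reader.Reopen();
        if (refreshed != reader)
        {
            reader.Close();
            reader = refreshed;
        }
    }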

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/IndexWriter.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/IndexWriter.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/IndexWriter.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/IndexWriter.cs Tue Nov 17 01:13:56 2009
@@ -350,13 +350,21 @@
 		// readers.
 		private volatile bool poolReaders;
 		
-		/// <summary> Expert: returns a readonly reader containing all
-		/// current updates.  Flush is called automatically.  This
-		/// provides "near real-time" searching, in that changes
-		/// made during an IndexWriter session can be made
-		/// available for searching without closing the writer.
+		/// <summary> Expert: returns a readonly reader, covering all committed as well as
+		/// un-committed changes to the index. This provides "near real-time"
+		/// searching, in that changes made during an IndexWriter session can be
+		/// quickly made available for searching without closing the writer nor
+		/// calling {@link #commit}.
 		/// 
-		/// <p>It's near real-time because there is no hard
+		/// <p>
+		/// Note that this is functionally equivalent to calling {@link #commit} and then
+		/// using {@link IndexReader#open} to open a new reader. But the turnaround
+		/// time of this method should be faster since it avoids the potentially
+		/// costly {@link #commit}.
+		/// <p>
+		/// 
+		/// <p>
+		/// It's <i>near</i> real-time because there is no hard
 		/// guarantee on how quickly you can get a new reader after
 		/// making changes with IndexWriter.  You'll have to
 		/// experiment in your situation to determine if it's
@@ -2137,6 +2145,14 @@
 		/// instead of RAM usage (each buffered delete Query counts
 		/// as one).
 		/// 
+		/// <p>
+		/// <b>NOTE</b>: because IndexWriter uses <code>int</code>s when managing its
+		/// internal storage, the absolute maximum value for this setting is somewhat
+		/// less than 2048 MB. The precise limit depends on various factors, such as
+		/// how large your documents are, how many fields have norms, etc., so it's
+		/// best to set this value comfortably under 2048.
+		/// </p>
+		/// 
 		/// <p> The default value is {@link #DEFAULT_RAM_BUFFER_SIZE_MB}.</p>
 		/// 
 		/// </summary>
@@ -2146,6 +2162,10 @@
 		/// </summary>
 		public virtual void  SetRAMBufferSizeMB(double mb)
 		{
+			if (mb > 2048.0)
+			{
+				throw new System.ArgumentException("ramBufferSize " + mb + " is too large; should be comfortably less than 2048");
+			}
 			if (mb != DISABLE_AUTO_FLUSH && mb <= 0.0)
 				throw new System.ArgumentException("ramBufferSize should be > 0.0 MB when enabled");
 			if (mb == DISABLE_AUTO_FLUSH && GetMaxBufferedDocs() == DISABLE_AUTO_FLUSH)
@@ -5237,7 +5257,7 @@
 				
 				// Must note the change to segmentInfos so any commits
 				// in-flight don't lose it:
-				changeCount++;
+				Checkpoint();
 				
 				// If the merged segments had pending changes, clear
 				// them so that they don't bother writing them to
@@ -6644,6 +6664,31 @@
 		{
 			return true;
 		}
+		
+		internal virtual bool NrtIsCurrent(SegmentInfos infos)
+		{
+			lock (this)
+			{
+				if (!infos.Equals(segmentInfos))
+				{
+					// if any structural changes (new segments), we are
+					// stale
+					return false;
+				}
+				else
+				{
+					return !docWriter.AnyChanges();
+				}
+			}
+		}
+		
+		internal virtual bool IsClosed()
+		{
+			lock (this)
+			{
+				return closed;
+			}
+		}
 		static IndexWriter()
 		{
 			DEFAULT_MERGE_FACTOR = LogMergePolicy.DEFAULT_MERGE_FACTOR;

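NrtIsCurrent() and IsClosed() are internal hooks; presumably they let a reader obtained from the writer answer IsCurrent() and Reopen() cheaply (an assumption about the wiring, not shown in this hunk). From the application side the refresh loop might look like:

    using Lucene.Net.Index;

    public static class ReaderRefreshSketch
    {
        // Returns a fresh reader when the writer has new segments or buffered
        // changes; otherwise hands back the reader unchanged.
        public static IndexReader RefreshIfStale(IndexReader current)
        {
            if (current.IsCurrent())
                return current;

            IndexReader newer = current.Reopen();
            if (newer != current)
                current.Close();
            return newer;
        }
    }
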
Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/TermsHashPerField.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/TermsHashPerField.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/TermsHashPerField.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/TermsHashPerField.cs Tue Nov 17 01:13:56 2009
@@ -436,9 +436,11 @@
 						}
 					}
 				}
-				else if (ch >= UnicodeUtil.UNI_SUR_HIGH_START && ch <= UnicodeUtil.UNI_SUR_HIGH_END)
-				// Unpaired
+				else if (ch >= UnicodeUtil.UNI_SUR_HIGH_START && (ch <= UnicodeUtil.UNI_SUR_HIGH_END || ch == 0xffff))
+				{
+					// Unpaired or 0xffff
 					ch = tokenText[downto] = (char) (UnicodeUtil.UNI_REPLACEMENT_CHAR);
+				}
 				
 				code = (code * 31) + ch;
 			}

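The hunk above now maps the non-character 0xFFFF, like an unpaired high surrogate, to the Unicode replacement character before the term is hashed. A simplified standalone sketch of that rule (constants are written out because UnicodeUtil is internal, and the real loop in TermsHashPerField walks the term buffer differently):

    public static class SurrogateSanitizerSketch
    {
        private const char SurHighStart = '\uD800';
        private const char SurHighEnd   = '\uDBFF';
        private const char SurLowStart  = '\uDC00';
        private const char SurLowEnd    = '\uDFFF';

        public static void Sanitize(char[] text, int start, int length)
        {
            int end = start + length;
            for (int i = start; i < end; i++)
            {
                char ch = text[i];
                bool isHigh = ch >= SurHighStart && ch <= SurHighEnd;
                bool paired = isHigh && i + 1 < end
                              && text[i + 1] >= SurLowStart && text[i + 1] <= SurLowEnd;

                if (paired)
                    i++;                    // valid surrogate pair: keep both halves
                else if (isHigh || ch == '\uFFFF')
                    text[i] = '\uFFFD';     // unpaired high surrogate or 0xFFFF
            }
        }
    }
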
Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Lucene.Net.csproj
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Lucene.Net.csproj?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Lucene.Net.csproj (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Lucene.Net.csproj Tue Nov 17 01:13:56 2009
@@ -75,7 +75,7 @@
   <ItemGroup>
     <Reference Include="ICSharpCode.SharpZipLib, Version=0.85.5.452, Culture=neutral, processorArchitecture=MSIL">
       <SpecificVersion>False</SpecificVersion>
-      <HintPath>.\ICSharpCode.SharpZipLib.dll</HintPath>
+      <HintPath>..\..\..\..\..\Lucene-2.9\SharpZipLib\netcf-20\ICSharpCode.SharpZipLib.dll</HintPath>
     </Reference>
     <Reference Include="System">
       <Name>System</Name>

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/MultiFieldQueryParser.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/QueryParser/MultiFieldQueryParser.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/MultiFieldQueryParser.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/MultiFieldQueryParser.cs Tue Nov 17 01:13:56 2009
@@ -23,6 +23,7 @@
 using MultiPhraseQuery = Lucene.Net.Search.MultiPhraseQuery;
 using PhraseQuery = Lucene.Net.Search.PhraseQuery;
 using Query = Lucene.Net.Search.Query;
+using Version = Lucene.Net.Util.Version;
 
 namespace Lucene.Net.QueryParsers
 {
@@ -30,64 +31,156 @@
 	/// <summary> A QueryParser which constructs queries to search multiple fields.
 	/// 
 	/// </summary>
-	/// <version>  $Revision: 804016 $
+	/// <version>  $Revision: 829134 $
 	/// </version>
 	public class MultiFieldQueryParser:QueryParser
 	{
 		protected internal System.String[] fields;
 		protected internal System.Collections.IDictionary boosts;
 		
-		/// <summary> Creates a MultiFieldQueryParser. 
-		/// Allows passing of a map with term to Boost, and the boost to apply to each term.
+		/// <summary> Creates a MultiFieldQueryParser. Allows passing of a map with term to
+		/// Boost, and the boost to apply to each term.
 		/// 
-		/// <p>It will, when parse(String query)
-		/// is called, construct a query like this (assuming the query consists of
-		/// two terms and you specify the two fields <code>title</code> and <code>body</code>):</p>
+		/// <p>
+		/// It will, when parse(String query) is called, construct a query like this
+		/// (assuming the query consists of two terms and you specify the two fields
+		/// <code>title</code> and <code>body</code>):
+		/// </p>
+		/// 
+		/// <code>
+		/// (title:term1 body:term1) (title:term2 body:term2)
+		/// </code>
+		/// 
+		/// <p>
+		/// When setDefaultOperator(AND_OPERATOR) is set, the result will be:
+		/// </p>
+		/// 
+		/// <code>
+		/// +(title:term1 body:term1) +(title:term2 body:term2)
+		/// </code>
+		/// 
+		/// <p>
+		/// When you pass a boost (title=>5 body=>10) you can get
+		/// </p>
+		/// 
+		/// <code>
+		/// +(title:term1^5.0 body:term1^10.0) +(title:term2^5.0 body:term2^10.0)
+		/// </code>
+		/// 
+		/// <p>
+		/// In other words, all the query's terms must appear, but it doesn't matter
+		/// in what fields they appear.
+		/// </p>
+		/// 
+		/// </summary>
+		/// <deprecated> Please use
+		/// {@link #MultiFieldQueryParser(Version, String[], Analyzer, Map)}
+		/// instead
+		/// </deprecated>
+		public MultiFieldQueryParser(System.String[] fields, Analyzer analyzer, System.Collections.IDictionary boosts):this(Version.LUCENE_24, fields, analyzer)
+		{
+			this.boosts = boosts;
+		}
+		
+		/// <summary> Creates a MultiFieldQueryParser. Allows passing of a map with term to
+		/// Boost, and the boost to apply to each term.
+		/// 
+		/// <p>
+		/// It will, when parse(String query) is called, construct a query like this
+		/// (assuming the query consists of two terms and you specify the two fields
+		/// <code>title</code> and <code>body</code>):
+		/// </p>
 		/// 
 		/// <code>
 		/// (title:term1 body:term1) (title:term2 body:term2)
 		/// </code>
 		/// 
-		/// <p>When setDefaultOperator(AND_OPERATOR) is set, the result will be:</p>
+		/// <p>
+		/// When setDefaultOperator(AND_OPERATOR) is set, the result will be:
+		/// </p>
 		/// 
 		/// <code>
 		/// +(title:term1 body:term1) +(title:term2 body:term2)
 		/// </code>
 		/// 
-		/// <p>When you pass a boost (title=>5 body=>10) you can get </p>
+		/// <p>
+		/// When you pass a boost (title=>5 body=>10) you can get
+		/// </p>
 		/// 
 		/// <code>
 		/// +(title:term1^5.0 body:term1^10.0) +(title:term2^5.0 body:term2^10.0)
 		/// </code>
 		/// 
-		/// <p>In other words, all the query's terms must appear, but it doesn't matter in
-		/// what fields they appear.</p>
+		/// <p>
+		/// In other words, all the query's terms must appear, but it doesn't matter
+		/// in what fields they appear.
+		/// </p>
 		/// </summary>
-		public MultiFieldQueryParser(System.String[] fields, Analyzer analyzer, System.Collections.IDictionary boosts):this(fields, analyzer)
+		public MultiFieldQueryParser(Version matchVersion, System.String[] fields, Analyzer analyzer, System.Collections.IDictionary boosts):this(matchVersion, fields, analyzer)
 		{
 			this.boosts = boosts;
 		}
 		
 		/// <summary> Creates a MultiFieldQueryParser.
 		/// 
-		/// <p>It will, when parse(String query)
-		/// is called, construct a query like this (assuming the query consists of
-		/// two terms and you specify the two fields <code>title</code> and <code>body</code>):</p>
+		/// <p>
+		/// It will, when parse(String query) is called, construct a query like this
+		/// (assuming the query consists of two terms and you specify the two fields
+		/// <code>title</code> and <code>body</code>):
+		/// </p>
 		/// 
 		/// <code>
 		/// (title:term1 body:term1) (title:term2 body:term2)
 		/// </code>
 		/// 
-		/// <p>When setDefaultOperator(AND_OPERATOR) is set, the result will be:</p>
+		/// <p>
+		/// When setDefaultOperator(AND_OPERATOR) is set, the result will be:
+		/// </p>
 		/// 
 		/// <code>
 		/// +(title:term1 body:term1) +(title:term2 body:term2)
 		/// </code>
 		/// 
-		/// <p>In other words, all the query's terms must appear, but it doesn't matter in
-		/// what fields they appear.</p>
+		/// <p>
+		/// In other words, all the query's terms must appear, but it doesn't matter
+		/// in what fields they appear.
+		/// </p>
+		/// 
 		/// </summary>
-		public MultiFieldQueryParser(System.String[] fields, Analyzer analyzer):base(null, analyzer)
+		/// <deprecated> Please use
+		/// {@link #MultiFieldQueryParser(Version, String[], Analyzer)}
+		/// instead
+		/// </deprecated>
+		public MultiFieldQueryParser(System.String[] fields, Analyzer analyzer):this(Version.LUCENE_24, fields, analyzer)
+		{
+		}
+		
+		/// <summary> Creates a MultiFieldQueryParser.
+		/// 
+		/// <p>
+		/// It will, when parse(String query) is called, construct a query like this
+		/// (assuming the query consists of two terms and you specify the two fields
+		/// <code>title</code> and <code>body</code>):
+		/// </p>
+		/// 
+		/// <code>
+		/// (title:term1 body:term1) (title:term2 body:term2)
+		/// </code>
+		/// 
+		/// <p>
+		/// When setDefaultOperator(AND_OPERATOR) is set, the result will be:
+		/// </p>
+		/// 
+		/// <code>
+		/// +(title:term1 body:term1) +(title:term2 body:term2)
+		/// </code>
+		/// 
+		/// <p>
+		/// In other words, all the query's terms must appear, but it doesn't matter
+		/// in what fields they appear.
+		/// </p>
+		/// </summary>
+		public MultiFieldQueryParser(Version matchVersion, System.String[] fields, Analyzer analyzer):base(matchVersion, null, analyzer)
 		{
 			this.fields = fields;
 		}
@@ -205,11 +298,13 @@
 		/// <summary> Parses a query which searches on the fields specified.
 		/// <p>
 		/// If x fields are specified, this effectively constructs:
+		/// 
 		/// <pre>
-		/// <code>
+		/// &lt;code&gt;
 		/// (field1:query1) (field2:query2) (field3:query3)...(fieldx:queryx)
-		/// </code>
+		/// &lt;/code&gt;
 		/// </pre>
+		/// 
 		/// </summary>
 		/// <param name="queries">Queries strings to parse
 		/// </param>
@@ -217,18 +312,56 @@
 		/// </param>
 		/// <param name="analyzer">Analyzer to use
 		/// </param>
-		/// <throws>  ParseException if query parsing fails </throws>
-		/// <throws>  IllegalArgumentException if the length of the queries array differs </throws>
-		/// <summary>  from the length of the fields array
+		/// <throws>  ParseException </throws>
+		/// <summary>             if query parsing fails
+		/// </summary>
+		/// <throws>  IllegalArgumentException </throws>
+		/// <summary>             if the length of the queries array differs from the length of
+		/// the fields array
 		/// </summary>
+		/// <deprecated> Use {@link #Parse(Version,String[],String[],Analyzer)}
+		/// instead
+		/// </deprecated>
 		public static Query Parse(System.String[] queries, System.String[] fields, Analyzer analyzer)
 		{
+			return Parse(Version.LUCENE_24, queries, fields, analyzer);
+		}
+		
+		/// <summary> Parses a query which searches on the fields specified.
+		/// <p>
+		/// If x fields are specified, this effectively constructs:
+		/// 
+		/// <pre>
+		/// &lt;code&gt;
+		/// (field1:query1) (field2:query2) (field3:query3)...(fieldx:queryx)
+		/// &lt;/code&gt;
+		/// </pre>
+		/// 
+		/// </summary>
+		/// <param name="matchVersion">Lucene version to match; this is passed through to
+		/// QueryParser.
+		/// </param>
+		/// <param name="queries">Queries strings to parse
+		/// </param>
+		/// <param name="fields">Fields to search on
+		/// </param>
+		/// <param name="analyzer">Analyzer to use
+		/// </param>
+		/// <throws>  ParseException </throws>
+		/// <summary>             if query parsing fails
+		/// </summary>
+		/// <throws>  IllegalArgumentException </throws>
+		/// <summary>             if the length of the queries array differs from the length of
+		/// the fields array
+		/// </summary>
+		public static Query Parse(Version matchVersion, System.String[] queries, System.String[] fields, Analyzer analyzer)
+		{
 			if (queries.Length != fields.Length)
 				throw new System.ArgumentException("queries.length != fields.length");
 			BooleanQuery bQuery = new BooleanQuery();
 			for (int i = 0; i < fields.Length; i++)
 			{
-				QueryParser qp = new QueryParser(fields[i], analyzer);
+				QueryParser qp = new QueryParser(matchVersion, fields[i], analyzer);
 				Query q = qp.Parse(queries[i]);
 				if (q != null && (!(q is BooleanQuery) || ((BooleanQuery) q).GetClauses().Length > 0))
 				{
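
The matching static overload added above takes one query string per field; a short sketch with illustrative field names:

    using Lucene.Net.Analysis;
    using Lucene.Net.QueryParsers;
    using Lucene.Net.Search;
    using Version = Lucene.Net.Util.Version;

    public static class PerFieldParseSketch
    {
        public static Query Build(Analyzer analyzer)
        {
            string[] queries = { "apache", "lucene" };
            string[] fields  = { "title", "body" };

            // effectively (title:apache) (body:lucene)
            return MultiFieldQueryParser.Parse(Version.LUCENE_24, queries, fields, analyzer);
        }
    }
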
@@ -272,14 +405,65 @@
 		/// <throws>  IllegalArgumentException if the length of the fields array differs </throws>
 		/// <summary>  from the length of the flags array
 		/// </summary>
+		/// <deprecated> Use
+		/// {@link #Parse(Version, String, String[], BooleanClause.Occur[], Analyzer)}
+		/// instead
+		/// </deprecated>
 		public static Query Parse(System.String query, System.String[] fields, BooleanClause.Occur[] flags, Analyzer analyzer)
 		{
+			return Parse(Version.LUCENE_24, query, fields, flags, analyzer);
+		}
+		
+		/// <summary> Parses a query, searching on the fields specified. Use this if you need
+		/// to specify certain fields as required, and others as prohibited.
+		/// <p>
+		/// 
+		/// <pre>
+		/// Usage:
+		/// &lt;code&gt;
+		/// String[] fields = {&quot;filename&quot;, &quot;contents&quot;, &quot;description&quot;};
+		/// BooleanClause.Occur[] flags = {BooleanClause.Occur.SHOULD,
+		/// BooleanClause.Occur.MUST,
+		/// BooleanClause.Occur.MUST_NOT};
+		/// MultiFieldQueryParser.parse(&quot;query&quot;, fields, flags, analyzer);
+		/// &lt;/code&gt;
+		/// </pre>
+		/// <p>
+		/// The code above would construct a query:
+		/// 
+		/// <pre>
+		/// &lt;code&gt;
+		/// (filename:query) +(contents:query) -(description:query)
+		/// &lt;/code&gt;
+		/// </pre>
+		/// 
+		/// </summary>
+		/// <param name="matchVersion">Lucene version to match; this is passed through to
+		/// QueryParser.
+		/// </param>
+		/// <param name="query">Query string to parse
+		/// </param>
+		/// <param name="fields">Fields to search on
+		/// </param>
+		/// <param name="flags">Flags describing the fields
+		/// </param>
+		/// <param name="analyzer">Analyzer to use
+		/// </param>
+		/// <throws>  ParseException </throws>
+		/// <summary>             if query parsing fails
+		/// </summary>
+		/// <throws>  IllegalArgumentException </throws>
+		/// <summary>             if the length of the fields array differs from the length of
+		/// the flags array
+		/// </summary>
+		public static Query Parse(Version matchVersion, System.String query, System.String[] fields, BooleanClause.Occur[] flags, Analyzer analyzer)
+		{
 			if (fields.Length != flags.Length)
 				throw new System.ArgumentException("fields.length != flags.length");
 			BooleanQuery bQuery = new BooleanQuery();
 			for (int i = 0; i < fields.Length; i++)
 			{
-				QueryParser qp = new QueryParser(fields[i], analyzer);
+				QueryParser qp = new QueryParser(matchVersion, fields[i], analyzer);
 				Query q = qp.Parse(query);
 				if (q != null && (!(q is BooleanQuery) || ((BooleanQuery) q).GetClauses().Length > 0))
 				{
@@ -324,14 +508,65 @@
 		/// <throws>  IllegalArgumentException if the length of the queries, fields, </throws>
 		/// <summary>  and flags array differ
 		/// </summary>
+		/// <deprecated> Use
+		/// {@link #Parse(Version, String[], String[], BooleanClause.Occur[], Analyzer)}
+		/// instead
+		/// </deprecated>
 		public static Query Parse(System.String[] queries, System.String[] fields, BooleanClause.Occur[] flags, Analyzer analyzer)
 		{
+			return Parse(Version.LUCENE_24, queries, fields, flags, analyzer);
+		}
+		
+		/// <summary> Parses a query, searching on the fields specified. Use this if you need
+		/// to specify certain fields as required, and others as prohibited.
+		/// <p>
+		/// 
+		/// <pre>
+		/// Usage:
+		/// &lt;code&gt;
+		/// String[] query = {&quot;query1&quot;, &quot;query2&quot;, &quot;query3&quot;};
+		/// String[] fields = {&quot;filename&quot;, &quot;contents&quot;, &quot;description&quot;};
+		/// BooleanClause.Occur[] flags = {BooleanClause.Occur.SHOULD,
+		/// BooleanClause.Occur.MUST,
+		/// BooleanClause.Occur.MUST_NOT};
+		/// MultiFieldQueryParser.parse(query, fields, flags, analyzer);
+		/// &lt;/code&gt;
+		/// </pre>
+		/// <p>
+		/// The code above would construct a query:
+		/// 
+		/// <pre>
+		/// &lt;code&gt;
+		/// (filename:query1) +(contents:query2) -(description:query3)
+		/// &lt;/code&gt;
+		/// </pre>
+		/// 
+		/// </summary>
+		/// <param name="matchVersion">Lucene version to match; this is passed through to
+		/// QueryParser.
+		/// </param>
+		/// <param name="queries">Queries string to parse
+		/// </param>
+		/// <param name="fields">Fields to search on
+		/// </param>
+		/// <param name="flags">Flags describing the fields
+		/// </param>
+		/// <param name="analyzer">Analyzer to use
+		/// </param>
+		/// <throws>  ParseException </throws>
+		/// <summary>             if query parsing fails
+		/// </summary>
+		/// <throws>  IllegalArgumentException </throws>
+		/// <summary>             if the length of the queries, fields, and flags array differ
+		/// </summary>
+		public static Query Parse(Version matchVersion, System.String[] queries, System.String[] fields, BooleanClause.Occur[] flags, Analyzer analyzer)
+		{
 			if (!(queries.Length == fields.Length && queries.Length == flags.Length))
 				throw new System.ArgumentException("queries, fields, and flags array have have different length");
 			BooleanQuery bQuery = new BooleanQuery();
 			for (int i = 0; i < fields.Length; i++)
 			{
-				QueryParser qp = new QueryParser(fields[i], analyzer);
+				QueryParser qp = new QueryParser(matchVersion, fields[i], analyzer);
 				Query q = qp.Parse(queries[i]);
 				if (q != null && (!(q is BooleanQuery) || ((BooleanQuery) q).GetClauses().Length > 0))
 				{

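The flag-driven overloads document their own usage; a compilable C# rendering of that example, using the illustrative field names from the doc comments above:

    using Lucene.Net.Analysis;
    using Lucene.Net.QueryParsers;
    using Lucene.Net.Search;
    using Version = Lucene.Net.Util.Version;

    public static class FlaggedParseSketch
    {
        public static Query Build(Analyzer analyzer)
        {
            string[] fields = { "filename", "contents", "description" };
            BooleanClause.Occur[] flags = {
                BooleanClause.Occur.SHOULD,
                BooleanClause.Occur.MUST,
                BooleanClause.Occur.MUST_NOT
            };

            // (filename:query) +(contents:query) -(description:query)
            return MultiFieldQueryParser.Parse(Version.LUCENE_24, "query", fields, flags, analyzer);
        }
    }
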
Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParser.JJ
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/QueryParser/QueryParser.JJ?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParser.JJ (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParser.JJ Tue Nov 17 01:13:56 2009
@@ -59,6 +59,7 @@
 import org.apache.lucene.search.TermQuery;
 import org.apache.lucene.search.WildcardQuery;
 import org.apache.lucene.util.Parameter;
+import org.apache.lucene.util.Version;
 
 /**
  * This class is generated by JavaCC.  The most important method is
@@ -125,6 +126,14 @@
  * <p><b>NOTE</b>: there is a new QueryParser in contrib, which matches
  * the same syntax as this class, but is more modular,
  * enabling substantial customization to how a query is created.
+ *
+ * <a name="version"/>
+ * <p><b>NOTE</b>: You must specify the required {@link Version}
+ * compatibility when creating QueryParser:
+ * <ul>
+ *    <li> As of 2.9, {@link #setEnablePositionIncrements} is true by
+ *         default.
+ * </ul>
  */
 public class QueryParser {
 
@@ -149,7 +158,7 @@
   boolean lowercaseExpandedTerms = true;
   MultiTermQuery.RewriteMethod multiTermRewriteMethod = MultiTermQuery.CONSTANT_SCORE_AUTO_REWRITE_DEFAULT;
   boolean allowLeadingWildcard = false;
-  boolean enablePositionIncrements = false;
+  boolean enablePositionIncrements = true;
 
   Analyzer analyzer;
   String field;
@@ -182,11 +191,26 @@
   /** Constructs a query parser.
    *  @param f  the default field for query terms.
    *  @param a   used to find terms in the query text.
+   *  @deprecated Use {@link #QueryParser(Version, String, Analyzer)} instead
    */
   public QueryParser(String f, Analyzer a) {
+    this(Version.LUCENE_24, f, a);
+  }
+
+  /** Constructs a query parser.
+   *  @param matchVersion  Lucene version to match.  See <a href="#version">above</a>
+   *  @param f  the default field for query terms.
+   *  @param a   used to find terms in the query text.
+   */
+  public QueryParser(Version matchVersion, String f, Analyzer a) {
     this(new FastCharStream(new StringReader("")));
     analyzer = a;
     field = f;
+    if (matchVersion.onOrAfter(Version.LUCENE_29)) {
+      enablePositionIncrements = true;
+    } else {
+      enablePositionIncrements = false;
+    }
   }
 
   /** Parses a query string, returning a {@link org.apache.lucene.search.Query}.
@@ -1179,7 +1203,7 @@
       System.out.println("Usage: java org.apache.lucene.queryParser.QueryParser <input>");
       System.exit(0);
     }
-    QueryParser qp = new QueryParser("field",
+    QueryParser qp = new QueryParser(Version.LUCENE_CURRENT, "field",
                            new org.apache.lucene.analysis.SimpleAnalyzer());
     Query q = qp.parse(args[0]);
     System.out.println(q.toString("field"));

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParser.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/QueryParser/QueryParser.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParser.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParser.cs Tue Nov 17 01:13:56 2009
@@ -40,6 +40,7 @@
 using TermQuery = Lucene.Net.Search.TermQuery;
 using TermRangeQuery = Lucene.Net.Search.TermRangeQuery;
 using WildcardQuery = Lucene.Net.Search.WildcardQuery;
+using Version = Lucene.Net.Util.Version;
 
 namespace Lucene.Net.QueryParsers
 {
@@ -109,6 +110,17 @@
 	/// the same syntax as this class, but is more modular,
 	/// enabling substantial customization to how a query is created.
 	/// </summary>
+	/// 
+	/// <b>NOTE</b>: You must specify the required {@link Version} compatibility when
+	/// creating QueryParser:
+	/// <ul>
+	/// <li>As of 2.9, {@link #SetEnablePositionIncrements} is true by default.
+	/// </ul>
+	/// </summary>
 	public class QueryParser : QueryParserConstants
 	{
 		private void  InitBlock()
@@ -141,7 +153,7 @@
 		internal bool lowercaseExpandedTerms = true;
 		internal MultiTermQuery.RewriteMethod multiTermRewriteMethod;
 		internal bool allowLeadingWildcard = false;
-		internal bool enablePositionIncrements = false;
+		internal bool enablePositionIncrements = true;
 		
 		internal Analyzer analyzer;
 		internal System.String field;
@@ -178,10 +190,33 @@
 		/// </param>
 		/// <param name="a">  used to find terms in the query text.
 		/// </param>
-		public QueryParser(System.String f, Analyzer a):this(new FastCharStream(new System.IO.StringReader("")))
+		/// <deprecated> Use {@link #QueryParser(Version, String, Analyzer)} instead
+		/// </deprecated>
+		public QueryParser(System.String f, Analyzer a):this(Version.LUCENE_24, f, a)
+		{
+		}
+		
+		/// <summary> Constructs a query parser.
+		/// 
+		/// </summary>
+		/// <param name="matchVersion">Lucene version to match. See <a href="#version">above</a>)
+		/// </param>
+		/// <param name="f">the default field for query terms.
+		/// </param>
+		/// <param name="a">used to find terms in the query text.
+		/// </param>
+		public QueryParser(Version matchVersion, System.String f, Analyzer a):this(new FastCharStream(new System.IO.StringReader("")))
 		{
 			analyzer = a;
 			field = f;
+			if (matchVersion.OnOrAfter(Version.LUCENE_29))
+			{
+				enablePositionIncrements = true;
+			}
+			else
+			{
+				enablePositionIncrements = false;
+			}
 		}
 		
 		/// <summary>Parses a query string, returning a {@link Lucene.Net.Search.Query}.</summary>
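
A sketch of the version-dependent default introduced above: a LUCENE_29 parser honours position increments, a LUCENE_24 parser keeps the old behaviour. It assumes StopAnalyzer gained the matching Version constructor in this port (as in Java 2.9); with a stop word removed from the phrase, the two parsers build different phrase positions:

    using Lucene.Net.Analysis;
    using Lucene.Net.QueryParsers;
    using Lucene.Net.Search;
    using Version = Lucene.Net.Util.Version;

    public static class QueryParserVersionSketch
    {
        public static void Demo()
        {
            Analyzer analyzer = new StopAnalyzer(Version.LUCENE_CURRENT);

            QueryParser qp29 = new QueryParser(Version.LUCENE_29, "field", analyzer);
            QueryParser qp24 = new QueryParser(Version.LUCENE_24, "field", analyzer);

            // "the" is dropped by the analyzer; qp29 leaves a position gap in the
            // phrase, qp24 packs the remaining terms together.
            System.Console.WriteLine(qp29.Parse("\"the quick fox\"").ToString("field"));
            System.Console.WriteLine(qp24.Parse("\"the quick fox\"").ToString("field"));
        }
    }
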
@@ -867,7 +902,7 @@
                 if (resolution == null)
                 {
                     // no default or field specific date resolution has been set,
-                    // use deprecated DateField to maintain compatibilty with
+					// use deprecated DateField to maintain compatibility with
                     // pre-1.9 Lucene versions.
                     part1 = DateField.DateToString(d1);
                     part2 = DateField.DateToString(d2);
@@ -1333,7 +1368,7 @@
 				System.Console.Out.WriteLine("Usage: java Lucene.Net.QueryParsers.QueryParser <input>");
 				System.Environment.Exit(0);
 			}
-			QueryParser qp = new QueryParser("field", new Lucene.Net.Analysis.SimpleAnalyzer());
+			QueryParser qp = new QueryParser(Version.LUCENE_CURRENT, "field", new Lucene.Net.Analysis.SimpleAnalyzer());
 			Query q = qp.Parse(args[0]);
 			System.Console.Out.WriteLine(q.ToString("field"));
 		}
@@ -1962,6 +1997,15 @@
 			}
 		}
 		
+		private bool Jj_3R_2()
+		{
+			if (Jj_scan_token(Lucene.Net.QueryParsers.QueryParserConstants.TERM))
+				return true;
+			if (Jj_scan_token(Lucene.Net.QueryParsers.QueryParserConstants.COLON))
+				return true;
+			return false;
+		}
+		
 		private bool Jj_3_1()
 		{
 			Token xsp;
@@ -1984,15 +2028,6 @@
 			return false;
 		}
 		
-		private bool Jj_3R_2()
-		{
-			if (Jj_scan_token(Lucene.Net.QueryParsers.QueryParserConstants.TERM))
-				return true;
-			if (Jj_scan_token(Lucene.Net.QueryParsers.QueryParserConstants.COLON))
-				return true;
-			return false;
-		}
-		
 		/// <summary>Generated Token Manager. </summary>
 		public QueryParserTokenManager token_source;
 		/// <summary>Current token. </summary>
@@ -2019,7 +2054,7 @@
 		private int jj_gc = 0;
 		
 		/// <summary>Constructor with user supplied CharStream. </summary>
-		public QueryParser(CharStream stream)
+		protected internal QueryParser(CharStream stream)
 		{
 			InitBlock();
 			token_source = new QueryParserTokenManager(stream);
@@ -2046,7 +2081,7 @@
 		}
 		
 		/// <summary>Constructor with generated Token Manager. </summary>
-		public QueryParser(QueryParserTokenManager tm)
+		protected internal QueryParser(QueryParserTokenManager tm)
 		{
 			InitBlock();
 			token_source = tm;

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParserTokenManager.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/QueryParser/QueryParserTokenManager.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParserTokenManager.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/QueryParserTokenManager.cs Tue Nov 17 01:13:56 2009
@@ -40,6 +40,7 @@
 using TermQuery = Lucene.Net.Search.TermQuery;
 using TermRangeQuery = Lucene.Net.Search.TermRangeQuery;
 using WildcardQuery = Lucene.Net.Search.WildcardQuery;
+using Version = Lucene.Net.Util.Version;
 
 namespace Lucene.Net.QueryParsers
 {

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/FuzzyQuery.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Search/FuzzyQuery.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/FuzzyQuery.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/FuzzyQuery.cs Tue Nov 17 01:13:56 2009
@@ -133,8 +133,8 @@
 		{
 			if (!termLongEnough)
 			{
-				// can't match
-				return new BooleanQuery();
+				// can only match if it's exact
+				return new TermQuery(term);
 			}
 			
 			FilteredTermEnum enumerator = GetEnum(reader);

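The change above alters FuzzyQuery's rewrite when the term is too short for the similarity threshold ("termLongEnough" is false): instead of an always-empty BooleanQuery it now degrades to an exact TermQuery. A hedged illustration; whether a particular term trips the threshold depends on internal arithmetic, so the comment is indicative only:

    using Lucene.Net.Index;
    using Lucene.Net.Search;

    public static class FuzzyFallbackSketch
    {
        public static Query Build()
        {
            // A one-character term with minimumSimilarity 0.5 cannot be matched
            // approximately; after this change it can still match exactly.
            return new FuzzyQuery(new Term("body", "a"), 0.5f, 0);
        }
    }
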
Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/Hits.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Search/Hits.cs?rev=881077&r1=881076&r2=881077&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/Hits.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Search/Hits.cs Tue Nov 17 01:13:56 2009
@@ -23,30 +23,33 @@
 namespace Lucene.Net.Search
 {
 	
-	/// <summary>A ranked list of documents, used to hold search results.
+	/// <summary> A ranked list of documents, used to hold search results.
 	/// <p>
-	/// <b>Caution:</b> Iterate only over the hits needed.  Iterating over all
-	/// hits is generally not desirable and may be the source of
-	/// performance issues. If you need to iterate over many or all hits, consider
-	/// using the search method that takes a {@link HitCollector}.
+	/// <b>Caution:</b> Iterate only over the hits needed. Iterating over all hits is
+	/// generally not desirable and may be the source of performance issues. If you
+	/// need to iterate over many or all hits, consider using the search method that
+	/// takes a {@link HitCollector}.
 	/// </p>
-	/// <p><b>Note:</b> Deleting matching documents concurrently with traversing 
-	/// the hits, might, when deleting hits that were not yet retrieved, decrease
-	/// {@link #Length()}. In such case, 
-	/// {@link java.util.ConcurrentModificationException ConcurrentModificationException}
-	/// is thrown when accessing hit <code>n</code> &ge; current_{@link #Length()} 
-	/// (but <code>n</code> &lt; {@link #Length()}_at_start). 
+	/// <p>
+	/// <b>Note:</b> Deleting matching documents concurrently with traversing the
+	/// hits, might, when deleting hits that were not yet retrieved, decrease
+	/// {@link #Length()}. In such case,
+	/// {@link java.util.ConcurrentModificationException
+	/// ConcurrentModificationException} is thrown when accessing hit <code>n</code>
+	/// &ge; current_{@link #Length()} (but <code>n</code> &lt; {@link #Length()}
+	/// _at_start).
 	/// 
 	/// </summary>
-	/// <deprecated>
-	/// see {@link TopScoreDocCollector} and {@link TopDocs} :<br>
+	/// <deprecated> see {@link Searcher#Search(Query, int)},
+	/// {@link Searcher#Search(Query, Filter, int)} and
+	/// {@link Searcher#Search(Query, Filter, int, Sort)}:<br>
+	/// 
 	/// <pre>
-	/// TopScoreDocCollector collector = new TopScoreDocCollector(hitsPerPage);
-	/// searcher.search(query, collector);
-	/// ScoreDoc[] hits = collector.topDocs().scoreDocs;
-	/// for (int i = 0; i < hits.length; i++) {
+	/// TopDocs topDocs = searcher.Search(query, numHits);
+	/// ScoreDoc[] hits = topDocs.scoreDocs;
+	/// for (int i = 0; i &lt; hits.Length; i++) {
 	/// int docId = hits[i].doc;
-	/// Document d = searcher.doc(docId);
+	/// Document d = searcher.Doc(docId);
 	/// // do something with current hit
 	/// ...
 	/// </pre>
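
A compilable version of the replacement pattern from the deprecation note above; member casing (scoreDocs, doc) follows that note, and the port may expose differently cased accessors:

    using Lucene.Net.Documents;
    using Lucene.Net.Search;

    public static class TopDocsSketch
    {
        public static void PrintField(Searcher searcher, Query query, int numHits, string fieldName)
        {
            TopDocs topDocs = searcher.Search(query, numHits);
            ScoreDoc[] hits = topDocs.scoreDocs;
            for (int i = 0; i < hits.Length; i++)
            {
                int docId = hits[i].doc;
                Document d = searcher.Doc(docId);
                System.Console.WriteLine(d.Get(fieldName));
            }
        }
    }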


