lucene-dev mailing list archives

From Robert Muir <rcm...@gmail.com>
Subject Re: svn commit: r829206 [1/3] - in /lucene/java/trunk: ./ contrib/ contrib/analyzers/common/src/java/org/apache/lucene/analysis/ar/ contrib/analyzers/common/src/java/org/apache/lucene/analysis/br/ contrib/analyzers/common/src/java/org/apache/lucene/a
Date Fri, 30 Oct 2009 12:35:50 GMT
Hey, I just now noticed this.
I don't think we should remove RussianLowerCaseFilter... can we restore it?
It was only marked as deprecated in 3.0, to be removed in a future
version :)

 * LUCENE-1936: Deprecated RussianLowerCaseFilter, because it transforms
   text exactly the same as LowerCaseFilter. Please use LowerCaseFilter
   instead, which has the same functionality.

Again, sorry I didn't figure this out in 2.9 so we could have deprecated it
back then...
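For context, the mechanism the quoted commit below introduces is a version-gated default: each analyzer now stores the matchVersion it was constructed with, and StopFilter's position-increment default flips to true for 2.9 and later. Here is a minimal, self-contained sketch of that pattern (illustrative only — the real logic lives in org.apache.lucene.analysis.StopFilter and org.apache.lucene.util.Version; this trimmed-down Version enum is an assumption for the sketch, not Lucene's class):

```java
// Simplified sketch of the version-gated default pattern (not Lucene's
// actual source; the real check is in org.apache.lucene.analysis.StopFilter).
enum Version {
  LUCENE_24, LUCENE_29, LUCENE_30;

  // true if this version is the same as, or newer than, `other`
  boolean onOrAfter(Version other) {
    return compareTo(other) >= 0;
  }
}

class StopFilterSketch {
  // Mirrors StopFilter.getEnablePositionIncrementsVersionDefault:
  // position increments are enabled by default as of 2.9.
  static boolean getEnablePositionIncrementsVersionDefault(Version matchVersion) {
    return matchVersion.onOrAfter(Version.LUCENE_29);
  }

  public static void main(String[] args) {
    // Old index compatibility vs. new default behavior:
    System.out.println(getEnablePositionIncrementsVersionDefault(Version.LUCENE_24));
    System.out.println(getEnablePositionIncrementsVersionDefault(Version.LUCENE_30));
  }
}
```

The point of the pattern: an application that passes Version.LUCENE_24 keeps the pre-2.9 behavior (no position increments across removed stopwords), while one that passes Version.LUCENE_29 or later picks up the new default — a behavior change without a hard break.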

On Fri, Oct 23, 2009 at 4:25 PM, <mikemccand@apache.org> wrote:

> Author: mikemccand
> Date: Fri Oct 23 20:25:17 2009
> New Revision: 829206
>
> URL: http://svn.apache.org/viewvc?rev=829206&view=rev
> Log:
> LUCENE-2002: add Version to QueryParser & contrib analyzers
>
> Removed:
>
>  lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/ru/RussianLowerCaseFilter.java
> Modified:
>    lucene/java/trunk/   (props changed)
>    lucene/java/trunk/CHANGES.txt
>    lucene/java/trunk/build.xml
>    lucene/java/trunk/common-build.xml
>    lucene/java/trunk/contrib/CHANGES.txt
>
>  lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/ar/ArabicAnalyzer.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/br/BrazilianAnalyzer.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/cjk/CJKAnalyzer.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/cz/CzechAnalyzer.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/de/GermanAnalyzer.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/el/GreekAnalyzer.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/fa/PersianAnalyzer.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/fr/FrenchAnalyzer.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/nl/DutchAnalyzer.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/query/QueryAutoStopWordAnalyzer.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/ru/RussianAnalyzer.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/th/ThaiAnalyzer.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/ar/TestArabicAnalyzer.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/br/TestBrazilianStemmer.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/cjk/TestCJKTokenizer.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/cz/TestCzechAnalyzer.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/de/TestGermanStemFilter.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/el/GreekAnalyzerTest.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/fa/TestPersianAnalyzer.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/fr/TestElision.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/fr/TestFrenchAnalyzer.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/nl/TestDutchStemmer.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/query/QueryAutoStopWordAnalyzerTest.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/ru/TestRussianAnalyzer.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/shingle/ShingleAnalyzerWrapperTest.java
>
>  lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/th/TestThaiAnalyzer.java
>
>  lucene/java/trunk/contrib/analyzers/smartcn/src/java/org/apache/lucene/analysis/cn/smart/SmartChineseAnalyzer.java
>
>  lucene/java/trunk/contrib/analyzers/smartcn/src/test/org/apache/lucene/analysis/cn/smart/TestSmartChineseAnalyzer.java
>
>  lucene/java/trunk/contrib/ant/src/test/org/apache/lucene/ant/IndexTaskTest.java
>
>  lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/feeds/EnwikiQueryMaker.java
>
>  lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/feeds/FileBasedQueryMaker.java
>
>  lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/feeds/ReutersQueryMaker.java
>
>  lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/byTask/feeds/SimpleQueryMaker.java
>
>  lucene/java/trunk/contrib/benchmark/src/java/org/apache/lucene/benchmark/quality/utils/SimpleQQParser.java
>
>  lucene/java/trunk/contrib/collation/src/test/org/apache/lucene/collation/CollationTestBase.java
>
>  lucene/java/trunk/contrib/fast-vector-highlighter/src/java/org/apache/lucene/search/vectorhighlight/FieldTermStack.java
>
>  lucene/java/trunk/contrib/fast-vector-highlighter/src/test/org/apache/lucene/search/vectorhighlight/AbstractTestCase.java
>
>  lucene/java/trunk/contrib/highlighter/src/test/org/apache/lucene/search/highlight/HighlighterTest.java
>    lucene/java/trunk/contrib/lucli/src/java/lucli/LuceneMethods.java
>
>  lucene/java/trunk/contrib/memory/src/java/org/apache/lucene/index/memory/PatternAnalyzer.java
>
>  lucene/java/trunk/contrib/memory/src/test/org/apache/lucene/index/memory/MemoryIndexTest.java
>
>  lucene/java/trunk/contrib/memory/src/test/org/apache/lucene/index/memory/PatternAnalyzerTest.java
>
>  lucene/java/trunk/contrib/misc/src/java/org/apache/lucene/queryParser/analyzing/AnalyzingQueryParser.java
>
>  lucene/java/trunk/contrib/misc/src/java/org/apache/lucene/queryParser/complexPhrase/ComplexPhraseQueryParser.java
>
>  lucene/java/trunk/contrib/misc/src/test/org/apache/lucene/queryParser/analyzing/TestAnalyzingQueryParser.java
>
>  lucene/java/trunk/contrib/misc/src/test/org/apache/lucene/queryParser/complexPhrase/TestComplexPhraseQuery.java
>
>  lucene/java/trunk/contrib/queryparser/src/test/org/apache/lucene/queryParser/standard/TestMultiAnalyzerQPHelper.java
>
>  lucene/java/trunk/contrib/queryparser/src/test/org/apache/lucene/queryParser/standard/TestMultiAnalyzerWrapper.java
>
>  lucene/java/trunk/contrib/queryparser/src/test/org/apache/lucene/queryParser/standard/TestQPHelper.java
>
>  lucene/java/trunk/contrib/queryparser/src/test/org/apache/lucene/queryParser/standard/TestQueryParserWrapper.java
>
>  lucene/java/trunk/contrib/snowball/src/java/org/apache/lucene/analysis/snowball/SnowballAnalyzer.java
>
>  lucene/java/trunk/contrib/snowball/src/test/org/apache/lucene/analysis/snowball/TestSnowball.java
>
>  lucene/java/trunk/contrib/swing/src/java/org/apache/lucene/swing/models/ListSearcher.java
>
>  lucene/java/trunk/contrib/swing/src/java/org/apache/lucene/swing/models/TableSearcher.java
>
>  lucene/java/trunk/contrib/xml-query-parser/src/java/org/apache/lucene/xmlparser/builders/UserInputQueryBuilder.java
>    lucene/java/trunk/src/demo/org/apache/lucene/demo/SearchFiles.java
>    lucene/java/trunk/src/java/org/apache/lucene/analysis/StopAnalyzer.java
>    lucene/java/trunk/src/java/org/apache/lucene/analysis/StopFilter.java
>
>  lucene/java/trunk/src/java/org/apache/lucene/analysis/standard/StandardAnalyzer.java
>
>  lucene/java/trunk/src/java/org/apache/lucene/analysis/standard/StandardTokenizer.java
>
>  lucene/java/trunk/src/java/org/apache/lucene/queryParser/MultiFieldQueryParser.java
>
>  lucene/java/trunk/src/java/org/apache/lucene/queryParser/QueryParser.java
>    lucene/java/trunk/src/java/org/apache/lucene/queryParser/QueryParser.jj
>
>  lucene/java/trunk/src/java/org/apache/lucene/queryParser/QueryParserTokenManager.java
>    lucene/java/trunk/src/test/org/apache/lucene/TestDemo.java
>    lucene/java/trunk/src/test/org/apache/lucene/TestSearch.java
>
>  lucene/java/trunk/src/test/org/apache/lucene/TestSearchForDuplicates.java
>    lucene/java/trunk/src/test/org/apache/lucene/analysis/TestAnalyzers.java
>
>  lucene/java/trunk/src/test/org/apache/lucene/analysis/TestISOLatin1AccentFilter.java
>   (props changed)
>
>  lucene/java/trunk/src/test/org/apache/lucene/analysis/TestKeywordAnalyzer.java
>
>  lucene/java/trunk/src/test/org/apache/lucene/analysis/TestStandardAnalyzer.java
>
>  lucene/java/trunk/src/test/org/apache/lucene/analysis/TestStopAnalyzer.java
>
>  lucene/java/trunk/src/test/org/apache/lucene/analysis/TestTeeSinkTokenFilter.java
>    lucene/java/trunk/src/test/org/apache/lucene/document/TestDateTools.java
>   (props changed)
>
>  lucene/java/trunk/src/test/org/apache/lucene/document/TestNumberTools.java
>   (props changed)
>
>  lucene/java/trunk/src/test/org/apache/lucene/index/TestBackwardsCompatibility.java
>   (props changed)
>    lucene/java/trunk/src/test/org/apache/lucene/index/TestIndexWriter.java
>
>  lucene/java/trunk/src/test/org/apache/lucene/queryParser/TestMultiAnalyzer.java
>
>  lucene/java/trunk/src/test/org/apache/lucene/queryParser/TestMultiFieldQueryParser.java
>
>  lucene/java/trunk/src/test/org/apache/lucene/queryParser/TestQueryParser.java
>    lucene/java/trunk/src/test/org/apache/lucene/search/TestBoolean2.java
>    lucene/java/trunk/src/test/org/apache/lucene/search/TestDateSort.java
>
>  lucene/java/trunk/src/test/org/apache/lucene/search/TestExplanations.java
>    lucene/java/trunk/src/test/org/apache/lucene/search/TestFuzzyQuery.java
>
>  lucene/java/trunk/src/test/org/apache/lucene/search/TestMatchAllDocsQuery.java
>
>  lucene/java/trunk/src/test/org/apache/lucene/search/TestMultiSearcher.java
>
>  lucene/java/trunk/src/test/org/apache/lucene/search/TestMultiSearcherRanking.java
>    lucene/java/trunk/src/test/org/apache/lucene/search/TestNot.java
>    lucene/java/trunk/src/test/org/apache/lucene/search/TestPhraseQuery.java
>
>  lucene/java/trunk/src/test/org/apache/lucene/search/TestPositionIncrement.java
>
>  lucene/java/trunk/src/test/org/apache/lucene/search/TestSimpleExplanations.java
>
>  lucene/java/trunk/src/test/org/apache/lucene/search/TestTimeLimitingCollector.java
>    lucene/java/trunk/src/test/org/apache/lucene/search/TestWildcard.java
>
>  lucene/java/trunk/src/test/org/apache/lucene/search/function/TestCustomScoreQuery.java
>
>  lucene/java/trunk/src/test/org/apache/lucene/search/spans/TestNearSpansOrdered.java
>
> Propchange: lucene/java/trunk/
>
> ------------------------------------------------------------------------------
> --- svn:mergeinfo (original)
> +++ svn:mergeinfo Fri Oct 23 20:25:17 2009
> @@ -1,3 +1,3 @@
>  /lucene/java/branches/lucene_2_4:748824
> -/lucene/java/branches/lucene_2_9:817269-818600,825998
> +/lucene/java/branches/lucene_2_9:817269-818600,825998,829134
>  /lucene/java/branches/lucene_2_9_back_compat_tests:818601-821336
>
> Modified: lucene/java/trunk/CHANGES.txt
> URL:
> http://svn.apache.org/viewvc/lucene/java/trunk/CHANGES.txt?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> --- lucene/java/trunk/CHANGES.txt (original)
> +++ lucene/java/trunk/CHANGES.txt Fri Oct 23 20:25:17 2009
> @@ -137,6 +137,11 @@
>  * LUCENE-1183: Optimize Levenshtein Distance computation in
>   FuzzyQuery.  (Cédrik Lime via Mike McCandless)
>
> + * LUCENE-2002: Add required Version matchVersion argument when
> +   constructing QueryParser or MultiFieldQueryParser and, default (as
> +   of 2.9) enablePositionIncrements to true to match
> +   StandardAnalyzer's 2.9 default (Uwe Schindler, Mike McCandless)
> +
>  Documentation
>
>  Build
>
> Modified: lucene/java/trunk/build.xml
> URL:
> http://svn.apache.org/viewvc/lucene/java/trunk/build.xml?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> --- lucene/java/trunk/build.xml (original)
> +++ lucene/java/trunk/build.xml Fri Oct 23 20:25:17 2009
> @@ -580,9 +580,21 @@
>   <target name="javacc"
> depends="clean-javacc,javacc-QueryParser,javacc-HTMLParser,javacc-contrib-queryparser,
> javacc-contrib-surround, javacc-contrib-precedence"/>
>
>   <target name="javacc-QueryParser" depends="init,javacc-check"
> if="javacc.present">
> -    <invoke-javacc
> target="src/java/org/apache/lucene/queryParser/QueryParser.jj"
> -                   outputDir="src/java/org/apache/lucene/queryParser"
> -    />
> +    <sequential>
> +      <invoke-javacc
> target="src/java/org/apache/lucene/queryParser/QueryParser.jj"
> +                     outputDir="src/java/org/apache/lucene/queryParser"/>
> +
> +      <!-- Change the incorrect public ctors for QueryParser to be
> protected instead -->
> +      <replaceregexp
> file="src/java/org/apache/lucene/queryParser/QueryParser.java"
> +                    byline="true"
> +                    match="public QueryParser\(CharStream "
> +                    replace="protected QueryParser(CharStream "/>
> +      <replaceregexp
> file="src/java/org/apache/lucene/queryParser/QueryParser.java"
> +                    byline="true"
> +                    match="public QueryParser\(QueryParserTokenManager "
> +                    replace="protected QueryParser(QueryParserTokenManager
> "/>
> +
> +    </sequential>
>   </target>
>
>   <target name="javacc-HTMLParser" depends="init,javacc-check"
> if="javacc.present">
>
> Modified: lucene/java/trunk/common-build.xml
> URL:
> http://svn.apache.org/viewvc/lucene/java/trunk/common-build.xml?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> --- lucene/java/trunk/common-build.xml (original)
> +++ lucene/java/trunk/common-build.xml Fri Oct 23 20:25:17 2009
> @@ -42,7 +42,7 @@
>   <property name="Name" value="Lucene"/>
>   <property name="dev.version" value="3.0-dev"/>
>   <property name="version" value="${dev.version}"/>
> -  <property name="compatibility.tag"
> value="lucene_2_9_back_compat_tests_20091023"/>
> +  <property name="compatibility.tag"
> value="lucene_2_9_back_compat_tests_20091023a"/>
>   <property name="spec.version" value="${version}"/>
>   <property name="year" value="2000-${current.year}"/>
>   <property name="final.name" value="lucene-${name}-${version}"/>
>
> Modified: lucene/java/trunk/contrib/CHANGES.txt
> URL:
> http://svn.apache.org/viewvc/lucene/java/trunk/contrib/CHANGES.txt?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> --- lucene/java/trunk/contrib/CHANGES.txt (original)
> +++ lucene/java/trunk/contrib/CHANGES.txt Fri Oct 23 20:25:17 2009
> @@ -25,6 +25,12 @@
>    text exactly the same as LowerCaseFilter. Please use LowerCaseFilter
>    instead, which has the same functionality.  (Robert Muir)
>
> + * LUCENE-2002: Add required Version matchVersion argument when
> +   constructing ComplexPhraseQueryParser and default (as of 2.9)
> +   enablePositionIncrements to true to match StandardAnalyzer's
> +   default.  Also added required matchVersion to most of the analyzers
> +   (Uwe Schindler, Mike McCandless)
> +
>  Bug fixes
>
>  * LUCENE-1781: Fixed various issues with the lat/lng bounding box
>
> Modified:
> lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/ar/ArabicAnalyzer.java
> URL:
> http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/ar/ArabicAnalyzer.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> ---
> lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/ar/ArabicAnalyzer.java
> (original)
> +++
> lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/ar/ArabicAnalyzer.java
> Fri Oct 23 20:25:17 2009
> @@ -33,6 +33,7 @@
>  import org.apache.lucene.analysis.TokenStream;
>  import org.apache.lucene.analysis.Tokenizer;
>  import org.apache.lucene.analysis.WordlistLoader;
> +import org.apache.lucene.util.Version;
>
>  /**
>  * {@link Analyzer} for Arabic.
> @@ -109,32 +110,38 @@
>     }
>   }
>
> +  private final Version matchVersion;
> +
>   /**
>    * Builds an analyzer with the default stop words: {@link
> #DEFAULT_STOPWORD_FILE}.
>    */
> -  public ArabicAnalyzer() {
> +  public ArabicAnalyzer(Version matchVersion) {
> +    this.matchVersion = matchVersion;
>     stoptable = DefaultSetHolder.DEFAULT_STOP_SET;
>   }
>
>   /**
>    * Builds an analyzer with the given stop words.
>    */
> -  public ArabicAnalyzer( String... stopwords ) {
> +  public ArabicAnalyzer( Version matchVersion, String... stopwords ) {
>     stoptable = StopFilter.makeStopSet( stopwords );
> +    this.matchVersion = matchVersion;
>   }
>
>   /**
>    * Builds an analyzer with the given stop words.
>    */
> -  public ArabicAnalyzer( Hashtable<?,?> stopwords ) {
> -    stoptable = new HashSet( stopwords.keySet() );
> +  public ArabicAnalyzer( Version matchVersion, Hashtable<?,?> stopwords )
> {
> +    stoptable = new HashSet(stopwords.keySet());
> +    this.matchVersion = matchVersion;
>   }
>
>   /**
>    * Builds an analyzer with the given stop words.  Lines can be commented
> out using {@link #STOPWORDS_COMMENT}
>    */
> -  public ArabicAnalyzer( File stopwords ) throws IOException {
> +  public ArabicAnalyzer( Version matchVersion, File stopwords ) throws
> IOException {
>     stoptable = WordlistLoader.getWordSet( stopwords, STOPWORDS_COMMENT);
> +    this.matchVersion = matchVersion;
>   }
>
>
> @@ -149,7 +156,8 @@
>     TokenStream result = new ArabicLetterTokenizer( reader );
>     result = new LowerCaseFilter(result);
>     // the order here is important: the stopword list is not normalized!
> -    result = new StopFilter(false, result, stoptable );
> +    result = new StopFilter(
> StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                             result, stoptable );
>     result = new ArabicNormalizationFilter( result );
>     result = new ArabicStemFilter( result );
>
> @@ -177,7 +185,8 @@
>       streams.source = new ArabicLetterTokenizer(reader);
>       streams.result = new LowerCaseFilter(streams.source);
>       // the order here is important: the stopword list is not normalized!
> -      streams.result = new StopFilter(false, streams.result, stoptable);
> +      streams.result = new
> StopFilter(StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                                      streams.result, stoptable);
>       streams.result = new ArabicNormalizationFilter(streams.result);
>       streams.result = new ArabicStemFilter(streams.result);
>       setPreviousTokenStream(streams);
>
> Modified:
> lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/br/BrazilianAnalyzer.java
> URL:
> http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/br/BrazilianAnalyzer.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> ---
> lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/br/BrazilianAnalyzer.java
> (original)
> +++
> lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/br/BrazilianAnalyzer.java
> Fri Oct 23 20:25:17 2009
> @@ -33,6 +33,7 @@
>  import org.apache.lucene.analysis.WordlistLoader;
>  import org.apache.lucene.analysis.standard.StandardFilter;
>  import org.apache.lucene.analysis.standard.StandardTokenizer;
> +import org.apache.lucene.util.Version;
>
>  /**
>  * {@link Analyzer} for Brazilian Portuguese language.
> @@ -41,6 +42,9 @@
>  * will not be indexed at all) and an external list of exclusions (words
> that will
>  * not be stemmed, but indexed).
>  * </p>
> + *
> + * <p><b>NOTE</b>: This class uses the same {@link Version}
> + * dependent settings as {@link StandardAnalyzer}.</p>
>  */
>  public final class BrazilianAnalyzer extends Analyzer {
>
> @@ -78,33 +82,38 @@
>         * Contains words that should be indexed but not stemmed.
>         */
>        private Set excltable = Collections.emptySet();
> +        private final Version matchVersion;
>
>        /**
>         * Builds an analyzer with the default stop words ({@link
> #BRAZILIAN_STOP_WORDS}).
>         */
> -       public BrazilianAnalyzer() {
> -               stoptable = StopFilter.makeStopSet( BRAZILIAN_STOP_WORDS );
> +       public BrazilianAnalyzer(Version matchVersion) {
> +          stoptable = StopFilter.makeStopSet( BRAZILIAN_STOP_WORDS );
> +          this.matchVersion = matchVersion;
>        }
>
>        /**
>         * Builds an analyzer with the given stop words.
>         */
> -       public BrazilianAnalyzer( String... stopwords ) {
> -               stoptable = StopFilter.makeStopSet( stopwords );
> +        public BrazilianAnalyzer( Version matchVersion, String...
> stopwords ) {
> +          stoptable = StopFilter.makeStopSet( stopwords );
> +          this.matchVersion = matchVersion;
>        }
>
>        /**
>         * Builds an analyzer with the given stop words.
>         */
> -       public BrazilianAnalyzer( Map stopwords ) {
> -               stoptable = new HashSet(stopwords.keySet());
> +        public BrazilianAnalyzer( Version matchVersion, Map stopwords ) {
> +          stoptable = new HashSet(stopwords.keySet());
> +          this.matchVersion = matchVersion;
>        }
>
>        /**
>         * Builds an analyzer with the given stop words.
>         */
> -       public BrazilianAnalyzer( File stopwords ) throws IOException {
> -               stoptable = WordlistLoader.getWordSet( stopwords );
> +        public BrazilianAnalyzer( Version matchVersion, File stopwords )
> throws IOException {
> +          stoptable = WordlistLoader.getWordSet( stopwords );
> +          this.matchVersion = matchVersion;
>        }
>
>        /**
> @@ -137,10 +146,11 @@
>         *          {@link BrazilianStemFilter}.
>         */
>        public final TokenStream tokenStream(String fieldName, Reader
> reader) {
> -               TokenStream result = new StandardTokenizer( reader );
> +                TokenStream result = new StandardTokenizer( matchVersion,
> reader );
>                result = new LowerCaseFilter( result );
>                result = new StandardFilter( result );
> -               result = new StopFilter( false, result, stoptable );
> +               result = new StopFilter(
> StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                                         result, stoptable );
>                result = new BrazilianStemFilter( result, excltable );
>                return result;
>        }
> @@ -163,10 +173,11 @@
>       SavedStreams streams = (SavedStreams) getPreviousTokenStream();
>       if (streams == null) {
>         streams = new SavedStreams();
> -        streams.source = new StandardTokenizer(reader);
> +        streams.source = new StandardTokenizer(matchVersion, reader);
>         streams.result = new LowerCaseFilter(streams.source);
>         streams.result = new StandardFilter(streams.result);
> -        streams.result = new StopFilter(false, streams.result, stoptable);
> +        streams.result = new
> StopFilter(StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                                        streams.result, stoptable);
>         streams.result = new BrazilianStemFilter(streams.result,
> excltable);
>         setPreviousTokenStream(streams);
>       } else {
>
> Modified:
> lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/cjk/CJKAnalyzer.java
> URL:
> http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/cjk/CJKAnalyzer.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> ---
> lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/cjk/CJKAnalyzer.java
> (original)
> +++
> lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/cjk/CJKAnalyzer.java
> Fri Oct 23 20:25:17 2009
> @@ -21,6 +21,7 @@
>  import org.apache.lucene.analysis.StopFilter;
>  import org.apache.lucene.analysis.TokenStream;
>  import org.apache.lucene.analysis.Tokenizer;
> +import org.apache.lucene.util.Version;
>
>  import java.io.IOException;
>  import java.io.Reader;
> @@ -56,14 +57,16 @@
>    * stop word list
>    */
>   private final Set stopTable;
> +  private final Version matchVersion;
>
>   //~ Constructors
> -----------------------------------------------------------
>
>   /**
>    * Builds an analyzer which removes words in {@link #STOP_WORDS}.
>    */
> -  public CJKAnalyzer() {
> +  public CJKAnalyzer(Version matchVersion) {
>     stopTable = StopFilter.makeStopSet(STOP_WORDS);
> +    this.matchVersion = matchVersion;
>   }
>
>   /**
> @@ -71,8 +74,9 @@
>    *
>    * @param stopWords stop word array
>    */
> -  public CJKAnalyzer(String... stopWords) {
> +  public CJKAnalyzer(Version matchVersion, String... stopWords) {
>     stopTable = StopFilter.makeStopSet(stopWords);
> +    this.matchVersion = matchVersion;
>   }
>
>   //~ Methods
> ----------------------------------------------------------------
> @@ -86,7 +90,8 @@
>    *    {@link StopFilter}
>    */
>   public final TokenStream tokenStream(String fieldName, Reader reader) {
> -    return new StopFilter(false, new CJKTokenizer(reader), stopTable);
> +    return new
> StopFilter(StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                          new CJKTokenizer(reader), stopTable);
>   }
>
>   private class SavedStreams {
> @@ -109,7 +114,8 @@
>     if (streams == null) {
>       streams = new SavedStreams();
>       streams.source = new CJKTokenizer(reader);
> -      streams.result = new StopFilter(false, streams.source, stopTable);
> +      streams.result = new
> StopFilter(StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                                      streams.source, stopTable);
>       setPreviousTokenStream(streams);
>     } else {
>       streams.source.reset(reader);
>
> Modified:
> lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/cz/CzechAnalyzer.java
> URL:
> http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/cz/CzechAnalyzer.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> ---
> lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/cz/CzechAnalyzer.java
> (original)
> +++
> lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/cz/CzechAnalyzer.java
> Fri Oct 23 20:25:17 2009
> @@ -25,6 +25,7 @@
>  import org.apache.lucene.analysis.WordlistLoader;
>  import org.apache.lucene.analysis.standard.StandardFilter;
>  import org.apache.lucene.analysis.standard.StandardTokenizer;
> +import org.apache.lucene.util.Version;
>
>  import java.io.*;
>  import java.util.HashSet;
> @@ -38,6 +39,9 @@
>  * will not be indexed at all).
>  * A default set of stopwords is used unless an alternative list is
> specified.
>  * </p>
> + *
> + * <p><b>NOTE</b>: This class uses the same {@link Version}
> + * dependent settings as {@link StandardAnalyzer}.</p>
>  */
>  public final class CzechAnalyzer extends Analyzer {
>
> @@ -69,30 +73,35 @@
>         * Contains the stopwords used with the {@link StopFilter}.
>         */
>        private Set stoptable;
> +        private final Version matchVersion;
>
>        /**
>         * Builds an analyzer with the default stop words ({@link
> #CZECH_STOP_WORDS}).
>         */
> -       public CzechAnalyzer() {
> -               stoptable = StopFilter.makeStopSet( CZECH_STOP_WORDS );
> +       public CzechAnalyzer(Version matchVersion) {
> +          stoptable = StopFilter.makeStopSet( CZECH_STOP_WORDS );
> +          this.matchVersion = matchVersion;
>        }
>
>        /**
>         * Builds an analyzer with the given stop words.
>         */
> -       public CzechAnalyzer( String... stopwords ) {
> -               stoptable = StopFilter.makeStopSet( stopwords );
> +        public CzechAnalyzer(Version matchVersion, String... stopwords) {
> +          stoptable = StopFilter.makeStopSet( stopwords );
> +          this.matchVersion = matchVersion;
>        }
>
> -       public CzechAnalyzer( HashSet stopwords ) {
> -               stoptable = stopwords;
> +        public CzechAnalyzer(Version matchVersion, HashSet stopwords) {
> +          stoptable = stopwords;
> +          this.matchVersion = matchVersion;
>        }
>
>        /**
>         * Builds an analyzer with the given stop words.
>         */
> -       public CzechAnalyzer( File stopwords ) throws IOException {
> -               stoptable = WordlistLoader.getWordSet( stopwords );
> +        public CzechAnalyzer(Version matchVersion, File stopwords ) throws
> IOException {
> +          stoptable = WordlistLoader.getWordSet( stopwords );
> +          this.matchVersion = matchVersion;
>        }
>
>     /**
> @@ -131,10 +140,11 @@
>         *                      {@link StandardFilter}, {@link
> LowerCaseFilter}, and {@link StopFilter}
>         */
>        public final TokenStream tokenStream( String fieldName, Reader
> reader ) {
> -               TokenStream result = new StandardTokenizer( reader );
> +                TokenStream result = new StandardTokenizer( matchVersion,
> reader );
>                result = new StandardFilter( result );
>                result = new LowerCaseFilter( result );
> -               result = new StopFilter(false, result, stoptable );
> +               result = new StopFilter(
> StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                                         result, stoptable );
>                return result;
>        }
>
> @@ -155,10 +165,11 @@
>       SavedStreams streams = (SavedStreams) getPreviousTokenStream();
>       if (streams == null) {
>         streams = new SavedStreams();
> -        streams.source = new StandardTokenizer(reader);
> +        streams.source = new StandardTokenizer(matchVersion, reader);
>         streams.result = new StandardFilter(streams.source);
>         streams.result = new LowerCaseFilter(streams.result);
> -        streams.result = new StopFilter(false, streams.result, stoptable);
> +        streams.result = new
> StopFilter(StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                                        streams.result, stoptable);
>         setPreviousTokenStream(streams);
>       } else {
>         streams.source.reset(reader);
>
> Modified:
> lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/de/GermanAnalyzer.java
> URL:
> http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/de/GermanAnalyzer.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> ---
> lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/de/GermanAnalyzer.java
> (original)
> +++
> lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/de/GermanAnalyzer.java
> Fri Oct 23 20:25:17 2009
> @@ -33,6 +33,7 @@
>  import org.apache.lucene.analysis.WordlistLoader;
>  import org.apache.lucene.analysis.standard.StandardFilter;
>  import org.apache.lucene.analysis.standard.StandardTokenizer;
> +import org.apache.lucene.util.Version;
>
>  /**
>  * {@link Analyzer} for German language.
> @@ -43,6 +44,9 @@
>  * A default set of stopwords is used unless an alternative list is specified, but the
>  * exclusion list is empty by default.
>  * </p>
> + *
> + * <p><b>NOTE</b>: This class uses the same {@link Version}
> + * dependent settings as {@link StandardAnalyzer}.</p>
>  */
>  public class GermanAnalyzer extends Analyzer {
>
> @@ -74,37 +78,43 @@
>    */
>   private Set exclusionSet = new HashSet();
>
> +  private final Version matchVersion;
> +
>   /**
>    * Builds an analyzer with the default stop words:
>    * {@link #GERMAN_STOP_WORDS}.
>    */
> -  public GermanAnalyzer() {
> +  public GermanAnalyzer(Version matchVersion) {
>     stopSet = StopFilter.makeStopSet(GERMAN_STOP_WORDS);
>     setOverridesTokenStreamMethod(GermanAnalyzer.class);
> +    this.matchVersion = matchVersion;
>   }
>
>   /**
>    * Builds an analyzer with the given stop words.
>    */
> -  public GermanAnalyzer(String... stopwords) {
> +  public GermanAnalyzer(Version matchVersion, String... stopwords) {
>     stopSet = StopFilter.makeStopSet(stopwords);
>     setOverridesTokenStreamMethod(GermanAnalyzer.class);
> +    this.matchVersion = matchVersion;
>   }
>
>   /**
>    * Builds an analyzer with the given stop words.
>    */
> -  public GermanAnalyzer(Map stopwords) {
> +  public GermanAnalyzer(Version matchVersion, Map stopwords) {
>     stopSet = new HashSet(stopwords.keySet());
>     setOverridesTokenStreamMethod(GermanAnalyzer.class);
> +    this.matchVersion = matchVersion;
>   }
>
>   /**
>    * Builds an analyzer with the given stop words.
>    */
> -  public GermanAnalyzer(File stopwords) throws IOException {
> +  public GermanAnalyzer(Version matchVersion, File stopwords) throws IOException {
>     stopSet = WordlistLoader.getWordSet(stopwords);
>     setOverridesTokenStreamMethod(GermanAnalyzer.class);
> +    this.matchVersion = matchVersion;
>   }
>
>   /**
> @@ -139,10 +149,11 @@
>    *         {@link GermanStemFilter}
>    */
>   public TokenStream tokenStream(String fieldName, Reader reader) {
> -    TokenStream result = new StandardTokenizer(reader);
> +    TokenStream result = new StandardTokenizer(matchVersion, reader);
>     result = new StandardFilter(result);
>     result = new LowerCaseFilter(result);
> -    result = new StopFilter(false, result, stopSet);
> +    result = new StopFilter(StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                            result, stopSet);
>     result = new GermanStemFilter(result, exclusionSet);
>     return result;
>   }
> @@ -171,10 +182,11 @@
>     SavedStreams streams = (SavedStreams) getPreviousTokenStream();
>     if (streams == null) {
>       streams = new SavedStreams();
> -      streams.source = new StandardTokenizer(reader);
> +      streams.source = new StandardTokenizer(matchVersion, reader);
>       streams.result = new StandardFilter(streams.source);
>       streams.result = new LowerCaseFilter(streams.result);
> -      streams.result = new StopFilter(false, streams.result, stopSet);
> +      streams.result = new StopFilter(StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                                      streams.result, stopSet);
>       streams.result = new GermanStemFilter(streams.result, exclusionSet);
>       setPreviousTokenStream(streams);
>     } else {
>
> Modified: lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/el/GreekAnalyzer.java
> URL: http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/el/GreekAnalyzer.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> --- lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/el/GreekAnalyzer.java (original)
> +++ lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/el/GreekAnalyzer.java Fri Oct 23 20:25:17 2009
> @@ -22,6 +22,7 @@
>  import org.apache.lucene.analysis.TokenStream;
>  import org.apache.lucene.analysis.Tokenizer;
>  import org.apache.lucene.analysis.standard.StandardTokenizer;
> +import org.apache.lucene.util.Version;
>
>  import java.io.IOException;
>  import java.io.Reader;
> @@ -36,6 +37,9 @@
>  * that will not be indexed at all).
>  * A default set of stopwords is used unless an alternative list is specified.
>  * </p>
> + *
> + * <p><b>NOTE</b>: This class uses the same {@link Version}
> + * dependent settings as {@link StandardAnalyzer}.</p>
>  */
>  public final class GreekAnalyzer extends Analyzer
>  {
> @@ -59,27 +63,33 @@
>      */
>     private Set stopSet = new HashSet();
>
> -    public GreekAnalyzer() {
> -        this(GREEK_STOP_WORDS);
> +    private final Version matchVersion;
> +
> +    public GreekAnalyzer(Version matchVersion) {
> +      super();
> +      stopSet = StopFilter.makeStopSet(GREEK_STOP_WORDS);
> +      this.matchVersion = matchVersion;
>     }
> -
> +
>     /**
>      * Builds an analyzer with the given stop words.
>      * @param stopwords Array of stopwords to use.
>      */
> -    public GreekAnalyzer(String... stopwords)
> +    public GreekAnalyzer(Version matchVersion, String... stopwords)
>     {
> -        super();
> -       stopSet = StopFilter.makeStopSet(stopwords);
> +      super();
> +      stopSet = StopFilter.makeStopSet(stopwords);
> +      this.matchVersion = matchVersion;
>     }
> -
> +
>     /**
>      * Builds an analyzer with the given stop words.
>      */
> -    public GreekAnalyzer(Map stopwords)
> +    public GreekAnalyzer(Version matchVersion, Map stopwords)
>     {
> -        super();
> -       stopSet = new HashSet(stopwords.keySet());
> +      super();
> +      stopSet = new HashSet(stopwords.keySet());
> +      this.matchVersion = matchVersion;
>     }
>
>     /**
> @@ -90,9 +100,10 @@
>      */
>     public TokenStream tokenStream(String fieldName, Reader reader)
>     {
> -       TokenStream result = new StandardTokenizer(reader);
> +        TokenStream result = new StandardTokenizer(matchVersion, reader);
>         result = new GreekLowerCaseFilter(result);
> -        result = new StopFilter(false, result, stopSet);
> +        result = new StopFilter(StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                                result, stopSet);
>         return result;
>     }
>
> @@ -113,9 +124,10 @@
>       SavedStreams streams = (SavedStreams) getPreviousTokenStream();
>       if (streams == null) {
>         streams = new SavedStreams();
> -        streams.source = new StandardTokenizer(reader);
> +        streams.source = new StandardTokenizer(matchVersion, reader);
>         streams.result = new GreekLowerCaseFilter(streams.source);
> -        streams.result = new StopFilter(false, streams.result, stopSet);
> +        streams.result = new StopFilter(StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                                        streams.result, stopSet);
>         setPreviousTokenStream(streams);
>       } else {
>         streams.source.reset(reader);
>
> Modified: lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/fa/PersianAnalyzer.java
> URL: http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/fa/PersianAnalyzer.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> --- lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/fa/PersianAnalyzer.java (original)
> +++ lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/fa/PersianAnalyzer.java Fri Oct 23 20:25:17 2009
> @@ -35,6 +35,7 @@
>  import org.apache.lucene.analysis.WordlistLoader;
>  import org.apache.lucene.analysis.ar.ArabicLetterTokenizer;
>  import org.apache.lucene.analysis.ar.ArabicNormalizationFilter;
> +import org.apache.lucene.util.Version;
>
>  /**
>  * {@link Analyzer} for Persian.
> @@ -106,36 +107,40 @@
>     }
>   }
>
> -
> +  private final Version matchVersion;
>
>   /**
>    * Builds an analyzer with the default stop words:
>    * {@link #DEFAULT_STOPWORD_FILE}.
>    */
> -  public PersianAnalyzer() {
> +  public PersianAnalyzer(Version matchVersion) {
>     stoptable = DefaultSetHolder.DEFAULT_STOP_SET;
> +    this.matchVersion = matchVersion;
>   }
>
>   /**
>    * Builds an analyzer with the given stop words.
>    */
> -  public PersianAnalyzer(String[] stopwords) {
> +  public PersianAnalyzer(Version matchVersion, String[] stopwords) {
>     stoptable = StopFilter.makeStopSet(stopwords);
> +    this.matchVersion = matchVersion;
>   }
>
>   /**
>    * Builds an analyzer with the given stop words.
>    */
> -  public PersianAnalyzer(Hashtable stopwords) {
> +  public PersianAnalyzer(Version matchVersion, Hashtable stopwords) {
>     stoptable = new HashSet(stopwords.keySet());
> +    this.matchVersion = matchVersion;
>   }
>
>   /**
>    * Builds an analyzer with the given stop words. Lines can be commented out
>    * using {@link #STOPWORDS_COMMENT}
>    */
> -  public PersianAnalyzer(File stopwords) throws IOException {
> +  public PersianAnalyzer(Version matchVersion, File stopwords) throws IOException {
>     stoptable = WordlistLoader.getWordSet(stopwords, STOPWORDS_COMMENT);
> +    this.matchVersion = matchVersion;
>   }
>
>   /**
> @@ -157,8 +162,8 @@
>      * the order here is important: the stopword list is normalized with the
>      * above!
>      */
> -    result = new StopFilter(false, result, stoptable);
> -
> +    result = new StopFilter(StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                            result, stoptable);
>     return result;
>   }
>
> @@ -190,7 +195,8 @@
>        * the order here is important: the stopword list is normalized with the
>        * above!
>        */
> -      streams.result = new StopFilter(false, streams.result, stoptable);
> +      streams.result = new StopFilter(StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                                      streams.result, stoptable);
>       setPreviousTokenStream(streams);
>     } else {
>       streams.source.reset(reader);
>
> Modified: lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/fr/FrenchAnalyzer.java
> URL: http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/fr/FrenchAnalyzer.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> --- lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/fr/FrenchAnalyzer.java (original)
> +++ lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/fr/FrenchAnalyzer.java Fri Oct 23 20:25:17 2009
> @@ -25,6 +25,7 @@
>  import org.apache.lucene.analysis.WordlistLoader;
>  import org.apache.lucene.analysis.standard.StandardFilter;
>  import org.apache.lucene.analysis.standard.StandardTokenizer;
> +import org.apache.lucene.util.Version;
>
>  import java.io.File;
>  import java.io.IOException;
> @@ -42,6 +43,17 @@
>  * A default set of stopwords is used unless an alternative list is specified, but the
>  * exclusion list is empty by default.
>  * </p>
> + *
> + * <a name="version"/>
> + * <p>You must specify the required {@link Version}
> + * compatibility when creating FrenchAnalyzer:
> + * <ul>
> + *   <li> As of 2.9, StopFilter preserves position
> + *        increments
> + * </ul>
> + *
> + * <p><b>NOTE</b>: This class uses the same {@link Version}
> + * dependent settings as {@link StandardAnalyzer}.</p>
>  */
>  public final class FrenchAnalyzer extends Analyzer {
>
> @@ -82,26 +94,31 @@
>    */
>   private Set excltable = new HashSet();
>
> +  private final Version matchVersion;
> +
>   /**
>    * Builds an analyzer with the default stop words ({@link #FRENCH_STOP_WORDS}).
>    */
> -  public FrenchAnalyzer() {
> +  public FrenchAnalyzer(Version matchVersion) {
>     stoptable = StopFilter.makeStopSet(FRENCH_STOP_WORDS);
> +    this.matchVersion = matchVersion;
>   }
>
>   /**
>    * Builds an analyzer with the given stop words.
>    */
> -  public FrenchAnalyzer(String... stopwords) {
> +  public FrenchAnalyzer(Version matchVersion, String... stopwords) {
>     stoptable = StopFilter.makeStopSet(stopwords);
> +    this.matchVersion = matchVersion;
>   }
>
>   /**
>    * Builds an analyzer with the given stop words.
>    * @throws IOException
>    */
> -  public FrenchAnalyzer(File stopwords) throws IOException {
> +  public FrenchAnalyzer(Version matchVersion, File stopwords) throws IOException {
>     stoptable = new HashSet(WordlistLoader.getWordSet(stopwords));
> +    this.matchVersion = matchVersion;
>   }
>
>   /**
> @@ -138,9 +155,10 @@
>    *         {@link FrenchStemFilter} and {@link LowerCaseFilter}
>    */
>   public final TokenStream tokenStream(String fieldName, Reader reader) {
> -    TokenStream result = new StandardTokenizer(reader);
> +    TokenStream result = new StandardTokenizer(matchVersion, reader);
>     result = new StandardFilter(result);
> -    result = new StopFilter(false, result, stoptable);
> +    result = new StopFilter(StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                            result, stoptable);
>     result = new FrenchStemFilter(result, excltable);
>     // Convert to lowercase after stemming!
>     result = new LowerCaseFilter(result);
> @@ -165,9 +183,10 @@
>     SavedStreams streams = (SavedStreams) getPreviousTokenStream();
>     if (streams == null) {
>       streams = new SavedStreams();
> -      streams.source = new StandardTokenizer(reader);
> +      streams.source = new StandardTokenizer(matchVersion, reader);
>       streams.result = new StandardFilter(streams.source);
> -      streams.result = new StopFilter(false, streams.result, stoptable);
> +      streams.result = new StopFilter(StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                                      streams.result, stoptable);
>       streams.result = new FrenchStemFilter(streams.result, excltable);
>       // Convert to lowercase after stemming!
>       streams.result = new LowerCaseFilter(streams.result);
>
> Modified: lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/nl/DutchAnalyzer.java
> URL: http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/nl/DutchAnalyzer.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> --- lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/nl/DutchAnalyzer.java (original)
> +++ lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/nl/DutchAnalyzer.java Fri Oct 23 20:25:17 2009
> @@ -23,6 +23,7 @@
>  import org.apache.lucene.analysis.Tokenizer;
>  import org.apache.lucene.analysis.standard.StandardFilter;
>  import org.apache.lucene.analysis.standard.StandardTokenizer;
> +import org.apache.lucene.util.Version;
>
>  import java.io.File;
>  import java.io.IOException;
> @@ -42,6 +43,9 @@
>  * A default set of stopwords is used unless an alternative list is specified, but the
>  * exclusion list is empty by default.
>  * </p>
> + *
> + * <p><b>NOTE</b>: This class uses the same {@link Version}
> + * dependent settings as {@link StandardAnalyzer}.</p>
>  */
>  public class DutchAnalyzer extends Analyzer {
>   /**
> @@ -73,30 +77,33 @@
>   private Set excltable = new HashSet();
>
>   private Map stemdict = new HashMap();
> -
> +  private final Version matchVersion;
>
>   /**
>    * Builds an analyzer with the default stop words ({@link #DUTCH_STOP_WORDS})
>    * and a few default entries for the stem exclusion table.
>    *
>    */
> -  public DutchAnalyzer() {
> +  public DutchAnalyzer(Version matchVersion) {
>     setOverridesTokenStreamMethod(DutchAnalyzer.class);
>     stoptable = StopFilter.makeStopSet(DUTCH_STOP_WORDS);
>     stemdict.put("fiets", "fiets"); //otherwise fiet
>     stemdict.put("bromfiets", "bromfiets"); //otherwise bromfiet
>     stemdict.put("ei", "eier");
>     stemdict.put("kind", "kinder");
> +    this.matchVersion = matchVersion;
>   }
>
>   /**
>    * Builds an analyzer with the given stop words.
>    *
> +   * @param matchVersion
>    * @param stopwords
>    */
> -  public DutchAnalyzer(String... stopwords) {
> +  public DutchAnalyzer(Version matchVersion, String... stopwords) {
>     setOverridesTokenStreamMethod(DutchAnalyzer.class);
>     stoptable = StopFilter.makeStopSet(stopwords);
> +    this.matchVersion = matchVersion;
>   }
>
>   /**
> @@ -104,9 +111,10 @@
>    *
>    * @param stopwords
>    */
> -  public DutchAnalyzer(HashSet stopwords) {
> +  public DutchAnalyzer(Version matchVersion, HashSet stopwords) {
>     setOverridesTokenStreamMethod(DutchAnalyzer.class);
>     stoptable = stopwords;
> +    this.matchVersion = matchVersion;
>   }
>
>   /**
> @@ -114,7 +122,7 @@
>    *
>    * @param stopwords
>    */
> -  public DutchAnalyzer(File stopwords) {
> +  public DutchAnalyzer(Version matchVersion, File stopwords) {
>     setOverridesTokenStreamMethod(DutchAnalyzer.class);
>     try {
>       stoptable = org.apache.lucene.analysis.WordlistLoader.getWordSet(stopwords);
> @@ -122,6 +130,7 @@
>       // TODO: throw IOException
>       throw new RuntimeException(e);
>     }
> +    this.matchVersion = matchVersion;
>   }
>
>   /**
> @@ -179,9 +188,10 @@
>    *   and {@link DutchStemFilter}
>    */
>   public TokenStream tokenStream(String fieldName, Reader reader) {
> -    TokenStream result = new StandardTokenizer(reader);
> +    TokenStream result = new StandardTokenizer(matchVersion, reader);
>     result = new StandardFilter(result);
> -    result = new StopFilter(false, result, stoptable);
> +    result = new StopFilter(StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                            result, stoptable);
>     result = new DutchStemFilter(result, excltable, stemdict);
>     return result;
>   }
> @@ -211,9 +221,10 @@
>     SavedStreams streams = (SavedStreams) getPreviousTokenStream();
>     if (streams == null) {
>       streams = new SavedStreams();
> -      streams.source = new StandardTokenizer(reader);
> +      streams.source = new StandardTokenizer(matchVersion, reader);
>       streams.result = new StandardFilter(streams.source);
> -      streams.result = new StopFilter(false, streams.result, stoptable);
> +      streams.result = new StopFilter(StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                                      streams.result, stoptable);
>       streams.result = new DutchStemFilter(streams.result, excltable, stemdict);
>       setPreviousTokenStream(streams);
>     } else {
>
> Modified: lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/query/QueryAutoStopWordAnalyzer.java
> URL: http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/query/QueryAutoStopWordAnalyzer.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> --- lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/query/QueryAutoStopWordAnalyzer.java (original)
> +++ lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/query/QueryAutoStopWordAnalyzer.java Fri Oct 23 20:25:17 2009
> @@ -23,6 +23,7 @@
>  import org.apache.lucene.analysis.TokenStream;
>  import org.apache.lucene.analysis.StopFilter;
>  import org.apache.lucene.util.StringHelper;
> +import org.apache.lucene.util.Version;
>
>  import java.io.IOException;
>  import java.io.Reader;
> @@ -48,15 +49,17 @@
>   //The default maximum percentage (40%) of index documents which
>   //can contain a term, after which the term is considered to be a stop word.
>   public static final float defaultMaxDocFreqPercent = 0.4f;
> +  private final Version matchVersion;
>
>   /**
>    * Initializes this analyzer with the Analyzer object that actually produces the tokens
>    *
>    * @param delegate The choice of {@link Analyzer} that is used to produce the token stream which needs filtering
>    */
> -  public QueryAutoStopWordAnalyzer(Analyzer delegate) {
> +  public QueryAutoStopWordAnalyzer(Version matchVersion, Analyzer delegate) {
>     this.delegate = delegate;
>     setOverridesTokenStreamMethod(QueryAutoStopWordAnalyzer.class);
> +    this.matchVersion = matchVersion;
>   }
>
>   /**
> @@ -175,7 +178,8 @@
>     }
>     HashSet stopWords = (HashSet) stopWordsPerField.get(fieldName);
>     if (stopWords != null) {
> -      result = new StopFilter(false, result, stopWords);
> +      result = new StopFilter(StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                              result, stopWords);
>     }
>     return result;
>   }
> @@ -217,7 +221,8 @@
>       /* if there are any stopwords for the field, save the stopfilter */
>       HashSet stopWords = (HashSet) stopWordsPerField.get(fieldName);
>       if (stopWords != null)
> -        streams.withStopFilter = new StopFilter(false, streams.wrapped, stopWords);
> +        streams.withStopFilter = new StopFilter(StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                                                streams.wrapped, stopWords);
>       else
>         streams.withStopFilter = streams.wrapped;
>
> @@ -238,7 +243,8 @@
>         streams.wrapped = result;
>         HashSet stopWords = (HashSet) stopWordsPerField.get(fieldName);
>         if (stopWords != null)
> -          streams.withStopFilter = new StopFilter(false, streams.wrapped, stopWords);
> +          streams.withStopFilter = new StopFilter(StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                                                  streams.wrapped, stopWords);
>         else
>           streams.withStopFilter = streams.wrapped;
>       }
>
> Modified: lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/ru/RussianAnalyzer.java
> URL: http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/ru/RussianAnalyzer.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> --- lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/ru/RussianAnalyzer.java (original)
> +++ lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/ru/RussianAnalyzer.java Fri Oct 23 20:25:17 2009
> @@ -28,6 +28,7 @@
>  import org.apache.lucene.analysis.StopFilter;
>  import org.apache.lucene.analysis.TokenStream;
>  import org.apache.lucene.analysis.Tokenizer;
> +import org.apache.lucene.util.Version;
>
>  /**
>  * {@link Analyzer} for Russian language.
> @@ -60,27 +61,31 @@
>      */
>     private Set stopSet = new HashSet();
>
> -    public RussianAnalyzer() {
> -        this(RUSSIAN_STOP_WORDS);
> +    private final Version matchVersion;
> +
> +    public RussianAnalyzer(Version matchVersion) {
> +      this(matchVersion, RUSSIAN_STOP_WORDS);
>     }
>
>     /**
>      * Builds an analyzer with the given stop words.
>      */
> -    public RussianAnalyzer(String... stopwords)
> +    public RussianAnalyzer(Version matchVersion, String... stopwords)
>     {
> -       super();
> -       stopSet = StopFilter.makeStopSet(stopwords);
> +      super();
> +      stopSet = StopFilter.makeStopSet(stopwords);
> +      this.matchVersion = matchVersion;
>     }
>
>     /**
>      * Builds an analyzer with the given stop words.
>      * TODO: create a Set version of this ctor
>      */
> -    public RussianAnalyzer(Map stopwords)
> +    public RussianAnalyzer(Version matchVersion, Map stopwords)
>     {
> -       super();
> -       stopSet = new HashSet(stopwords.keySet());
> +      super();
> +      stopSet = new HashSet(stopwords.keySet());
> +      this.matchVersion = matchVersion;
>     }
>
>     /**
> @@ -96,7 +101,8 @@
>     {
>         TokenStream result = new RussianLetterTokenizer(reader);
>         result = new LowerCaseFilter(result);
> -        result = new StopFilter(false, result, stopSet);
> +        result = new StopFilter(StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                                result, stopSet);
>         result = new RussianStemFilter(result);
>         return result;
>     }
> @@ -122,7 +128,8 @@
>       streams = new SavedStreams();
>       streams.source = new RussianLetterTokenizer(reader);
>       streams.result = new LowerCaseFilter(streams.source);
> -      streams.result = new StopFilter(false, streams.result, stopSet);
> +      streams.result = new StopFilter(StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                                      streams.result, stopSet);
>       streams.result = new RussianStemFilter(streams.result);
>       setPreviousTokenStream(streams);
>     } else {
>
> Modified: lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/th/ThaiAnalyzer.java
> URL: http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/th/ThaiAnalyzer.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> --- lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/th/ThaiAnalyzer.java (original)
> +++ lucene/java/trunk/contrib/analyzers/common/src/java/org/apache/lucene/analysis/th/ThaiAnalyzer.java Fri Oct 23 20:25:17 2009
> @@ -25,22 +25,29 @@
>  import org.apache.lucene.analysis.Tokenizer;
>  import org.apache.lucene.analysis.standard.StandardFilter;
>  import org.apache.lucene.analysis.standard.StandardTokenizer;
> +import org.apache.lucene.util.Version;
>
>  /**
>  * {@link Analyzer} for Thai language. It uses {@link java.text.BreakIterator} to break words.
>  * @version 0.2
> + *
> + * <p><b>NOTE</b>: This class uses the same {@link Version}
> + * dependent settings as {@link StandardAnalyzer}.</p>
>  */
>  public class ThaiAnalyzer extends Analyzer {
> -
> -  public ThaiAnalyzer() {
> +  private final Version matchVersion;
> +
> +  public ThaiAnalyzer(Version matchVersion) {
>     setOverridesTokenStreamMethod(ThaiAnalyzer.class);
> +    this.matchVersion = matchVersion;
>   }
>
>   public TokenStream tokenStream(String fieldName, Reader reader) {
> -         TokenStream ts = new StandardTokenizer(reader);
> +    TokenStream ts = new StandardTokenizer(matchVersion, reader);
>     ts = new StandardFilter(ts);
>     ts = new ThaiWordFilter(ts);
> -    ts = new StopFilter(false, ts, StopAnalyzer.ENGLISH_STOP_WORDS_SET);
> +    ts = new StopFilter(StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                        ts, StopAnalyzer.ENGLISH_STOP_WORDS_SET);
>     return ts;
>   }
>
> @@ -60,10 +67,11 @@
>     SavedStreams streams = (SavedStreams) getPreviousTokenStream();
>     if (streams == null) {
>       streams = new SavedStreams();
> -      streams.source = new StandardTokenizer(reader);
> +      streams.source = new StandardTokenizer(matchVersion, reader);
>       streams.result = new StandardFilter(streams.source);
>       streams.result = new ThaiWordFilter(streams.result);
> -      streams.result = new StopFilter(false, streams.result, StopAnalyzer.ENGLISH_STOP_WORDS_SET);
> +      streams.result = new StopFilter(StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion),
> +                                      streams.result, StopAnalyzer.ENGLISH_STOP_WORDS_SET);
>       setPreviousTokenStream(streams);
>     } else {
>       streams.source.reset(reader);
>
> Modified: lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/ar/TestArabicAnalyzer.java
> URL: http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/ar/TestArabicAnalyzer.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> --- lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/ar/TestArabicAnalyzer.java (original)
> +++ lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/ar/TestArabicAnalyzer.java Fri Oct 23 20:25:17 2009
> @@ -22,6 +22,7 @@
>  import org.apache.lucene.analysis.Analyzer;
>  import org.apache.lucene.analysis.TokenStream;
>  import org.apache.lucene.analysis.BaseTokenStreamTestCase;
> +import org.apache.lucene.util.Version;
>
>  /**
>  * Test the Arabic Analyzer
> @@ -32,14 +33,14 @@
>   /** This test fails with NPE when the
>    * stopwords file is missing in classpath */
>   public void testResourcesAvailable() {
> -    new ArabicAnalyzer();
> +    new ArabicAnalyzer(Version.LUCENE_CURRENT);
>   }
>
>   /**
>    * Some simple tests showing some features of the analyzer, how some regular forms will conflate
>    */
>   public void testBasicFeatures() throws Exception {
> -    ArabicAnalyzer a = new ArabicAnalyzer();
> +    ArabicAnalyzer a = new ArabicAnalyzer(Version.LUCENE_CURRENT);
>     assertAnalyzesTo(a, "كبير", new String[] { "كبير" });
>     assertAnalyzesTo(a, "كبيرة", new String[] { "كبير" }); // feminine marker
>
> @@ -60,7 +61,7 @@
>    * Simple tests to show things are getting reset correctly, etc.
>    */
>   public void testReusableTokenStream() throws Exception {
> -    ArabicAnalyzer a = new ArabicAnalyzer();
> +    ArabicAnalyzer a = new ArabicAnalyzer(Version.LUCENE_CURRENT);
>     assertAnalyzesToReuse(a, "كبير", new String[] { "كبير" });
>     assertAnalyzesToReuse(a, "كبيرة", new String[] { "كبير" }); // feminine marker
>   }
> @@ -69,7 +70,7 @@
>    * Non-arabic text gets treated in a similar way as SimpleAnalyzer.
>    */
>   public void testEnglishInput() throws Exception {
> -    assertAnalyzesTo(new ArabicAnalyzer(), "English text.", new String[] {
> +    assertAnalyzesTo(new ArabicAnalyzer(Version.LUCENE_CURRENT), "English text.", new String[] {
>         "english", "text" });
>   }
>
> @@ -77,7 +78,7 @@
>    * Test that custom stopwords work, and are not case-sensitive.
>    */
>   public void testCustomStopwords() throws Exception {
> -    ArabicAnalyzer a = new ArabicAnalyzer(new String[] { "the", "and", "a" });
> +    ArabicAnalyzer a = new ArabicAnalyzer(Version.LUCENE_CURRENT, new String[] { "the", "and", "a" });
>     assertAnalyzesTo(a, "The quick brown fox.", new String[] { "quick",
>         "brown", "fox" });
>   }
>
> Modified: lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/br/TestBrazilianStemmer.java
> URL: http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/br/TestBrazilianStemmer.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> --- lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/br/TestBrazilianStemmer.java (original)
> +++ lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/br/TestBrazilianStemmer.java Fri Oct 23 20:25:17 2009
> @@ -21,6 +21,7 @@
>  import org.apache.lucene.analysis.Analyzer;
>  import org.apache.lucene.analysis.TokenStream;
>  import org.apache.lucene.analysis.tokenattributes.TermAttribute;
> +import org.apache.lucene.util.Version;
>
>  /**
>  * Test the Brazilian Stem Filter, which only modifies the term text.
> @@ -123,7 +124,7 @@
>   }
>
>   public void testReusableTokenStream() throws Exception {
> -    Analyzer a = new BrazilianAnalyzer();
> +    Analyzer a = new BrazilianAnalyzer(Version.LUCENE_CURRENT);
>     checkReuse(a, "boa", "boa");
>     checkReuse(a, "boainain", "boainain");
>     checkReuse(a, "boas", "boas");
> @@ -131,7 +132,7 @@
>   }
>
>   public void testStemExclusionTable() throws Exception {
> -    BrazilianAnalyzer a = new BrazilianAnalyzer();
> +    BrazilianAnalyzer a = new BrazilianAnalyzer(Version.LUCENE_CURRENT);
>     a.setStemExclusionTable(new String[] { "quintessência" });
>     checkReuse(a, "quintessência", "quintessência"); // excluded words
> will be completely unchanged.
>   }
> @@ -141,14 +142,14 @@
>    * when using reusable token streams.
>    */
>   public void testExclusionTableReuse() throws Exception {
> -    BrazilianAnalyzer a = new BrazilianAnalyzer();
> +    BrazilianAnalyzer a = new BrazilianAnalyzer(Version.LUCENE_CURRENT);
>     checkReuse(a, "quintessência", "quintessente");
>     a.setStemExclusionTable(new String[] { "quintessência" });
>     checkReuse(a, "quintessência", "quintessência");
>   }
>
>   private void check(final String input, final String expected) throws
> Exception {
> -    checkOneTerm(new BrazilianAnalyzer(), input, expected);
> +    checkOneTerm(new BrazilianAnalyzer(Version.LUCENE_CURRENT), input,
> expected);
>   }
>
>   private void checkReuse(Analyzer a, String input, String expected) throws
> Exception {
>
> Modified:
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/cjk/TestCJKTokenizer.java
> URL:
> http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/cjk/TestCJKTokenizer.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> ---
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/cjk/TestCJKTokenizer.java
> (original)
> +++
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/cjk/TestCJKTokenizer.java
> Fri Oct 23 20:25:17 2009
> @@ -26,7 +26,7 @@
>  import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
>  import org.apache.lucene.analysis.tokenattributes.TermAttribute;
>  import org.apache.lucene.analysis.tokenattributes.TypeAttribute;
> -
> +import org.apache.lucene.util.Version;
>
>  public class TestCJKTokenizer extends BaseTokenStreamTestCase {
>
> @@ -218,7 +218,7 @@
>   }
>
>   public void testTokenStream() throws Exception {
> -    Analyzer analyzer = new CJKAnalyzer();
> +    Analyzer analyzer = new CJKAnalyzer(Version.LUCENE_CURRENT);
>     TokenStream ts = analyzer.tokenStream("dummy", new
> StringReader("\u4e00\u4e01\u4e02"));
>     TermAttribute termAtt = ts.getAttribute(TermAttribute.class);
>     assertTrue(ts.incrementToken());
> @@ -229,7 +229,7 @@
>   }
>
>   public void testReusableTokenStream() throws Exception {
> -    Analyzer analyzer = new CJKAnalyzer();
> +    Analyzer analyzer = new CJKAnalyzer(Version.LUCENE_CURRENT);
>     String str =
> "\u3042\u3044\u3046\u3048\u304aabc\u304b\u304d\u304f\u3051\u3053";
>
>     TestToken[] out_tokens = {
>
> Modified:
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/cz/TestCzechAnalyzer.java
> URL:
> http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/cz/TestCzechAnalyzer.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> ---
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/cz/TestCzechAnalyzer.java
> (original)
> +++
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/cz/TestCzechAnalyzer.java
> Fri Oct 23 20:25:17 2009
> @@ -25,6 +25,7 @@
>  import org.apache.lucene.analysis.BaseTokenStreamTestCase;
>  import org.apache.lucene.analysis.Analyzer;
>  import org.apache.lucene.analysis.TokenStream;
> +import org.apache.lucene.util.Version;
>
>  /**
>  * Test the CzechAnalyzer
> @@ -37,11 +38,11 @@
>   File customStopFile = new File(dataDir,
> "org/apache/lucene/analysis/cz/customStopWordFile.txt");
>
>   public void testStopWord() throws Exception {
> -    assertAnalyzesTo(new CzechAnalyzer(), "Pokud mluvime o volnem", new
> String[] { "mluvime", "volnem" });
> +    assertAnalyzesTo(new CzechAnalyzer(Version.LUCENE_CURRENT), "Pokud
> mluvime o volnem", new String[] { "mluvime", "volnem" });
>   }
>
>   public void testReusableTokenStream() throws Exception {
> -    Analyzer analyzer = new CzechAnalyzer();
> +    Analyzer analyzer = new CzechAnalyzer(Version.LUCENE_CURRENT);
>     assertAnalyzesToReuse(analyzer, "Pokud mluvime o volnem", new String[]
> { "mluvime", "volnem" });
>     assertAnalyzesToReuse(analyzer, "Česká Republika", new String[] { "česká", "republika" });
>   }
> @@ -61,7 +62,7 @@
>    * this would cause a NPE when it is time to create the StopFilter.
>    */
>   public void testInvalidStopWordFile() throws Exception {
> -    CzechAnalyzer cz = new CzechAnalyzer();
> +    CzechAnalyzer cz = new CzechAnalyzer(Version.LUCENE_CURRENT);
>     cz.loadStopWords(new UnreliableInputStream(), "UTF-8");
>     assertAnalyzesTo(cz, "Pokud mluvime o volnem",
>         new String[] { "pokud", "mluvime", "o", "volnem" });
> @@ -72,7 +73,7 @@
>    * when using reusable token streams.
>    */
>   public void testStopWordFileReuse() throws Exception {
> -    CzechAnalyzer cz = new CzechAnalyzer();
> +    CzechAnalyzer cz = new CzechAnalyzer(Version.LUCENE_CURRENT);
>     assertAnalyzesToReuse(cz, "Česká Republika",
>       new String[] { "česká", "republika" });
>
>
> Modified:
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/de/TestGermanStemFilter.java
> URL:
> http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/de/TestGermanStemFilter.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> ---
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/de/TestGermanStemFilter.java
> (original)
> +++
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/de/TestGermanStemFilter.java
> Fri Oct 23 20:25:17 2009
> @@ -28,6 +28,7 @@
>  import org.apache.lucene.analysis.TokenStream;
>  import org.apache.lucene.analysis.WhitespaceTokenizer;
>  import org.apache.lucene.analysis.standard.StandardTokenizer;
> +import org.apache.lucene.util.Version;
>
>  /**
>  * Test the German stemmer. The stemming algorithm is known to work less
> @@ -61,7 +62,7 @@
>   }
>
>   public void testReusableTokenStream() throws Exception {
> -    Analyzer a = new GermanAnalyzer();
> +    Analyzer a = new GermanAnalyzer(Version.LUCENE_CURRENT);
>     checkReuse(a, "Tisch", "tisch");
>     checkReuse(a, "Tische", "tisch");
>     checkReuse(a, "Tischen", "tisch");
> @@ -71,13 +72,17 @@
>    * subclass that acts just like whitespace analyzer for testing
>    */
>   private class GermanSubclassAnalyzer extends GermanAnalyzer {
> +    public GermanSubclassAnalyzer(Version matchVersion) {
> +      super(matchVersion);
> +    }
> +
>     public TokenStream tokenStream(String fieldName, Reader reader) {
>       return new WhitespaceTokenizer(reader);
>     }
>   }
>
>   public void testLUCENE1678BWComp() throws Exception {
> -    checkReuse(new GermanSubclassAnalyzer(), "Tischen", "Tischen");
> +    checkReuse(new GermanSubclassAnalyzer(Version.LUCENE_CURRENT),
> "Tischen", "Tischen");
>   }
>
>   /*
> @@ -85,14 +90,14 @@
>    * when using reusable token streams.
>    */
>   public void testExclusionTableReuse() throws Exception {
> -    GermanAnalyzer a = new GermanAnalyzer();
> +    GermanAnalyzer a = new GermanAnalyzer(Version.LUCENE_CURRENT);
>     checkReuse(a, "tischen", "tisch");
>     a.setStemExclusionTable(new String[] { "tischen" });
>     checkReuse(a, "tischen", "tischen");
>   }
>
>   private void check(final String input, final String expected) throws
> Exception {
> -    checkOneTerm(new GermanAnalyzer(), input, expected);
> +    checkOneTerm(new GermanAnalyzer(Version.LUCENE_CURRENT), input,
> expected);
>   }
>
>   private void checkReuse(Analyzer a, String input, String expected) throws
> Exception {
>
> Modified:
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/el/GreekAnalyzerTest.java
> URL:
> http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/el/GreekAnalyzerTest.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> ---
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/el/GreekAnalyzerTest.java
> (original)
> +++
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/el/GreekAnalyzerTest.java
> Fri Oct 23 20:25:17 2009
> @@ -19,7 +19,7 @@
>  import org.apache.lucene.analysis.BaseTokenStreamTestCase;
>  import org.apache.lucene.analysis.Analyzer;
>  import org.apache.lucene.analysis.TokenStream;
> -
> +import org.apache.lucene.util.Version;
>
>  /**
>  * A unit test class for verifying the correct operation of the
> GreekAnalyzer.
> @@ -33,7 +33,7 @@
>         * @throws Exception in case an error occurs
>         */
>        public void testAnalyzer() throws Exception {
> -               Analyzer a = new GreekAnalyzer();
> +               Analyzer a = new GreekAnalyzer(Version.LUCENE_CURRENT);
>                // Verify the correct analysis of capitals and small
> accented letters
>                assertAnalyzesTo(a, "\u039c\u03af\u03b1
> \u03b5\u03be\u03b1\u03b9\u03c1\u03b5\u03c4\u03b9\u03ba\u03ac
> \u03ba\u03b1\u03bb\u03ae \u03ba\u03b1\u03b9
> \u03c0\u03bb\u03bf\u03cd\u03c3\u03b9\u03b1 \u03c3\u03b5\u03b9\u03c1\u03ac
> \u03c7\u03b1\u03c1\u03b1\u03ba\u03c4\u03ae\u03c1\u03c9\u03bd
> \u03c4\u03b7\u03c2 \u0395\u03bb\u03bb\u03b7\u03bd\u03b9\u03ba\u03ae\u03c2
> \u03b3\u03bb\u03ce\u03c3\u03c3\u03b1\u03c2",
>                                new String[] { "\u03bc\u03b9\u03b1",
> "\u03b5\u03be\u03b1\u03b9\u03c1\u03b5\u03c4\u03b9\u03ba\u03b1",
> "\u03ba\u03b1\u03bb\u03b7", "\u03c0\u03bb\u03bf\u03c5\u03c3\u03b9\u03b1",
> "\u03c3\u03b5\u03b9\u03c1\u03b1",
> "\u03c7\u03b1\u03c1\u03b1\u03ba\u03c4\u03b7\u03c1\u03c9\u03bd",
> @@ -49,7 +49,7 @@
>        }
>
>        public void testReusableTokenStream() throws Exception {
> -           Analyzer a = new GreekAnalyzer();
> +           Analyzer a = new GreekAnalyzer(Version.LUCENE_CURRENT);
>            // Verify the correct analysis of capitals and small accented
> letters
>            assertAnalyzesToReuse(a, "\u039c\u03af\u03b1
> \u03b5\u03be\u03b1\u03b9\u03c1\u03b5\u03c4\u03b9\u03ba\u03ac
> \u03ba\u03b1\u03bb\u03ae \u03ba\u03b1\u03b9
> \u03c0\u03bb\u03bf\u03cd\u03c3\u03b9\u03b1 \u03c3\u03b5\u03b9\u03c1\u03ac
> \u03c7\u03b1\u03c1\u03b1\u03ba\u03c4\u03ae\u03c1\u03c9\u03bd
> \u03c4\u03b7\u03c2 \u0395\u03bb\u03bb\u03b7\u03bd\u03b9\u03ba\u03ae\u03c2
> \u03b3\u03bb\u03ce\u03c3\u03c3\u03b1\u03c2",
>                    new String[] { "\u03bc\u03b9\u03b1",
> "\u03b5\u03be\u03b1\u03b9\u03c1\u03b5\u03c4\u03b9\u03ba\u03b1",
> "\u03ba\u03b1\u03bb\u03b7", "\u03c0\u03bb\u03bf\u03c5\u03c3\u03b9\u03b1",
> "\u03c3\u03b5\u03b9\u03c1\u03b1",
> "\u03c7\u03b1\u03c1\u03b1\u03ba\u03c4\u03b7\u03c1\u03c9\u03bd",
>
> Modified:
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/fa/TestPersianAnalyzer.java
> URL:
> http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/fa/TestPersianAnalyzer.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> ---
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/fa/TestPersianAnalyzer.java
> (original)
> +++
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/fa/TestPersianAnalyzer.java
> Fri Oct 23 20:25:17 2009
> @@ -22,6 +22,7 @@
>  import org.apache.lucene.analysis.BaseTokenStreamTestCase;
>  import org.apache.lucene.analysis.Analyzer;
>  import org.apache.lucene.analysis.TokenStream;
> +import org.apache.lucene.util.Version;
>
>  /**
>  * Test the Persian Analyzer
> @@ -33,7 +34,7 @@
>    * This test fails with NPE when the stopwords file is missing in
> classpath
>    */
>   public void testResourcesAvailable() {
> -    new PersianAnalyzer();
> +    new PersianAnalyzer(Version.LUCENE_CURRENT);
>   }
>
>   /**
> @@ -44,7 +45,7 @@
>    * These verb forms are from
> http://en.wikipedia.org/wiki/Persian_grammar
>    */
>   public void testBehaviorVerbs() throws Exception {
> -    Analyzer a = new PersianAnalyzer();
> +    Analyzer a = new PersianAnalyzer(Version.LUCENE_CURRENT);
>     // active present indicative
>     assertAnalyzesTo(a, "می‌خورد", new String[] { "خورد" });
>     // active preterite indicative
> @@ -120,7 +121,7 @@
>    * These verb forms are from
> http://en.wikipedia.org/wiki/Persian_grammar
>    */
>   public void testBehaviorVerbsDefective() throws Exception {
> -    Analyzer a = new PersianAnalyzer();
> +    Analyzer a = new PersianAnalyzer(Version.LUCENE_CURRENT);
>     // active present indicative
>     assertAnalyzesTo(a, "مي خورد", new String[] { "خورد" });
>     // active preterite indicative
> @@ -191,7 +192,7 @@
>    * nouns, removing the plural -ha.
>    */
>   public void testBehaviorNouns() throws Exception {
> -    Analyzer a = new PersianAnalyzer();
> +    Analyzer a = new PersianAnalyzer(Version.LUCENE_CURRENT);
>     assertAnalyzesTo(a, "برگ ها", new String[] { "برگ" });
>     assertAnalyzesTo(a, "برگ‌ها", new String[] { "برگ" });
>   }
> @@ -201,7 +202,7 @@
>    * (lowercased, etc)
>    */
>   public void testBehaviorNonPersian() throws Exception {
> -    Analyzer a = new PersianAnalyzer();
> +    Analyzer a = new PersianAnalyzer(Version.LUCENE_CURRENT);
>     assertAnalyzesTo(a, "English test.", new String[] { "english", "test"
> });
>   }
>
> @@ -209,7 +210,7 @@
>    * Basic test ensuring that reusableTokenStream works correctly.
>    */
>   public void testReusableTokenStream() throws Exception {
> -    Analyzer a = new PersianAnalyzer();
> +    Analyzer a = new PersianAnalyzer(Version.LUCENE_CURRENT);
>     assertAnalyzesToReuse(a, "خورده مي شده بوده باشد",
> new String[] { "خورده" });
>     assertAnalyzesToReuse(a, "برگ‌ها", new String[] { "برگ" });
>   }
> @@ -218,7 +219,7 @@
>    * Test that custom stopwords work, and are not case-sensitive.
>    */
>   public void testCustomStopwords() throws Exception {
> -    PersianAnalyzer a = new PersianAnalyzer(new String[] { "the", "and",
> "a" });
> +    PersianAnalyzer a = new PersianAnalyzer(Version.LUCENE_CURRENT, new
> String[] { "the", "and", "a" });
>     assertAnalyzesTo(a, "The quick brown fox.", new String[] { "quick",
>         "brown", "fox" });
>   }
>
> Modified:
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/fr/TestElision.java
> URL:
> http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/fr/TestElision.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> ---
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/fr/TestElision.java
> (original)
> +++
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/fr/TestElision.java
> Fri Oct 23 20:25:17 2009
> @@ -29,6 +29,7 @@
>  import org.apache.lucene.analysis.Tokenizer;
>  import org.apache.lucene.analysis.standard.StandardTokenizer;
>  import org.apache.lucene.analysis.tokenattributes.TermAttribute;
> +import org.apache.lucene.util.Version;
>
>  /**
>  *
> @@ -37,7 +38,7 @@
>
>   public void testElision() throws Exception {
>     String test = "Plop, juste pour voir l'embrouille avec O'brian.
> M'enfin.";
> -    Tokenizer tokenizer = new StandardTokenizer(new StringReader(test));
> +    Tokenizer tokenizer = new StandardTokenizer(Version.LUCENE_CURRENT,
> new StringReader(test));
>     Set articles = new HashSet();
>     articles.add("l");
>     articles.add("M");
>
> Modified:
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/fr/TestFrenchAnalyzer.java
> URL:
> http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/fr/TestFrenchAnalyzer.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> ---
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/fr/TestFrenchAnalyzer.java
> (original)
> +++
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/fr/TestFrenchAnalyzer.java
> Fri Oct 23 20:25:17 2009
> @@ -22,6 +22,7 @@
>  import org.apache.lucene.analysis.BaseTokenStreamTestCase;
>  import org.apache.lucene.analysis.Analyzer;
>  import org.apache.lucene.analysis.TokenStream;
> +import org.apache.lucene.util.Version;
>
>  /**
>  * Test case for FrenchAnalyzer.
> @@ -32,7 +33,7 @@
>  public class TestFrenchAnalyzer extends BaseTokenStreamTestCase {
>
>        public void testAnalyzer() throws Exception {
> -               FrenchAnalyzer fa = new FrenchAnalyzer();
> +               FrenchAnalyzer fa = new
> FrenchAnalyzer(Version.LUCENE_CURRENT);
>
>                assertAnalyzesTo(fa, "", new String[] {
>                });
> @@ -116,7 +117,7 @@
>        }
>
>        public void testReusableTokenStream() throws Exception {
> -         FrenchAnalyzer fa = new FrenchAnalyzer();
> +         FrenchAnalyzer fa = new FrenchAnalyzer(Version.LUCENE_CURRENT);
>          // stopwords
>       assertAnalyzesToReuse(
>           fa,
> @@ -141,7 +142,7 @@
>         * when using reusable token streams.
>         */
>        public void testExclusionTableReuse() throws Exception {
> -         FrenchAnalyzer fa = new FrenchAnalyzer();
> +         FrenchAnalyzer fa = new FrenchAnalyzer(Version.LUCENE_CURRENT);
>          assertAnalyzesToReuse(fa, "habitable", new String[] { "habit" });
>          fa.setStemExclusionTable(new String[] { "habitable" });
>          assertAnalyzesToReuse(fa, "habitable", new String[] { "habitable"
> });
>
> Modified:
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/nl/TestDutchStemmer.java
> URL:
> http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/nl/TestDutchStemmer.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> ---
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/nl/TestDutchStemmer.java
> (original)
> +++
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/nl/TestDutchStemmer.java
> Fri Oct 23 20:25:17 2009
> @@ -24,6 +24,7 @@
>  import org.apache.lucene.analysis.Analyzer;
>  import org.apache.lucene.analysis.TokenStream;
>  import org.apache.lucene.analysis.WhitespaceTokenizer;
> +import org.apache.lucene.util.Version;
>
>  /**
>  * Test the Dutch Stem Filter, which only modifies the term text.
> @@ -119,7 +120,7 @@
>   }
>
>   public void testReusableTokenStream() throws Exception {
> -    Analyzer a = new DutchAnalyzer();
> +    Analyzer a = new DutchAnalyzer(Version.LUCENE_CURRENT);
>     checkOneTermReuse(a, "lichaamsziek", "lichaamsziek");
>     checkOneTermReuse(a, "lichamelijk", "licham");
>     checkOneTermReuse(a, "lichamelijke", "licham");
> @@ -130,13 +131,16 @@
>    * subclass that acts just like whitespace analyzer for testing
>    */
>   private class DutchSubclassAnalyzer extends DutchAnalyzer {
> +    public DutchSubclassAnalyzer(Version matchVersion) {
> +      super(matchVersion);
> +    }
>     public TokenStream tokenStream(String fieldName, Reader reader) {
>       return new WhitespaceTokenizer(reader);
>     }
>   }
>
>   public void testLUCENE1678BWComp() throws Exception {
> -    Analyzer a = new DutchSubclassAnalyzer();
> +    Analyzer a = new DutchSubclassAnalyzer(Version.LUCENE_CURRENT);
>     checkOneTermReuse(a, "lichaamsziek", "lichaamsziek");
>     checkOneTermReuse(a, "lichamelijk", "lichamelijk");
>     checkOneTermReuse(a, "lichamelijke", "lichamelijke");
> @@ -148,7 +152,7 @@
>    * when using reusable token streams.
>    */
>   public void testExclusionTableReuse() throws Exception {
> -    DutchAnalyzer a = new DutchAnalyzer();
> +    DutchAnalyzer a = new DutchAnalyzer(Version.LUCENE_CURRENT);
>     checkOneTermReuse(a, "lichamelijk", "licham");
>     a.setStemExclusionTable(new String[] { "lichamelijk" });
>     checkOneTermReuse(a, "lichamelijk", "lichamelijk");
> @@ -159,14 +163,14 @@
>    * when using reusable token streams.
>    */
>   public void testStemDictionaryReuse() throws Exception {
> -    DutchAnalyzer a = new DutchAnalyzer();
> +    DutchAnalyzer a = new DutchAnalyzer(Version.LUCENE_CURRENT);
>     checkOneTermReuse(a, "lichamelijk", "licham");
>     a.setStemDictionary(customDictFile);
>     checkOneTermReuse(a, "lichamelijk", "somethingentirelydifferent");
>   }
>
>   private void check(final String input, final String expected) throws
> Exception {
> -    checkOneTerm(new DutchAnalyzer(), input, expected);
> +    checkOneTerm(new DutchAnalyzer(Version.LUCENE_CURRENT), input,
> expected);
>   }
>
>  }
> \ No newline at end of file
>
> Modified:
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/query/QueryAutoStopWordAnalyzerTest.java
> URL:
> http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/query/QueryAutoStopWordAnalyzerTest.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> ---
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/query/QueryAutoStopWordAnalyzerTest.java
> (original)
> +++
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/query/QueryAutoStopWordAnalyzerTest.java
> Fri Oct 23 20:25:17 2009
> @@ -37,6 +37,7 @@
>  import org.apache.lucene.search.IndexSearcher;
>  import org.apache.lucene.search.Query;
>  import org.apache.lucene.store.RAMDirectory;
> +import org.apache.lucene.util.Version;
>
>  public class QueryAutoStopWordAnalyzerTest extends BaseTokenStreamTestCase
> {
>   String variedFieldValues[] = {"the", "quick", "brown", "fox", "jumped",
> "over", "the", "lazy", "boring", "dog"};
> @@ -62,7 +63,7 @@
>     }
>     writer.close();
>     reader = IndexReader.open(dir, true);
> -    protectedAnalyzer = new QueryAutoStopWordAnalyzer(appAnalyzer);
> +    protectedAnalyzer = new
> QueryAutoStopWordAnalyzer(Version.LUCENE_CURRENT, appAnalyzer);
>   }
>
>   protected void tearDown() throws Exception {
> @@ -72,7 +73,7 @@
>
>   //Helper method to query
>   private int search(Analyzer a, String queryString) throws IOException,
> ParseException {
> -    QueryParser qp = new QueryParser("repetitiveField", a);
> +    QueryParser qp = new QueryParser(Version.LUCENE_CURRENT,
> "repetitiveField", a);
>     Query q = qp.parse(queryString);
>     return new IndexSearcher(reader).search(q, null, 1000).totalHits;
>   }
> @@ -149,8 +150,8 @@
>    * subclass that acts just like whitespace analyzer for testing
>    */
>   private class QueryAutoStopWordSubclassAnalyzer extends
> QueryAutoStopWordAnalyzer {
> -    public QueryAutoStopWordSubclassAnalyzer() {
> -      super(new WhitespaceAnalyzer());
> +    public QueryAutoStopWordSubclassAnalyzer(Version matchVersion) {
> +      super(matchVersion, new WhitespaceAnalyzer());
>     }
>
>     public TokenStream tokenStream(String fieldName, Reader reader) {
> @@ -159,7 +160,7 @@
>   }
>
>   public void testLUCENE1678BWComp() throws Exception {
> -    QueryAutoStopWordAnalyzer a = new QueryAutoStopWordSubclassAnalyzer();
> +    QueryAutoStopWordAnalyzer a = new
> QueryAutoStopWordSubclassAnalyzer(Version.LUCENE_CURRENT);
>     a.addStopWords(reader, "repetitiveField", 10);
>     int numHits = search(a, "repetitiveField:boring");
>     assertFalse(numHits == 0);
> @@ -180,7 +181,7 @@
>   }
>
>   public void testWrappingNonReusableAnalyzer() throws Exception {
> -    QueryAutoStopWordAnalyzer a = new QueryAutoStopWordAnalyzer(new
> NonreusableAnalyzer());
> +    QueryAutoStopWordAnalyzer a = new
> QueryAutoStopWordAnalyzer(Version.LUCENE_CURRENT, new
> NonreusableAnalyzer());
>     a.addStopWords(reader, 10);
>     int numHits = search(a, "repetitiveField:boring");
>     assertTrue(numHits == 0);
> @@ -189,7 +190,7 @@
>   }
>
>   public void testTokenStream() throws Exception {
> -    QueryAutoStopWordAnalyzer a = new QueryAutoStopWordAnalyzer(new
> WhitespaceAnalyzer());
> +    QueryAutoStopWordAnalyzer a = new
> QueryAutoStopWordAnalyzer(Version.LUCENE_CURRENT, new WhitespaceAnalyzer());
>     a.addStopWords(reader, 10);
>     TokenStream ts = a.tokenStream("repetitiveField", new
> StringReader("this boring"));
>     TermAttribute termAtt = ts.getAttribute(TermAttribute.class);
>
> Modified:
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/ru/TestRussianAnalyzer.java
> URL:
> http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/ru/TestRussianAnalyzer.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> ---
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/ru/TestRussianAnalyzer.java
> (original)
> +++
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/ru/TestRussianAnalyzer.java
> Fri Oct 23 20:25:17 2009
> @@ -28,6 +28,7 @@
>  import org.apache.lucene.analysis.Analyzer;
>  import org.apache.lucene.analysis.TokenStream;
>  import org.apache.lucene.analysis.tokenattributes.TermAttribute;
> +import org.apache.lucene.util.Version;
>
>  /**
>  * Test case for RussianAnalyzer.
> @@ -49,7 +50,7 @@
>
>     public void testUnicode() throws IOException
>     {
> -        RussianAnalyzer ra = new RussianAnalyzer();
> +        RussianAnalyzer ra = new RussianAnalyzer(Version.LUCENE_CURRENT);
>         inWords =
>             new InputStreamReader(
>                 new FileInputStream(new File(dataDir,
> "/org/apache/lucene/analysis/ru/testUTF8.txt")),
> @@ -90,7 +91,7 @@
>     public void testDigitsInRussianCharset()
>     {
>         Reader reader = new StringReader("text 1000");
> -        RussianAnalyzer ra = new RussianAnalyzer();
> +        RussianAnalyzer ra = new RussianAnalyzer(Version.LUCENE_CURRENT);
>         TokenStream stream = ra.tokenStream("", reader);
>
>         TermAttribute termText = stream.getAttribute(TermAttribute.class);
> @@ -108,7 +109,7 @@
>     }
>
>     public void testReusableTokenStream() throws Exception {
> -      Analyzer a = new RussianAnalyzer();
> +      Analyzer a = new RussianAnalyzer(Version.LUCENE_CURRENT);
>       assertAnalyzesToReuse(a, "Вместе с тем о силе электромагнитной энергии имели представление еще",
>           new String[] { "вмест", "сил", "электромагнитн", "энерг", "имел", "представлен" });
>       assertAnalyzesToReuse(a, "Но знание это хранилось в тайне",
>
> Modified:
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/shingle/ShingleAnalyzerWrapperTest.java
> URL:
> http://svn.apache.org/viewvc/lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/shingle/ShingleAnalyzerWrapperTest.java?rev=829206&r1=829205&r2=829206&view=diff
>
> ==============================================================================
> ---
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/shingle/ShingleAnalyzerWrapperTest.java
> (original)
> +++
> lucene/java/trunk/contrib/analyzers/common/src/test/org/apache/lucene/analysis/shingle/ShingleAnalyzerWrapperTest.java
> Fri Oct 23 20:25:17 2009
> @@ -42,6 +42,7 @@
>  import org.apache.lucene.search.TermQuery;
>  import org.apache.lucene.store.Directory;
>  import org.apache.lucene.store.RAMDirectory;
> +import org.apache.lucene.util.Version;
>
>  /**
>  * A test class for ShingleAnalyzerWrapper as regards queries and scoring.
> @@ -85,7 +86,7 @@
>   protected ScoreDoc[] queryParsingTest(Analyzer analyzer, String qs)
> throws Exception {
>     searcher = setUpSearcher(analyzer);
>
> -    QueryParser qp = new QueryParser("content", analyzer);
> +    QueryParser qp = new QueryParser(Version.LUCENE_CURRENT, "content",
> analyzer);
>
>     Query q = qp.parse(qs);
>
>
>
>
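
[Editor's note: the pattern the patch applies throughout — threading a `Version matchVersion` through analyzer constructors so behavior can be gated on the release the caller asks to match — can be sketched in miniature. The `SketchAnalyzer` class and `usesNewTokenization` method below are hypothetical stand-ins, not Lucene's real API.]

```java
// Minimal sketch of the Version-gated constructor pattern.
// Names here (SketchAnalyzer, usesNewTokenization) are illustrative only.
enum Version { LUCENE_24, LUCENE_29, LUCENE_CURRENT }

class SketchAnalyzer {
    private final Version matchVersion;

    SketchAnalyzer(Version matchVersion) {
        this.matchVersion = matchVersion;
    }

    // Behavior changes are keyed off the version the caller asked to
    // match, so indexes built with an older release keep analyzing
    // text the way they were built.
    boolean usesNewTokenization() {
        return matchVersion.compareTo(Version.LUCENE_29) >= 0;
    }
}

public class VersionSketch {
    public static void main(String[] args) {
        SketchAnalyzer legacy = new SketchAnalyzer(Version.LUCENE_24);
        SketchAnalyzer current = new SketchAnalyzer(Version.LUCENE_CURRENT);
        System.out.println(legacy.usesNewTokenization());  // false
        System.out.println(current.usesNewTokenization()); // true
    }
}
```

Callers opt into the newest behavior with `Version.LUCENE_CURRENT`, as the tests in this patch do, or pin an older constant to preserve compatibility.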


-- 
Robert Muir
rcmuir@gmail.com
