lucene-dev mailing list archives

From "Uwe Schindler" <...@thetaphi.de>
Subject RE: svn commit: r1311920 - /lucene/dev/branches/lucene3969/modules/analysis/common/src/test/org/apache/lucene/analysis/core/TestRandomChains.java
Date Tue, 10 Apr 2012 19:07:53 GMT
No problem,

I mainly re-added the missing newlines between methods.

The other indentation was less important, but it pushed the code too far to the
right. Why does Emacs change the indentation of unrelated code? My favorite
Notepad++ (or Eclipse, if I am also refactoring) only reformats the block
you are working on! It seems your Emacs sometimes reformats the whole file?

Uwe

-----
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: uwe@thetaphi.de

> -----Original Message-----
> From: Michael McCandless [mailto:lucene@mikemccandless.com]
> Sent: Tuesday, April 10, 2012 9:01 PM
> To: dev@lucene.apache.org
> Subject: Re: svn commit: r1311920 - /lucene/dev/branches/lucene3969/modules/analysis/common/src/test/org/apache/lucene/analysis/core/TestRandomChains.java
> 
> Sorry Uwe :)
> 
> I guess Emacs indents differently from Eclipse!
> 
> Mike McCandless
> 
> http://blog.mikemccandless.com
> 
> On Tue, Apr 10, 2012 at 2:50 PM,  <uschindler@apache.org> wrote:
> > Author: uschindler
> > Date: Tue Apr 10 18:50:54 2012
> > New Revision: 1311920
> >
> > URL: http://svn.apache.org/viewvc?rev=1311920&view=rev
> > Log:
> > LUCENE-3969: revert Whitespace
> >
> > Modified:
> >     lucene/dev/branches/lucene3969/modules/analysis/common/src/test/org/apache/lucene/analysis/core/TestRandomChains.java
> >
> > Modified:
> > lucene/dev/branches/lucene3969/modules/analysis/common/src/test/org/apache/lucene/analysis/core/TestRandomChains.java
> > URL: http://svn.apache.org/viewvc/lucene/dev/branches/lucene3969/modules/analysis/common/src/test/org/apache/lucene/analysis/core/TestRandomChains.java?rev=1311920&r1=1311919&r2=1311920&view=diff
> >
> > ======================================================================
> > --- lucene/dev/branches/lucene3969/modules/analysis/common/src/test/org/apache/lucene/analysis/core/TestRandomChains.java (original)
> > +++ lucene/dev/branches/lucene3969/modules/analysis/common/src/test/org/apache/lucene/analysis/core/TestRandomChains.java Tue Apr 10 18:50:54 2012
> > @@ -105,30 +105,30 @@ public class TestRandomChains extends Ba
> >     // nocommit can we promote some of these to be only
> >     // offsets offenders?
> >     Collections.<Class<?>>addAll(brokenComponents,
> > -                                 // TODO: fix basetokenstreamtestcase not to trip because this one has no CharTermAtt
> > -                                 EmptyTokenizer.class,
> > -                                 // doesn't actual reset itself!
> > -                                 CachingTokenFilter.class,
> > -                                 // doesn't consume whole stream!
> > -                                 LimitTokenCountFilter.class,
> > -                                 // Not broken: we forcefully add this, so we shouldn't
> > -                                 // also randomly pick it:
> > -                                 ValidatingTokenFilter.class,
> > -                                 // NOTE: these by themselves won't cause any 'basic assertions' to fail.
> > -                                 // but see https://issues.apache.org/jira/browse/LUCENE-3920, if any
> > -                                 // tokenfilter that combines words (e.g. shingles) comes after them,
> > -                                 // this will create bogus offsets because their 'offsets go backwards',
> > -                                 // causing shingle or whatever to make a single token with a
> > -                                 // startOffset thats > its endOffset
> > -                                 // (see LUCENE-3738 for a list of other offenders here)
> > -                                 // broken!
> > -                                 NGramTokenizer.class,
> > -                                 // broken!
> > -                                 NGramTokenFilter.class,
> > -                                 // broken!
> > -                                 EdgeNGramTokenizer.class,
> > -                                 // broken!
> > -                                 EdgeNGramTokenFilter.class
> > +      // TODO: fix basetokenstreamtestcase not to trip because this one has no CharTermAtt
> > +      EmptyTokenizer.class,
> > +      // doesn't actual reset itself!
> > +      CachingTokenFilter.class,
> > +      // doesn't consume whole stream!
> > +      LimitTokenCountFilter.class,
> > +      // Not broken: we forcefully add this, so we shouldn't
> > +      // also randomly pick it:
> > +      ValidatingTokenFilter.class,
> > +      // NOTE: these by themselves won't cause any 'basic assertions' to fail.
> > +      // but see https://issues.apache.org/jira/browse/LUCENE-3920, if any
> > +      // tokenfilter that combines words (e.g. shingles) comes after them,
> > +      // this will create bogus offsets because their 'offsets go backwards',
> > +      // causing shingle or whatever to make a single token with a
> > +      // startOffset thats > its endOffset
> > +      // (see LUCENE-3738 for a list of other offenders here)
> > +      // broken!
> > +      NGramTokenizer.class,
> > +      // broken!
> > +      NGramTokenFilter.class,
> > +      // broken!
> > +      EdgeNGramTokenizer.class,
> > +      // broken!
> > +      EdgeNGramTokenFilter.class
> >     );
> >   }
> >
> > @@ -137,18 +137,19 @@ public class TestRandomChains extends Ba
> >   private static final Set<Class<?>> brokenOffsetsComponents = Collections.newSetFromMap(new IdentityHashMap<Class<?>,Boolean>());
> >   static {
> >     Collections.<Class<?>>addAll(brokenOffsetsComponents,
> > -                                 WordDelimiterFilter.class,
> > -                                 TrimFilter.class,
> > -                                 ReversePathHierarchyTokenizer.class,
> > -                                 PathHierarchyTokenizer.class,
> > -                                 HyphenationCompoundWordTokenFilter.class,
> > -                                 DictionaryCompoundWordTokenFilter.class,
> > -                                 // nocommit: corrumpts graphs (offset consistency check):
> > -                                 PositionFilter.class,
> > -                                 // nocommit it seems to mess up offsets!?
> > -                                 WikipediaTokenizer.class
> > -                                 );
> > +      WordDelimiterFilter.class,
> > +      TrimFilter.class,
> > +      ReversePathHierarchyTokenizer.class,
> > +      PathHierarchyTokenizer.class,
> > +      HyphenationCompoundWordTokenFilter.class,
> > +      DictionaryCompoundWordTokenFilter.class,
> > +      // nocommit: corrumpts graphs (offset consistency check):
> > +      PositionFilter.class,
> > +      // nocommit it seems to mess up offsets!?
> > +      WikipediaTokenizer.class
> > +    );
> >   }
> > +
> >   @BeforeClass
> >   public static void beforeClass() throws Exception {
> >     List<Class<?>> analysisClasses = new ArrayList<Class<?>>();
> > @@ -168,6 +169,7 @@ public class TestRandomChains extends Ba
> >       ) {
> >         continue;
> >       }
> > +
> >       for (final Constructor<?> ctor : c.getConstructors()) {
> >         // don't test synthetic or deprecated ctors, they likely have known bugs:
> >         if (ctor.isSynthetic() || ctor.isAnnotationPresent(Deprecated.class)) {
> > @@ -175,21 +177,22 @@ public class TestRandomChains extends Ba
> >         }
> >         if (Tokenizer.class.isAssignableFrom(c)) {
> >           assertTrue(ctor.toGenericString() + " has unsupported parameter types",
> > -                                 allowedTokenizerArgs.containsAll(Arrays.asList(ctor.getParameterTypes())));
> > +            allowedTokenizerArgs.containsAll(Arrays.asList(ctor.getParameterTypes())));
> >           tokenizers.add(castConstructor(Tokenizer.class, ctor));
> >         } else if (TokenFilter.class.isAssignableFrom(c)) {
> >           assertTrue(ctor.toGenericString() + " has unsupported parameter types",
> > -                                 allowedTokenFilterArgs.containsAll(Arrays.asList(ctor.getParameterTypes())));
> > +            allowedTokenFilterArgs.containsAll(Arrays.asList(ctor.getParameterTypes())));
> >           tokenfilters.add(castConstructor(TokenFilter.class, ctor));
> >         } else if (CharStream.class.isAssignableFrom(c)) {
> >           assertTrue(ctor.toGenericString() + " has unsupported parameter types",
> > -                                 allowedCharFilterArgs.containsAll(Arrays.asList(ctor.getParameterTypes())));
> > +            allowedCharFilterArgs.containsAll(Arrays.asList(ctor.getParameterTypes())));
> >           charfilters.add(castConstructor(CharStream.class, ctor));
> >         } else {
> >           fail("Cannot get here");
> >         }
> >       }
> >     }
> > +
> >     final Comparator<Constructor<?>> ctorComp = new
> > Comparator<Constructor<?>>() {
> >       @Override
> >       public int compare(Constructor<?> arg0, Constructor<?> arg1) {
> > @@ -205,12 +208,14 @@ public class TestRandomChains extends Ba
> >       System.out.println("charfilters = " + charfilters);
> >     }
> >   }
> > +
> >   @AfterClass
> >   public static void afterClass() throws Exception {
> >     tokenizers = null;
> >     tokenfilters = null;
> >     charfilters = null;
> >   }
> > +
> >   /** Hack to work around the stupidness of Oracle's strict Java backwards compatibility.
> >    * {@code Class<T>#getConstructors()} should return unmodifiable
> > {@code List<Constructor<T>>} not array! */
> >   @SuppressWarnings("unchecked")
> >
> >
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org For additional
> commands, e-mail: dev-help@lucene.apache.org
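The quoted comments describe filters whose "offsets go backwards": if such a filter feeds a shingle-style combiner, the merged token can end up with a startOffset greater than its endOffset. A minimal, self-contained sketch of that failure mode (the `Token` record and `shingle` method are illustrative stand-ins, not Lucene's actual TokenStream/OffsetAttribute API):

```java
// Illustrative sketch only -- not Lucene's API.
public class OffsetSketch {
  // A token with character offsets into the original input text.
  record Token(String term, int start, int end) {}

  // A shingle-style combiner: the merged token spans from the first
  // token's start offset to the second token's end offset.
  static Token shingle(Token a, Token b) {
    return new Token(a.term() + " " + b.term(), a.start(), b.end());
  }

  public static void main(String[] args) {
    // Well-behaved stream: offsets only move forward.
    Token good = shingle(new Token("foo", 0, 3), new Token("bar", 4, 7));
    System.out.println(good.start() <= good.end());  // prints true

    // A broken upstream filter emitted a later token with smaller
    // offsets ("offsets go backwards"); the shingle is now bogus,
    // with startOffset > endOffset.
    Token bad = shingle(new Token("foo", 5, 8), new Token("bar", 0, 3));
    System.out.println(bad.start() <= bad.end());    // prints false
  }
}
```

This is the interaction the diff's comments point at when listing the NGram and EdgeNGram components as offenders alongside LUCENE-3920 and LUCENE-3738.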



