lucenenet-commits mailing list archives

From ccurr...@apache.org
Subject [Lucene.Net] svn commit: r1201243 [1/8] - in /incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk: src/core/ src/core/Analysis/ src/core/Analysis/Standard/ src/core/Analysis/Tokenattributes/ src/core/Document/ src/core/Index/ src/core/QueryParser/ src/core/Search/...
Date Sat, 12 Nov 2011 10:44:27 GMT
Author: ccurrens
Date: Sat Nov 12 10:44:21 2011
New Revision: 1201243

URL: http://svn.apache.org/viewvc?rev=1201243&view=rev
Log:
Another large batch of changes ported.  Still in an unstable state, with things likely to change.

Added:
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/Standard/READ_BEFORE_REGENERATING.txt   (with props)
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/FileDiffs.txt
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/AppSettings.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/AttributeImplItem.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/BitSetSupport.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/BuildType.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/CRC32.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/Character.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/CloseableThreadLocalProfiler.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/CollectionsHelper.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/Compare.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/Cryptography.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/Deflater.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/Double.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/EquatableList.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/FileSupport.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/GeneralKeyedCollection.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/HashMap.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/IChecksum.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/IThreadRunnable.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/Inflater.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/Number.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/OS.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/SharpZipLib.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/Single.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/TextSupport.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/ThreadClass.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/WeakDictionary.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/WeakHashTable.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Support/WeakReference.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Util/IdentityDictionary.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/demo/Demo.Common/Demo.Common.csproj.user
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/UpdatedTests.txt
Removed:
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/CharacterCache.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/SinkTokenizer.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/TeeTokenFilter.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/TokenWrapper.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/SupportClass.cs
Modified:
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/ASCIIFoldingFilter.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/BaseCharFilter.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/CachingTokenFilter.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/CharArraySet.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/CharFilter.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/CharTokenizer.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/ISOLatin1AccentFilter.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/KeywordTokenizer.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/LengthFilter.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/LowerCaseFilter.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/MappingCharFilter.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/NormalizeCharMap.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/NumericTokenStream.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/PerFieldAnalyzerWrapper.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/PorterStemFilter.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/SimpleAnalyzer.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/Standard/StandardAnalyzer.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/Standard/StandardFilter.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/Standard/StandardTokenizer.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/Standard/StandardTokenizerImpl.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/Standard/StandardTokenizerImpl.jflex
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/StopAnalyzer.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/StopFilter.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/TeeSinkTokenFilter.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/Token.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/TokenStream.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/Tokenattributes/TermAttributeImpl.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/Tokenizer.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/WordlistLoader.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/ChangeNotes.txt
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Document/CompressionTools.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Document/DateField.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/BufferedDeletes.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/ByteBlockPool.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/ByteSliceWriter.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/CheckIndex.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/CompoundFileReader.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/ConcurrentMergeScheduler.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/DefaultSkipListReader.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/DirectoryReader.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/DocConsumer.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/DocFieldConsumers.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/DocFieldProcessor.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/DocFieldProcessorPerThread.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/DocInverter.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/DocInverterPerField.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/DocInverterPerThread.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/DocumentsWriter.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/FieldInfos.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/FieldsReader.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/FreqProxFieldMergeState.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/FreqProxTermsWriterPerField.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/IndexFileDeleter.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/IndexFileNames.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/IndexWriter.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/InvertedDocConsumer.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/MergeDocIDRemapper.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/MultiReader.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/NormsWriter.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/ReusableStringReader.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/SegmentInfo.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/SegmentInfos.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/SegmentReader.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/SegmentTermDocs.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/SegmentTermPositions.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/TermBuffer.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/TermInfosReader.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/TermVectorsTermsWriterPerField.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/TermsHash.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Index/TermsHashPerField.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Lucene.Net.csproj
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/QueryParser/ParseException.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/QueryParser/QueryParser.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Search/BooleanQuery.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Search/CachingWrapperFilter.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Search/DisjunctionMaxQuery.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Search/FieldCache.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Search/FieldCacheImpl.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Search/FieldCacheRangeFilter.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Search/FieldComparator.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Search/FilterManager.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Search/FuzzyQuery.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Search/MultiPhraseQuery.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Search/MultiSearcher.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Search/ParallelMultiSearcher.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Search/PhraseQuery.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Search/QueryTermVector.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Search/SloppyPhraseScorer.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Search/Sort.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Search/SortField.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Search/Spans/SpanFirstQuery.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Search/Spans/SpanNearQuery.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Search/Spans/SpanNotQuery.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Search/Spans/SpanOrQuery.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Search/TimeLimitingCollector.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Store/CheckSumIndexInput.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Store/CheckSumIndexOutput.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Store/FSDirectory.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Store/IndexOutput.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Store/Lock.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Store/MMapDirectory.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Store/NativeFSLockFactory.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Store/NoLockFactory.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Store/RAMDirectory.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Util/AttributeSource.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Util/AverageGuessMemoryModel.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Util/BitUtil.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Util/BitVector.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Util/CloseableThreadLocal.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Util/Constants.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Util/DocIdBitSet.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Util/IndexableBinaryStringTools.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Util/NumericUtils.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Util/OpenBitSet.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Util/PriorityQueue.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Util/RamUsageEstimator.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Util/ReaderUtil.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Util/ScorerDocQueue.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Util/SortedVIntList.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/demo/Demo.Common/HTML/HTMLParser.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/demo/Demo.Common/HTML/ParseException.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/demo/Demo.Common/HTML/ParserThread.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestAtomicUpdate.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestBackwardsCompatibility.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestCompoundFile.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestDoc.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestFieldsReader.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestIndexFileDeleter.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestIndexModifier.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestIndexReader.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestIndexReaderReopen.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestIndexWriter.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestIndexWriterDelete.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestIndexWriterExceptions.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestIndexWriterLockRelease.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestIndexWriterReader.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestNRTReaderWithThreads.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestParallelReader.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestPayloads.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestPositionBasedTermVectorMapper.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestSnapshotDeletionPolicy.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestStressIndexing.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestStressIndexing2.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestThreadedOptimize.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestTransactionRollback.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Index/TestTransactions.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Lucene.Net.Test.csproj
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/QueryParser/TestQueryParser.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Search/CheckHits.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Search/Function/FunctionTestSetup.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Search/Spans/TestPayloadSpans.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Search/TestDocIdSet.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Search/TestMultiSearcher.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Search/TestMultiThreadTermVectors.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Search/TestScorerPerf.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Search/TestStressSort.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Search/TestThreadSafe.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Search/TestTimeLimitedCollector.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Search/TestTimeLimitingCollector.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Store/TestBufferedIndexInput.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Store/TestDirectory.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Store/TestFileSwitchDirectory.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Store/TestLockFactory.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Store/TestRAMDirectory.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Store/TestWindowsMMap.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/TestSupportClass.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Util/LuceneTestCase.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Util/Paths.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Util/TestNumericUtils.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Util/TestOpenBitSet.cs
    incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/test/core/Util/_TestUtil.cs

Modified: incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/ASCIIFoldingFilter.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/ASCIIFoldingFilter.cs?rev=1201243&r1=1201242&r2=1201243&view=diff
==============================================================================
--- incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/ASCIIFoldingFilter.cs (original)
+++ incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/ASCIIFoldingFilter.cs Sat Nov 12 10:44:21 2011
@@ -60,7 +60,7 @@ namespace Lucene.Net.Analysis
 	{
 		public ASCIIFoldingFilter(TokenStream input):base(input)
 		{
-			termAtt = AddAttribute(typeof(TermAttribute));
+            termAtt = AddAttribute<TermAttribute>();
 		}
 		
 		private char[] output = new char[512];

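The ASCIIFoldingFilter hunk above swaps the reflection-style `AddAttribute(typeof(TermAttribute))` for the generic `AddAttribute<TermAttribute>()`, which returns a typed instance and needs no caller-side cast. As an illustrative sketch only (hypothetical names, not Lucene's actual AttributeSource), the same idea in Java is a class-keyed registry with a generic accessor:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch, not Lucene's implementation: a class-keyed registry
// whose generic accessor returns a typed instance, the same motivation as
// the C# move from AddAttribute(typeof(T)) to AddAttribute<T>().
class AttributeRegistry {
    private final Map<Class<?>, Object> attributes = new HashMap<>();

    <T> T addAttribute(Class<T> clazz) {
        Object existing = attributes.get(clazz);
        if (existing == null) {
            try {
                // Lazily create one shared instance per attribute class.
                existing = clazz.getDeclaredConstructor().newInstance();
            } catch (ReflectiveOperationException e) {
                throw new IllegalArgumentException(e);
            }
            attributes.put(clazz, existing);
        }
        return clazz.cast(existing); // typed return: no cast at the call site
    }
}
```

Repeated calls with the same class hand back the same instance, so filters in a chain share attribute state.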
Modified: incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/BaseCharFilter.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/BaseCharFilter.cs?rev=1201243&r1=1201242&r2=1201243&view=diff
==============================================================================
--- incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/BaseCharFilter.cs (original)
+++ incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/BaseCharFilter.cs Sat Nov 12 10:44:21 2011
@@ -16,6 +16,7 @@
  */
 
 using System;
+using Lucene.Net.Support;
 using Lucene.Net.Util;
 
 namespace Lucene.Net.Analysis
@@ -56,7 +57,7 @@ namespace Lucene.Net.Analysis
 
             while (hi >= lo)
             {
-                mid = SupportClass.Number.URShift(lo + hi, 1);
+                mid = Number.URShift(lo + hi, 1);
                 if (currentOff < offsets[mid])
                     hi = mid - 1;
                 else if (currentOff > offsets[mid])

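The BaseCharFilter hunk only moves `URShift` from the old monolithic `SupportClass` into the new `Lucene.Net.Support` namespace. For context, the helper exists because Java has an unsigned right shift (`>>>`) and C# of that era did not; a sketch of the behavior the port has to reproduce (the branch-based emulation shown here is the common formulation, not necessarily the exact Lucene.Net source):

```java
// Sketch of what a URShift helper must compute: Java's >>> operator,
// i.e. a right shift that zero-fills the high bits instead of copying
// the sign bit.
public class URShiftDemo {
    static int urShift(int number, int bits) {
        if (number >= 0)
            return number >> bits;               // sign bit clear: plain shift
        return (number >> bits) + (2 << ~bits);  // cancel the copied sign bits
    }

    public static void main(String[] args) {
        int lo = 2_000_000_000, hi = 2_100_000_000;
        // lo + hi overflows a signed int, yet the unsigned midpoint is right,
        // which is why the binary search above uses URShift(lo + hi, 1):
        System.out.println((lo + hi) >>> 1);      // 2050000000
        System.out.println(urShift(lo + hi, 1));  // 2050000000
        System.out.println(urShift(-2, 1));       // 2147483647
    }
}
```

This is the classic overflow-safe midpoint trick: `(lo + hi) >>> 1` stays correct even when `lo + hi` wraps negative.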
Modified: incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/CachingTokenFilter.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/CachingTokenFilter.cs?rev=1201243&r1=1201242&r2=1201243&view=diff
==============================================================================
--- incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/CachingTokenFilter.cs (original)
+++ incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/CachingTokenFilter.cs Sat Nov 12 10:44:21 2011
@@ -16,8 +16,7 @@
  */
 
 using System;
-
-using AttributeSource = Lucene.Net.Util.AttributeSource;
+using Lucene.Net.Util;
 
 namespace Lucene.Net.Analysis
 {
@@ -30,40 +29,22 @@ namespace Lucene.Net.Analysis
 	/// <see cref="TokenStream.Reset()" />, which repositions the
 	/// stream to the first Token. 
 	/// </summary>
-	public class CachingTokenFilter:TokenFilter
+	public sealed class CachingTokenFilter : TokenFilter
 	{
-		private System.Collections.IList cache = null;
-		private System.Collections.IEnumerator iterator = null;
+        private System.Collections.Generic.LinkedList<AttributeSource.State> cache = null;
+		private System.Collections.Generic.IEnumerator<State> iterator = null;
 		private AttributeSource.State finalState;
 		
 		public CachingTokenFilter(TokenStream input):base(input)
 		{
 		}
-		
-		/// <deprecated> Will be removed in Lucene 3.0. This method is final, as it should
-		/// not be overridden. Delegates to the backwards compatibility layer. 
-		/// </deprecated>
-        [Obsolete("Will be removed in Lucene 3.0. This method is final, as it should not be overridden. Delegates to the backwards compatibility layer. ")]
-		public override Token Next(Token reusableToken)
-		{
-			return base.Next(reusableToken);
-		}
-		
-		/// <deprecated> Will be removed in Lucene 3.0. This method is final, as it should
-		/// not be overridden. Delegates to the backwards compatibility layer. 
-		/// </deprecated>
-        [Obsolete("Will be removed in Lucene 3.0. This method is final, as it should not be overridden. Delegates to the backwards compatibility layer. ")]
-		public override Token Next()
-		{
-			return base.Next();
-		}
-		
+
 		public override bool IncrementToken()
 		{
 			if (cache == null)
 			{
 				// fill cache lazily
-				cache = new System.Collections.ArrayList();
+				cache = new System.Collections.Generic.LinkedList<State>();
 				FillCache();
 				iterator = cache.GetEnumerator();
 			}
@@ -74,7 +55,7 @@ namespace Lucene.Net.Analysis
 				return false;
 			}
 			// Since the TokenFilter can be reset, the tokens need to be preserved as immutable.
-			RestoreState((AttributeSource.State) iterator.Current);
+			RestoreState(iterator.Current);
 			return true;
 		}
 		
@@ -98,7 +79,7 @@ namespace Lucene.Net.Analysis
 		{
 			while (input.IncrementToken())
 			{
-				cache.Add(CaptureState());
+				cache.AddLast(CaptureState());
 			}
 			// capture final state
 			input.End();

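The CachingTokenFilter hunk replaces the untyped `ArrayList`/`IEnumerator` pair with a generic `LinkedList<AttributeSource.State>` and a typed enumerator, dropping the `(AttributeSource.State)` cast in `IncrementToken`. The caching pattern itself, reduced to a minimal Java sketch (hypothetical names, not Lucene's API): fill a typed list lazily on the first pass, then replay it on every later pass.

```java
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

// Minimal sketch of the lazy cache-and-replay pattern in the hunk above
// (hypothetical class, not Lucene's API): elements from an upstream
// iterator are captured once into a typed LinkedList, then replayed.
class CachingReplay<T> {
    private final Iterator<T> input;
    private List<T> cache;    // filled lazily on the first pass
    private Iterator<T> it;

    CachingReplay(Iterator<T> input) { this.input = input; }

    T next() {
        if (cache == null) {
            cache = new LinkedList<>();
            while (input.hasNext())
                cache.add(input.next());  // fill cache lazily
            it = cache.iterator();
        }
        if (!it.hasNext())
            return null;                  // exhausted, like returning false
        return it.next();                 // typed: no (State)-style cast
    }

    void reset() {                        // reposition to the first element
        if (cache != null) it = cache.iterator();
    }
}
```

After `reset()`, the cached elements stream out again without touching the upstream source, mirroring how the filter can be reset and replayed.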
Modified: incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/CharArraySet.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/CharArraySet.cs?rev=1201243&r1=1201242&r2=1201243&view=diff
==============================================================================
--- incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/CharArraySet.cs (original)
+++ incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/CharArraySet.cs Sat Nov 12 10:44:21 2011
@@ -16,455 +16,502 @@
  */
 
 using System;
+using System.Collections;
+using System.Linq;
+using System.Collections.Generic;
+
 namespace Lucene.Net.Analysis
 {
-	
-	
-	/// <summary> A simple class that stores Strings as char[]'s in a
-	/// hash table.  Note that this is not a general purpose
-	/// class.  For example, it cannot remove items from the
-	/// set, nor does it resize its hash table to be smaller,
-	/// etc.  It is designed to be quick to test if a char[]
-	/// is in the set without the necessity of converting it
-	/// to a String first.
-	/// </summary>
-	
-	public class CharArraySet:System.Collections.Hashtable
-	{
-		public override int Count
-		{
-			get
-			{
-				return count;
-			}
-			
-		}
-		private const int INIT_SIZE = 8;
-		private char[][] entries;
-		private int count;
-		private bool ignoreCase;
-		
-		/// <summary>Create set with enough capacity to hold startSize
-		/// terms 
-		/// </summary>
-		public CharArraySet(int startSize, bool ignoreCase)
-		{
-			this.ignoreCase = ignoreCase;
-			int size = INIT_SIZE;
-			while (startSize + (startSize >> 2) > size)
-				size <<= 1;
-			entries = new char[size][];
-		}
-		
-		/// <summary>Create set from a Collection of char[] or String </summary>
-		public CharArraySet(System.Collections.ICollection c, bool ignoreCase):this(c.Count, ignoreCase)
-		{
-            System.Collections.IEnumerator e = c is CharArraySet ? ((CharArraySet)c).GetEnumerator() : c.GetEnumerator();
-			while (e.MoveNext())
-			{
-				Add(e.Current);
-			}
-		}
-		/// <summary>Create set from entries </summary>
-		private CharArraySet(char[][] entries, bool ignoreCase, int count)
-		{
-			this.entries = entries;
-			this.ignoreCase = ignoreCase;
-			this.count = count;
-		}
-		
-		/// <summary>true if the <c>len</c> chars of <c>text</c> starting at <c>off</c>
-		/// are in the set 
-		/// </summary>
-		public virtual bool Contains(char[] text, int off, int len)
-		{
-			return entries[GetSlot(text, off, len)] != null;
-		}
-		
-		/// <summary>true if the <c>System.String</c> is in the set </summary>
-		public virtual bool Contains(System.String cs)
-		{
-			return entries[GetSlot(cs)] != null;
-		}
-		
-		private int GetSlot(char[] text, int off, int len)
-		{
-			int code = GetHashCode(text, off, len);
-			int pos = code & (entries.Length - 1);
-			char[] text2 = entries[pos];
-			if (text2 != null && !Equals(text, off, len, text2))
-			{
-				int inc = ((code >> 8) + code) | 1;
-				do 
-				{
-					code += inc;
-					pos = code & (entries.Length - 1);
-					text2 = entries[pos];
-				}
-				while (text2 != null && !Equals(text, off, len, text2));
-			}
-			return pos;
-		}
-		
-		/// <summary>Returns true if the String is in the set </summary>
-		private int GetSlot(System.String text)
-		{
-			int code = GetHashCode(text);
-			int pos = code & (entries.Length - 1);
-			char[] text2 = entries[pos];
-			if (text2 != null && !Equals(text, text2))
-			{
-				int inc = ((code >> 8) + code) | 1;
-				do 
-				{
-					code += inc;
-					pos = code & (entries.Length - 1);
-					text2 = entries[pos];
-				}
-				while (text2 != null && !Equals(text, text2));
-			}
-			return pos;
-		}
-		
-		/// <summary>Add this String into the set </summary>
-		public virtual bool Add(System.String text)
-		{
-			return Add(text.ToCharArray());
-		}
-		
-		/// <summary>Add this char[] directly to the set.
-		/// If ignoreCase is true for this Set, the text array will be directly modified.
-		/// The user should never modify this text array after calling this method.
-		/// </summary>
-		public virtual bool Add(char[] text)
-		{
-			if (ignoreCase)
-				for (int i = 0; i < text.Length; i++)
-					text[i] = System.Char.ToLower(text[i]);
-			int slot = GetSlot(text, 0, text.Length);
-			if (entries[slot] != null)
-				return false;
-			entries[slot] = text;
-			count++;
-			
-			if (count + (count >> 2) > entries.Length)
-			{
-				Rehash();
-			}
-			
-			return true;
-		}
-		
-		private bool Equals(char[] text1, int off, int len, char[] text2)
-		{
-			if (len != text2.Length)
-				return false;
-			if (ignoreCase)
-			{
-				for (int i = 0; i < len; i++)
-				{
-					if (System.Char.ToLower(text1[off + i]) != text2[i])
-						return false;
-				}
-			}
-			else
-			{
-				for (int i = 0; i < len; i++)
-				{
-					if (text1[off + i] != text2[i])
-						return false;
-				}
-			}
-			return true;
-		}
-		
-		private bool Equals(System.String text1, char[] text2)
-		{
-			int len = text1.Length;
-			if (len != text2.Length)
-				return false;
-			if (ignoreCase)
-			{
-				for (int i = 0; i < len; i++)
-				{
-					if (System.Char.ToLower(text1[i]) != text2[i])
-						return false;
-				}
-			}
-			else
-			{
-				for (int i = 0; i < len; i++)
-				{
-					if (text1[i] != text2[i])
-						return false;
-				}
-			}
-			return true;
-		}
-		
-		private void  Rehash()
-		{
-			int newSize = 2 * entries.Length;
-			char[][] oldEntries = entries;
-			entries = new char[newSize][];
-			
-			for (int i = 0; i < oldEntries.Length; i++)
-			{
-				char[] text = oldEntries[i];
-				if (text != null)
-				{
-					// todo: could be faster... no need to compare strings on collision
-					entries[GetSlot(text, 0, text.Length)] = text;
-				}
-			}
-		}
+    /// <summary> A simple class that stores Strings as char[]'s in a
+    /// hash table.  Note that this is not a general purpose
+    /// class.  For example, it cannot remove items from the
+    /// set, nor does it resize its hash table to be smaller,
+    /// etc.  It is designed to be quick to test if a char[]
+    /// is in the set without the necessity of converting it
+    /// to a String first.
+    /// <p/>
+    /// <em>Please note:</em> This class implements <see cref="System.Collections.Generic.ISet{T}"/> but
+    /// does not behave like it should in all cases. The generic type is
+    /// <see cref="System.Collections.Generic.ICollection{T}"/>, because you can add any object to it,
+    /// that has a string representation. The add methods will use
+    /// <see cref="object.ToString()"/> and store the result using a <see cref="char"/>
+    /// buffer. The same behaviour have the <see cref="Contains(object)"/> methods.
+    /// The <see cref="GetEnumerator"/> method returns an <see cref="string"/> IEnumerable.
+    /// For type safety also {@link #stringIterator()} is provided.
+    /// </summary>
+    // TODO: java uses wildcards, .net doesn't have this, easiest way is to 
+    //       make the entire class generic.  Ultimately, though, since this
+    //       works with strings, I can't think of a reason not to just declare
+    //       this as an ISet<string>.
+    public class CharArraySet : ISet<string>
+    {
+        bool _ReadOnly = false;
+        const int INIT_SIZE = 8;
+        char[][] _Entries;
+        int _Count;
+        bool _IgnoreCase;
+        public static readonly CharArraySet EMPTY_SET = CharArraySet.UnmodifiableSet(new CharArraySet(0, false));
+
+        private void Init(int startSize, bool ignoreCase)
+        {
+            this._IgnoreCase = ignoreCase;
+            int size = INIT_SIZE;
+            while (startSize + (startSize >> 2) > size)
+                size <<= 1;
+            _Entries = new char[size][];
+        }
+
+        /// <summary>Create set with enough capacity to hold startSize
+        /// terms 
+        /// </summary>
+        public CharArraySet(int startSize, bool ignoreCase)
+        {
+            Init(startSize, ignoreCase);
+        }
+
+        /// <summary>Create set from a Collection of char[] or String </summary>
+        public CharArraySet(IEnumerable<object> c, bool ignoreCase)
+        {
+            Init(c.Count(), ignoreCase);
+            AddItems(c);
+        }
+
+        private void AddItems<T>(IEnumerable<T> items)
+        {
+            foreach(T item in items)
+            {
+                Add(item.ToString());
+            }
+        }
+
+        /// <summary>Create set from entries </summary>
+        private CharArraySet(char[][] entries, bool ignoreCase, int count)
+        {
+            this._Entries = entries;
+            this._IgnoreCase = ignoreCase;
+            this._Count = count;
+        }
+
+        /// <summary>true if the <c>len</c> chars of <c>text</c> starting at <c>off</c>
+        /// are in the set 
+        /// </summary>
+        public virtual bool Contains(char[] text, int off, int len)
+        {
+            return _Entries[GetSlot(text, off, len)] != null;
+        }
+
+        public virtual bool Contains(string text)
+        {
+            return _Entries[GetSlot(text)] != null;
+        }
+
+
+        private int GetSlot(char[] text, int off, int len)
+        {
+            int code = GetHashCode(text, off, len);
+            int pos = code & (_Entries.Length - 1);
+            char[] text2 = _Entries[pos];
+            if (text2 != null && !Equals(text, off, len, text2))
+            {
+                int inc = ((code >> 8) + code) | 1;
+                do
+                {
+                    code += inc;
+                    pos = code & (_Entries.Length - 1);
+                    text2 = _Entries[pos];
+                }
+                while (text2 != null && !Equals(text, off, len, text2));
+            }
+            return pos;
+        }
+
+        /// <summary>Returns the hash slot for the given String </summary>
+        private int GetSlot(string text)
+        {
+            int code = GetHashCode(text);
+            int pos = code & (_Entries.Length - 1);
+            char[] text2 = _Entries[pos];
+            if (text2 != null && !Equals(text, text2))
+            {
+                int inc = ((code >> 8) + code) | 1;
+                do
+                {
+                    code += inc;
+                    pos = code & (_Entries.Length - 1);
+                    text2 = _Entries[pos];
+                }
+                while (text2 != null && !Equals(text, text2));
+            }
+            return pos;
+        }
+
+        public bool Add(string text)
+        {
+            if (_ReadOnly) throw new NotSupportedException();
+            return Add(text.ToCharArray());
+        }
+
+        /// <summary>Add this char[] directly to the set.
+        /// If ignoreCase is true for this Set, the text array will be directly modified.
+        /// The user should never modify this text array after calling this method.
+        /// </summary>
+        public bool Add(char[] text)
+        {
+            if (_ReadOnly) throw new NotSupportedException();
+
+            if (_IgnoreCase)
+                for (int i = 0; i < text.Length; i++)
+                    text[i] = System.Char.ToLower(text[i]);
+            int slot = GetSlot(text, 0, text.Length);
+            if (_Entries[slot] != null)
+                return false;
+            _Entries[slot] = text;
+            _Count++;
+
+            if (_Count + (_Count >> 2) > _Entries.Length)
+            {
+                Rehash();
+            }
+
+            return true;
+        }
+
+        private bool Equals(char[] text1, int off, int len, char[] text2)
+        {
+            if (len != text2.Length)
+                return false;
+            if (_IgnoreCase)
+            {
+                for (int i = 0; i < len; i++)
+                {
+                    if (char.ToLower(text1[off + i]) != text2[i])
+                        return false;
+                }
+            }
+            else
+            {
+                for (int i = 0; i < len; i++)
+                {
+                    if (text1[off + i] != text2[i])
+                        return false;
+                }
+            }
+            return true;
+        }
+
+        private bool Equals(string text1, char[] text2)
+        {
+            int len = text1.Length;
+            if (len != text2.Length)
+                return false;
+            if (_IgnoreCase)
+            {
+                for (int i = 0; i < len; i++)
+                {
+                    if (char.ToLower(text1[i]) != text2[i])
+                        return false;
+                }
+            }
+            else
+            {
+                for (int i = 0; i < len; i++)
+                {
+                    if (text1[i] != text2[i])
+                        return false;
+                }
+            }
+            return true;
+        }
+
+        private void Rehash()
+        {
+            int newSize = 2 * _Entries.Length;
+            char[][] oldEntries = _Entries;
+            _Entries = new char[newSize][];
+
+            for (int i = 0; i < oldEntries.Length; i++)
+            {
+                char[] text = oldEntries[i];
+                if (text != null)
+                {
+                    // todo: could be faster... no need to compare strings on collision
+                    _Entries[GetSlot(text, 0, text.Length)] = text;
+                }
+            }
+        }
 
         private int GetHashCode(char[] text, int offset, int len)
-		{
-			int code = 0;
-			int stop = offset + len;
-			if (ignoreCase)
-			{
-				for (int i = offset; i < stop; i++)
-				{
-					code = code * 31 + System.Char.ToLower(text[i]);
-				}
-			}
-			else
-			{
-				for (int i = offset; i < stop; i++)
-				{
-					code = code * 31 + text[i];
-				}
-			}
-			return code;
-		}
-		
-		private int GetHashCode(System.String text)
-		{
-			int code = 0;
-			int len = text.Length;
-			if (ignoreCase)
-			{
-				for (int i = 0; i < len; i++)
-				{
-					code = code * 31 + System.Char.ToLower(text[i]);
-				}
-			}
-			else
-			{
-				for (int i = 0; i < len; i++)
-				{
-					code = code * 31 + text[i];
-				}
-			}
-			return code;
-		}
-		
-		public virtual int Size()
-		{
-			return count;
-		}
-		
-		public virtual bool IsEmpty()
-		{
-			return count == 0;
-		}
-		
-		public override bool Contains(System.Object o)
-		{
-			if (o is char[])
-			{
-				char[] text = (char[]) o;
-				return Contains(text, 0, text.Length);
-			}
-			return Contains(o.ToString());
-		}
-
-        //LUCENENET-414 (https://issues.apache.org/jira/browse/LUCENENET-414)
-        public virtual bool Add(object key, object value)
-        {
-            return Add(key);
-        }
-
-		public virtual bool Add(System.Object o)
-		{
-			if (o is char[])
-			{
-				return Add((char[]) o);
-			}
-
-            if (o is System.Collections.Hashtable)
-            {
-                foreach (string word in ((System.Collections.Hashtable)o).Keys)
-                {
-                    Add(word);
-                }
-                return true;
-            }
-
-			return Add(o.ToString());
-		}
-		
-		/// <summary> Returns an unmodifiable <see cref="CharArraySet" />. This allows to provide
-		/// unmodifiable views of internal sets for "read-only" use.
-		/// </summary>
-        /// <param name="set_Renamed">a set for which the unmodifiable set is returned.
-		/// </param>
-		/// <returns> an new unmodifiable <see cref="CharArraySet" />.
-		/// </returns>
-        /// <exception cref="NullReferenceException">NullReferenceException thrown 
-        /// if the given set is <c>null</c>.</exception>
-		public static CharArraySet UnmodifiableSet(CharArraySet set_Renamed)
-		{
-			if (set_Renamed == null)
-				throw new System.NullReferenceException("Given set is null");
-			/*
-			* Instead of delegating calls to the given set copy the low-level values to
-			* the unmodifiable Subclass
-			*/
-			return new UnmodifiableCharArraySet(set_Renamed.entries, set_Renamed.ignoreCase, set_Renamed.count);
-		}
-
-        /// <summary>The Iterator&lt;String&gt; for this set.  Strings are constructed on the fly, so
-		/// use <c>nextCharArray</c> for more efficient access. 
-		/// </summary>
-		public class CharArraySetIterator : System.Collections.IEnumerator
-		{
-			private void  InitBlock(CharArraySet enclosingInstance)
-			{
-				this.enclosingInstance = enclosingInstance;
-			}
-			private CharArraySet enclosingInstance;
-            /// <summary>Returns the next String, as a Set&lt;String&gt; would...
-			/// use nextCharArray() for better efficiency. 
-			/// </summary>
-			public virtual System.Object Current
-			{
-				get
-				{
-					return new System.String(NextCharArray());
-				}
-				
-			}
-			public CharArraySet Enclosing_Instance
-			{
-				get
-				{
-					return enclosingInstance;
-				}
-				
-			}
-			internal int pos = - 1;
-			internal char[] next_Renamed_Field;
-			internal CharArraySetIterator(CharArraySet enclosingInstance)
-			{
-				InitBlock(enclosingInstance);
-				GoNext();
-			}
-			
-			private void  GoNext()
-			{
-				next_Renamed_Field = null;
-				pos++;
-				while (pos < Enclosing_Instance.entries.Length && (next_Renamed_Field = Enclosing_Instance.entries[pos]) == null)
-					pos++;
-			}
-			
-			public virtual bool MoveNext()
-			{
-				return next_Renamed_Field != null;
-			}
-			
-			/// <summary>do not modify the returned char[] </summary>
-			public virtual char[] NextCharArray()
-			{
-				char[] ret = next_Renamed_Field;
-				GoNext();
-				return ret;
-			}
-			
-			public virtual void  Remove()
-			{
-				throw new System.NotSupportedException();
-			}
-			
-			virtual public void  Reset()
-			{
-                System.Diagnostics.Debug.Fail("Port issue:", "Need to implement this call, CharArraySetIterator.Reset()");  // {{Aroush-2.9
-			}
-		}
-		
-		
-		public new System.Collections.IEnumerator GetEnumerator()
-		{
-			return new CharArraySetIterator(this);
-		}
-		
-		/// <summary> Efficient unmodifiable <see cref="CharArraySet" />. This implementation does not
-		/// delegate calls to a given <see cref="CharArraySet" /> like
-		/// Collections.UnmodifiableSet(java.util.Set) does. Instead is passes
-		/// the internal representation of a <see cref="CharArraySet" /> to a super
-		/// constructor and overrides all mutators. 
-		/// </summary>
-		private sealed class UnmodifiableCharArraySet:CharArraySet
-		{
-			
-			internal UnmodifiableCharArraySet(char[][] entries, bool ignoreCase, int count):base(entries, ignoreCase, count)
-			{
-			}
-			
-			public override bool Add(System.Object o)
-			{
-				throw new System.NotSupportedException();
-			}
-			
-			public override bool AddAll(System.Collections.ICollection coll)
-			{
-				throw new System.NotSupportedException();
-			}
-			
-			public override bool Add(char[] text)
-			{
-				throw new System.NotSupportedException();
-			}
-			
-			public override bool Add(System.String text)
-			{
-				throw new System.NotSupportedException();
-			}
-		}
+        {
+            int code = 0;
+            int stop = offset + len;
+            if (_IgnoreCase)
+            {
+                for (int i = offset; i < stop; i++)
+                {
+                    code = code * 31 + char.ToLower(text[i]);
+                }
+            }
+            else
+            {
+                for (int i = offset; i < stop; i++)
+                {
+                    code = code * 31 + text[i];
+                }
+            }
+            return code;
+        }
+
+        private int GetHashCode(string text)
+        {
+            int code = 0;
+            int len = text.Length;
+            if (_IgnoreCase)
+            {
+                for (int i = 0; i < len; i++)
+                {
+                    code = code * 31 + char.ToLower(text[i]);
+                }
+            }
+            else
+            {
+                for (int i = 0; i < len; i++)
+                {
+                    code = code * 31 + text[i];
+                }
+            }
+            return code;
+        }
+
+        public int Count
+        {
+            get { return _Count; }
+        }
+
+        public bool IsEmpty
+        {
+            get { return _Count == 0; }
+        }
+
+        public bool Contains(object item)
+        {
+            if (item is char[])
+            {
+                char[] text = (char[])item;
+                return Contains(text, 0, text.Length);
+            }
+            return Contains(item.ToString());
+        }
+
+        public bool Add(object item)
+        {
+            return Add(item.ToString());
+        }
+
+        void ICollection<string>.Add(string item)
+        {
+            this.Add(item);
+        }
+
+        /// <summary>
+        /// Returns an unmodifiable <see cref="CharArraySet"/>.  This allows providing
+        /// unmodifiable views of internal sets for "read-only" use.
+        /// </summary>
+        /// <param name="set">The set for which an unmodifiable view is returned.</param>
+        /// <returns>A new unmodifiable <see cref="CharArraySet"/>.</returns>
+        /// <exception cref="ArgumentNullException">If the given set is <c>null</c>.</exception>
+        public static CharArraySet UnmodifiableSet(CharArraySet set)
+        {
+            if (set == null)
+                throw new ArgumentNullException("set", "Given set is null!");
+            if (set == EMPTY_SET)
+                return EMPTY_SET;
+            if (set._ReadOnly)
+                return set;
+
+            CharArraySet newSet = new CharArraySet(set._Entries, set._IgnoreCase, set.Count) {IsReadOnly = true};
+            return newSet;
+        }
+
+        /// <summary>
+        /// Returns a copy of the given set as a <see cref="CharArraySet"/>.  If the given set
+        /// is a <see cref="CharArraySet"/> the ignoreCase property will be preserved.
+        /// </summary>
+        /// <param name="set">A set to copy</param>
+        /// <returns>a copy of the given set as a <see cref="CharArraySet"/>.  If the given set
+        /// is a <see cref="CharArraySet"/> the ignoreCase property will be preserved.</returns>
+        public static CharArraySet Copy<T>(ISet<T> set)
+        {
+            if (set == null)
+                throw new ArgumentNullException("set", "Given set is null!");
+            if (set == EMPTY_SET)
+                return EMPTY_SET;
+            bool ignoreCase = set is CharArraySet ? ((CharArraySet)set)._IgnoreCase : false;
+            var arrSet = new CharArraySet(set.Count, ignoreCase);
+            arrSet.AddItems(set);
+            return arrSet;
+        }
+
+        public void Clear()
+        {
+            if (_ReadOnly) throw new NotSupportedException();
+            _Entries = new char[INIT_SIZE][]; // keep the table allocated so the set remains usable after Clear
+            _Count = 0;
+        }
+
+        public bool IsReadOnly
+        {
+            get { return _ReadOnly; }
+            private set { _ReadOnly = value; }
+        }
 
         /// <summary>Adds all of the elements in the specified collection to this collection </summary>
-        public virtual bool AddAll(System.Collections.ICollection items)
+        public void UnionWith(IEnumerable<string> other)
         {
-            bool added = false;
-            System.Collections.IEnumerator iter = items.GetEnumerator();
-            System.Object item;
-            while (iter.MoveNext())
+            if (_ReadOnly) throw new NotSupportedException();
+
+            foreach (string s in other)
             {
-                item = iter.Current;
-                added = Add(item);
+                Add(s.ToCharArray());
             }
-            return added;
         }
 
-        /// <summary>Removes all elements from the set </summary>
-        public virtual new bool Clear()
+        /// <summary>Wrapper that calls UnionWith</summary>
+        public void AddAll(IEnumerable<string> coll)
+        {
+            UnionWith(coll);
+        }
+
+        #region Unneeded methods
+        public void RemoveAll(ICollection<string> c)
+        {
+            throw new NotSupportedException();
+        }
+
+        public void RetainAll(ICollection<string> c)
+        {
+            throw new NotSupportedException();
+        }
+
+        void ICollection<string>.CopyTo(string[] array, int arrayIndex)
+        {
+            throw new NotSupportedException();
+        }
+
+        void ISet<string>.IntersectWith(IEnumerable<string> other)
+        {
+            throw new NotSupportedException();
+        }
+
+        void ISet<string>.ExceptWith(IEnumerable<string> other)
+        {
+            throw new NotSupportedException();
+        }
+
+        void ISet<string>.SymmetricExceptWith(IEnumerable<string> other)
         {
-            throw new System.NotSupportedException();
+            throw new NotSupportedException();
         }
 
-        /// <summary>Removes from this set all of its elements that are contained in the specified collection </summary>
-        public virtual bool RemoveAll(System.Collections.ICollection items)
+        bool ISet<string>.IsSubsetOf(IEnumerable<string> other)
         {
-            throw new System.NotSupportedException();
+            throw new NotSupportedException();
         }
 
-        /// <summary>Retains only the elements in this set that are contained in the specified collection </summary>
-        public bool RetainAll(System.Collections.ICollection coll)
+        bool ISet<string>.IsSupersetOf(IEnumerable<string> other)
         {
-            throw new System.NotSupportedException();
+            throw new NotSupportedException();
+        }
+
+        bool ISet<string>.IsProperSupersetOf(IEnumerable<string> other)
+        {
+            throw new NotSupportedException();
+        }
+
+        bool ISet<string>.IsProperSubsetOf(IEnumerable<string> other)
+        {
+            throw new NotSupportedException();
+        }
+
+        bool ISet<string>.Overlaps(IEnumerable<string> other)
+        {
+            throw new NotSupportedException();
+        }
+
+        bool ISet<string>.SetEquals(IEnumerable<string> other)
+        {
+            throw new NotSupportedException();
+        }
+
+        bool ICollection<string>.Remove(string item)
+        {
+            throw new NotSupportedException();
+        }
+        #endregion
+
+        /// <summary>
+        /// The IEnumerator&lt;String&gt; for this set.  Strings are constructed on the fly,
+        /// so use <c>NextCharArray()</c> for more efficient access.
+        /// </summary>
+        class CharArraySetEnumerator : IEnumerator<string>
+        {
+            CharArraySet _Creator;
+            int pos = -1;
+            char[] cur;
+
+            protected internal CharArraySetEnumerator(CharArraySet creator)
+            {
+                _Creator = creator;
+            }
+
+            public bool MoveNext()
+            {
+                cur = null;
+                pos++;
+                while (pos < _Creator._Entries.Length && (cur = _Creator._Entries[pos]) == null)
+                    pos++;
+                return cur != null;
+            }
+
+            /// <summary>do not modify the returned char[] </summary>
+            public char[] NextCharArray()
+            {
+                return cur;
+            }
+
+            public string Current
+            {
+                get { return new string(NextCharArray()); }
+            }
+
+            public void Dispose()
+            {
+            }
+
+            object System.Collections.IEnumerator.Current
+            {
+                get { return new string(NextCharArray()); }
+            }
+
+            public void Reset()
+            {
+                throw new NotSupportedException();
+            }
+        }
+
+        public IEnumerator<string> StringEnumerator()
+        {
+            return new CharArraySetEnumerator(this);
+        }
+
+        public IEnumerator<string> GetEnumerator()
+        {
+            return new CharArraySetEnumerator(this);
+        }
+
+        IEnumerator IEnumerable.GetEnumerator()
+        {
+            return GetEnumerator();
         }
     }
+
 }
\ No newline at end of file
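The rewritten CharArraySet above stores strings as lower-cased char[] entries in a power-of-two table with open addressing. A minimal, self-contained sketch of that lookup and insert logic (all names here are invented for illustration; this is not the real Lucene.Net class, and it omits rehashing):

```csharp
using System;

// Self-contained sketch of the open-addressing scheme used by the patched
// CharArraySet above (ignoreCase variant). Fixed 8-slot table, no rehashing.
char[][] entries = new char[8][];   // table length must stay a power of two

int HashOf(string text)
{
    int code = 0;
    foreach (char c in text)
        code = code * 31 + char.ToLower(c);   // lower-case before hashing
    return code;
}

bool SameChars(string a, char[] b)
{
    if (a.Length != b.Length) return false;
    for (int i = 0; i < a.Length; i++)
        if (char.ToLower(a[i]) != b[i]) return false;   // entries are stored lower-cased
    return true;
}

int GetSlot(string text)
{
    int code = HashOf(text);
    int pos = code & (entries.Length - 1);   // mask instead of modulo
    char[] probe = entries[pos];
    if (probe != null && !SameChars(text, probe))
    {
        int inc = ((code >> 8) + code) | 1;  // odd step, so every slot gets visited
        do
        {
            code += inc;
            pos = code & (entries.Length - 1);
            probe = entries[pos];
        }
        while (probe != null && !SameChars(text, probe));
    }
    return pos;
}

bool Add(string text)
{
    int slot = GetSlot(text);
    if (entries[slot] != null) return false;   // already present
    entries[slot] = text.ToLower().ToCharArray();
    return true;                               // the real class rehashes past ~80% load
}

bool Contains(string text) => entries[GetSlot(text)] != null;

Console.WriteLine(Add("Stop"));      // True
Console.WriteLine(Contains("STOP")); // True, lookups ignore case
Console.WriteLine(Add("stop"));      // False, duplicate
```

The `| 1` forces an odd probe increment, which is what guarantees a full cycle over a power-of-two table, so a lookup always terminates at either a match or an empty slot.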

Modified: incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/CharFilter.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/CharFilter.cs?rev=1201243&r1=1201242&r2=1201243&view=diff
==============================================================================
--- incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/CharFilter.cs (original)
+++ incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/CharFilter.cs Sat Nov 12 10:44:21 2011
@@ -29,7 +29,7 @@ namespace Lucene.Net.Analysis
 	/// <version>  $Id$
 	/// 
 	/// </version>
-	public abstract class CharFilter:CharStream
+	public abstract class CharFilter : CharStream
 	{
         private long currentPosition = -1;
 		

Modified: incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/CharTokenizer.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/CharTokenizer.cs?rev=1201243&r1=1201242&r2=1201243&view=diff
==============================================================================
--- incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/CharTokenizer.cs (original)
+++ incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/CharTokenizer.cs Sat Nov 12 10:44:21 2011
@@ -29,20 +29,20 @@ namespace Lucene.Net.Analysis
 	{
 		public CharTokenizer(System.IO.TextReader input):base(input)
 		{
-			offsetAtt = (OffsetAttribute) AddAttribute(typeof(OffsetAttribute));
-			termAtt = (TermAttribute) AddAttribute(typeof(TermAttribute));
+			offsetAtt = AddAttribute<OffsetAttribute>();
+            termAtt = AddAttribute<TermAttribute>();
 		}
 		
 		public CharTokenizer(AttributeSource source, System.IO.TextReader input):base(source, input)
 		{
-			offsetAtt = (OffsetAttribute) AddAttribute(typeof(OffsetAttribute));
-			termAtt = (TermAttribute) AddAttribute(typeof(TermAttribute));
+            offsetAtt = AddAttribute<OffsetAttribute>();
+            termAtt = AddAttribute<TermAttribute>();
 		}
 		
 		public CharTokenizer(AttributeFactory factory, System.IO.TextReader input):base(factory, input)
 		{
-			offsetAtt = (OffsetAttribute) AddAttribute(typeof(OffsetAttribute));
-			termAtt = (TermAttribute) AddAttribute(typeof(TermAttribute));
+            offsetAtt = AddAttribute<OffsetAttribute>();
+            termAtt = AddAttribute<TermAttribute>();
 		}
 		
 		private int offset = 0, bufferIndex = 0, dataLen = 0;
@@ -128,24 +128,6 @@ namespace Lucene.Net.Analysis
 			offsetAtt.SetOffset(finalOffset, finalOffset);
 		}
 		
-		/// <deprecated> Will be removed in Lucene 3.0. This method is final, as it should
-		/// not be overridden. Delegates to the backwards compatibility layer. 
-		/// </deprecated>
-        [Obsolete("Will be removed in Lucene 3.0. This method is final, as it should not be overridden. Delegates to the backwards compatibility layer. ")]
-		public override Token Next(Token reusableToken)
-		{
-			return base.Next(reusableToken);
-		}
-		
-		/// <deprecated> Will be removed in Lucene 3.0. This method is final, as it should
-		/// not be overridden. Delegates to the backwards compatibility layer. 
-		/// </deprecated>
-        [Obsolete("Will be removed in Lucene 3.0. This method is final, as it should not be overridden. Delegates to the backwards compatibility layer. ")]
-		public override Token Next()
-		{
-			return base.Next();
-		}
-		
 		public override void  Reset(System.IO.TextReader input)
 		{
 			base.Reset(input);
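The recurring change in these hunks replaces `(OffsetAttribute) AddAttribute(typeof(OffsetAttribute))` casts with a generic `AddAttribute<OffsetAttribute>()`. A hypothetical sketch of that pattern (invented names, not the real Lucene.Net AttributeSource API):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of the pattern behind the AddAttribute<T>() change in the
// hunks above: the generic overload hides the cast that every call site used to
// repeat. Names are invented; this is not the real Lucene.Net AttributeSource.
var attributes = new Dictionary<Type, object>();

// Old shape: callers wrote (TermAttribute) AddAttribute(typeof(TermAttribute)).
object AddAttribute(Type type)
{
    if (!attributes.TryGetValue(type, out var attr))
    {
        attr = Activator.CreateInstance(type)!;
        attributes[type] = attr;   // one shared instance per attribute type
    }
    return attr;
}

// New shape: the single cast lives here, and call sites stay type-safe.
T AddAttributeGeneric<T>() where T : new() => (T)AddAttribute(typeof(T));

var first = AddAttributeGeneric<List<int>>();   // List<int> stands in for an attribute type
var second = AddAttributeGeneric<List<int>>();
Console.WriteLine(ReferenceEquals(first, second));   // True, the instance is cached
```

Pushing the cast into one generic helper also lets the compiler catch a mismatched attribute type at the call site instead of failing with an InvalidCastException at runtime.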

Modified: incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/ISOLatin1AccentFilter.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/ISOLatin1AccentFilter.cs?rev=1201243&r1=1201242&r2=1201243&view=diff
==============================================================================
--- incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/ISOLatin1AccentFilter.cs (original)
+++ incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/ISOLatin1AccentFilter.cs Sat Nov 12 10:44:21 2011
@@ -29,15 +29,17 @@ namespace Lucene.Net.Analysis
 	/// <p/>
 	/// 
 	/// </summary>
-	/// <deprecated> in favor of <see cref="ASCIIFoldingFilter" /> which covers a superset 
-	/// of Latin 1. This class will be removed in Lucene 3.0.
+	/// <deprecated> If you build a new index, use <see cref="ASCIIFoldingFilter"/>
+	/// which covers a superset of Latin 1.
+	/// This class is included for use with existing indexes and will be removed
+	/// in a future release (possibly Lucene 4.0).
 	/// </deprecated>
-    [Obsolete("in favor of ASCIIFoldingFilter which covers a superset of Latin 1. This class will be removed in Lucene 3.0.")]
-	public class ISOLatin1AccentFilter:TokenFilter
+    [Obsolete("If you build a new index, use ASCIIFoldingFilter which covers a superset of Latin 1.  This class is included for use with existing indexes and will be removed in a future release (possibly Lucene 4.0).")]
+	public class ISOLatin1AccentFilter : TokenFilter
 	{
 		public ISOLatin1AccentFilter(TokenStream input):base(input)
 		{
-			termAtt = (TermAttribute) AddAttribute(typeof(TermAttribute));
+            termAtt = AddAttribute<TermAttribute>();
 		}
 		
 		private char[] output = new char[256];
@@ -68,24 +70,6 @@ namespace Lucene.Net.Analysis
 				return false;
 		}
 		
-		/// <deprecated> Will be removed in Lucene 3.0. This method is final, as it should
-		/// not be overridden. Delegates to the backwards compatibility layer. 
-		/// </deprecated>
-        [Obsolete("Will be removed in Lucene 3.0. This method is final, as it should not be overridden. Delegates to the backwards compatibility layer. ")]
-		public override Token Next(Token reusableToken)
-		{
-			return base.Next(reusableToken);
-		}
-		
-		/// <deprecated> Will be removed in Lucene 3.0. This method is final, as it should
-		/// not be overridden. Delegates to the backwards compatibility layer. 
-		/// </deprecated>
-        [Obsolete("Will be removed in Lucene 3.0. This method is final, as it should not be overridden. Delegates to the backwards compatibility layer. ")]
-		public override Token Next()
-		{
-			return base.Next();
-		}
-		
 		/// <summary> To replace accented characters in a String by unaccented equivalents.</summary>
 		public void  RemoveAccents(char[] input, int length)
 		{

Modified: incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/KeywordTokenizer.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/KeywordTokenizer.cs?rev=1201243&r1=1201242&r2=1201243&view=diff
==============================================================================
--- incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/KeywordTokenizer.cs (original)
+++ incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/KeywordTokenizer.cs Sat Nov 12 10:44:21 2011
@@ -25,7 +25,7 @@ namespace Lucene.Net.Analysis
 {
 	
 	/// <summary> Emits the entire input as a single token.</summary>
-	public class KeywordTokenizer:Tokenizer
+	public sealed class KeywordTokenizer:Tokenizer
 	{
 		
 		private const int DEFAULT_BUFFER_SIZE = 256;
@@ -57,8 +57,8 @@ namespace Lucene.Net.Analysis
 		private void  Init(int bufferSize)
 		{
 			this.done = false;
-			termAtt = (TermAttribute) AddAttribute(typeof(TermAttribute));
-			offsetAtt = (OffsetAttribute) AddAttribute(typeof(OffsetAttribute));
+            termAtt = AddAttribute<TermAttribute>();
+            offsetAtt = AddAttribute<OffsetAttribute>();
 			termAtt.ResizeTermBuffer(bufferSize);
 		}
 		
@@ -93,24 +93,6 @@ namespace Lucene.Net.Analysis
 			offsetAtt.SetOffset(finalOffset, finalOffset);
 		}
 		
-		/// <deprecated> Will be removed in Lucene 3.0. This method is final, as it should
-		/// not be overridden. Delegates to the backwards compatibility layer. 
-		/// </deprecated>
-        [Obsolete("Will be removed in Lucene 3.0. This method is final, as it should not be overridden. Delegates to the backwards compatibility layer. ")]
-		public override Token Next(Token reusableToken)
-		{
-			return base.Next(reusableToken);
-		}
-		
-		/// <deprecated> Will be removed in Lucene 3.0. This method is final, as it should
-		/// not be overridden. Delegates to the backwards compatibility layer. 
-		/// </deprecated>
-        [Obsolete("Will be removed in Lucene 3.0. This method is final, as it should not be overridden. Delegates to the backwards compatibility layer. ")]
-		public override Token Next()
-		{
-			return base.Next();
-		}
-		
 		public override void  Reset(System.IO.TextReader input)
 		{
 			base.Reset(input);

Modified: incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/LengthFilter.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/LengthFilter.cs?rev=1201243&r1=1201242&r2=1201243&view=diff
==============================================================================
--- incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/LengthFilter.cs (original)
+++ incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/LengthFilter.cs Sat Nov 12 10:44:21 2011
@@ -22,12 +22,7 @@ using TermAttribute = Lucene.Net.Analysi
 namespace Lucene.Net.Analysis
 {
 	
-	/// <summary> Removes words that are too long or too short from the stream.
-	/// 
-	/// 
-	/// </summary>
-	/// <version>  $Id: LengthFilter.java 807201 2009-08-24 13:22:34Z markrmiller $
-	/// </version>
+	/// <summary>Removes words that are too long or too short from the stream.</summary>
 	public sealed class LengthFilter:TokenFilter
 	{
 		
@@ -39,11 +34,12 @@ namespace Lucene.Net.Analysis
 		/// <summary> Build a filter that removes words that are too long or too
 		/// short from the text.
 		/// </summary>
-		public LengthFilter(TokenStream in_Renamed, int min, int max):base(in_Renamed)
+		public LengthFilter(TokenStream in_Renamed, int min, int max)
+            : base(in_Renamed)
 		{
 			this.min = min;
 			this.max = max;
-			termAtt = (TermAttribute) AddAttribute(typeof(TermAttribute));
+            termAtt = AddAttribute<TermAttribute>();
 		}
 		
 		/// <summary> Returns the next input Token whose term() is the right len</summary>
@@ -59,7 +55,7 @@ namespace Lucene.Net.Analysis
 				}
 				// note: else we ignore it but should we index each part of it?
 			}
-			// reached EOS -- return null
+			// reached EOS -- return false
 			return false;
 		}
 	}

Modified: incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/LowerCaseFilter.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/LowerCaseFilter.cs?rev=1201243&r1=1201242&r2=1201243&view=diff
==============================================================================
--- incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/LowerCaseFilter.cs (original)
+++ incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/LowerCaseFilter.cs Sat Nov 12 10:44:21 2011
@@ -22,16 +22,12 @@ using TermAttribute = Lucene.Net.Analysi
 namespace Lucene.Net.Analysis
 {
 	
-	/// <summary> Normalizes token text to lower case.
-	/// 
-	/// </summary>
-	/// <version>  $Id: LowerCaseFilter.java 797665 2009-07-24 21:45:48Z buschmi $
-	/// </version>
+	/// <summary>Normalizes token text to lower case.</summary>
 	public sealed class LowerCaseFilter:TokenFilter
 	{
 		public LowerCaseFilter(TokenStream in_Renamed):base(in_Renamed)
 		{
-			termAtt = (TermAttribute) AddAttribute(typeof(TermAttribute));
+            termAtt = AddAttribute<TermAttribute>();
 		}
 		
 		private TermAttribute termAtt;

Modified: incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/MappingCharFilter.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/MappingCharFilter.cs?rev=1201243&r1=1201242&r2=1201243&view=diff
==============================================================================
--- incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/MappingCharFilter.cs (original)
+++ incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/MappingCharFilter.cs Sat Nov 12 10:44:21 2011
@@ -16,6 +16,7 @@
  */
 
 using System;
+using System.Collections.Generic;
 
 namespace Lucene.Net.Analysis
 {
@@ -25,12 +26,11 @@ namespace Lucene.Net.Analysis
 	/// stream, and correcting the resulting changes to the
 	/// offsets.
 	/// </summary>
-	public class MappingCharFilter:BaseCharFilter
+	public class MappingCharFilter : BaseCharFilter
 	{
 		
 		private NormalizeCharMap normMap;
-		//private LinkedList<Character> buffer;
-		private System.Collections.ArrayList buffer;
+		private System.Collections.Generic.LinkedList<char> buffer;
 		private System.String replacement;
 		private int charPointer;
 		private int nextCharCounter;
@@ -59,7 +59,9 @@ namespace Lucene.Net.Analysis
 				int firstChar = NextChar();
 				if (firstChar == - 1)
 					return - 1;
-				NormalizeCharMap nm = normMap.submap != null?(NormalizeCharMap) normMap.submap[CharacterCache.ValueOf((char) firstChar)]:null;
+			    NormalizeCharMap nm = normMap.submap != null
+			                              ? normMap.submap[(char) firstChar]
+			                              : null;
 				if (nm == null)
 					return firstChar;
 				NormalizeCharMap result = Match(nm);
@@ -88,10 +90,9 @@ namespace Lucene.Net.Analysis
 			nextCharCounter++;
 			if (buffer != null && !(buffer.Count == 0))
 			{
-				System.Object tempObject;
-				tempObject = buffer[0];
-				buffer.RemoveAt(0);
-				return ((System.Char) tempObject);
+				char tempObject = buffer.First.Value;
+				buffer.RemoveFirst();
+				return (tempObject);
 			}
 			return input.Read();
 		}
@@ -101,18 +102,18 @@ namespace Lucene.Net.Analysis
 			nextCharCounter--;
 			if (buffer == null)
 			{
-				buffer = new System.Collections.ArrayList();
+				buffer = new LinkedList<char>();
 			}
-			buffer.Insert(0, (char) c);
+			buffer.AddFirst((char)c);
 		}
 		
 		private void  PushLastChar(int c)
 		{
 			if (buffer == null)
 			{
-				buffer = new System.Collections.ArrayList();
+                buffer = new LinkedList<char>();
 			}
-			buffer.Insert(buffer.Count, (char) c);
+			buffer.AddLast((char)c);
 		}
 		
 		private NormalizeCharMap Match(NormalizeCharMap map)
@@ -123,7 +124,7 @@ namespace Lucene.Net.Analysis
 				int chr = NextChar();
 				if (chr != - 1)
 				{
-					NormalizeCharMap subMap = (NormalizeCharMap) map.submap[CharacterCache.ValueOf((char) chr)];
+					NormalizeCharMap subMap = map.submap[(char)chr];
 					if (subMap != null)
 					{
 						result = Match(subMap);

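The MappingCharFilter change swaps the pushback buffer from `ArrayList` (with boxed `System.Char` values) to `LinkedList<char>`, which matches the `LinkedList<Character>` the original Java used. A short sketch of the buffer operations as the ported methods use them (variable names illustrative):

```csharp
// Push-back buffer as used by PushChar / PushLastChar / NextChar.
var buffer = new LinkedList<char>();

buffer.AddFirst('a');            // PushChar: un-read a char to the front
buffer.AddLast('b');             // PushLastChar: queue a char at the end

char next = buffer.First.Value;  // NextChar: read the front...
buffer.RemoveFirst();            // ...and consume it in O(1),
                                 // vs ArrayList.RemoveAt(0), which is O(n)
```

Besides removing the boxing, `AddFirst`/`RemoveFirst` make the front-of-buffer operations constant time, which is the reason the Java original used a linked list here.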
Modified: incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/NormalizeCharMap.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/NormalizeCharMap.cs?rev=1201243&r1=1201242&r2=1201243&view=diff
==============================================================================
--- incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/NormalizeCharMap.cs (original)
+++ incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/NormalizeCharMap.cs Sat Nov 12 10:44:21 2011
@@ -25,9 +25,7 @@ namespace Lucene.Net.Analysis
 	/// </summary>
 	public class NormalizeCharMap
 	{
-		
-		//Map<Character, NormalizeMap> submap;
-		internal System.Collections.IDictionary submap;
+		internal System.Collections.Generic.IDictionary<char, NormalizeCharMap> submap;
 		internal System.String normStr;
 		internal int diff;
 		
@@ -49,9 +47,9 @@ namespace Lucene.Net.Analysis
 				char c = singleMatch[i];
 				if (currMap.submap == null)
 				{
-					currMap.submap = new System.Collections.Hashtable(1);
+					currMap.submap = new System.Collections.Generic.Dictionary<char, NormalizeCharMap>(1);
 				}
-				NormalizeCharMap map = (NormalizeCharMap) currMap.submap[CharacterCache.ValueOf(c)];
+				NormalizeCharMap map = currMap.submap[c];
 				if (map == null)
 				{
 					map = new NormalizeCharMap();

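One semantic hazard in this hunk: `Hashtable`'s indexer returns null for a missing key, but the indexer of `Dictionary<char, NormalizeCharMap>` throws `KeyNotFoundException`, so the `if (map == null)` branch above can no longer be reached for an absent entry. A `TryGetValue`-based lookup (a suggested sketch, not what the committed code does) preserves the old semantics:

```csharp
// Dictionary's indexer throws on a missing key, where Hashtable returned
// null. TryGetValue restores the "missing means create a new submap" path:
NormalizeCharMap map;
if (!currMap.submap.TryGetValue(c, out map))
{
    map = new NormalizeCharMap();
    currMap.submap[c] = map;
}
```

The same concern applies to the `normMap.submap[(char) firstChar]` and `map.submap[(char)chr]` lookups in the MappingCharFilter hunk above, which also null-check a dictionary indexer result.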
Modified: incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/NumericTokenStream.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/NumericTokenStream.cs?rev=1201243&r1=1201242&r2=1201243&view=diff
==============================================================================
--- incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/NumericTokenStream.cs (original)
+++ incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/NumericTokenStream.cs Sat Nov 12 10:44:21 2011
@@ -93,9 +93,9 @@ namespace Lucene.Net.Analysis
 	{
 		private void  InitBlock()
 		{
-			termAtt = (TermAttribute) AddAttribute(typeof(TermAttribute));
-			typeAtt = (TypeAttribute) AddAttribute(typeof(TypeAttribute));
-			posIncrAtt = (PositionIncrementAttribute) AddAttribute(typeof(PositionIncrementAttribute));
+            termAtt = AddAttribute<TermAttribute>();
+            typeAtt = AddAttribute<TypeAttribute>();
+            posIncrAtt = AddAttribute<PositionIncrementAttribute>();
 		}
 		
 		/// <summary>The full precision token gets this token type assigned. </summary>

Modified: incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/PerFieldAnalyzerWrapper.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/PerFieldAnalyzerWrapper.cs?rev=1201243&r1=1201242&r2=1201243&view=diff
==============================================================================
--- incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/PerFieldAnalyzerWrapper.cs (original)
+++ incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/PerFieldAnalyzerWrapper.cs Sat Nov 12 10:44:21 2011
@@ -16,6 +16,7 @@
  */
 
 using System;
+using System.Collections.Generic;
 
 namespace Lucene.Net.Analysis
 {
@@ -42,7 +43,7 @@ namespace Lucene.Net.Analysis
 	public class PerFieldAnalyzerWrapper:Analyzer
 	{
 		private Analyzer defaultAnalyzer;
-		private System.Collections.IDictionary analyzerMap = new System.Collections.Hashtable();
+		private IDictionary<string, Analyzer> analyzerMap = new Dictionary<string, Analyzer>();
 		
 		
 		/// <summary> Constructs with default analyzer.
@@ -51,7 +52,8 @@ namespace Lucene.Net.Analysis
 		/// <param name="defaultAnalyzer">Any fields not specifically
 		/// defined to use a different analyzer will use the one provided here.
 		/// </param>
-		public PerFieldAnalyzerWrapper(Analyzer defaultAnalyzer):this(defaultAnalyzer, null)
+		public PerFieldAnalyzerWrapper(Analyzer defaultAnalyzer)
+            : this(defaultAnalyzer, null)
 		{
 		}
 		
@@ -65,16 +67,13 @@ namespace Lucene.Net.Analysis
 		/// <param name="fieldAnalyzers">a Map (String field name to the Analyzer) to be 
 		/// used for those fields 
 		/// </param>
-		public PerFieldAnalyzerWrapper(Analyzer defaultAnalyzer, System.Collections.IDictionary fieldAnalyzers)
+        public PerFieldAnalyzerWrapper(Analyzer defaultAnalyzer, IDictionary<string, Analyzer> fieldAnalyzers)
 		{
 			this.defaultAnalyzer = defaultAnalyzer;
 			if (fieldAnalyzers != null)
 			{
-				System.Collections.ArrayList keys = new System.Collections.ArrayList(fieldAnalyzers.Keys);
-				System.Collections.ArrayList values = new System.Collections.ArrayList(fieldAnalyzers.Values);
-
-				for (int i=0; i < keys.Count; i++)
-					analyzerMap[keys[i]] = values[i];
+				foreach(var entry in fieldAnalyzers)
+					analyzerMap[entry.Key] = entry.Value;
 			}
 			SetOverridesTokenStreamMethod(typeof(PerFieldAnalyzerWrapper));
 		}
@@ -94,7 +93,7 @@ namespace Lucene.Net.Analysis
 		
 		public override TokenStream TokenStream(System.String fieldName, System.IO.TextReader reader)
 		{
-			Analyzer analyzer = (Analyzer) analyzerMap[fieldName];
+			Analyzer analyzer = analyzerMap[fieldName];
 			if (analyzer == null)
 			{
 				analyzer = defaultAnalyzer;
@@ -103,7 +102,7 @@ namespace Lucene.Net.Analysis
 			return analyzer.TokenStream(fieldName, reader);
 		}
 		
-		public override TokenStream ReusableTokenStream(System.String fieldName, System.IO.TextReader reader)
+		public override TokenStream ReusableTokenStream(string fieldName, System.IO.TextReader reader)
 		{
 			if (overridesTokenStreamMethod)
 			{
@@ -112,7 +111,7 @@ namespace Lucene.Net.Analysis
 				// tokenStream but not reusableTokenStream
 				return TokenStream(fieldName, reader);
 			}
-			Analyzer analyzer = (Analyzer) analyzerMap[fieldName];
+			Analyzer analyzer = analyzerMap[fieldName];
 			if (analyzer == null)
 				analyzer = defaultAnalyzer;
 			
@@ -120,9 +119,9 @@ namespace Lucene.Net.Analysis
 		}
 		
 		/// <summary>Return the positionIncrementGap from the analyzer assigned to fieldName </summary>
-		public override int GetPositionIncrementGap(System.String fieldName)
+		public override int GetPositionIncrementGap(string fieldName)
 		{
-			Analyzer analyzer = (Analyzer) analyzerMap[fieldName];
+			Analyzer analyzer = analyzerMap[fieldName];
 			if (analyzer == null)
 				analyzer = defaultAnalyzer;
 			return analyzer.GetPositionIncrementGap(fieldName);
@@ -131,9 +130,7 @@ namespace Lucene.Net.Analysis
         /// <summary> Return the offsetGap from the analyzer assigned to field </summary>
         public override int GetOffsetGap(Lucene.Net.Documents.Fieldable field)
         {
-            Analyzer analyzer = (Analyzer)analyzerMap[field.Name()];
-            if (analyzer == null)
-                analyzer = defaultAnalyzer;
+            Analyzer analyzer = analyzerMap[field.Name()] ?? defaultAnalyzer;
             return analyzer.GetOffsetGap(field);
         }
 		

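PerFieldAnalyzerWrapper has the same indexer pitfall: `analyzerMap[fieldName]` on a `Dictionary<string, Analyzer>` throws `KeyNotFoundException` for a field with no registered analyzer, so the `analyzer == null` fallbacks (and the `?? defaultAnalyzer` in `GetOffsetGap`) only run if the key is present with a null value. A hedged sketch of the lookup that keeps the Hashtable-era "missing means use the default" behaviour:

```csharp
// TryGetValue preserves the pre-port fallback semantics for
// fields that were never added to analyzerMap:
Analyzer analyzer;
if (!analyzerMap.TryGetValue(fieldName, out analyzer))
    analyzer = defaultAnalyzer;
```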
Modified: incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/PorterStemFilter.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/PorterStemFilter.cs?rev=1201243&r1=1201242&r2=1201243&view=diff
==============================================================================
--- incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/PorterStemFilter.cs (original)
+++ incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/PorterStemFilter.cs Sat Nov 12 10:44:21 2011
@@ -48,7 +48,7 @@ namespace Lucene.Net.Analysis
 		public PorterStemFilter(TokenStream in_Renamed):base(in_Renamed)
 		{
 			stemmer = new PorterStemmer();
-			termAtt = (TermAttribute) AddAttribute(typeof(TermAttribute));
+            termAtt = AddAttribute<TermAttribute>();
 		}
 		
 		public override bool IncrementToken()

Modified: incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/SimpleAnalyzer.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/SimpleAnalyzer.cs?rev=1201243&r1=1201242&r2=1201243&view=diff
==============================================================================
--- incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/SimpleAnalyzer.cs (original)
+++ incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/SimpleAnalyzer.cs Sat Nov 12 10:44:21 2011
@@ -24,7 +24,7 @@ namespace Lucene.Net.Analysis
 	/// with <see cref="LowerCaseFilter" /> 
 	/// </summary>
 	
-	public sealed class SimpleAnalyzer:Analyzer
+	public sealed class SimpleAnalyzer : Analyzer
 	{
 		public override TokenStream TokenStream(System.String fieldName, System.IO.TextReader reader)
 		{

Added: incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/Standard/READ_BEFORE_REGENERATING.txt
URL: http://svn.apache.org/viewvc/incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/Standard/READ_BEFORE_REGENERATING.txt?rev=1201243&view=auto
==============================================================================
Binary file - no diff available.

Propchange: incubator/lucene.net/branches/Lucene.Net.3.0.3/trunk/src/core/Analysis/Standard/READ_BEFORE_REGENERATING.txt
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream
