lucenenet-commits mailing list archives

From nightowl...@apache.org
Subject [1/9] lucenenet git commit: SWEEP: Changed <item></item> to <item><description></description></item> in documentation comments
Date Thu, 01 Jun 2017 22:48:57 GMT
Repository: lucenenet
Updated Branches:
  refs/heads/master cfeaf2841 -> f43d23261


http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net.Suggest/Suggest/Fst/FSTCompletionBuilder.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net.Suggest/Suggest/Fst/FSTCompletionBuilder.cs b/src/Lucene.Net.Suggest/Suggest/Fst/FSTCompletionBuilder.cs
index c29c2f3..1146020 100644
--- a/src/Lucene.Net.Suggest/Suggest/Fst/FSTCompletionBuilder.cs
+++ b/src/Lucene.Net.Suggest/Suggest/Fst/FSTCompletionBuilder.cs
@@ -30,16 +30,16 @@ namespace Lucene.Net.Search.Suggest.Fst
     /// <para>
     /// The construction step in the object finalizer works as follows:
     /// <list type="bullet">
-    /// <item>A set of input terms and their buckets is given.</item>
-    /// <item>All terms in the input are prefixed with a synthetic pseudo-character
+    /// <item><description>A set of input terms and their buckets is given.</description></item>
+    /// <item><description>All terms in the input are prefixed with a synthetic pseudo-character
     /// (code) of the weight bucket the term fell into. For example a term
     /// <c>abc</c> with a discretized weight equal '1' would become
-    /// <c>1abc</c>.</item>
-    /// <item>The terms are then sorted by their raw value of UTF-8 character values
-    /// (including the synthetic bucket code in front).</item>
-    /// <item>A finite state automaton (<see cref="FST"/>) is constructed from the input. The
+    /// <c>1abc</c>.</description></item>
+    /// <item><description>The terms are then sorted by their raw value of UTF-8 character values
+    /// (including the synthetic bucket code in front).</description></item>
+    /// <item><description>A finite state automaton (<see cref="FST"/>) is constructed from the input. The
     /// root node has arcs labeled with all possible weights. We cache all these
-    /// arcs, highest-weight first.</item>
+    /// arcs, highest-weight first.</description></item>
     /// </list>
     /// 
     /// </para>
@@ -47,21 +47,21 @@ namespace Lucene.Net.Search.Suggest.Fst
     /// At runtime, in <see cref="FSTCompletion.DoLookup(string, int)"/>, 
     /// the automaton is utilized as follows:
     /// <list type="bullet">
-    /// <item>For each possible term weight encoded in the automaton (cached arcs from
+    /// <item><description>For each possible term weight encoded in the automaton (cached arcs from
     /// the root above), starting with the highest one, we descend along the path of
     /// the input key. If the key is not a prefix of a sequence in the automaton
-    /// (path ends prematurely), we exit immediately -- no completions.</item>
-    /// <item>Otherwise, we have found an internal automaton node that ends the key.
+    /// (path ends prematurely), we exit immediately -- no completions.</description></item>
+    /// <item><description>Otherwise, we have found an internal automaton node that ends the key.
     /// <b>The entire subautomaton (all paths) starting from this node form the key's
     /// completions.</b> We start the traversal of this subautomaton. Every time we
     /// reach a final state (arc), we add a single suggestion to the list of results
     /// (the weight of this suggestion is constant and equal to the root path we
     /// started from). The tricky part is that because automaton edges are sorted and
     /// we scan depth-first, we can terminate the entire procedure as soon as we
-    /// collect enough suggestions the user requested.</item>
-    /// <item>In case the number of suggestions collected in the step above is still
+    /// collect as many suggestions as the user requested.</description></item>
+    /// <item><description>In case the number of suggestions collected in the step above is still
     /// insufficient, we proceed to the next (smaller) weight leaving the root node
-    /// and repeat the same algorithm again.</item>
+    /// and repeat the same algorithm again.</description></item>
     /// </list>
     /// 
     /// <h2>Runtime behavior and performance characteristic</h2>

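The bucket-prefixing and sorting steps in the first list above can be sketched in isolation. This is a hypothetical standalone example, not code from this commit; the real builder works on raw UTF-8 bytes rather than a `'0' + bucket` character, which is used here purely for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class BucketEncodingSketch
{
    static void Main()
    {
        // (term, weight bucket) pairs, as in the first step of the list.
        var input = new List<(string Term, int Bucket)>
        {
            ("abc", 1), ("abd", 3), ("abe", 1),
        };

        // Prefix each term with a pseudo-character for its bucket, then
        // sort by raw character value so the bucket code dominates.
        var keys = input
            .Select(t => (char)('0' + t.Bucket) + t.Term)
            .OrderBy(k => k, StringComparer.Ordinal)
            .ToList();

        // keys is now { "1abc", "1abe", "3abd" }: all bucket-1 terms
        // sort before bucket-3 terms, ready to feed into the FST.
        Console.WriteLine(string.Join(", ", keys)); // 1abc, 1abe, 3abd
    }
}
```

Because the bucket code sorts first, descending the FST from the highest-weight root arc (as the lookup steps below describe) visits higher-weighted completions before lower-weighted ones.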
http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Analysis/Analyzer.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Analysis/Analyzer.cs b/src/Lucene.Net/Analysis/Analyzer.cs
index ba27c90..80e5ffb 100644
--- a/src/Lucene.Net/Analysis/Analyzer.cs
+++ b/src/Lucene.Net/Analysis/Analyzer.cs
@@ -45,22 +45,22 @@ namespace Lucene.Net.Analysis
     /// <para/>
     /// For some concrete implementations bundled with Lucene, look in the analysis modules:
     /// <list type="bullet">
-    ///   <item>Common:
-    ///       Analyzers for indexing content in different languages and domains.</item>
-    ///   <item>ICU:
-    ///       Exposes functionality from ICU to Apache Lucene.</item>
-    ///   <item>Kuromoji:
-    ///       Morphological analyzer for Japanese text.</item>
-    ///   <item>Morfologik:
-    ///       Dictionary-driven lemmatization for the Polish language.</item>
-    ///   <item>Phonetic:
-    ///       Analysis for indexing phonetic signatures (for sounds-alike search).</item>
-    ///   <item>Smart Chinese:
-    ///       Analyzer for Simplified Chinese, which indexes words.</item>
-    ///   <item>Stempel:
-    ///       Algorithmic Stemmer for the Polish Language.</item>
-    ///   <item>UIMA:
-    ///       Analysis integration with Apache UIMA.</item>
+    ///   <item><description>Common:
+    ///       Analyzers for indexing content in different languages and domains.</description></item>
+    ///   <item><description>ICU:
+    ///       Exposes functionality from ICU to Apache Lucene.</description></item>
+    ///   <item><description>Kuromoji:
+    ///       Morphological analyzer for Japanese text.</description></item>
+    ///   <item><description>Morfologik:
+    ///       Dictionary-driven lemmatization for the Polish language.</description></item>
+    ///   <item><description>Phonetic:
+    ///       Analysis for indexing phonetic signatures (for sounds-alike search).</description></item>
+    ///   <item><description>Smart Chinese:
+    ///       Analyzer for Simplified Chinese, which indexes words.</description></item>
+    ///   <item><description>Stempel:
+    ///       Algorithmic Stemmer for the Polish Language.</description></item>
+    ///   <item><description>UIMA:
+    ///       Analysis integration with Apache UIMA.</description></item>
     /// </list>
     /// </summary>
     public abstract class Analyzer : IDisposable

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Analysis/Token.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Analysis/Token.cs b/src/Lucene.Net/Analysis/Token.cs
index 8e8cf07..be1938e 100644
--- a/src/Lucene.Net/Analysis/Token.cs
+++ b/src/Lucene.Net/Analysis/Token.cs
@@ -77,38 +77,38 @@ namespace Lucene.Net.Analysis
     /// for details.</para>
     /// <para>Typical Token reuse patterns:
     /// <list type="bullet">
-    ///     <item> Copying text from a string (type is reset to <see cref="TypeAttribute.DEFAULT_TYPE"/> if not specified):
+    ///     <item><description> Copying text from a string (type is reset to <see cref="TypeAttribute.DEFAULT_TYPE"/> if not specified):
     ///     <code>
     ///         return reusableToken.Reinit(string, startOffset, endOffset[, type]);
     ///     </code>
-    ///     </item>
-    ///     <item> Copying some text from a string (type is reset to <see cref="TypeAttribute.DEFAULT_TYPE"/> if not specified):
+    ///     </description></item>
+    ///     <item><description> Copying some text from a string (type is reset to <see cref="TypeAttribute.DEFAULT_TYPE"/> if not specified):
     ///     <code>
     ///         return reusableToken.Reinit(string, 0, string.Length, startOffset, endOffset[, type]);
     ///     </code>
-    ///     </item>
-    ///     <item> Copying text from char[] buffer (type is reset to <see cref="TypeAttribute.DEFAULT_TYPE"/> if not specified):
+    ///     </description></item>
+    ///     <item><description> Copying text from char[] buffer (type is reset to <see cref="TypeAttribute.DEFAULT_TYPE"/> if not specified):
     ///     <code>
     ///         return reusableToken.Reinit(buffer, 0, buffer.Length, startOffset, endOffset[, type]);
     ///     </code>
-    ///     </item>
-    ///     <item> Copying some text from a char[] buffer (type is reset to <see cref="TypeAttribute.DEFAULT_TYPE"/> if not specified):
+    ///     </description></item>
+    ///     <item><description> Copying some text from a char[] buffer (type is reset to <see cref="TypeAttribute.DEFAULT_TYPE"/> if not specified):
     ///     <code>
     ///         return reusableToken.Reinit(buffer, start, end - start, startOffset, endOffset[, type]);
     ///     </code>
-    ///     </item>
-    ///     <item> Copying from one one <see cref="Token"/> to another (type is reset to <see cref="TypeAttribute.DEFAULT_TYPE"/> if not specified):
+    ///     </description></item>
+    ///     <item><description> Copying from one <see cref="Token"/> to another (type is reset to <see cref="TypeAttribute.DEFAULT_TYPE"/> if not specified):
     ///     <code>
     ///         return reusableToken.Reinit(source.Buffer, 0, source.Length, source.StartOffset, source.EndOffset[, source.Type]);
     ///     </code>
-    ///     </item>
+    ///     </description></item>
     /// </list>
     /// A few things to note:
     /// <list type="bullet">
-    ///     <item><see cref="Clear()"/> initializes all of the fields to default values. this was changed in contrast to Lucene 2.4, but should affect no one.</item>
-    ///     <item>Because <see cref="TokenStream"/>s can be chained, one cannot assume that the <see cref="Token"/>'s current type is correct.</item>
-    ///     <item>The startOffset and endOffset represent the start and offset in the source text, so be careful in adjusting them.</item>
-    ///     <item>When caching a reusable token, clone it. When injecting a cached token into a stream that can be reset, clone it again.</item>
+    ///     <item><description><see cref="Clear()"/> initializes all of the fields to default values. This was changed in contrast to Lucene 2.4, but should affect no one.</description></item>
+    ///     <item><description>Because <see cref="TokenStream"/>s can be chained, one cannot assume that the <see cref="Token"/>'s current type is correct.</description></item>
+    ///     <item><description>The startOffset and endOffset represent the start and end offset in the source text, so be careful in adjusting them.</description></item>
+    ///     <item><description>When caching a reusable token, clone it. When injecting a cached token into a stream that can be reset, clone it again.</description></item>
     /// </list>
     /// </para>
     /// <para>

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Analysis/TokenAttributes/IPositionIncrementAttribute.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Analysis/TokenAttributes/IPositionIncrementAttribute.cs b/src/Lucene.Net/Analysis/TokenAttributes/IPositionIncrementAttribute.cs
index 3d47b7d..f3adee1 100644
--- a/src/Lucene.Net/Analysis/TokenAttributes/IPositionIncrementAttribute.cs
+++ b/src/Lucene.Net/Analysis/TokenAttributes/IPositionIncrementAttribute.cs
@@ -29,19 +29,19 @@ namespace Lucene.Net.Analysis.TokenAttributes
     /// <para/>Some common uses for this are:
     /// 
     /// <list type="bullet">
-    /// <item>Set it to zero to put multiple terms in the same position.  this is
+    /// <item><description>Set it to zero to put multiple terms in the same position.  This is
     /// useful if, e.g., a word has multiple stems.  Searches for phrases
     /// including either stem will match.  In this case, all but the first stem's
     /// increment should be set to zero: the increment of the first instance
     /// should be one.  Repeating a token with an increment of zero can also be
-    /// used to boost the scores of matches on that token.</item>
+    /// used to boost the scores of matches on that token.</description></item>
     ///
-    /// <item>Set it to values greater than one to inhibit exact phrase matches.
+    /// <item><description>Set it to values greater than one to inhibit exact phrase matches.
     /// If, for example, one does not want phrases to match across removed stop
     /// words, then one could build a stop word filter that removes stop words and
     /// also sets the increment to the number of stop words removed before each
     /// non-stop word.  Then exact phrase queries will only match when the terms
-    /// occur with no intervening stop words.</item>
+    /// occur with no intervening stop words.</description></item>
     /// </list>
     /// </summary>
     /// <seealso cref="Lucene.Net.Index.DocsAndPositionsEnum"/>

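The first bullet above (increment zero to stack terms on one position) is what a synonym-injecting filter does. A hedged sketch, not code from this commit: the class name and the "colour"/"color" pair are invented, while the attribute API (`AddAttribute`, `CaptureState`/`RestoreState`, `m_input`) follows Lucene.NET 4.8 as referenced in the comment.

```csharp
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Util;

// Stacks the synonym "color" on each "colour" token by emitting it with
// a position increment of zero, so both terms occupy the same position.
public sealed class SingleSynonymFilter : TokenFilter
{
    private readonly ICharTermAttribute termAtt;
    private readonly IPositionIncrementAttribute posIncrAtt;
    private AttributeSource.State savedState;

    public SingleSynonymFilter(TokenStream input) : base(input)
    {
        termAtt = AddAttribute<ICharTermAttribute>();
        posIncrAtt = AddAttribute<IPositionIncrementAttribute>();
    }

    public override bool IncrementToken()
    {
        if (savedState != null)
        {
            // Emit the synonym at the previous token's position.
            RestoreState(savedState);
            savedState = null;
            termAtt.SetEmpty().Append("color");
            posIncrAtt.PositionIncrement = 0; // same position as "colour"
            return true;
        }
        if (!m_input.IncrementToken())
        {
            return false;
        }
        if ("colour".Equals(termAtt.ToString()))
        {
            savedState = CaptureState(); // emit the synonym on the next call
        }
        return true;
    }
}
```

With both spellings at one position, a phrase query containing either spelling matches, exactly as the bullet describes.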
http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Analysis/TokenStream.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Analysis/TokenStream.cs b/src/Lucene.Net/Analysis/TokenStream.cs
index 7cec955..f9ec60f 100644
--- a/src/Lucene.Net/Analysis/TokenStream.cs
+++ b/src/Lucene.Net/Analysis/TokenStream.cs
@@ -31,9 +31,9 @@ namespace Lucene.Net.Analysis
     /// <para/>
     /// this is an abstract class; concrete subclasses are:
     /// <list type="bullet">
-    ///     <item><see cref="Tokenizer"/>, a <see cref="TokenStream"/> whose input is a <see cref="System.IO.TextReader"/>; and</item>
-    ///     <item><see cref="TokenFilter"/>, a <see cref="TokenStream"/> whose input is another
-    ///         <see cref="TokenStream"/>.</item>
+    ///     <item><description><see cref="Tokenizer"/>, a <see cref="TokenStream"/> whose input is a <see cref="System.IO.TextReader"/>; and</description></item>
+    ///     <item><description><see cref="TokenFilter"/>, a <see cref="TokenStream"/> whose input is another
+    ///         <see cref="TokenStream"/>.</description></item>
     /// </list>
     /// A new <see cref="TokenStream"/> API has been introduced with Lucene 2.9. this API
     /// has moved from being <see cref="Token"/>-based to <see cref="Util.IAttribute"/>-based. While
@@ -49,17 +49,17 @@ namespace Lucene.Net.Analysis
     /// <para/>
     /// <b>The workflow of the new <see cref="TokenStream"/> API is as follows:</b>
     /// <list type="number">
-    ///     <item>Instantiation of <see cref="TokenStream"/>/<see cref="TokenFilter"/>s which add/get
-    ///         attributes to/from the <see cref="AttributeSource"/>.</item>
-    ///     <item>The consumer calls <see cref="TokenStream.Reset()"/>.</item>
-    ///     <item>The consumer retrieves attributes from the stream and stores local
-    ///         references to all attributes it wants to access.</item>
-    ///     <item>The consumer calls <see cref="IncrementToken()"/> until it returns false
-    ///         consuming the attributes after each call.</item>
-    ///     <item>The consumer calls <see cref="End()"/> so that any end-of-stream operations
-    ///         can be performed.</item>
-    ///     <item>The consumer calls <see cref="Dispose()"/> to release any resource when finished
-    ///         using the <see cref="TokenStream"/>.</item>
+    ///     <item><description>Instantiation of <see cref="TokenStream"/>/<see cref="TokenFilter"/>s which add/get
+    ///         attributes to/from the <see cref="AttributeSource"/>.</description></item>
+    ///     <item><description>The consumer calls <see cref="TokenStream.Reset()"/>.</description></item>
+    ///     <item><description>The consumer retrieves attributes from the stream and stores local
+    ///         references to all attributes it wants to access.</description></item>
+    ///     <item><description>The consumer calls <see cref="IncrementToken()"/> until it returns false
+    ///         consuming the attributes after each call.</description></item>
+    ///     <item><description>The consumer calls <see cref="End()"/> so that any end-of-stream operations
+    ///         can be performed.</description></item>
+    ///     <item><description>The consumer calls <see cref="Dispose()"/> to release any resource when finished
+    ///         using the <see cref="TokenStream"/>.</description></item>
     /// </list>
     /// To make sure that filters and consumers know which attributes are available,
     /// the attributes must be added during instantiation. Filters and consumers are

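The numbered workflow above maps onto the canonical consumer loop. A hedged sketch against the Lucene.NET 4.8 API; `StandardAnalyzer` and the field/text values are just illustrative inputs:

```csharp
using System.IO;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Analysis.TokenAttributes;
using Lucene.Net.Util;

var analyzer = new StandardAnalyzer(LuceneVersion.LUCENE_48);
using (TokenStream stream = analyzer.GetTokenStream("body",
        new StringReader("hello world")))             // step 1
{
    // Step 3: store local references to the attributes we want to read.
    var termAtt = stream.AddAttribute<ICharTermAttribute>();

    stream.Reset();                                   // step 2
    while (stream.IncrementToken())                   // step 4
    {
        System.Console.WriteLine(termAtt.ToString()); // consume per call
    }
    stream.End();                                     // step 5
}                                                     // step 6: Dispose
```

Note that the attribute reference is obtained once, before the loop, in line with the rule that attributes must be added during instantiation.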
http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Codecs/Codec.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Codecs/Codec.cs b/src/Lucene.Net/Codecs/Codec.cs
index e70cd5e..a9f2448 100644
--- a/src/Lucene.Net/Codecs/Codec.cs
+++ b/src/Lucene.Net/Codecs/Codec.cs
@@ -31,13 +31,13 @@ namespace Lucene.Net.Codecs
     /// <para/>
     /// To implement your own codec:
     /// <list type="number">
-    ///     <item>Subclass this class.</item>
-    ///     <item>Subclass <see cref="DefaultCodecFactory"/>, override the <see cref="DefaultCodecFactory.Initialize()"/> method,
+    ///     <item><description>Subclass this class.</description></item>
+    ///     <item><description>Subclass <see cref="DefaultCodecFactory"/>, override the <see cref="DefaultCodecFactory.Initialize()"/> method,
     ///         and add the line <c>base.ScanForCodecs(typeof(YourCodec).GetTypeInfo().Assembly)</c>. 
     ///         If you have any codec classes in your assembly 
     ///         that are not meant for reading, you can add the <see cref="ExcludeCodecFromScanAttribute"/> 
-    ///         to them so they are ignored by the scan.</item>
-    ///     <item>set the new <see cref="ICodecFactory"/> by calling <see cref="SetCodecFactory"/> at application startup.</item>
+    ///         to them so they are ignored by the scan.</description></item>
+    ///     <item><description>Set the new <see cref="ICodecFactory"/> by calling <see cref="SetCodecFactory"/> at application startup.</description></item>
     /// </list>
     /// If your codec has dependencies, you may also override <see cref="DefaultCodecFactory.GetCodec(Type)"/> to inject 
     /// them via pure DI or a DI container. See <a href="http://blog.ploeh.dk/2014/05/19/di-friendly-framework/">DI-Friendly Framework</a>

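The numbered steps above can be sketched end to end. `MyCodecFactory` is a hypothetical name; the `ScanForCodecs` call follows the wording of the doc comment, the factory is assumed to live in the same assembly as your codec subclass, and calling `base.Initialize()` to keep the built-in codec registrations is an assumption, not something the comment prescribes:

```csharp
using System.Reflection;
using Lucene.Net.Codecs;

public class MyCodecFactory : DefaultCodecFactory
{
    protected override void Initialize()
    {
        base.Initialize(); // assumed: preserve the default codec scan
        // Step 2: scan the assembly that holds your codec classes.
        base.ScanForCodecs(typeof(MyCodecFactory).GetTypeInfo().Assembly);
    }
}

// Step 3, at application startup:
// Codec.SetCodecFactory(new MyCodecFactory());
```

The same factory pattern recurs below for `DocValuesFormat` and `PostingsFormat`, with `ScanForDocValuesFormats` / `ScanForPostingsFormats` in place of `ScanForCodecs`.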
http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Codecs/DocValuesFormat.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Codecs/DocValuesFormat.cs b/src/Lucene.Net/Codecs/DocValuesFormat.cs
index 813d9a1..9ef0f4d 100644
--- a/src/Lucene.Net/Codecs/DocValuesFormat.cs
+++ b/src/Lucene.Net/Codecs/DocValuesFormat.cs
@@ -34,14 +34,14 @@ namespace Lucene.Net.Codecs
     /// <para/>
     /// To implement your own format:
     /// <list type="number">
-    ///     <item>Subclass this class.</item>
-    ///     <item>Subclass <see cref="DefaultDocValuesFormatFactory"/>, override the <see cref="DefaultDocValuesFormatFactory.Initialize()"/> method,
+    ///     <item><description>Subclass this class.</description></item>
+    ///     <item><description>Subclass <see cref="DefaultDocValuesFormatFactory"/>, override the <see cref="DefaultDocValuesFormatFactory.Initialize()"/> method,
     ///         and add the line <c>base.ScanForDocValuesFormats(typeof(YourDocValuesFormat).GetTypeInfo().Assembly)</c>. 
     ///         If you have any format classes in your assembly 
     ///         that are not meant for reading, you can add the <see cref="ExcludeDocValuesFormatFromScanAttribute"/> 
-    ///         to them so they are ignored by the scan.</item>
-    ///     <item>Set the new <see cref="IDocValuesFormatFactory"/> by calling <see cref="SetDocValuesFormatFactory(IDocValuesFormatFactory)"/>
-    ///         at application startup.</item>
+    ///         to them so they are ignored by the scan.</description></item>
+    ///     <item><description>Set the new <see cref="IDocValuesFormatFactory"/> by calling <see cref="SetDocValuesFormatFactory(IDocValuesFormatFactory)"/>
+    ///         at application startup.</description></item>
     /// </list>
     /// If your format has dependencies, you may also override <see cref="DefaultDocValuesFormatFactory.GetDocValuesFormat(Type)"/>
     /// to inject them via pure DI or a DI container. See <a href="http://blog.ploeh.dk/2014/05/19/di-friendly-framework/">DI-Friendly Framework</a>

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Codecs/PostingsFormat.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Codecs/PostingsFormat.cs b/src/Lucene.Net/Codecs/PostingsFormat.cs
index 2a74ea6..bc34e65 100644
--- a/src/Lucene.Net/Codecs/PostingsFormat.cs
+++ b/src/Lucene.Net/Codecs/PostingsFormat.cs
@@ -32,14 +32,14 @@ namespace Lucene.Net.Codecs
     /// <para/>
     /// If you implement your own format:
     /// <list type="number">
-    ///     <item>Subclass this class.</item>
-    ///     <item>Subclass <see cref="DefaultPostingsFormatFactory"/>, override <see cref="DefaultPostingsFormatFactory.Initialize()"/>,
+    ///     <item><description>Subclass this class.</description></item>
+    ///     <item><description>Subclass <see cref="DefaultPostingsFormatFactory"/>, override <see cref="DefaultPostingsFormatFactory.Initialize()"/>,
     ///         and add the line <c>base.ScanForPostingsFormats(typeof(YourPostingsFormat).GetTypeInfo().Assembly)</c>. 
     ///         If you have any format classes in your assembly 
     ///         that are not meant for reading, you can add the <see cref="ExcludePostingsFormatFromScanAttribute"/> 
-    ///         to them so they are ignored by the scan.</item>
-    ///     <item>Set the new <see cref="IPostingsFormatFactory"/> by calling <see cref="SetPostingsFormatFactory(IPostingsFormatFactory)"/> 
-    ///         at application startup.</item>
+    ///         to them so they are ignored by the scan.</description></item>
+    ///     <item><description>Set the new <see cref="IPostingsFormatFactory"/> by calling <see cref="SetPostingsFormatFactory(IPostingsFormatFactory)"/> 
+    ///         at application startup.</description></item>
     /// </list>
     /// If your format has dependencies, you may also override <see cref="DefaultPostingsFormatFactory.GetPostingsFormat(Type)"/> to inject 
     /// them via pure DI or a DI container. See <a href="http://blog.ploeh.dk/2014/05/19/di-friendly-framework/">DI-Friendly Framework</a>

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Document/Field.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Document/Field.cs b/src/Lucene.Net/Document/Field.cs
index fbd7d13..54fe113 100644
--- a/src/Lucene.Net/Document/Field.cs
+++ b/src/Lucene.Net/Document/Field.cs
@@ -920,8 +920,8 @@ namespace Lucene.Net.Documents
         /// <exception cref="ArgumentNullException">if <paramref name="name"/> or <paramref name="value"/> is <c>null</c></exception>
         /// <exception cref="ArgumentException">in any of the following situations:
         /// <list type="bullet">
-        ///     <item>the field is neither stored nor indexed</item>
-        ///     <item>the field is not indexed but termVector is <see cref="TermVector.YES"/></item>
+        ///     <item><description>the field is neither stored nor indexed</description></item>
+        ///     <item><description>the field is not indexed but termVector is <see cref="TermVector.YES"/></description></item>
         /// </list>
         /// </exception>
         [Obsolete("Use StringField, TextField instead.")]

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Index/AutomatonTermsEnum.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Index/AutomatonTermsEnum.cs b/src/Lucene.Net/Index/AutomatonTermsEnum.cs
index e969235..e60a975 100644
--- a/src/Lucene.Net/Index/AutomatonTermsEnum.cs
+++ b/src/Lucene.Net/Index/AutomatonTermsEnum.cs
@@ -34,9 +34,9 @@ namespace Lucene.Net.Index
     /// <para/>
     /// The algorithm is such:
     /// <list type="number">
-    ///     <item>As long as matches are successful, keep reading sequentially.</item>
-    ///     <item>When a match fails, skip to the next string in lexicographic order that
-    ///         does not enter a reject state.</item>
+    ///     <item><description>As long as matches are successful, keep reading sequentially.</description></item>
+    ///     <item><description>When a match fails, skip to the next string in lexicographic order that
+    ///         does not enter a reject state.</description></item>
     /// </list>
     /// <para>
     /// The algorithm does not attempt to actually skip to the next string that is

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Index/DocTermOrds.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Index/DocTermOrds.cs b/src/Lucene.Net/Index/DocTermOrds.cs
index 1805499..237e3fc 100644
--- a/src/Lucene.Net/Index/DocTermOrds.cs
+++ b/src/Lucene.Net/Index/DocTermOrds.cs
@@ -74,36 +74,36 @@ namespace Lucene.Net.Index
     /// <remarks>
     /// Final form of the un-inverted field:
     /// <list type="bullet">
-    ///     <item>Each document points to a list of term numbers that are contained in that document.</item>
-    ///     <item>
+    ///     <item><description>Each document points to a list of term numbers that are contained in that document.</description></item>
+    ///     <item><description>
     ///         Term numbers are in sorted order, and are encoded as variable-length deltas from the
     ///         previous term number.  Real term numbers start at 2 since 0 and 1 are reserved.  A
     ///         term number of 0 signals the end of the termNumber list.
-    ///     </item>
-    ///     <item>
+    ///     </description></item>
+    ///     <item><description>
     ///         There is a single int[maxDoc()] which either contains a pointer into a byte[] for
     ///         the termNumber lists, or directly contains the termNumber list if it fits in the 4
     ///         bytes of an integer.  If the first byte in the integer is 1, the next 3 bytes
     ///         are a pointer into a byte[] where the termNumber list starts.
-    ///     </item>
-    ///     <item>
+    ///     </description></item>
+    ///     <item><description>
     ///         There are actually 256 byte arrays, to compensate for the fact that the pointers
     ///         into the byte arrays are only 3 bytes long.  The correct byte array for a document
    ///         is a function of its id.
-    ///     </item>
-    ///     <item>
+    ///     </description></item>
+    ///     <item><description>
     ///         To save space and speed up faceting, any term that matches enough documents will
     ///         not be un-inverted... it will be skipped while building the un-inverted field structure,
     ///         and will use a set intersection method during faceting.
-    ///     </item>
-    ///     <item>
+    ///     </description></item>
+    ///     <item><description>
     ///         To further save memory, the terms (the actual string values) are not all stored in
     ///         memory, but a TermIndex is used to convert term numbers to term values only
     ///         for the terms needed after faceting has completed.  Only every 128th term value
    ///         is stored, along with its corresponding term number, and this is used as an
     ///         index to find the closest term and iterate until the desired number is hit (very
     ///         much like Lucene's own internal term index).
-    ///     </item>
+    ///     </description></item>
     /// </list>
     /// </remarks>
 #if FEATURE_SERIALIZABLE

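The delta encoding in the second bullet can be sketched standalone. This is illustrative only, not the actual DocTermOrds code: sorted term numbers become variable-length deltas (7 data bits per byte, high bit meaning "more bytes follow"), terminated by a 0, with real term numbers starting at 2:

```csharp
using System;
using System.Collections.Generic;

class TermNumberDeltaSketch
{
    static List<byte> Encode(int[] sortedTermNumbers)
    {
        var bytes = new List<byte>();
        int prev = 0;
        foreach (int termNum in sortedTermNumbers)
        {
            int delta = termNum - prev; // deltas from the previous number
            prev = termNum;
            while (delta >= 0x80)
            {
                // Low 7 bits, high bit set: more bytes follow.
                bytes.Add((byte)((delta & 0x7F) | 0x80));
                delta >>= 7;
            }
            bytes.Add((byte)delta);
        }
        bytes.Add(0); // a term number of 0 ends the termNumber list
        return bytes;
    }

    static void Main()
    {
        // Term numbers 2, 5, 260 give deltas 2, 3, 255.
        var encoded = Encode(new[] { 2, 5, 260 });
        Console.WriteLine(string.Join(" ", encoded)); // 2 3 255 1 0
    }
}
```

Small lists like this one fit directly in the 4 bytes of the per-document int described in the third bullet; longer lists spill into the byte arrays.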
http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Index/DocumentsWriterDeleteQueue.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Index/DocumentsWriterDeleteQueue.cs b/src/Lucene.Net/Index/DocumentsWriterDeleteQueue.cs
index 36052ef..f035d65 100644
--- a/src/Lucene.Net/Index/DocumentsWriterDeleteQueue.cs
+++ b/src/Lucene.Net/Index/DocumentsWriterDeleteQueue.cs
@@ -52,13 +52,13 @@ namespace Lucene.Net.Index
     /// DWPT updates a document it:
     ///
     /// <list type="number">
-    ///     <item>consumes a document and finishes its processing</item>
-    ///     <item>updates its private <see cref="DeleteSlice"/> either by calling
+    ///     <item><description>consumes a document and finishes its processing</description></item>
+    ///     <item><description>updates its private <see cref="DeleteSlice"/> either by calling
     ///     <see cref="UpdateSlice(DeleteSlice)"/> or <see cref="Add(Term, DeleteSlice)"/> (if the
-    ///         document has a delTerm)</item>
-    ///     <item>applies all deletes in the slice to its private <see cref="BufferedUpdates"/>
-    ///         and resets it</item>
-    ///     <item>increments its internal document id</item>
+    ///         document has a delTerm)</description></item>
+    ///     <item><description>applies all deletes in the slice to its private <see cref="BufferedUpdates"/>
+    ///         and resets it</description></item>
+    ///     <item><description>increments its internal document id</description></item>
     /// </list>
     ///
    /// The DWPT also doesn't apply its current document's delete term until it has

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Index/FlushByRamOrCountsPolicy.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Index/FlushByRamOrCountsPolicy.cs b/src/Lucene.Net/Index/FlushByRamOrCountsPolicy.cs
index 5ed2c7b..8c4da3c 100644
--- a/src/Lucene.Net/Index/FlushByRamOrCountsPolicy.cs
+++ b/src/Lucene.Net/Index/FlushByRamOrCountsPolicy.cs
@@ -28,13 +28,13 @@ namespace Lucene.Net.Index
     /// number of buffered delete terms.
     ///
     /// <list type="bullet">
-    ///     <item>
+    ///     <item><description>
     ///         <see cref="OnDelete(DocumentsWriterFlushControl, DocumentsWriterPerThreadPool.ThreadState)"/>
     ///         - applies pending delete operations based on the global number of buffered
     ///         delete terms iff <see cref="LiveIndexWriterConfig.MaxBufferedDeleteTerms"/> is
     ///         enabled
-    ///     </item>
-    ///     <item>
+    ///     </description></item>
+    ///     <item><description>
     ///         <see cref="OnInsert(DocumentsWriterFlushControl, DocumentsWriterPerThreadPool.ThreadState)"/>
     ///         - flushes either on the number of documents per
     ///         <see cref="DocumentsWriterPerThread"/> (
@@ -42,15 +42,15 @@ namespace Lucene.Net.Index
     ///         memory consumption in the current indexing session iff
     ///         <see cref="LiveIndexWriterConfig.MaxBufferedDocs"/> or
     ///         <see cref="LiveIndexWriterConfig.RAMBufferSizeMB"/> is enabled respectively
-    ///     </item>
-    ///     <item>
+    ///     </description></item>
+    ///     <item><description>
     ///         <see cref="FlushPolicy.OnUpdate(DocumentsWriterFlushControl, DocumentsWriterPerThreadPool.ThreadState)"/>
     ///         - calls
     ///         <see cref="OnInsert(DocumentsWriterFlushControl, DocumentsWriterPerThreadPool.ThreadState)"/>
     ///         and
     ///         <see cref="OnDelete(DocumentsWriterFlushControl, DocumentsWriterPerThreadPool.ThreadState)"/>
     ///         in order
-    ///     </item>
+    ///     </description></item>
     /// </list>
     /// All <see cref="IndexWriterConfig"/> settings are used to mark
     /// <see cref="DocumentsWriterPerThread"/> as flush pending during indexing with
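For reference, the pattern this sweep applies across all of the files below is the standard .NET documentation-comment list form, in which each item's text is wrapped in a `<description>` element so documentation generators render it correctly. A generic illustration (not a fragment of any file in this commit):

```csharp
/// <summary>
/// Segments are flushed when either threshold is reached:
/// <list type="bullet">
///     <item><description>RAM consumption exceeds the configured buffer size.</description></item>
///     <item><description>The number of buffered documents reaches the configured maximum.</description></item>
/// </list>
/// </summary>
```

Before the sweep, the item text sat directly inside `<item>`, which some documentation tooling renders incorrectly or flags as invalid.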

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Index/FlushPolicy.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Index/FlushPolicy.cs b/src/Lucene.Net/Index/FlushPolicy.cs
index d342b0b..8eca198 100644
--- a/src/Lucene.Net/Index/FlushPolicy.cs
+++ b/src/Lucene.Net/Index/FlushPolicy.cs
@@ -30,10 +30,10 @@ namespace Lucene.Net.Index
     /// <para/>
     /// Segments are traditionally flushed by:
     /// <list type="bullet">
-    ///     <item>RAM consumption - configured via
-    ///         <see cref="LiveIndexWriterConfig.RAMBufferSizeMB"/></item>
-    ///     <item>Number of RAM resident documents - configured via
-    ///         <see cref="LiveIndexWriterConfig.MaxBufferedDocs"/></item>
+    ///     <item><description>RAM consumption - configured via
+    ///         <see cref="LiveIndexWriterConfig.RAMBufferSizeMB"/></description></item>
+    ///     <item><description>Number of RAM resident documents - configured via
+    ///         <see cref="LiveIndexWriterConfig.MaxBufferedDocs"/></description></item>
     /// </list>
     /// The policy also applies pending delete operations (by term and/or query),
     /// given the threshold set in

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Index/IndexReader.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Index/IndexReader.cs b/src/Lucene.Net/Index/IndexReader.cs
index c9ade02..3b72fc7 100644
--- a/src/Lucene.Net/Index/IndexReader.cs
+++ b/src/Lucene.Net/Index/IndexReader.cs
@@ -36,16 +36,16 @@ namespace Lucene.Net.Index
     ///
     /// <para/>There are two different types of <see cref="IndexReader"/>s:
     /// <list type="bullet">
-    ///     <item><see cref="AtomicReader"/>: These indexes do not consist of several sub-readers,
+    ///     <item><description><see cref="AtomicReader"/>: These indexes do not consist of several sub-readers,
     ///         they are atomic. They support retrieval of stored fields, doc values, terms,
-    ///         and postings.</item>
-    ///     <item><see cref="CompositeReader"/>: Instances (like <see cref="DirectoryReader"/>)
+    ///         and postings.</description></item>
+    ///     <item><description><see cref="CompositeReader"/>: Instances (like <see cref="DirectoryReader"/>)
     ///         of this reader can only
     ///         be used to get stored fields from the underlying <see cref="AtomicReader"/>s,
     ///         but it is not possible to directly retrieve postings. To do that, get
     ///         the sub-readers via <see cref="CompositeReader.GetSequentialSubReaders()"/>.
     ///         Alternatively, you can mimic an <see cref="AtomicReader"/> (with a serious slowdown),
-    ///         by wrapping composite readers with <see cref="SlowCompositeReaderWrapper"/>.</item>
+    ///         by wrapping composite readers with <see cref="SlowCompositeReaderWrapper"/>.</description></item>
     /// </list>
     ///
     /// <para/><see cref="IndexReader"/> instances for indexes on disk are usually constructed

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Store/CompoundFileDirectory.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Store/CompoundFileDirectory.cs b/src/Lucene.Net/Store/CompoundFileDirectory.cs
index 29ce68e..58cca90 100644
--- a/src/Lucene.Net/Store/CompoundFileDirectory.cs
+++ b/src/Lucene.Net/Store/CompoundFileDirectory.cs
@@ -42,29 +42,29 @@ namespace Lucene.Net.Store
     /// <para/>
     /// Files:
     /// <list type="bullet">
-    ///     <item><c>.cfs</c>: An optional "virtual" file consisting of all the other
-    ///         index files for systems that frequently run out of file handles.</item>
-    ///     <item><c>.cfe</c>: The "virtual" compound file's entry table holding all
-    ///         entries in the corresponding .cfs file.</item>
+    ///     <item><description><c>.cfs</c>: An optional "virtual" file consisting of all the other
+    ///         index files for systems that frequently run out of file handles.</description></item>
+    ///     <item><description><c>.cfe</c>: The "virtual" compound file's entry table holding all
+    ///         entries in the corresponding .cfs file.</description></item>
     /// </list>
     /// <para>Description:</para>
     /// <list type="bullet">
-    ///     <item>Compound (.cfs) --&gt; Header, FileData <sup>FileCount</sup></item>
-    ///     <item>Compound Entry Table (.cfe) --&gt; Header, FileCount, &lt;FileName,
-    ///         DataOffset, DataLength&gt; <sup>FileCount</sup>, Footer</item>
-    ///     <item>Header --&gt; <see cref="CodecUtil.WriteHeader"/></item>
-    ///     <item>FileCount --&gt; <see cref="DataOutput.WriteVInt32"/></item>
-    ///     <item>DataOffset,DataLength --&gt; <see cref="DataOutput.WriteInt64"/></item>
-    ///     <item>FileName --&gt; <see cref="DataOutput.WriteString"/></item>
-    ///     <item>FileData --&gt; raw file data</item>
-    ///     <item>Footer --&gt; <see cref="CodecUtil.WriteFooter"/></item>
+    ///     <item><description>Compound (.cfs) --&gt; Header, FileData <sup>FileCount</sup></description></item>
+    ///     <item><description>Compound Entry Table (.cfe) --&gt; Header, FileCount, &lt;FileName,
+    ///         DataOffset, DataLength&gt; <sup>FileCount</sup>, Footer</description></item>
+    ///     <item><description>Header --&gt; <see cref="CodecUtil.WriteHeader"/></description></item>
+    ///     <item><description>FileCount --&gt; <see cref="DataOutput.WriteVInt32"/></description></item>
+    ///     <item><description>DataOffset,DataLength --&gt; <see cref="DataOutput.WriteInt64"/></description></item>
+    ///     <item><description>FileName --&gt; <see cref="DataOutput.WriteString"/></description></item>
+    ///     <item><description>FileData --&gt; raw file data</description></item>
+    ///     <item><description>Footer --&gt; <see cref="CodecUtil.WriteFooter"/></description></item>
     /// </list>
     /// <para>Notes:</para>
     /// <list type="bullet">
-    ///   <item>FileCount indicates how many files are contained in this compound file.
-    ///         The entry table that follows has that many entries.</item>
-    ///   <item>Each directory entry contains a long pointer to the start of this file's data
-    ///         section, the files length, and a <see cref="string"/> with that file's name.</item>
+    ///   <item><description>FileCount indicates how many files are contained in this compound file.
+    ///         The entry table that follows has that many entries.</description></item>
+    ///   <item><description>Each directory entry contains a long pointer to the start of this file's data
+    ///         section, the file's length, and a <see cref="string"/> with that file's name.</description></item>
     /// </list>
     /// <para/>
     /// @lucene.experimental

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Store/Directory.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Store/Directory.cs b/src/Lucene.Net/Store/Directory.cs
index 17cb98b..7565b8a 100644
--- a/src/Lucene.Net/Store/Directory.cs
+++ b/src/Lucene.Net/Store/Directory.cs
@@ -31,9 +31,9 @@ namespace Lucene.Net.Store
    /// .NET's i/o APIs are not used directly, but rather all i/o is
     /// through this API.  This permits things such as: 
     /// <list type="bullet">
-    ///     <item> implementation of RAM-based indices;</item>
-    ///     <item> implementation indices stored in a database;</item>
-    ///     <item> implementation of an index as a single file;</item>
+    ///     <item><description> implementation of RAM-based indices;</description></item>
+    ///     <item><description> implementation of indices stored in a database;</description></item>
+    ///     <item><description> implementation of an index as a single file;</description></item>
     /// </list>
     /// <para/>
     /// Directory locking is implemented by an instance of
@@ -67,9 +67,9 @@ namespace Lucene.Net.Store
        /// Returns the length of a file in the directory. This method follows the
         /// following contract:
         /// <list>
-        ///     <item>Throws <see cref="System.IO.FileNotFoundException"/>
-        ///         if the file does not exist.</item>
-        ///     <item>Returns a value &gt;=0 if the file exists, which specifies its length.</item>
+        ///     <item><description>Throws <see cref="System.IO.FileNotFoundException"/>
+        ///         if the file does not exist.</description></item>
+        ///     <item><description>Returns a value &gt;=0 if the file exists, which specifies its length.</description></item>
         /// </list>
         /// </summary>
         /// <param name="name"> the name of the file for which to return the length. </param>
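The `FileLength` contract quoted above can be sketched as follows. This is a hypothetical usage sketch against a Lucene.NET 4.8-style API, not code from this commit; the index path and file name are placeholders:

```csharp
using System;
using Lucene.Net.Store;

// Sketch: exercising the FileLength contract described in the doc comment.
using (Directory dir = FSDirectory.Open("index"))
{
    try
    {
        // Per the contract: returns a value >= 0 if the file exists.
        long len = dir.FileLength("segments.gen");
        Console.WriteLine(len);
    }
    catch (System.IO.FileNotFoundException)
    {
        // Per the contract: thrown when the file does not exist.
    }
}
```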

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Store/FSDirectory.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Store/FSDirectory.cs b/src/Lucene.Net/Store/FSDirectory.cs
index 42c5bfc..d1cbda1 100644
--- a/src/Lucene.Net/Store/FSDirectory.cs
+++ b/src/Lucene.Net/Store/FSDirectory.cs
@@ -36,14 +36,14 @@ namespace Lucene.Net.Store
     ///
     /// <list type="bullet">
     ///
-    ///     <item> <see cref="SimpleFSDirectory"/> is a straightforward
+    ///     <item><description> <see cref="SimpleFSDirectory"/> is a straightforward
     ///         implementation using <see cref="System.IO.FileStream"/>.
     ///         However, it has poor concurrent performance
     ///         (multiple threads will bottleneck) as it
     ///         synchronizes when multiple threads read from the
-    ///         same file.</item>
+    ///         same file.</description></item>
     ///
-    ///     <item> <see cref="NIOFSDirectory"/> uses java.nio's
+    ///     <item><description> <see cref="NIOFSDirectory"/> uses java.nio's
     ///         FileChannel's positional io when reading to avoid
     ///         synchronization when reading from the same file.
     ///         Unfortunately, due to a Windows-only <a
@@ -53,9 +53,9 @@ namespace Lucene.Net.Store
     ///         choice. Applications using <see cref="System.Threading.Thread.Interrupt()"/> or
     ///         <see cref="System.Threading.Tasks.Task{TResult}"/> should use
     ///         <see cref="SimpleFSDirectory"/> instead. See <see cref="NIOFSDirectory"/> java doc
-    ///         for details.</item>
+    ///         for details.</description></item>
     ///
-    ///     <item> <see cref="MMapDirectory"/> uses memory-mapped IO when
+    ///     <item><description> <see cref="MMapDirectory"/> uses memory-mapped IO when
     ///         reading. This is a good choice if you have plenty
     ///         of virtual memory relative to your index size, eg
     ///         if you are running on a 64 bit runtime, or you are
@@ -65,7 +65,7 @@ namespace Lucene.Net.Store
     ///         Applications using <see cref="System.Threading.Thread.Interrupt()"/> or
     ///         <see cref="System.Threading.Tasks.Task"/> should use
     ///         <see cref="SimpleFSDirectory"/> instead. See <see cref="MMapDirectory"/>
-    ///         doc for details.</item>
+    ///         doc for details.</description></item>
     /// </list>
     ///
     /// Unfortunately, because of system peculiarities, there is

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Support/C5.Support.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Support/C5.Support.cs b/src/Lucene.Net/Support/C5.Support.cs
index 50baa83..ab6600a 100644
--- a/src/Lucene.Net/Support/C5.Support.cs
+++ b/src/Lucene.Net/Support/C5.Support.cs
@@ -3961,13 +3961,13 @@ namespace Lucene.Net.Support.C5
         /// <summary>
         /// A default generic equality comparer for type T. The procedure is as follows:
         /// <list>
-        /// <item>If the actual generic argument T implements the generic interface
+        /// <item><description>If the actual generic argument T implements the generic interface
         /// <see cref="T:C5.ISequenced`1"/> for some value W of its generic parameter T,
-        /// the equalityComparer will be <see cref="T:C5.SequencedCollectionEqualityComparer`2"/></item>
-        /// <item>If the actual generic argument T implements 
+        /// the equalityComparer will be <see cref="T:C5.SequencedCollectionEqualityComparer`2"/></description></item>
+        /// <item><description>If the actual generic argument T implements 
         /// <see cref="T:C5.ICollection`1"/> for some value W of its generic parameter T,
-        /// the equalityComparer will be <see cref="T:C5.UnsequencedCollectionEqualityComparer`2"/></item>
-        /// <item>Otherwise the SCG.EqualityComparer&lt;T&gt;.Default is returned</item>
+        /// the equalityComparer will be <see cref="T:C5.UnsequencedCollectionEqualityComparer`2"/></description></item>
+        /// <item><description>Otherwise the SCG.EqualityComparer&lt;T&gt;.Default is returned</description></item>
         /// </list>   
         /// </summary>
         /// <value>The comparer</value>
@@ -5311,9 +5311,9 @@ namespace Lucene.Net.Support.C5
         /// whose only sign changes when going through items in increasing order
         /// can be 
         /// <list>
-        /// <item>from positive to zero</item>
-        /// <item>from positive to negative</item>
-        /// <item>from zero to negative</item>
+        /// <item><description>from positive to zero</description></item>
+        /// <item><description>from positive to negative</description></item>
+        /// <item><description>from zero to negative</description></item>
         /// </list>
         /// The "cut" function is supplied as the <code>CompareTo</code> method 
         /// of an object <code>c</code> implementing 
@@ -6030,10 +6030,10 @@ namespace Lucene.Net.Support.C5
     /// 
     /// <para>The methods are grouped according to
     /// <list>
-    /// <item>Extrema: report or report and delete an extremal item. This is reminiscent of simplified priority queues.</item>
-    /// <item>Nearest neighbor: report predecessor or successor in the collection of an item. Cut belongs to this group.</item>
-    /// <item>Range: report a view of a range of elements or remove all elements in a range.</item>
-    /// <item>AddSorted: add a collection of items known to be sorted in the same order (should be faster) (to be removed?)</item>
+    /// <item><description>Extrema: report or report and delete an extremal item. This is reminiscent of simplified priority queues.</description></item>
+    /// <item><description>Nearest neighbor: report predecessor or successor in the collection of an item. Cut belongs to this group.</description></item>
+    /// <item><description>Range: report a view of a range of elements or remove all elements in a range.</description></item>
+    /// <item><description>AddSorted: add a collection of items known to be sorted in the same order (should be faster) (to be removed?)</description></item>
     /// </list>
     /// </para>
     /// 
@@ -6175,9 +6175,9 @@ namespace Lucene.Net.Support.C5
         /// whose only sign changes when going through items in increasing order
         /// can be 
         /// <list>
-        /// <item>from positive to zero</item>
-        /// <item>from positive to negative</item>
-        /// <item>from zero to negative</item>
+        /// <item><description>from positive to zero</description></item>
+        /// <item><description>from positive to negative</description></item>
+        /// <item><description>from zero to negative</description></item>
         /// </list>
         /// The "cut" function is supplied as the <code>CompareTo</code> method 
         /// of an object <code>c</code> implementing 

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Support/Codecs/DefaultCodecFactory.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Support/Codecs/DefaultCodecFactory.cs b/src/Lucene.Net/Support/Codecs/DefaultCodecFactory.cs
index 8c1ecb1..a08acc8 100644
--- a/src/Lucene.Net/Support/Codecs/DefaultCodecFactory.cs
+++ b/src/Lucene.Net/Support/Codecs/DefaultCodecFactory.cs
@@ -28,19 +28,19 @@ namespace Lucene.Net.Codecs
     /// <para/>
     /// The most common use cases are:
     /// <list type="bullet">
-    ///     <item>subclass <see cref="DefaultCodecFactory"/> and override
+    ///     <item><description>subclass <see cref="DefaultCodecFactory"/> and override
     ///         <see cref="DefaultCodecFactory.GetCodec(Type)"/> so an external dependency injection
     ///         container can be used to supply the instances (lifetime should be singleton). Note that you could 
     ///         alternately use the "named type" feature that many DI containers have to supply the type based on name by 
-    ///         overriding <see cref="GetCodec(string)"/>.</item>
-    ///     <item>subclass <see cref="DefaultCodecFactory"/> and override
+    ///         overriding <see cref="GetCodec(string)"/>.</description></item>
+    ///     <item><description>subclass <see cref="DefaultCodecFactory"/> and override
    ///         <see cref="DefaultCodecFactory.GetCodecType(string)"/> so a new type can be
-    ///         supplied that is not in the <see cref="DefaultCodecFactory.codecNameToTypeMap"/>.</item>
-    ///     <item>subclass <see cref="DefaultCodecFactory"/> to add new or override the default <see cref="Codec"/> 
-    ///         types by overriding <see cref="Initialize()"/> and calling <see cref="PutCodecType(Type)"/>.</item>
-    ///     <item>subclass <see cref="DefaultCodecFactory"/> to scan additional assemblies for <see cref="Codec"/>
+    ///         supplied that is not in the <see cref="DefaultCodecFactory.codecNameToTypeMap"/>.</description></item>
+    ///     <item><description>subclass <see cref="DefaultCodecFactory"/> to add new or override the default <see cref="Codec"/> 
+    ///         types by overriding <see cref="Initialize()"/> and calling <see cref="PutCodecType(Type)"/>.</description></item>
+    ///     <item><description>subclass <see cref="DefaultCodecFactory"/> to scan additional assemblies for <see cref="Codec"/>
    ///         subclasses by overriding <see cref="Initialize()"/> and calling <see cref="ScanForCodecs(Assembly)"/>. 
-    ///         For performance reasons, the default behavior only loads Lucene.Net codecs.</item>
+    ///         For performance reasons, the default behavior only loads Lucene.Net codecs.</description></item>
     /// </list>
     /// <para/>
     /// To set the <see cref="ICodecFactory"/>, call <see cref="Codec.SetCodecFactory(ICodecFactory)"/>.
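The third use case in the list above (adding or overriding a `Codec` type) might look like the following sketch. `MyCodec` is a hypothetical `Codec` subclass; the members used (`Initialize`, `PutCodecType`, `Codec.SetCodecFactory`) are those named in the doc comment:

```csharp
using Lucene.Net.Codecs;

public class MyCodecFactory : DefaultCodecFactory
{
    protected override void Initialize()
    {
        base.Initialize();              // load the default Lucene.Net codecs first
        PutCodecType(typeof(MyCodec));  // register (or override) a codec by type
    }
}

// At application startup, before any codec is resolved:
// Codec.SetCodecFactory(new MyCodecFactory());
```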

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Support/Codecs/DefaultDocValuesFormatFactory.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Support/Codecs/DefaultDocValuesFormatFactory.cs b/src/Lucene.Net/Support/Codecs/DefaultDocValuesFormatFactory.cs
index a85d6af..f03c3ae 100644
--- a/src/Lucene.Net/Support/Codecs/DefaultDocValuesFormatFactory.cs
+++ b/src/Lucene.Net/Support/Codecs/DefaultDocValuesFormatFactory.cs
@@ -28,19 +28,19 @@ namespace Lucene.Net.Codecs
     /// <para/>
     /// The most common use cases are:
     /// <list type="bullet">
-    ///     <item>subclass <see cref="DefaultDocValuesFormatFactory"/> and override
+    ///     <item><description>subclass <see cref="DefaultDocValuesFormatFactory"/> and override
     ///         <see cref="DefaultDocValuesFormatFactory.GetDocValuesFormat(Type)"/> so an external dependency injection
     ///         container can be used to supply the instances (lifetime should be singleton). Note that you could 
     ///         alternately use the "named type" feature that many DI containers have to supply the type based on name by 
-    ///         overriding <see cref="GetDocValuesFormat(string)"/>.</item>
-    ///     <item>subclass <see cref="DefaultDocValuesFormatFactory"/> and override
+    ///         overriding <see cref="GetDocValuesFormat(string)"/>.</description></item>
+    ///     <item><description>subclass <see cref="DefaultDocValuesFormatFactory"/> and override
    ///         <see cref="DefaultDocValuesFormatFactory.GetDocValuesFormatType(string)"/> so a new type can be
-    ///         supplied that is not in the <see cref="DefaultDocValuesFormatFactory.docValuesFormatNameToTypeMap"/>.</item>
-    ///     <item>subclass <see cref="DefaultDocValuesFormatFactory"/> to add new or override the default <see cref="DocValuesFormat"/> 
-    ///         types by overriding <see cref="Initialize()"/> and calling <see cref="PutDocValuesFormatType(Type)"/>.</item>
-    ///     <item>subclass <see cref="DefaultDocValuesFormatFactory"/> to scan additional assemblies for <see cref="DocValuesFormat"/>
+    ///         supplied that is not in the <see cref="DefaultDocValuesFormatFactory.docValuesFormatNameToTypeMap"/>.</description></item>
+    ///     <item><description>subclass <see cref="DefaultDocValuesFormatFactory"/> to add new or override the default <see cref="DocValuesFormat"/> 
+    ///         types by overriding <see cref="Initialize()"/> and calling <see cref="PutDocValuesFormatType(Type)"/>.</description></item>
+    ///     <item><description>subclass <see cref="DefaultDocValuesFormatFactory"/> to scan additional assemblies for <see cref="DocValuesFormat"/>
    ///         subclasses by overriding <see cref="Initialize()"/> and calling <see cref="ScanForDocValuesFormats(Assembly)"/>. 
-    ///         For performance reasons, the default behavior only loads Lucene.Net codecs.</item>
+    ///         For performance reasons, the default behavior only loads Lucene.Net codecs.</description></item>
     /// </list>
     /// <para/>
     /// To set the <see cref="IDocValuesFormatFactory"/>, call <see cref="DocValuesFormat.SetDocValuesFormatFactory(IDocValuesFormatFactory)"/>.

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Support/Codecs/DefaultPostingsFormatFactory.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Support/Codecs/DefaultPostingsFormatFactory.cs b/src/Lucene.Net/Support/Codecs/DefaultPostingsFormatFactory.cs
index 0cbd907..08fb60e 100644
--- a/src/Lucene.Net/Support/Codecs/DefaultPostingsFormatFactory.cs
+++ b/src/Lucene.Net/Support/Codecs/DefaultPostingsFormatFactory.cs
@@ -28,19 +28,19 @@ namespace Lucene.Net.Codecs
     /// <para/>
     /// The most common use cases are:
     /// <list type="bullet">
-    ///     <item>subclass <see cref="DefaultPostingsFormatFactory"/> and override
+    ///     <item><description>subclass <see cref="DefaultPostingsFormatFactory"/> and override
     ///         <see cref="DefaultPostingsFormatFactory.GetPostingsFormat(Type)"/> so an external dependency injection
     ///         container can be used to supply the instances (lifetime should be singleton). Note that you could 
     ///         alternately use the "named type" feature that many DI containers have to supply the type based on name by 
-    ///         overriding <see cref="GetPostingsFormat(string)"/>.</item>
-    ///     <item>subclass <see cref="DefaultPostingsFormatFactory"/> and override
+    ///         overriding <see cref="GetPostingsFormat(string)"/>.</description></item>
+    ///     <item><description>subclass <see cref="DefaultPostingsFormatFactory"/> and override
    ///         <see cref="DefaultPostingsFormatFactory.GetPostingsFormatType(string)"/> so a new type can be
-    ///         supplied that is not in the <see cref="DefaultPostingsFormatFactory.postingsFormatNameToTypeMap"/>.</item>
-    ///     <item>subclass <see cref="DefaultPostingsFormatFactory"/> to add new or override the default <see cref="PostingsFormat"/> 
-    ///         types by overriding <see cref="Initialize()"/> and calling <see cref="PutPostingsFormatType(Type)"/>.</item>
-    ///     <item>subclass <see cref="DefaultPostingsFormatFactory"/> to scan additional assemblies for <see cref="PostingsFormat"/>
+    ///         supplied that is not in the <see cref="DefaultPostingsFormatFactory.postingsFormatNameToTypeMap"/>.</description></item>
+    ///     <item><description>subclass <see cref="DefaultPostingsFormatFactory"/> to add new or override the default <see cref="PostingsFormat"/> 
+    ///         types by overriding <see cref="Initialize()"/> and calling <see cref="PutPostingsFormatType(Type)"/>.</description></item>
+    ///     <item><description>subclass <see cref="DefaultPostingsFormatFactory"/> to scan additional assemblies for <see cref="PostingsFormat"/>
    ///         subclasses by overriding <see cref="Initialize()"/> and calling <see cref="ScanForPostingsFormats(Assembly)"/>. 
-    ///         For performance reasons, the default behavior only loads Lucene.Net codecs.</item>
+    ///         For performance reasons, the default behavior only loads Lucene.Net codecs.</description></item>
     /// </list>
     /// <para/>
     /// To set the <see cref="IPostingsFormatFactory"/>, call <see cref="PostingsFormat.SetPostingsFormatFactory(IPostingsFormatFactory)"/>.

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Support/HashMap.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Support/HashMap.cs b/src/Lucene.Net/Support/HashMap.cs
index aaf270b..6a293c3 100644
--- a/src/Lucene.Net/Support/HashMap.cs
+++ b/src/Lucene.Net/Support/HashMap.cs
@@ -57,15 +57,15 @@ namespace Lucene.Net.Support
     /// <remarks>
     /// <h2>Unordered Dictionaries</h2>
     /// <list type="bullet">
-    ///     <item><see cref="Dictionary{TKey, TValue}"/> - use when order is not important and all keys are non-null.</item>
-    ///     <item><see cref="HashMap{TKey, TValue}"/> - use when order is not important and support for a null key is required.</item>
+    ///     <item><description><see cref="Dictionary{TKey, TValue}"/> - use when order is not important and all keys are non-null.</description></item>
+    ///     <item><description><see cref="HashMap{TKey, TValue}"/> - use when order is not important and support for a null key is required.</description></item>
     /// </list>
     /// <h2>Ordered Dictionaries</h2>
     /// <list type="bullet">
-    ///     <item><see cref="LinkedHashMap{TKey, TValue}"/> - use when you need to preserve entry insertion order. Keys are nullable.</item>
-    ///     <item><see cref="SortedDictionary{TKey, TValue}"/> - use when you need natural sort order. Keys must be unique.</item>
-    ///     <item><see cref="TreeDictionary{K, V}"/> - use when you need natural sort order. Keys may contain duplicates.</item>
-    ///     <item><see cref="LurchTable{TKey, TValue}"/> - use when you need to sort by most recent access or most recent update. Works well for LRU caching.</item>
+    ///     <item><description><see cref="LinkedHashMap{TKey, TValue}"/> - use when you need to preserve entry insertion order. Keys are nullable.</description></item>
+    ///     <item><description><see cref="SortedDictionary{TKey, TValue}"/> - use when you need natural sort order. Keys must be unique.</description></item>
+    ///     <item><description><see cref="TreeDictionary{K, V}"/> - use when you need natural sort order. Keys may contain duplicates.</description></item>
+    ///     <item><description><see cref="LurchTable{TKey, TValue}"/> - use when you need to sort by most recent access or most recent update. Works well for LRU caching.</description></item>
     /// </list>
     /// </remarks>
 #if FEATURE_SERIALIZABLE

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Support/IO/Buffer.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Support/IO/Buffer.cs b/src/Lucene.Net/Support/IO/Buffer.cs
index 3892365..79bd73e 100644
--- a/src/Lucene.Net/Support/IO/Buffer.cs
+++ b/src/Lucene.Net/Support/IO/Buffer.cs
@@ -27,39 +27,39 @@ namespace Lucene.Net.Support.IO
     /// <para/>
     /// A buffer can be described by the following properties:
     /// <list type="bullet">
-    ///     <item>
+    ///     <item><description>
     ///         Capacity:
     ///         The number of elements a buffer can hold. Capacity may not be
     ///         negative and never changes.
-    ///     </item>
-    ///     <item>
+    ///     </description></item>
+    ///     <item><description>
     ///         Position:
     ///         A cursor of this buffer. Elements are read or written at the
     ///         position if you do not specify an index explicitly. Position may not be
     ///         negative and not greater than the limit.
-    ///     </item>
-    ///     <item>
+    ///     </description></item>
+    ///     <item><description>
     ///         Limit:
     ///         Controls the scope of accessible elements. You can only read or
     ///         write elements from index zero to <c>limit - 1</c>. Accessing
     ///         elements out of the scope will cause an exception. Limit may not be negative
     ///         and not greater than capacity.
-    ///     </item>
-    ///     <item>
+    ///     </description></item>
+    ///     <item><description>
     ///         Mark: 
     ///         Used to remember the current position, so that you can reset the
     ///         position later. Mark may not be negative and no greater than position.
-    ///     </item>
-    ///     <item>
+    ///     </description></item>
+    ///     <item><description>
     ///         A buffer can be read-only or read-write. Trying to modify the elements
     ///         of a read-only buffer will cause a <see cref="ReadOnlyBufferException"/>,
     ///         while changing the position, limit and mark of a read-only buffer is OK.
-    ///     </item>
-    ///     <item>
+    ///     </description></item>
+    ///     <item><description>
     ///         A buffer can be direct or indirect. A direct buffer will try its best to
     ///         take advantage of native memory APIs and it may not stay in the heap,
     ///         thus it is not affected by garbage collection.
-    ///     </item>
+    ///     </description></item>
     /// </list>
     /// <para/>
     /// Buffers are not thread-safe. If concurrent access to a buffer instance is

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Support/IO/ByteBuffer.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Support/IO/ByteBuffer.cs b/src/Lucene.Net/Support/IO/ByteBuffer.cs
index 021ce02..709d1aa 100644
--- a/src/Lucene.Net/Support/IO/ByteBuffer.cs
+++ b/src/Lucene.Net/Support/IO/ByteBuffer.cs
@@ -30,11 +30,11 @@ namespace Lucene.Net.Support.IO
     /// <para/>
     /// A byte buffer can be created in either one of the following ways:
     /// <list type="bullet">
-    ///     <item><see cref="Allocate(int)"/> a new byte array and create a
-    ///     buffer based on it</item>
-    ///     <item><see cref="AllocateDirect(int)"/> a memory block and create a direct
-    ///     buffer based on it</item>
-    ///     <item><see cref="Wrap(byte[])"/> an existing byte array to create a new buffer</item>
+    ///     <item><description><see cref="Allocate(int)"/> a new byte array and create a
+    ///     buffer based on it</description></item>
+    ///     <item><description><see cref="AllocateDirect(int)"/> a memory block and create a direct
+    ///     buffer based on it</description></item>
+    ///     <item><description><see cref="Wrap(byte[])"/> an existing byte array to create a new buffer</description></item>
     /// </list>
     /// </summary>
 #if FEATURE_SERIALIZABLE

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Support/IO/FileSupport.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Support/IO/FileSupport.cs b/src/Lucene.Net/Support/IO/FileSupport.cs
index 82e8b1c..90f470a 100644
--- a/src/Lucene.Net/Support/IO/FileSupport.cs
+++ b/src/Lucene.Net/Support/IO/FileSupport.cs
@@ -134,8 +134,8 @@ namespace Lucene.Net.Support.IO
         /// Creates a new empty file in the specified directory, using the given prefix and suffix strings to generate its name. 
         /// If this method returns successfully then it is guaranteed that:
         /// <list type="number">
-        /// <item>The file denoted by the returned abstract pathname did not exist before this method was invoked, and</item>
-        /// <item>Neither this method nor any of its variants will return the same abstract pathname again in the current invocation of the virtual machine.</item>
+        /// <item><description>The file denoted by the returned abstract pathname did not exist before this method was invoked, and</description></item>
+        /// <item><description>Neither this method nor any of its variants will return the same abstract pathname again in the current invocation of the virtual machine.</description></item>
         /// </list>
         /// This method provides only part of a temporary-file facility. To arrange for a file created by this method to be deleted automatically, use the deleteOnExit() method.
         /// The prefix argument must be at least three characters long. It is recommended that the prefix be a short, meaningful string such as "hjb" or "mail". The suffix argument may be null, in which case the suffix ".tmp" will be used.

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Support/IO/LongBuffer.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Support/IO/LongBuffer.cs b/src/Lucene.Net/Support/IO/LongBuffer.cs
index 82255c7..b278245 100644
--- a/src/Lucene.Net/Support/IO/LongBuffer.cs
+++ b/src/Lucene.Net/Support/IO/LongBuffer.cs
@@ -30,10 +30,10 @@ namespace Lucene.Net.Support.IO
     /// <para/>
     /// A long buffer can be created in either of the following ways:
     /// <list type="bullet">
-    ///     <item><see cref="Allocate(int)"/> a new long array and create a buffer
-    ///     based on it</item>
-    ///     <item><see cref="Wrap(long[])"/> an existing long array to create a new
-    ///     buffer</item>
+    ///     <item><description><see cref="Allocate(int)"/> a new long array and create a buffer
+    ///     based on it</description></item>
+    ///     <item><description><see cref="Wrap(long[])"/> an existing long array to create a new
+    ///     buffer</description></item>
     /// </list>
     /// </summary>
 #if FEATURE_SERIALIZABLE

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Support/IO/LongToByteBufferAdapter.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Support/IO/LongToByteBufferAdapter.cs b/src/Lucene.Net/Support/IO/LongToByteBufferAdapter.cs
index b83974d..ca5f4d7 100644
--- a/src/Lucene.Net/Support/IO/LongToByteBufferAdapter.cs
+++ b/src/Lucene.Net/Support/IO/LongToByteBufferAdapter.cs
@@ -28,10 +28,10 @@ namespace Lucene.Net.Support.IO
     /// <para/>
     /// Implementation notice:
     /// <list type="bullet">
-    ///     <item>After a byte buffer instance is wrapped, it becomes privately owned by
-    ///     the adapter. It must NOT be accessed outside the adapter any more.</item>
-    ///     <item>The byte buffer's position and limit are NOT linked with the adapter.
-    ///     The adapter extends Buffer, thus has its own position and limit.</item>
+    ///     <item><description>After a byte buffer instance is wrapped, it becomes privately owned by
+    ///     the adapter. It must NOT be accessed outside the adapter any more.</description></item>
+    ///     <item><description>The byte buffer's position and limit are NOT linked with the adapter.
+    ///     The adapter extends Buffer, thus has its own position and limit.</description></item>
     /// </list>
     /// </summary>
     internal sealed class Int64ToByteBufferAdapter : Int64Buffer

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Support/LinkedHashMap.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Support/LinkedHashMap.cs b/src/Lucene.Net/Support/LinkedHashMap.cs
index d241e68..e3d9c94 100644
--- a/src/Lucene.Net/Support/LinkedHashMap.cs
+++ b/src/Lucene.Net/Support/LinkedHashMap.cs
@@ -47,15 +47,15 @@ namespace Lucene.Net.Support
     /// <remarks>
     /// <h2>Unordered Dictionaries</h2>
     /// <list type="bullet">
-    ///     <item><see cref="Dictionary{TKey, TValue}"/> - use when order is not important and all keys are non-null.</item>
-    ///     <item><see cref="HashMap{TKey, TValue}"/> - use when order is not important and support for a null key is required.</item>
+    ///     <item><description><see cref="Dictionary{TKey, TValue}"/> - use when order is not important and all keys are non-null.</description></item>
+    ///     <item><description><see cref="HashMap{TKey, TValue}"/> - use when order is not important and support for a null key is required.</description></item>
     /// </list>
     /// <h2>Ordered Dictionaries</h2>
     /// <list type="bullet">
-    ///     <item><see cref="LinkedHashMap{TKey, TValue}"/> - use when you need to preserve entry insertion order. Keys are nullable.</item>
-    ///     <item><see cref="SortedDictionary{TKey, TValue}"/> - use when you need natural sort order. Keys must be unique.</item>
-    ///     <item><see cref="TreeDictionary{K, V}"/> - use when you need natural sort order. Keys may contain duplicates.</item>
-    ///     <item><see cref="LurchTable{TKey, TValue}"/> - use when you need to sort by most recent access or most recent update. Works well for LRU caching.</item>
+    ///     <item><description><see cref="LinkedHashMap{TKey, TValue}"/> - use when you need to preserve entry insertion order. Keys are nullable.</description></item>
+    ///     <item><description><see cref="SortedDictionary{TKey, TValue}"/> - use when you need natural sort order. Keys must be unique.</description></item>
+    ///     <item><description><see cref="TreeDictionary{K, V}"/> - use when you need natural sort order. Keys may contain duplicates.</description></item>
+    ///     <item><description><see cref="LurchTable{TKey, TValue}"/> - use when you need to sort by most recent access or most recent update. Works well for LRU caching.</description></item>
     /// </list>
     /// </remarks>
 #if FEATURE_SERIALIZABLE

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Support/StringExtensions.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Support/StringExtensions.cs b/src/Lucene.Net/Support/StringExtensions.cs
index fe815d4..e8513f9 100644
--- a/src/Lucene.Net/Support/StringExtensions.cs
+++ b/src/Lucene.Net/Support/StringExtensions.cs
@@ -36,8 +36,8 @@ namespace Lucene.Net.Support
         /// <summary>
         /// This method mimics the Java String.compareTo(String) method in that it
         /// <list type="number">
-        /// <item>Compares the strings using lexographic sorting rules</item>
-        /// <item>Performs a culture-insensitive comparison</item>
+        /// <item><description>Compares the strings using lexicographic sorting rules</description></item>
+        /// <item><description>Performs a culture-insensitive comparison</description></item>
         /// </list>
         /// This method is a convenience to replace the .NET CompareTo method 
         /// on all strings, provided the logic does not expect specific values

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/7099a846/src/Lucene.Net/Util/ArrayUtil.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Util/ArrayUtil.cs b/src/Lucene.Net/Util/ArrayUtil.cs
index afe75f8..2e45dfc 100644
--- a/src/Lucene.Net/Util/ArrayUtil.cs
+++ b/src/Lucene.Net/Util/ArrayUtil.cs
@@ -813,14 +813,14 @@ namespace Lucene.Net.Util
         /// <para/>
         /// The comparer returned depends on the <typeparamref name="T"/> argument:
         /// <list type="number">
-        ///     <item>If the type is <see cref="string"/>, the comparer returned uses
+        ///     <item><description>If the type is <see cref="string"/>, the comparer returned uses
         ///         the <see cref="string.CompareOrdinal(string, string)"/> to make the comparison
         ///         to ensure that the current culture doesn't affect the results. This is the
-        ///         default string comparison used in Java, and what Lucene's design depends on.</item>
-        ///     <item>If the type implements <see cref="IComparable{T}"/>, the comparer uses
+        ///         default string comparison used in Java, and what Lucene's design depends on.</description></item>
+        ///     <item><description>If the type implements <see cref="IComparable{T}"/>, the comparer uses
         ///         <see cref="IComparable{T}.CompareTo(T)"/> for the comparison. This allows
-        ///         the use of types with custom comparison schemes.</item>
-        ///     <item>If neither of the above conditions are true, will default to <see cref="Comparer{T}.Default"/>.</item>
+        ///         the use of types with custom comparison schemes.</description></item>
+        ///     <item><description>If neither of the above conditions are true, will default to <see cref="Comparer{T}.Default"/>.</description></item>
         /// </list>
         /// <para/>
         /// NOTE: This was naturalComparer() in Lucene


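The pattern applied throughout this sweep is mechanical: every single-tag `<item>...</item>` in the doc comments becomes `<item><description>...</description></item>`. As a rough illustration only (this is a hypothetical sketch, not the actual tooling used for the commit), such a transformation could be scripted like this:

```python
import re

def wrap_item_descriptions(text: str) -> str:
    """Wrap the body of <item> elements in <description> tags, mirroring
    the convention this sweep applies to XML doc comments.

    Hypothetical helper for illustration; not part of the lucenenet repo.
    """
    def repl(m: re.Match) -> str:
        body = m.group(1)
        # Skip items that are already in the <item><description> form.
        if body.lstrip().startswith("<description>"):
            return m.group(0)
        return f"<item><description>{body}</description></item>"

    # DOTALL lets a single <item> span multiple doc-comment lines.
    return re.sub(r"<item>(.*?)</item>", repl, text, flags=re.DOTALL)
```

Running it twice is a no-op, which matters for a repo-wide sweep where some files (or some lists within a file) may have been converted already.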