lucenenet-commits mailing list archives

From d...@apache.org
Subject svn commit: r890338 [2/4] - in /incubator/lucene.net/trunk/C#/src/Lucene.Net: Analysis/ Analysis/Standard/ Analysis/Tokenattributes/ Document/ Index/ QueryParser/ Search/ Search/Function/ Search/Payloads/ Search/Spans/ Store/ Util/
Date Mon, 14 Dec 2009 14:13:08 GMT
Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/IndexReader.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/IndexReader.cs?rev=890338&r1=890337&r2=890338&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/IndexReader.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/IndexReader.cs Mon Dec 14 14:13:03 2009
@@ -28,24 +28,24 @@
 	/// <summary>IndexReader is an abstract class, providing an interface for accessing an
 	/// index.  Search of an index is done entirely through this abstract interface,
 	/// so that any subclass which implements it is searchable.
-	/// <p> Concrete subclasses of IndexReader are usually constructed with a call to
+	/// <p/> Concrete subclasses of IndexReader are usually constructed with a call to
 	/// one of the static <code>open()</code> methods, e.g. {@link
 	/// #Open(String, boolean)}.
-	/// <p> For efficiency, in this API documents are often referred to via
+	/// <p/> For efficiency, in this API documents are often referred to via
 	/// <i>document numbers</i>, non-negative integers which each name a unique
 	/// document in the index.  These document numbers are ephemeral--they may change
 	/// as documents are added to and deleted from an index.  Clients should thus not
 	/// rely on a given document having the same number between sessions.
-	/// <p> An IndexReader can be opened on a directory for which an IndexWriter is
+	/// <p/> An IndexReader can be opened on a directory for which an IndexWriter is
 	/// opened already, but it cannot be used to delete documents from the index then.
-	/// <p>
+	/// <p/>
 	/// <b>NOTE</b>: for backwards API compatibility, several methods are not listed 
 	/// as abstract, but have no useful implementations in this base class and 
 	/// instead always throw UnsupportedOperationException.  Subclasses are 
 	/// strongly encouraged to override these methods, but in many cases may not 
 	/// need to.
 	/// </p>
-	/// <p>
+	/// <p/>
 	/// <b>NOTE</b>: as of 2.4, it's possible to open a read-only
 	/// IndexReader using one of the static open methods that
 	/// accepts the boolean readOnly parameter.  Such a reader has
@@ -56,7 +56,7 @@
 	/// change to true, meaning you must explicitly specify false
 	/// if you want to make changes with the resulting IndexReader.
 	/// </p>
-	/// <a name="thread-safety"></a><p><b>NOTE</b>: {@link
+	/// <a name="thread-safety"></a><p/><b>NOTE</b>: {@link
 	/// <code>IndexReader</code>} instances are completely thread
 	/// safe, meaning multiple threads can call any of its methods,
 	/// concurrently.  If your application requires external
@@ -211,7 +211,7 @@
 		
 		/// <summary> Legacy Constructor for backwards compatibility.
 		/// 
-		/// <p>
+		/// <p/>
 		/// This Constructor should not be used, it exists for backwards 
 		/// compatibility only to support legacy subclasses that did not "own" 
 		/// a specific directory, but needed to specify something to be returned 
@@ -581,22 +581,22 @@
 		
 		/// <summary> Refreshes an IndexReader if the index has changed since this instance 
 		/// was (re)opened. 
-		/// <p>
+		/// <p/>
 		/// Opening an IndexReader is an expensive operation. This method can be used
 		/// to refresh an existing IndexReader to reduce these costs. This method 
 		/// tries to only load segments that have changed or were created after the 
 		/// IndexReader was (re)opened.
-		/// <p>
+		/// <p/>
 		/// If the index has not changed since this instance was (re)opened, then this
 		/// call is a NOOP and returns this instance. Otherwise, a new instance is 
 		/// returned. The old instance is <b>not</b> closed and remains usable.<br>
-		/// <p>   
+		/// <p/>   
 		/// If the reader is reopened, even though they share
 		/// resources internally, it's safe to make changes
 		/// (deletions, norms) with the new reader.  All shared
 		/// mutable state obeys "copy on write" semantics to ensure
 		/// the changes are not seen by other readers.
-		/// <p>
+		/// <p/>
 		/// You can determine whether a reader was actually reopened by comparing the
 		/// old instance with the instance returned by this method: 
 		/// <pre>
@@ -615,7 +615,7 @@
 		/// if present, can never use reader after it has been
 		/// closed and before it's switched to newReader.
 		/// 
-		/// <p><b>NOTE</b>: If this reader is a near real-time
+		/// <p/><b>NOTE</b>: If this reader is a near real-time
 		/// reader (obtained from {@link IndexWriter#GetReader()},
 		/// reopen() will simply call writer.getReader() again for
 		/// you, though this may change in the future.
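The reopen-and-compare idiom this doc comment describes can be sketched as (assuming `reader` is an already-open IndexReader):

```csharp
// Reopen() is a NOOP (returns the same instance) when the index is
// unchanged; otherwise it returns a new reader and leaves the old
// one open and usable.
IndexReader newReader = reader.Reopen();
if (newReader != reader)
{
    reader.Close(); // the caller must close the old instance
    reader = newReader;
}
```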
@@ -662,19 +662,19 @@
 		
 		/// <summary> Efficiently clones the IndexReader (sharing most
 		/// internal state).
-		/// <p>
+		/// <p/>
 		/// On cloning a reader with pending changes (deletions,
 		/// norms), the original reader transfers its write lock to
 		/// the cloned reader.  This means only the cloned reader
 		/// may make further changes to the index, and commit the
 		/// changes to the index on close, but the old reader still
 		/// reflects all changes made up until it was cloned.
-		/// <p>
+		/// <p/>
 		/// Like {@link #Reopen()}, it's safe to make changes to
 		/// either the original or the cloned reader: all shared
 		/// mutable state obeys "copy on write" semantics to ensure
 		/// the changes are not seen by other readers.
-		/// <p>
+		/// <p/>
 		/// </summary>
 		/// <throws>  CorruptIndexException if the index is corrupt </throws>
 		/// <throws>  IOException if there is a low-level IO error </throws>
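The clone semantics above, sketched (assuming `reader` is open and may hold pending changes):

```csharp
// The write lock transfers to the clone: only 'cloned' may make
// further deletions/norm changes and commit them on Close().
IndexReader cloned = (IndexReader) reader.Clone();
// 'reader' still reflects the index as of the moment of cloning;
// shared mutable state is copy-on-write, so neither reader sees
// the other's subsequent changes.
cloned.Close();
```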
@@ -854,7 +854,7 @@
 		/// <summary> Version number when this IndexReader was opened. Not implemented in the
 		/// IndexReader base class.
 		/// 
-		/// <p>
+		/// <p/>
 		/// If this reader is based on a Directory (ie, was created by calling
 		/// {@link #Open}, or {@link #Reopen} on a reader based on a Directory), then
 		/// this method returns the version recorded in the commit that the reader
@@ -862,7 +862,7 @@
 		/// called.
 		/// </p>
 		/// 
-		/// <p>
+		/// <p/>
 		/// If instead this reader is a near real-time reader (ie, obtained by a call
 		/// to {@link IndexWriter#GetReader}, or by calling {@link #Reopen} on a near
 		/// real-time reader), then this method returns the version of the last
@@ -894,7 +894,7 @@
 			throw new System.NotSupportedException("This reader does not support this method.");
 		}
 		
-		/// <summary><p>For IndexReader implementations that use
+		/// <summary><p/>For IndexReader implementations that use
 		/// TermInfosReader to read terms, this sets the
 		/// indexDivisor to subsample the number of indexed terms
 		/// loaded into memory.  This has the same effect as {@link
@@ -919,7 +919,7 @@
 			throw new System.NotSupportedException("Please pass termInfosIndexDivisor up-front when opening IndexReader");
 		}
 		
-		/// <summary><p>For IndexReader implementations that use
+		/// <summary><p/>For IndexReader implementations that use
 		/// TermInfosReader to read terms, this returns the
 		/// current indexDivisor as specified when the reader was
 		/// opened.
@@ -932,14 +932,14 @@
 		/// <summary> Check whether any new changes have occurred to the index since this
 		/// reader was opened.
 		/// 
-		/// <p>
+		/// <p/>
 		/// If this reader is based on a Directory (ie, was created by calling
 		/// {@link #open}, or {@link #reopen} on a reader based on a Directory), then
 		/// this method checks if any further commits (see {@link IndexWriter#commit})
 		/// have occurred in that directory.
 		/// </p>
 		/// 
-		/// <p>
+		/// <p/>
 		/// If instead this reader is a near real-time reader (ie, obtained by a call
 		/// to {@link IndexWriter#getReader}, or by calling {@link #reopen} on a near
 		/// real-time reader), then this method checks if either a new commit has
@@ -948,7 +948,7 @@
 		/// still return false.
 		/// </p>
 		/// 
-		/// <p>
+		/// <p/>
 		/// In any event, if this returns false, you should call {@link #reopen} to
 		/// get a new reader that sees the changes.
 		/// </p>
@@ -1113,7 +1113,7 @@
 		
 		/// <summary> Returns the stored fields of the <code>n</code><sup>th</sup>
 		/// <code>Document</code> in this index.
-		/// <p>
+		/// <p/>
 		/// <b>NOTE:</b> for performance reasons, this method does not check if the
 		/// requested document is deleted, and therefore asking for a deleted document
 		/// may yield unspecified results. Usually this is not required, however you
@@ -1138,7 +1138,7 @@
 		/// thrown. If you want the value of a lazy
 		/// {@link Lucene.Net.Documents.Field} to be available after closing you
 		/// must explicitly load it or fetch the Document again with a new loader.
-		/// <p>
+		/// <p/>
 		/// <b>NOTE:</b> for performance reasons, this method does not check if the
 		/// requested document is deleted, and therefore asking for a deleted document
 		/// may yield unspecified results. Usually this is not required, however you
@@ -1289,10 +1289,10 @@
 		/// search scoring.  If term is null, then all non-deleted
 		/// docs are returned with freq=1.
 		/// Thus, this method implements the mapping:
-		/// <p><ul>
+		/// <p/><ul>
 		/// Term &nbsp;&nbsp; =&gt; &nbsp;&nbsp; &lt;docNum, freq&gt;<sup>*</sup>
 		/// </ul>
-		/// <p>The enumeration is ordered by document number.  Each document number
+		/// <p/>The enumeration is ordered by document number.  Each document number
 		/// is greater than all that precede it in the enumeration.
 		/// </summary>
 		/// <throws>  IOException if there is a low-level IO error </throws>
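The Term => &lt;docNum, freq&gt;* mapping above can be iterated like this (sketch; the field and term values are placeholders):

```csharp
TermDocs termDocs = reader.TermDocs(new Term("contents", "lucene"));
try
{
    while (termDocs.Next())
    {
        int docNum = termDocs.Doc(); // strictly increasing across iterations
        int freq = termDocs.Freq();  // occurrences of the term in this doc
    }
}
finally
{
    termDocs.Close();
}
```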
@@ -1314,14 +1314,14 @@
 		/// positions of the term in the document is available.  Thus, this method
 		/// implements the mapping:
 		/// 
-		/// <p><ul>
+		/// <p/><ul>
 		/// Term &nbsp;&nbsp; =&gt; &nbsp;&nbsp; &lt;docNum, freq,
 		/// &lt;pos<sub>1</sub>, pos<sub>2</sub>, ...
 		/// pos<sub>freq-1</sub>&gt;
 		/// &gt;<sup>*</sup>
 		/// </ul>
-		/// <p> This positional information facilitates phrase and proximity searching.
-		/// <p>The enumeration is ordered by document number.  Each document number is
+		/// <p/> This positional information facilitates phrase and proximity searching.
+		/// <p/>The enumeration is ordered by document number.  Each document number is
 		/// greater than all that precede it in the enumeration.
 		/// </summary>
 		/// <throws>  IOException if there is a low-level IO error </throws>
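The positional variant of the same enumeration, sketched (field and term values are placeholders):

```csharp
TermPositions positions = reader.TermPositions(new Term("contents", "lucene"));
while (positions.Next())
{
    // Freq() positions are available per matching document, in order.
    for (int i = 0; i < positions.Freq(); i++)
    {
        int pos = positions.NextPosition();
    }
}
positions.Close();
```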
@@ -1605,7 +1605,7 @@
 		}
 		
 		/// <summary> Forcibly unlocks the index in the named directory.
-		/// <P>
+		/// <p/>
 		/// Caution: this should only be used by failure recovery code,
 		/// when it is known that no other process nor thread is in fact
 		/// currently accessing this index.
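A failure-recovery sketch of the forced unlock described above (the directory path is a placeholder; as the caution says, only run this when no other process or thread is using the index):

```csharp
using Lucene.Net.Store;

// Forcibly release a stale write lock left behind by a crashed process.
Directory dir = FSDirectory.GetDirectory("/path/to/index");
if (IndexReader.IsLocked(dir))
{
    IndexReader.Unlock(dir);
}
```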
@@ -1625,7 +1625,7 @@
 		/// readers that correspond to a Directory with its own
 		/// segments_N file.
 		/// 
-		/// <p><b>WARNING</b>: this API is new and experimental and
+		/// <p/><b>WARNING</b>: this API is new and experimental and
 		/// may suddenly change.</p>
 		/// </summary>
 		public virtual IndexCommit GetIndexCommit()
@@ -1753,7 +1753,7 @@
 		/// If this method returns an empty array, that means this
 		/// reader is a null reader (for example a MultiReader
 		/// that has no sub readers).
-		/// <p>
+		/// <p/>
 		/// NOTE: You should not try using sub-readers returned by
 		/// this method to make any changes (setNorm, deleteDocument,
 		/// etc.). While this might succeed for one composite reader

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/IndexWriter.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/IndexWriter.cs?rev=890338&r1=890337&r2=890338&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/IndexWriter.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/IndexWriter.cs Mon Dec 14 14:13:03 2009
@@ -34,7 +34,7 @@
 {
 	
 	/// <summary>An <code>IndexWriter</code> creates and maintains an index.
-	/// <p>The <code>create</code> argument to the {@link
+	/// <p/>The <code>create</code> argument to the {@link
 	/// #IndexWriter(Directory, Analyzer, boolean) constructor} determines 
 	/// whether a new index is created, or whether an existing index is
 	/// opened.  Note that you can open an index with <code>create=true</code>
@@ -45,14 +45,14 @@
 	/// with no <code>create</code> argument which will create a new index
 	/// if there is not already an index at the provided path and otherwise 
 	/// open the existing index.</p>
-	/// <p>In either case, documents are added with {@link #AddDocument(Document)
+	/// <p/>In either case, documents are added with {@link #AddDocument(Document)
 	/// addDocument} and removed with {@link #DeleteDocuments(Term)} or {@link
 	/// #DeleteDocuments(Query)}. A document can be updated with {@link
 	/// #UpdateDocument(Term, Document) updateDocument} (which just deletes
 	/// and then adds the entire document). When finished adding, deleting 
 	/// and updating documents, {@link #Close() close} should be called.</p>
 	/// <a name="flush"></a>
-	/// <p>These changes are buffered in memory and periodically
+	/// <p/>These changes are buffered in memory and periodically
 	/// flushed to the {@link Directory} (during the above method
 	/// calls).  A flush is triggered when there are enough
 	/// buffered deletes (see {@link #setMaxBufferedDeleteTerms})
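The add/update/close lifecycle described above, sketched (a minimal sketch against the 2.9-era API; the path, analyzer choice, and field values are placeholders):

```csharp
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;

// create = true replaces any index already at the path.
IndexWriter writer = new IndexWriter("/path/to/index", new StandardAnalyzer(), true);

Document doc = new Document();
doc.Add(new Field("id", "42", Field.Store.YES, Field.Index.NOT_ANALYZED));
writer.AddDocument(doc);

// UpdateDocument = delete by term, then add the whole document.
writer.UpdateDocument(new Term("id", "42"), doc);
writer.Close();
```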
@@ -71,7 +71,7 @@
 	/// addDocument calls (see <a href="#mergePolicy">below</a>
 	/// for changing the {@link MergeScheduler}).</p>
 	/// <a name="autoCommit"></a>
-	/// <p>The optional <code>autoCommit</code> argument to the {@link
+	/// <p/>The optional <code>autoCommit</code> argument to the {@link
 	/// #IndexWriter(Directory, boolean, Analyzer) constructors}
 	/// controls visibility of the changes to {@link IndexReader}
 	/// instances reading the same index.  When this is
@@ -97,7 +97,7 @@
 	/// Lucene is working with an external resource (for example,
 	/// a database) and both must either commit or rollback the
 	/// transaction.</p>
-	/// <p>When <code>autoCommit</code> is <code>true</code> then
+	/// <p/>When <code>autoCommit</code> is <code>true</code> then
 	/// the writer will periodically commit on its own.  [<b>Deprecated</b>: Note that in 3.0, IndexWriter will
 	/// no longer accept autoCommit=true (it will be hardwired to
 	/// false).  You can always call {@link #Commit()} yourself
@@ -112,23 +112,23 @@
 	/// readers while optimize or segment merges are taking place
 	/// as this can tie up substantial disk space.</p>
 	/// </summary>
-	/// <summary><p>Regardless of <code>autoCommit</code>, an {@link
+	/// <summary><p/>Regardless of <code>autoCommit</code>, an {@link
 	/// IndexReader} or {@link Lucene.Net.Search.IndexSearcher} will only see the
 	/// index as of the "point in time" that it was opened.  Any
 	/// changes committed to the index after the reader was opened
 	/// are not visible until the reader is re-opened.</p>
-	/// <p>If an index will not have more documents added for a while and optimal search
+	/// <p/>If an index will not have more documents added for a while and optimal search
 	/// performance is desired, then either the full {@link #Optimize() optimize}
 	/// method or partial {@link #Optimize(int)} method should be
 	/// called before the index is closed.</p>
-	/// <p>Opening an <code>IndexWriter</code> creates a lock file for the directory in use. Trying to open
+	/// <p/>Opening an <code>IndexWriter</code> creates a lock file for the directory in use. Trying to open
 	/// another <code>IndexWriter</code> on the same directory will lead to a
 	/// {@link LockObtainFailedException}. The {@link LockObtainFailedException}
 	/// is also thrown if an IndexReader on the same directory is used to delete documents
 	/// from the index.</p>
 	/// </summary>
 	/// <summary><a name="deletionPolicy"></a>
-	/// <p>Expert: <code>IndexWriter</code> allows an optional
+	/// <p/>Expert: <code>IndexWriter</code> allows an optional
 	/// {@link IndexDeletionPolicy} implementation to be
 	/// specified.  You can use this to control when prior commits
 	/// are deleted from the index.  The default policy is {@link
@@ -142,7 +142,7 @@
 	/// filesystems like NFS that do not support "delete on last
 	/// close" semantics, which Lucene's "point in time" search
 	/// normally relies on. </p>
-	/// <a name="mergePolicy"></a> <p>Expert:
+	/// <a name="mergePolicy"></a> <p/>Expert:
 	/// <code>IndexWriter</code> allows you to separately change
 	/// the {@link MergePolicy} and the {@link MergeScheduler}.
 	/// The {@link MergePolicy} is invoked whenever there are
@@ -154,7 +154,7 @@
 	/// MergeScheduler} is invoked with the requested merges and
 	/// it decides when and how to run the merges.  The default is
 	/// {@link ConcurrentMergeScheduler}. </p>
-	/// <a name="OOME"></a><p><b>NOTE</b>: if you hit an
+	/// <a name="OOME"></a><p/><b>NOTE</b>: if you hit an
 	/// OutOfMemoryError then IndexWriter will quietly record this
 	/// fact and block all future segment commits.  This is a
 	/// defensive measure in case any internal state (buffered
@@ -166,7 +166,7 @@
 	/// last commit.  If you opened the writer with autoCommit
 	/// false you can also just call {@link #Rollback()}
 	/// directly.</p>
-	/// <a name="thread-safety"></a><p><b>NOTE</b>: {@link
+	/// <a name="thread-safety"></a><p/><b>NOTE</b>: {@link
 	/// <code>IndexWriter</code>} instances are completely thread
 	/// safe, meaning multiple threads can call any of its
 	/// methods, concurrently.  If your application requires
@@ -358,14 +358,14 @@
 		/// quickly made available for searching without closing the writer nor
 		/// calling {@link #commit}.
 		/// 
-		/// <p>
+		/// <p/>
 		/// Note that this is functionally equivalent to calling {#commit} and then
 		/// using {@link IndexReader#open} to open a new reader. But the turnaround
 		/// time of this method should be faster since it avoids the potentially
 		/// costly {@link #commit}.
-		/// <p>
+		/// <p/>
 		/// 
-		/// <p>
+		/// <p/>
 		/// It's <i>near</i> real-time because there is no hard
 		/// guarantee on how quickly you can get a new reader after
 		/// making changes with IndexWriter.  You'll have to
@@ -374,33 +374,33 @@
 		/// feature, please report back on your findings so we can
 		/// learn, improve and iterate.</p>
 		/// 
-		/// <p>The resulting reader supports {@link
+		/// <p/>The resulting reader supports {@link
 		/// IndexReader#reopen}, but that call will simply forward
 		/// back to this method (though this may change in the
 		/// future).</p>
 		/// 
-		/// <p>The very first time this method is called, this
+		/// <p/>The very first time this method is called, this
 		/// writer instance will make every effort to pool the
 		/// readers that it opens for doing merges, applying
 		/// deletes, etc.  This means additional resources (RAM,
 		/// file descriptors, CPU time) will be consumed.</p>
 		/// 
-		/// <p>For lower latency on reopening a reader, you may
+		/// <p/>For lower latency on reopening a reader, you may
 		/// want to call {@link #setMergedSegmentWarmer} to
 		/// pre-warm a newly merged segment before it's committed
 		/// to the index.</p>
 		/// 
-		/// <p>If an addIndexes* call is running in another thread,
+		/// <p/>If an addIndexes* call is running in another thread,
 		/// then this reader will only search those segments from
 		/// the foreign index that have been successfully copied
 		/// over, so far.</p>
 		/// 
-		/// <p><b>NOTE</b>: Once the writer is closed, any
+		/// <p/><b>NOTE</b>: Once the writer is closed, any
 		/// outstanding readers may continue to be used.  However,
 		/// if you attempt to reopen any of those readers, you'll
 		/// hit an {@link AlreadyClosedException}.</p>
 		/// 
-		/// <p><b>NOTE:</b> This API is experimental and might
+		/// <p/><b>NOTE:</b> This API is experimental and might
 		/// change in incompatible ways in the next release.</p>
 		/// 
 		/// </summary>
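The near real-time flow described above, sketched (assumes `writer` and `doc` from an existing indexing session):

```csharp
// See buffered changes without a full Commit(): the returned reader
// reflects the writer's in-flight state.
writer.AddDocument(doc);
IndexReader nrtReader = writer.GetReader();
// ... search against nrtReader ...
// Reopen() on such a reader simply forwards to writer.GetReader()
// again (per the note above, this may change in the future).
IndexReader newer = nrtReader.Reopen();
```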
@@ -957,14 +957,14 @@
 				throw new System.ArgumentException("this method can only be called when the merge policy is the default LogMergePolicy");
 		}
 		
-		/// <summary><p>Get the current setting of whether newly flushed
+		/// <summary><p/>Get the current setting of whether newly flushed
 		/// segments will use the compound file format.  Note that
 		/// this just returns the value previously set with
 		/// setUseCompoundFile(boolean), or the default value
 		/// (true).  You cannot use this to query the status of
 		/// previously flushed segments.</p>
 		/// 
-		/// <p>Note that this method is a convenience method: it
+		/// <p/>Note that this method is a convenience method: it
 		/// just calls mergePolicy.getUseCompoundFile as long as
 		/// mergePolicy is an instance of {@link LogMergePolicy}.
 		/// Otherwise an IllegalArgumentException is thrown.</p>
@@ -977,11 +977,11 @@
 			return GetLogMergePolicy().GetUseCompoundFile();
 		}
 		
-		/// <summary><p>Setting to turn on usage of a compound file. When on,
+		/// <summary><p/>Setting to turn on usage of a compound file. When on,
 		/// multiple files for each segment are merged into a
 		/// single file when a new segment is flushed.</p>
 		/// 
-		/// <p>Note that this method is a convenience method: it
+		/// <p/>Note that this method is a convenience method: it
 		/// just calls mergePolicy.setUseCompoundFile as long as
 		/// mergePolicy is an instance of {@link LogMergePolicy}.
 		/// Otherwise an IllegalArgumentException is thrown.</p>
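The getter/setter pair described here, sketched (assumes `writer` uses the default LogMergePolicy):

```csharp
// Both are convenience wrappers over the writer's LogMergePolicy;
// each throws if a different MergePolicy is installed.
writer.SetUseCompoundFile(true);
bool cfs = writer.GetUseCompoundFile(); // last value set, or the default (true)
```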
@@ -1006,7 +1006,7 @@
 		
 		/// <summary>Expert: Return the Similarity implementation used by this IndexWriter.
 		/// 
-		/// <p>This defaults to the current value of {@link Similarity#GetDefault()}.
+		/// <p/>This defaults to the current value of {@link Similarity#GetDefault()}.
 		/// </summary>
 		public virtual Similarity GetSimilarity()
 		{
@@ -1060,7 +1060,7 @@
 		/// <code>path</code>, replacing the index already there,
 		/// if any.
 		/// 
-		/// <p><b>NOTE</b>: autoCommit (see <a
+		/// <p/><b>NOTE</b>: autoCommit (see <a
 		/// href="#autoCommit">above</a>) is set to false with this
 		/// constructor.
 		/// 
@@ -1137,7 +1137,7 @@
 		/// is true, then a new, empty index will be created in
 		/// <code>path</code>, replacing the index already there, if any.
 		/// 
-		/// <p><b>NOTE</b>: autoCommit (see <a
+		/// <p/><b>NOTE</b>: autoCommit (see <a
 		/// href="#autoCommit">above</a>) is set to false with this
 		/// constructor.
 		/// 
@@ -1214,7 +1214,7 @@
 		/// is true, then a new, empty index will be created in
 		/// <code>d</code>, replacing the index already there, if any.
 		/// 
-		/// <p><b>NOTE</b>: autoCommit (see <a
+		/// <p/><b>NOTE</b>: autoCommit (see <a
 		/// href="#autoCommit">above</a>) is set to false with this
 		/// constructor.
 		/// 
@@ -1286,7 +1286,7 @@
 		/// already exist.  Text will be analyzed with
 		/// <code>a</code>.
 		/// 
-		/// <p><b>NOTE</b>: autoCommit (see <a
+		/// <p/><b>NOTE</b>: autoCommit (see <a
 		/// href="#autoCommit">above</a>) is set to false with this
 		/// constructor.
 		/// 
@@ -1351,7 +1351,7 @@
 		/// already exist.  Text will be analyzed with
 		/// <code>a</code>.
 		/// 
-		/// <p><b>NOTE</b>: autoCommit (see <a
+		/// <p/><b>NOTE</b>: autoCommit (see <a
 		/// href="#autoCommit">above</a>) is set to false with this
 		/// constructor.
 		/// 
@@ -1417,7 +1417,7 @@
 		/// already exist.  Text will be analyzed with
 		/// <code>a</code>.
 		/// 
-		/// <p><b>NOTE</b>: autoCommit (see <a
+		/// <p/><b>NOTE</b>: autoCommit (see <a
 		/// href="#autoCommit">above</a>) is set to false with this
 		/// constructor.
 		/// 
@@ -1551,7 +1551,7 @@
 		/// first creating it if it does not already exist.  Text
 		/// will be analyzed with <code>a</code>.
 		/// 
-		/// <p><b>NOTE</b>: autoCommit (see <a
+		/// <p/><b>NOTE</b>: autoCommit (see <a
 		/// href="#autoCommit">above</a>) is set to false with this
 		/// constructor.
 		/// 
@@ -1621,7 +1621,7 @@
 		/// will be created in <code>d</code>, replacing the index
 		/// already there, if any.
 		/// 
-		/// <p><b>NOTE</b>: autoCommit (see <a
+		/// <p/><b>NOTE</b>: autoCommit (see <a
 		/// href="#autoCommit">above</a>) is set to false with this
 		/// constructor.
 		/// 
@@ -1662,7 +1662,7 @@
 		/// will be created in <code>d</code>, replacing the index
 		/// already there, if any.
 		/// 
-		/// <p><b>NOTE</b>: autoCommit (see <a
+		/// <p/><b>NOTE</b>: autoCommit (see <a
 		/// href="#autoCommit">above</a>) is set to false with this
 		/// constructor.
 		/// 
@@ -1747,11 +1747,11 @@
 		/// the index in <code>d</code>.  Text will be analyzed
 		/// with <code>a</code>.
 		/// 
-		/// <p> This is only meaningful if you've used a {@link
+		/// <p/> This is only meaningful if you've used a {@link
 		/// IndexDeletionPolicy} in the past that keeps more than
 		/// just the last commit.
 		/// 
-		/// <p>This operation is similar to {@link #Rollback()},
+		/// <p/>This operation is similar to {@link #Rollback()},
 		/// except that method can only rollback what's been done
 		/// with the current instance of IndexWriter since its last
 		/// commit, whereas this method can rollback to an
@@ -1759,7 +1759,7 @@
 		/// {@link IndexDeletionPolicy} has preserved past
 		/// commits.
 		/// 
-		/// <p><b>NOTE</b>: autoCommit (see <a
+		/// <p/><b>NOTE</b>: autoCommit (see <a
 		/// href="#autoCommit">above</a>) is set to false with this
 		/// constructor.
 		/// 
@@ -1995,7 +1995,7 @@
 			return mergeScheduler;
 		}
 		
-		/// <summary><p>Determines the largest segment (measured by
+		/// <summary><p/>Determines the largest segment (measured by
 		/// document count) that may be merged with other segments.
 		/// Small values (e.g., less than 10,000) are best for
 		/// interactive indexing, as this limits the length of
@@ -2003,14 +2003,14 @@
 		/// are best for batched indexing and speedier
 		/// searches.</p>
 		/// 
-		/// <p>The default value is {@link Integer#MAX_VALUE}.</p>
+		/// <p/>The default value is {@link Integer#MAX_VALUE}.</p>
 		/// 
-		/// <p>Note that this method is a convenience method: it
+		/// <p/>Note that this method is a convenience method: it
 		/// just calls mergePolicy.setMaxMergeDocs as long as
 		/// mergePolicy is an instance of {@link LogMergePolicy}.
 		/// Otherwise an IllegalArgumentException is thrown.</p>
 		/// 
-		/// <p>The default merge policy ({@link
+		/// <p/>The default merge policy ({@link
 		/// LogByteSizeMergePolicy}) also allows you to set this
 		/// limit by net size (in MB) of the segment, using {@link
 		/// LogByteSizeMergePolicy#setMaxMergeMB}.</p>
@@ -2020,10 +2020,10 @@
 			GetLogMergePolicy().SetMaxMergeDocs(maxMergeDocs);
 		}
 		
-		/// <summary> <p>Returns the largest segment (measured by document
+		/// <summary> <p/>Returns the largest segment (measured by document
 		/// count) that may be merged with other segments.</p>
 		/// 
-		/// <p>Note that this method is a convenience method: it
+		/// <p/>Note that this method is a convenience method: it
 		/// just calls mergePolicy.getMaxMergeDocs as long as
 		/// mergePolicy is an instance of {@link LogMergePolicy}.
 		/// Otherwise an IllegalArgumentException is thrown.</p>
@@ -2074,14 +2074,14 @@
 		/// a new Segment.  Large values generally gives faster
 		/// indexing.
 		/// 
-		/// <p>When this is set, the writer will flush every
+		/// <p/>When this is set, the writer will flush every
 		/// maxBufferedDocs added documents.  Pass in {@link
 		/// #DISABLE_AUTO_FLUSH} to prevent triggering a flush due
 		/// to number of buffered documents.  Note that if flushing
 		/// by RAM usage is also enabled, then the flush will be
 		/// triggered by whichever comes first.</p>
 		/// 
-		/// <p>Disabled by default (writer flushes by RAM usage).</p>
+		/// <p/>Disabled by default (writer flushes by RAM usage).</p>
 		/// 
 		/// </summary>
 		/// <throws>  IllegalArgumentException if maxBufferedDocs is </throws>
@@ -2144,14 +2144,14 @@
 		/// instead of document count and use as large a RAM buffer
 		/// as you can.
 		/// 
-		/// <p>When this is set, the writer will flush whenever
+		/// <p/>When this is set, the writer will flush whenever
 		/// buffered documents and deletions use this much RAM.
 		/// Pass in {@link #DISABLE_AUTO_FLUSH} to prevent
 		/// triggering a flush due to RAM usage.  Note that if
 		/// flushing by document count is also enabled, then the
 		/// flush will be triggered by whichever comes first.</p>
 		/// 
-		/// <p> <b>NOTE</b>: the account of RAM usage for pending
+		/// <p/> <b>NOTE</b>: the account of RAM usage for pending
 		/// deletions is only approximate.  Specifically, if you
 		/// delete by Query, Lucene currently has no way to measure
 		/// the RAM usage of individual Queries so the accounting
@@ -2161,7 +2161,7 @@
 		/// instead of RAM usage (each buffered delete Query counts
 		/// as one).
 		/// 
-		/// <p>
+		/// <p/>
 		/// <b>NOTE</b>: because IndexWriter uses <code>int</code>s when managing its
 		/// internal storage, the absolute maximum value for this setting is somewhat
 		/// less than 2048 MB. The precise limit depends on various factors, such as
@@ -2169,7 +2169,7 @@
 		/// best to set this value comfortably under 2048.
 		/// </p>
 		/// 
-		/// <p> The default value is {@link #DEFAULT_RAM_BUFFER_SIZE_MB}.</p>
+		/// <p/> The default value is {@link #DEFAULT_RAM_BUFFER_SIZE_MB}.</p>
 		/// 
 		/// </summary>
 		/// <throws>  IllegalArgumentException if ramBufferSize is </throws>
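The flush-trigger configuration recommended above, sketched (the 32 MB figure is an illustrative choice, not a documented default):

```csharp
// Flush whenever buffered documents + deletions reach ~32 MB, and
// disable the document-count trigger so RAM usage alone governs
// flushing (note the < 2048 MB ceiling described above).
writer.SetRAMBufferSizeMB(32.0);
writer.SetMaxBufferedDocs(IndexWriter.DISABLE_AUTO_FLUSH);
```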
@@ -2197,11 +2197,11 @@
 			return docWriter.GetRAMBufferSizeMB();
 		}
 		
-		/// <summary> <p>Determines the minimal number of delete terms required before the buffered
+		/// <summary> <p/>Determines the minimal number of delete terms required before the buffered
 		/// in-memory delete terms are applied and flushed. If there are documents
 		/// buffered in memory at the time, they are merged and a new segment is
 		/// created.</p>
-		/// <p>Disabled by default (writer flushes by RAM usage).</p>
+		/// <p/>Disabled by default (writer flushes by RAM usage).</p>
 		/// 
 		/// </summary>
 		/// <throws>  IllegalArgumentException if maxBufferedDeleteTerms </throws>
@@ -2238,23 +2238,23 @@
 		/// for batch index creation, and smaller values (< 10) for indices that are
 		/// interactively maintained.
 		/// 
-		/// <p>Note that this method is a convenience method: it
+		/// <p/>Note that this method is a convenience method: it
 		/// just calls mergePolicy.setMergeFactor as long as
 		/// mergePolicy is an instance of {@link LogMergePolicy}.
 		/// Otherwise an IllegalArgumentException is thrown.</p>
 		/// 
-		/// <p>This must never be less than 2.  The default value is 10.
+		/// <p/>This must never be less than 2.  The default value is 10.
 		/// </summary>
 		public virtual void  SetMergeFactor(int mergeFactor)
 		{
 			GetLogMergePolicy().SetMergeFactor(mergeFactor);
 		}
 		
-		/// <summary> <p>Returns the number of segments that are merged at
+		/// <summary> <p/>Returns the number of segments that are merged at
 		/// once and also controls the total number of segments
 		/// allowed to accumulate in the index.</p>
 		/// 
-		/// <p>Note that this method is a convenience method: it
+		/// <p/>Note that this method is a convenience method: it
 		/// just calls mergePolicy.getMergeFactor as long as
 		/// mergePolicy is an instance of {@link LogMergePolicy}.
 		/// Otherwise an IllegalArgumentException is thrown.</p>
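The merge-factor tuning described above, sketched (assumes the default LogMergePolicy, as both convenience methods require):

```csharp
// Larger values (e.g. 100): fewer merge pauses, good for batch
// indexing, but more segments accumulate. Smaller values (minimum 2):
// more merging, fewer segments, better for interactive maintenance.
writer.SetMergeFactor(10); // 10 is the documented default
int mergeFactor = writer.GetMergeFactor();
```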
@@ -2394,7 +2394,7 @@
 		/// closing and opening a new one.  See {@link #Commit()} for
 		/// caveats about write caching done by some IO devices.
 		/// 
-		/// <p> If an Exception is hit during close, eg due to disk
+		/// <p/> If an Exception is hit during close, eg due to disk
 		/// full or some other reason, then both the on-disk index
 		/// and the internal state of the IndexWriter instance will
 		/// be consistent.  However, the close will not be complete
@@ -2402,7 +2402,7 @@
 		/// may have succeeded, so the write lock will still be
 		/// held.</p>
 		/// 
-		/// <p> If you can correct the underlying cause (eg free up
+		/// <p/> If you can correct the underlying cause (eg free up
 		/// some disk space) then you can call close() again.
 		/// Failing that, if you want to force the write lock to be
 		/// released (dangerous, because you may then lose buffered
@@ -2422,7 +2422,7 @@
 		/// after which, you must be certain not to use the writer
 		/// instance anymore.</p>
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer, again.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// 
@@ -2439,11 +2439,11 @@
 		/// using a MergeScheduler that runs merges in background
 		/// threads.
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer, again.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// 
-		/// <p><b>NOTE</b>: it is dangerous to always call
+		/// <p/><b>NOTE</b>: it is dangerous to always call
 		/// close(false), especially when IndexWriter is not open
 		/// for very long, because this can result in "merge
 		/// starvation" whereby long merges will never have a
@@ -2785,19 +2785,19 @@
 		/// {@link #SetMaxFieldLength(int)} terms for a given field, the remainder are
 		/// discarded.
 		/// 
-		/// <p> Note that if an Exception is hit (for example disk full)
+		/// <p/> Note that if an Exception is hit (for example disk full)
 		/// then the index will be consistent, but this document
 		/// may not have been added.  Furthermore, it's possible
 		/// the index will have one segment in non-compound format
 		/// even when using compound files (when a merge has
 		/// partially succeeded).</p>
 		/// 
-		/// <p> This method periodically flushes pending documents
+		/// <p/> This method periodically flushes pending documents
 		/// to the Directory (see <a href="#flush">above</a>), and
 		/// also periodically triggers segment merges in the index
 		/// according to the {@link MergePolicy} in use.</p>
 		/// 
-		/// <p>Merges temporarily consume space in the
+		/// <p/>Merges temporarily consume space in the
 		/// directory. The amount of space required is up to 1X the
 		/// size of all segments being merged, when no
 		/// readers/searchers are open against the index, and up to
@@ -2807,17 +2807,17 @@
 		/// primitive merge operations performed is governed by the
 		/// merge policy.
 		/// 
-		/// <p>Note that each term in the document can be no longer
+		/// <p/>Note that each term in the document can be no longer
 		/// than 16383 characters, otherwise an
 		/// IllegalArgumentException will be thrown.</p>
 		/// 
-		/// <p>Note that it's possible to create an invalid Unicode
+		/// <p/>Note that it's possible to create an invalid Unicode
 		/// string in java if a UTF16 surrogate pair is malformed.
 		/// In this case, the invalid characters are silently
 		/// replaced with the Unicode replacement character
 		/// U+FFFD.</p>
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// 
@@ -2834,11 +2834,11 @@
 		/// {@link #SetMaxFieldLength(int)} terms for a given field, the remainder are
 		/// discarded.
 		/// 
-		/// <p>See {@link #AddDocument(Document)} for details on
+		/// <p/>See {@link #AddDocument(Document)} for details on
 		/// index and IndexWriter state after an Exception, and
 		/// flushing/merging temporary free space requirements.</p>
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// 
@@ -2889,7 +2889,7 @@
 		
 		/// <summary> Deletes the document(s) containing <code>term</code>.
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// 
@@ -2916,7 +2916,7 @@
 		/// <summary> Deletes the document(s) containing any of the
 		/// terms. All deletes are flushed at the same time.
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// 
@@ -2943,7 +2943,7 @@
 		
 		/// <summary> Deletes the document(s) matching the provided query.
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// 
@@ -2963,7 +2963,7 @@
 		/// <summary> Deletes the document(s) matching any of the provided queries.
 		/// All deletes are flushed at the same time.
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// 
@@ -2987,7 +2987,7 @@
 		/// by a reader on the same index (flush may happen only after
 		/// the add).
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// 
@@ -3011,7 +3011,7 @@
 		/// by a reader on the same index (flush may happen only after
 		/// the add).
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// 
@@ -3142,33 +3142,33 @@
 		/// default merge policy, but individual merge policies may implement
 		/// optimize in different ways.
 		/// 
-		/// <p>It is recommended that this method be called upon completion of indexing.  In
+		/// <p/>It is recommended that this method be called upon completion of indexing.  In
 		/// environments with frequent updates, optimize is best done during low volume times, if at all. 
 		/// 
 		/// </p>
-		/// <p>See http://www.gossamer-threads.com/lists/lucene/java-dev/47895 for more discussion. </p>
+		/// <p/>See http://www.gossamer-threads.com/lists/lucene/java-dev/47895 for more discussion. </p>
 		/// 
-		/// <p>Note that optimize requires 2X the index size free
+		/// <p/>Note that optimize requires 2X the index size free
 		/// space in your Directory.  For example, if your index
 		/// size is 10 MB then you need 20 MB free for optimize to
 		/// complete.</p>
 		/// 
-		/// <p>If some but not all readers re-open while an
+		/// <p/>If some but not all readers re-open while an
 		/// optimize is underway, this will cause > 2X temporary
 		/// space to be consumed as those new readers will then
 		/// hold open the partially optimized segments at that
 		/// time.  It is best not to re-open readers while optimize
 		/// is running.</p>
 		/// 
-		/// <p>The actual temporary usage could be much less than
+		/// <p/>The actual temporary usage could be much less than
 		/// these figures (it depends on many factors).</p>
 		/// 
-		/// <p>In general, once the optimize completes, the total size of the
+		/// <p/>In general, once the optimize completes, the total size of the
 		/// index will be less than the size of the starting index.
 		/// It could be quite a bit smaller (if there were many
 		/// pending deletes) or just slightly smaller.</p>
 		/// 
-		/// <p>If an Exception is hit during optimize(), for example
+		/// <p/>If an Exception is hit during optimize(), for example
 		/// due to disk full, the index will not be corrupt and no
 		/// documents will have been lost.  However, it may have
 		/// been partially optimized (some segments were merged but
@@ -3178,13 +3178,13 @@
 		/// Exception is hit during conversion of the segment into
 		/// compound format.</p>
 		/// 
-		/// <p>This call will optimize those segments present in
+		/// <p/>This call will optimize those segments present in
 		/// the index when the call started.  If other threads are
 		/// still adding documents and flushing segments, those
 		/// newly created segments will not be optimized unless you
 		/// call optimize again.</p>
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// 
@@ -3202,7 +3202,7 @@
 		/// maxNumSegments==1 then this is the same as {@link
 		/// #Optimize()}.
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// 
@@ -3221,7 +3221,7 @@
 		/// {@link MergeScheduler} that is able to run merges in
 		/// background threads.
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// </summary>
@@ -3236,7 +3236,7 @@
 		/// {@link MergeScheduler} that is able to run merges in
 		/// background threads.
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// </summary>
@@ -3366,7 +3366,7 @@
 		/// {@link MergeScheduler} that is able to run merges in
 		/// background threads.
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// </summary>
@@ -3453,7 +3453,7 @@
 		/// documents, so you must do so yourself if necessary.
 		/// See also {@link #ExpungeDeletes(boolean)}
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// </summary>
@@ -3471,7 +3471,7 @@
 		/// necessary. The most common case is when merge policy
 		/// parameters have changed.
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// </summary>
@@ -3902,14 +3902,14 @@
 		
 		/// <summary> Delete all documents in the index.
 		/// 
-		/// <p>This method will drop all buffered documents and will 
+		/// <p/>This method will drop all buffered documents and will 
 		/// remove all segments from the index. This change will not be
 		/// visible until a {@link #Commit()} has been called. This method
 		/// can be rolled back using {@link #Rollback()}.</p>
 		/// 
-		/// <p>NOTE: this method is much faster than using deleteDocuments( new MatchAllDocsQuery() ).</p>
+		/// <p/>NOTE: this method is much faster than using deleteDocuments( new MatchAllDocsQuery() ).</p>
 		/// 
-		/// <p>NOTE: this method will forcefully abort all merges
+		/// <p/>NOTE: this method will forcefully abort all merges
 		/// in progress.  If other threads are running {@link
 		/// #Optimize()} or any of the addIndexes methods, they
 		/// will receive {@link MergePolicy.MergeAbortedException}s.
@@ -4027,7 +4027,7 @@
 		
 		/// <summary> Wait for any currently outstanding merges to finish.
 		/// 
-		/// <p>It is guaranteed that any merges started prior to calling this method 
+		/// <p/>It is guaranteed that any merges started prior to calling this method 
 		/// will have completed once this method completes.</p>
 		/// </summary>
 		public virtual void  WaitForMerges()
@@ -4095,7 +4095,7 @@
 		
 		/// <summary>Merges all segments from an array of indexes into this index.
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// 
@@ -4209,29 +4209,29 @@
 		/// <summary> Merges all segments from an array of indexes into this
 		/// index.
 		/// 
-		/// <p>This may be used to parallelize batch indexing.  A large document
+		/// <p/>This may be used to parallelize batch indexing.  A large document
 		/// collection can be broken into sub-collections.  Each sub-collection can be
 		/// indexed in parallel, on a different thread, process or machine.  The
 		/// complete index can then be created by merging sub-collection indexes
 		/// with this method.
 		/// 
-		/// <p><b>NOTE:</b> the index in each Directory must not be
+		/// <p/><b>NOTE:</b> the index in each Directory must not be
 		/// changed (opened by a writer) while this method is
 		/// running.  This method does not acquire a write lock in
 		/// each input Directory, so it is up to the caller to
 		/// enforce this.
 		/// 
-		/// <p><b>NOTE:</b> while this is running, any attempts to
+		/// <p/><b>NOTE:</b> while this is running, any attempts to
 		/// add or delete documents (with another thread) will be
 		/// paused until this method completes.
 		/// 
-		/// <p>This method is transactional in how Exceptions are
+		/// <p/>This method is transactional in how Exceptions are
 		/// handled: it does not commit a new segments_N file until
 		/// all indexes are added.  This means if an Exception
 		/// occurs (for example disk full), then either no indexes
 		/// will have been added or they all will have been.</p>
 		/// 
-		/// <p>Note that this requires temporary free space in the
+		/// <p/>Note that this requires temporary free space in the
 		/// Directory up to 2X the sum of all input indexes
 		/// (including the starting index).  If readers/searchers
 		/// are open against the starting index, then temporary
@@ -4239,16 +4239,16 @@
 		/// starting index (see {@link #Optimize()} for details).
 		/// </p>
 		/// 
-		/// <p>Once this completes, the final size of the index
+		/// <p/>Once this completes, the final size of the index
 		/// will be less than the sum of all input index sizes
 		/// (including the starting index).  It could be quite a
 		/// bit smaller (if there were many pending deletes) or
 		/// just slightly smaller.</p>
 		/// 
-		/// <p>
+		/// <p/>
 		/// This requires this index not be among those to be added.
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// 
@@ -4434,19 +4434,19 @@
 		}
 		
 		/// <summary>Merges the provided indexes into this index.
-		/// <p>After this completes, the index is optimized. </p>
-		/// <p>The provided IndexReaders are not closed.</p>
+		/// <p/>After this completes, the index is optimized. </p>
+		/// <p/>The provided IndexReaders are not closed.</p>
 		/// 
-		/// <p><b>NOTE:</b> while this is running, any attempts to
+		/// <p/><b>NOTE:</b> while this is running, any attempts to
 		/// add or delete documents (with another thread) will be
 		/// paused until this method completes.
 		/// 
-		/// <p>See {@link #AddIndexesNoOptimize(Directory[])} for
+		/// <p/>See {@link #AddIndexesNoOptimize(Directory[])} for
 		/// details on transactional semantics, temporary free
 		/// space required in the Directory, and non-CFS segments
 		/// on an Exception.</p>
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// 
@@ -4638,11 +4638,11 @@
 		
 		/// <summary> Flush all in-memory buffered updates (adds and deletes)
 		/// to the Directory. 
-		/// <p>Note: while this will force buffered docs to be
+		/// <p/>Note: while this will force buffered docs to be
 		/// pushed into the index, it will not make these docs
 		/// visible to a reader.  Use {@link #Commit()} instead
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// 
@@ -4665,7 +4665,7 @@
 		
 		/// <summary>Expert: prepare for commit.
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// 
@@ -4678,7 +4678,7 @@
 			PrepareCommit(null);
 		}
 		
-		/// <summary><p>Expert: prepare for commit, specifying
+		/// <summary><p/>Expert: prepare for commit, specifying
 		/// commitUserData Map (String -> String).  This does the
 		/// first phase of 2-phase commit.  You can only call this
 		/// when autoCommit is false.  This method does all steps
@@ -4694,7 +4694,7 @@
 		/// without prepareCommit first in which case that method
 		/// will internally call prepareCommit.
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// 
@@ -4745,7 +4745,7 @@
 			FinishCommit();
 		}
 		
-		/// <summary> <p>Commits all pending changes (added & deleted
+		/// <summary> <p/>Commits all pending changes (added & deleted
 		/// documents, optimizations, segment merges, added
 		/// indexes, etc.) to the index, and syncs all referenced
 		/// index files, such that a reader will see the changes
@@ -4755,7 +4755,7 @@
 		/// costly operation, so you should test the cost in your
 		/// application and do it only when really necessary.</p>
 		/// 
-		/// <p> Note that this operation calls Directory.sync on
+		/// <p/> Note that this operation calls Directory.sync on
 		/// the index files.  That call should not return until the
 		/// file contents & metadata are on stable storage.  For
 		/// FSDirectory, this calls the OS's fsync.  But, beware:
@@ -4767,7 +4767,7 @@
 		/// loss it may still lose data.  Lucene cannot guarantee
 		/// consistency on such devices.  </p>
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// 
@@ -4786,7 +4786,7 @@
 		/// calls {@link #PrepareCommit(Map)} (if you didn't
 		/// already call it) and then {@link #finishCommit}.
 		/// 
-		/// <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+		/// <p/><b>NOTE</b>: if this method hits an OutOfMemoryError
 		/// you should immediately close the writer.  See <a
 		/// href="#OOME">above</a> for details.</p>
 		/// </summary>
@@ -6543,7 +6543,7 @@
 		}
 		
 		/// <summary> Forcibly unlocks the index in the named directory.
-		/// <P>
+		/// <p/>
 		/// Caution: this should only be used by failure recovery code,
 		/// when it is known that no other process nor thread is in fact
 		/// currently accessing this index.
@@ -6617,10 +6617,10 @@
 		/// search, but will reduce search latency on opening a
 		/// new near real-time reader after a merge completes.
 		/// 
-		/// <p><b>NOTE:</b> This API is experimental and might
+		/// <p/><b>NOTE:</b> This API is experimental and might
 		/// change in incompatible ways in the next release.</p>
 		/// 
-		/// <p><b>NOTE</b>: warm is called before any deletes have
+		/// <p/><b>NOTE</b>: warm is called before any deletes have
 		/// been carried over to the merged segment. 
 		/// </summary>
 		public abstract class IndexReaderWarmer
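
The prepareCommit/commit/rollback contract documented above (prepareCommit as phase one of a two-phase commit, commit calling it internally if needed, rollback discarding uncommitted state) can be sketched as a minimal state machine. This is an illustrative sketch only; the class and method names below are invented for the example and are not the Lucene.Net API.

```python
# Minimal two-phase-commit state machine mirroring the documented
# prepareCommit()/commit()/rollback() contract. Hypothetical names.

class TwoPhaseWriter:
    def __init__(self):
        self.committed = []   # durable state, visible to readers
        self.buffered = []    # in-memory buffered updates
        self.prepared = None  # pending state, written/synced but not published

    def add_document(self, doc):
        self.buffered.append(doc)

    def prepare_commit(self):
        # Phase 1: flush and sync everything, but do not publish yet.
        if self.prepared is not None:
            raise RuntimeError("prepareCommit already in flight")
        self.prepared = self.committed + self.buffered
        self.buffered = []

    def commit(self):
        # Phase 2: publish the prepared state; readers now see it.
        if self.prepared is None:
            self.prepare_commit()  # commit() without prepare calls it internally
        self.committed = self.prepared
        self.prepared = None

    def rollback(self):
        # Discard everything since the last successful commit.
        self.buffered = []
        self.prepared = None


w = TwoPhaseWriter()
w.add_document("doc1")
w.prepare_commit()      # flushed and synced, but not yet visible
assert w.committed == []
w.commit()              # now visible
assert w.committed == ["doc1"]
w.add_document("doc2")
w.rollback()            # drops uncommitted changes
assert w.committed == ["doc1"]
```

The split into two phases is what lets a coordinator prepare several resources and only then publish all of them, which is the use case the prepareCommit documentation describes.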

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/LogByteSizeMergePolicy.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/LogByteSizeMergePolicy.cs?rev=890338&r1=890337&r2=890338&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/LogByteSizeMergePolicy.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/LogByteSizeMergePolicy.cs Mon Dec 14 14:13:03 2009
@@ -47,7 +47,7 @@
 			return SizeBytes(info);
 		}
 		
-		/// <summary><p>Determines the largest segment (measured by total
+		/// <summary><p/>Determines the largest segment (measured by total
 		/// byte size of the segment's files, in MB) that may be
 		/// merged with other segments.  Small values (e.g., less
 		/// than 50 MB) are best for interactive indexing, as this
@@ -55,7 +55,7 @@
 		/// seconds.  Larger values are best for batched indexing
 		/// and speedier searches.</p>
 		/// 
-		/// <p>Note that {@link #setMaxMergeDocs} is also
+		/// <p/>Note that {@link #setMaxMergeDocs} is also
 		/// used to check whether a segment is too large for
 		/// merging (it's either or).</p>
 		/// </summary>

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/LogMergePolicy.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/LogMergePolicy.cs?rev=890338&r1=890337&r2=890338&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/LogMergePolicy.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/LogMergePolicy.cs Mon Dec 14 14:13:03 2009
@@ -20,7 +20,7 @@
 namespace Lucene.Net.Index
 {
 	
-	/// <summary><p>This class implements a {@link MergePolicy} that tries
+	/// <summary><p/>This class implements a {@link MergePolicy} that tries
 	/// to merge segments into levels of exponentially
 	/// increasing size, where each level has fewer segments than
 	/// the value of the merge factor. Whenever extra segments
@@ -29,7 +29,7 @@
 	/// set the merge factor using {@link #GetMergeFactor()} and
 	/// {@link #SetMergeFactor(int)} respectively.</p>
 	/// 
-	/// <p>This class is abstract and requires a subclass to
+	/// <p/>This class is abstract and requires a subclass to
 	/// define the {@link #size} method which specifies how a
 	/// segment's size is determined.  {@link LogDocMergePolicy}
 	/// is one subclass that measures size by document count in
@@ -85,7 +85,7 @@
 				writer.Message("LMP: " + message);
 		}
 		
-		/// <summary><p>Returns the number of segments that are merged at
+		/// <summary><p/>Returns the number of segments that are merged at
 		/// once and also controls the total number of segments
 		/// allowed to accumulate in the index.</p> 
 		/// </summary>
@@ -514,7 +514,7 @@
 			return spec;
 		}
 		
-		/// <summary><p>Determines the largest segment (measured by
+		/// <summary><p/>Determines the largest segment (measured by
 		/// document count) that may be merged with other segments.
 		/// Small values (e.g., less than 10,000) are best for
 		/// interactive indexing, as this limits the length of
@@ -522,9 +522,9 @@
 		/// are best for batched indexing and speedier
 		/// searches.</p>
 		/// 
-		/// <p>The default value is {@link Integer#MAX_VALUE}.</p>
+		/// <p/>The default value is {@link Integer#MAX_VALUE}.</p>
 		/// 
-		/// <p>The default merge policy ({@link
+		/// <p/>The default merge policy ({@link
 		/// LogByteSizeMergePolicy}) also allows you to set this
 		/// limit by net size (in MB) of the segment, using {@link
 		/// LogByteSizeMergePolicy#setMaxMergeMB}.</p>
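
The LogMergePolicy comments above describe merging segments into levels of exponentially increasing size, with at most mergeFactor segments accumulating per level. A small simulation shows the cascade; this is an illustrative sketch with invented names, not Lucene.Net code.

```python
# Simulates a log-structured merge policy: whenever mergeFactor
# same-sized segments accumulate, they merge into one segment at the
# next (mergeFactor-times-larger) level. Names invented for the example.

MERGE_FACTOR = 10

def add_segment(segments, size=1):
    """Append a newly flushed segment, then cascade merges while any
    level holds MERGE_FACTOR segments of the same size."""
    segments.append(size)
    while True:
        for s in set(segments):
            if segments.count(s) >= MERGE_FACTOR:
                for _ in range(MERGE_FACTOR):
                    segments.remove(s)
                segments.append(s * MERGE_FACTOR)
                break  # re-scan: the merged segment may cascade further
        else:
            return

segments = []
for _ in range(100):
    add_segment(segments)
# 100 one-doc flushes collapse into a single 100-doc segment
assert segments == [100]

segments = []
for _ in range(25):
    add_segment(segments)
# two merged level-1 segments (10 docs each) plus 5 unmerged flushes
assert sorted(segments) == [1, 1, 1, 1, 1, 10, 10]
```

This also illustrates the documented tradeoff: a small merge factor keeps segment counts low (good for interactive search latency) at the cost of frequent merges, while a large one defers merge work, which suits batch indexing.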

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/MergePolicy.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/MergePolicy.cs?rev=890338&r1=890337&r2=890338&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/MergePolicy.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/MergePolicy.cs Mon Dec 14 14:13:03 2009
@@ -22,11 +22,11 @@
 namespace Lucene.Net.Index
 {
 	
-	/// <summary> <p>Expert: a MergePolicy determines the sequence of
+	/// <summary> <p/>Expert: a MergePolicy determines the sequence of
 	/// primitive merge operations to be used for overall merge
 	/// and optimize operations.</p>
 	/// 
-	/// <p>Whenever the segments in an index have been altered by
+	/// <p/>Whenever the segments in an index have been altered by
 	/// {@link IndexWriter}, either the addition of a newly
 	/// flushed segment, addition of many segments from
 	/// addIndexes* calls, or a previous merge that may now need
@@ -39,19 +39,19 @@
 	/// {@link #findMergesForOptimize} and the MergePolicy should
 	/// then return the necessary merges.</p>
 	/// 
-	/// <p>Note that the policy can return more than one merge at
+	/// <p/>Note that the policy can return more than one merge at
 	/// a time.  In this case, if the writer is using {@link
 	/// SerialMergeScheduler}, the merges will be run
 	/// sequentially but if it is using {@link
 	/// ConcurrentMergeScheduler} they will be run concurrently.</p>
 	/// 
-	/// <p>The default MergePolicy is {@link
+	/// <p/>The default MergePolicy is {@link
 	/// LogByteSizeMergePolicy}.</p>
 	/// 
-	/// <p><b>NOTE:</b> This API is new and still experimental
+	/// <p/><b>NOTE:</b> This API is new and still experimental
 	/// (subject to change suddenly in the next release)</p>
 	/// 
-	/// <p><b>NOTE</b>: This class typically requires access to
+	/// <p/><b>NOTE</b>: This class typically requires access to
 	/// package-private APIs (e.g. <code>SegmentInfos</code>) to do its job;
 	/// if you implement your own MergePolicy, you'll need to put
 	/// it in package Lucene.Net.Index in order to use

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/MergeScheduler.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/MergeScheduler.cs?rev=890338&r1=890337&r2=890338&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/MergeScheduler.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/MergeScheduler.cs Mon Dec 14 14:13:03 2009
@@ -20,15 +20,15 @@
 namespace Lucene.Net.Index
 {
 	
-	/// <summary><p>Expert: {@link IndexWriter} uses an instance
+	/// <summary><p/>Expert: {@link IndexWriter} uses an instance
 	/// implementing this interface to execute the merges
 	/// selected by a {@link MergePolicy}.  The default
 	/// MergeScheduler is {@link ConcurrentMergeScheduler}.</p>
 	/// 
-	/// <p><b>NOTE:</b> This API is new and still experimental
+	/// <p/><b>NOTE:</b> This API is new and still experimental
 	/// (subject to change suddenly in the next release)</p>
 	/// 
-	/// <p><b>NOTE</b>: This class typically requires access to
+	/// <p/><b>NOTE</b>: This class typically requires access to
 	/// package-private APIs (eg, SegmentInfos) to do its job;
 	/// if you implement your own MergePolicy, you'll need to put
 	/// it in package Lucene.Net.Index in order to use

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/MultiReader.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/MultiReader.cs?rev=890338&r1=890337&r2=890338&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/MultiReader.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/MultiReader.cs Mon Dec 14 14:13:03 2009
@@ -42,10 +42,10 @@
 		private int numDocs = - 1;
 		private bool hasDeletions = false;
 		
-		/// <summary> <p>Construct a MultiReader aggregating the named set of (sub)readers.
+		/// <summary> <p/>Construct a MultiReader aggregating the named set of (sub)readers.
 		/// Directory locking for delete, undeleteAll, and setNorm operations is
 		/// left to the subreaders. </p>
-		/// <p>Note that all subreaders are closed if this Multireader is closed.</p>
+		/// <p/>Note that all subreaders are closed if this Multireader is closed.</p>
 		/// </summary>
 		/// <param name="subReaders">set of (sub)readers
 		/// </param>
@@ -55,7 +55,7 @@
 			Initialize(subReaders, true);
 		}
 		
-		/// <summary> <p>Construct a MultiReader aggregating the named set of (sub)readers.
+		/// <summary> <p/>Construct a MultiReader aggregating the named set of (sub)readers.
 		/// Directory locking for delete, undeleteAll, and setNorm operations is
 		/// left to the subreaders. </p>
 		/// </summary>
@@ -102,12 +102,12 @@
 		/// If one or more subreaders could be re-opened (i. e. subReader.reopen() 
 		/// returned a new instance != subReader), then a new MultiReader instance 
 		/// is returned, otherwise this instance is returned.
-		/// <p>
+		/// <p/>
 		/// A re-opened instance might share one or more subreaders with the old 
 		/// instance. Index modification operations result in undefined behavior
 		/// when performed before the old instance is closed.
 		/// (see {@link IndexReader#Reopen()}).
-		/// <p>
+		/// <p/>
 		/// If subreaders are shared, then the reference count of those
 		/// readers is increased to ensure that the subreaders remain open
 		/// until the last referring reader is closed.
@@ -126,7 +126,7 @@
 		/// <summary> Clones the subreaders.
 		/// (see {@link IndexReader#clone()}).
 		/// <br>
-		/// <p>
+		/// <p/>
 		/// If subreaders are shared, then the reference count of those
 		/// readers is increased to ensure that the subreaders remain open
 		/// until the last referring reader is closed.
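
The reopen/clone comments above state that a reopened MultiReader may share subreaders with the old instance, bumping their reference counts so a shared subreader stays open until the last referring reader closes. A hedged sketch of that contract (hypothetical names, not the Lucene.Net API):

```python
# Reference-counted subreaders: closing one parent reader must not
# close a subreader still referenced by a reopened sibling.

class SubReader:
    def __init__(self):
        self.ref_count = 1
        self.closed = False

    def inc_ref(self):
        self.ref_count += 1

    def dec_ref(self):
        self.ref_count -= 1
        if self.ref_count == 0:
            self.closed = True

class SharedMultiReader:
    def __init__(self, subs, inc=False):
        self.subs = subs
        if inc:                      # reopen path: share, so add a reference
            for s in subs:
                s.inc_ref()

    def reopen(self):
        # Pretend no subreader changed: share all of them with the new instance.
        return SharedMultiReader(self.subs, inc=True)

    def close(self):
        for s in self.subs:
            s.dec_ref()

old = SharedMultiReader([SubReader(), SubReader()])
new = old.reopen()        # shares subreaders; ref counts are now 2
old.close()               # subreaders stay open: new still refers to them
assert not any(s.closed for s in new.subs)
new.close()               # last reference released: now they close
assert all(s.closed for s in new.subs)
```
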

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/ParallelReader.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/ParallelReader.cs?rev=890338&r1=890337&r2=890338&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/ParallelReader.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/ParallelReader.cs Mon Dec 14 14:13:03 2009
@@ -32,12 +32,12 @@
 	/// documents with the same document number.  When searching, matches for a
 	/// query term are from the first index added that has the field.
 	/// 
-	/// <p>This is useful, e.g., with collections that have large fields which
+	/// <p/>This is useful, e.g., with collections that have large fields which
 	/// change rarely and small fields that change more frequently.  The smaller
 	/// fields may be re-indexed in a new index and both indexes may be searched
 	/// together.
 	/// 
-	/// <p><strong>Warning:</strong> It is up to you to make sure all indexes
+	/// <p/><strong>Warning:</strong> It is up to you to make sure all indexes
 	/// are created and modified the same way. For example, if you add
 	/// documents to one index, you need to add the same documents in the
 	/// same order to the other indexes. <em>Failure to do so will result in
@@ -57,7 +57,7 @@
 		private bool hasDeletions;
 		
 		/// <summary>Construct a ParallelReader. 
-		/// <p>Note that all subreaders are closed if this ParallelReader is closed.</p>
+		/// <p/>Note that all subreaders are closed if this ParallelReader is closed.</p>
 		/// </summary>
 		public ParallelReader():this(true)
 		{
@@ -148,12 +148,12 @@
 		/// If one or more subreaders could be re-opened (i. e. subReader.reopen() 
 		/// returned a new instance != subReader), then a new ParallelReader instance 
 		/// is returned, otherwise this instance is returned.
-		/// <p>
+		/// <p/>
 		/// A re-opened instance might share one or more subreaders with the old 
 		/// instance. Index modification operations result in undefined behavior
 		/// when performed before the old instance is closed.
 		/// (see {@link IndexReader#Reopen()}).
-		/// <p>
+		/// <p/>
 		/// If subreaders are shared, then the reference count of those
 		/// readers is increased to ensure that the subreaders remain open
 		/// until the last referring reader is closed.

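ParallelReader's field-resolution rule described above ("matches for a query term are from the first index added that has the field") can be illustrated with plain maps standing in for parallel indexes. This is a hypothetical sketch of the rule only, not actual ParallelReader code.

```java
import java.util.*;

// Sketch of ParallelReader's field resolution: several "indexes" hold
// different fields for the same document, and a lookup returns the value
// from the first index added that contains the field. Names are invented.
public class ParallelFieldsSketch {
    private final List<Map<String, String>> indexes = new ArrayList<>();

    void add(Map<String, String> indexFields) { indexes.add(indexFields); }

    String get(String field) {
        for (Map<String, String> fields : indexes)
            if (fields.containsKey(field))
                return fields.get(field);   // first index added wins
        return null;
    }

    public static void main(String[] args) {
        ParallelFieldsSketch reader = new ParallelFieldsSketch();
        // large, rarely re-indexed fields live in the first index...
        reader.add(Map.of("contents", "large rarely-changing text"));
        // ...small, frequently re-indexed fields in the second
        reader.add(Map.of("contents", "shadowed", "modified", "2009-12-14"));
        System.out.println(reader.get("contents")); // large rarely-changing text
        System.out.println(reader.get("modified")); // 2009-12-14
    }
}
```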
Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/Payload.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/Payload.cs?rev=890338&r1=890337&r2=890338&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/Payload.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/Payload.cs Mon Dec 14 14:13:03 2009
@@ -26,10 +26,10 @@
 	/// <summary>  A Payload is metadata that can be stored together with each occurrence 
 	/// of a term. This metadata is stored inline in the posting list of the
 	/// specific term.  
-	/// <p>
+	/// <p/>
 	/// To store payloads in the index a {@link TokenStream} has to be used that
 	/// produces payload data.
-	/// <p>
+	/// <p/>
 	/// Use {@link TermPositions#GetPayloadLength()} and {@link TermPositions#GetPayload(byte[], int)}
 	/// to retrieve the payloads from the index.<br>
 	/// 

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/SegmentInfo.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/SegmentInfo.cs?rev=890338&r1=890337&r2=890338&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/SegmentInfo.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/SegmentInfo.cs Mon Dec 14 14:13:03 2009
@@ -28,7 +28,7 @@
 	/// <summary> Information about a segment such as its name, directory, and files related
 	/// to the segment.
 	/// 
-	/// * <p><b>NOTE:</b> This API is new and still experimental
+	/// * <p/><b>NOTE:</b> This API is new and still experimental
 	/// (subject to change suddenly in the next release)</p>
 	/// </summary>
 	public sealed class SegmentInfo : System.ICloneable

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/SegmentInfos.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/SegmentInfos.cs?rev=890338&r1=890337&r2=890338&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/SegmentInfos.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/SegmentInfos.cs Mon Dec 14 14:13:03 2009
@@ -30,7 +30,7 @@
 	/// <summary> A collection of segmentInfo objects with methods for operating on
 	/// those segments in relation to the file system.
 	/// 
-	/// <p><b>NOTE:</b> This API is new and still experimental
+	/// <p/><b>NOTE:</b> This API is new and still experimental
 	/// (subject to change suddenly in the next release)</p>
 	/// </summary>
 	[Serializable]

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/SegmentMerger.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/SegmentMerger.cs?rev=890338&r1=890337&r2=890338&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/SegmentMerger.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/SegmentMerger.cs Mon Dec 14 14:13:03 2009
@@ -32,7 +32,7 @@
 	/// <summary> The SegmentMerger class combines two or more Segments, represented by an IndexReader ({@link #add}),
 	/// into a single Segment.  After adding the appropriate readers, call the merge method to combine the 
 	/// segments.
-	/// <P> 
+	/// <p/> 
 	/// If the compoundFile flag is set, then the segments will be merged into a compound file.
 	/// 
 	/// 

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/SegmentReader.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/SegmentReader.cs?rev=890338&r1=890337&r2=890338&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/SegmentReader.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/SegmentReader.cs Mon Dec 14 14:13:03 2009
@@ -32,7 +32,7 @@
 	
 	/// <version>  $Id 
 	/// </version>
-	/// <summary> <p><b>NOTE:</b> This API is new and still experimental
+	/// <summary> <p/><b>NOTE:</b> This API is new and still experimental
 	/// (subject to change suddenly in the next release)</p>
 	/// </summary>
 	public class SegmentReader:IndexReader, System.ICloneable

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/SnapshotDeletionPolicy.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/SnapshotDeletionPolicy.cs?rev=890338&r1=890337&r2=890338&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/SnapshotDeletionPolicy.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/SnapshotDeletionPolicy.cs Mon Dec 14 14:13:03 2009
@@ -37,7 +37,7 @@
 	/// snapshot held when a writer is closed will "survive"
 	/// when the next writer is opened.
 	/// 
-	/// <p><b>WARNING</b>: This API is a new and experimental and
+		/// <p/><b>WARNING</b>: This API is new and experimental and
 	/// may suddenly change.</p> 
 	/// </summary>
 	

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/Term.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/Term.cs?rev=890338&r1=890337&r2=890338&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/Term.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/Term.cs Mon Dec 14 14:13:03 2009
@@ -36,7 +36,7 @@
 		internal System.String text;
 		
 		/// <summary>Constructs a Term with the given field and text.
-		/// <p>Note that a null field or null text value results in undefined
+		/// <p/>Note that a null field or null text value results in undefined
 		/// behavior for most Lucene APIs that accept a Term parameter. 
 		/// </summary>
 		public Term(System.String fld, System.String txt)

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/TermDocs.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/TermDocs.cs?rev=890338&r1=890337&r2=890338&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/TermDocs.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/TermDocs.cs Mon Dec 14 14:13:03 2009
@@ -21,9 +21,9 @@
 {
 	
 	/// <summary>TermDocs provides an interface for enumerating &lt;document, frequency&gt;
-	/// pairs for a term.  <p> The document portion names each document containing
+	/// pairs for a term.  <p/> The document portion names each document containing
 	/// the term.  Documents are indicated by number.  The frequency portion gives
-	/// the number of times the term occurred in each document.  <p> The pairs are
+	/// the number of times the term occurred in each document.  <p/> The pairs are
 	/// ordered by document number.
 	/// </summary>
 	/// <seealso cref="IndexReader.TermDocs()">
@@ -41,17 +41,17 @@
 		/// </summary>
 		void  Seek(TermEnum termEnum);
 		
-		/// <summary>Returns the current document number.  <p> This is invalid until {@link
+		/// <summary>Returns the current document number.  <p/> This is invalid until {@link
 		/// #Next()} is called for the first time.
 		/// </summary>
 		int Doc();
 		
-		/// <summary>Returns the frequency of the term within the current document.  <p> This
+		/// <summary>Returns the frequency of the term within the current document.  <p/> This
 		/// is invalid until {@link #Next()} is called for the first time.
 		/// </summary>
 		int Freq();
 		
-		/// <summary>Moves to the next pair in the enumeration.  <p> Returns true iff there is
+		/// <summary>Moves to the next pair in the enumeration.  <p/> Returns true iff there is
 		/// such a next pair in the enumeration. 
 		/// </summary>
 		bool Next();
@@ -61,14 +61,14 @@
 		/// frequencies are stored in <i>freqs</i>.  The <i>freqs</i> array must be as
 		/// long as the <i>docs</i> array.
 		/// 
-		/// <p>Returns the number of entries read.  Zero is only returned when the
+		/// <p/>Returns the number of entries read.  Zero is only returned when the
 		/// stream has been exhausted.  
 		/// </summary>
 		int Read(int[] docs, int[] freqs);
 		
 		/// <summary>Skips entries to the first beyond the current whose document number is
-		/// greater than or equal to <i>target</i>. <p>Returns true iff there is such
-		/// an entry.  <p>Behaves as if written: <pre>
+		/// greater than or equal to <i>target</i>. <p/>Returns true iff there is such
+		/// an entry.  <p/>Behaves as if written: <pre>
 		/// boolean skipTo(int target) {
 		/// do {
 		/// if (!next())

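The skipTo() contract quoted in the TermDocs javadoc above ("behaves as if written: ...") can be exercised against a plain array of document numbers. This is an illustrative sketch with invented names, not the real TermDocs implementation; real readers typically skip more efficiently than this linear loop, but the observable behavior is the same.

```java
// Sketch of the documented skipTo() contract: advance the enumeration until
// doc() >= target, returning false when the enumeration is exhausted.
public class TermDocsSketch {
    private final int[] docs;  // postings, ordered by document number
    private int pos = -1;      // doc() is invalid until next() is called

    TermDocsSketch(int[] docs) { this.docs = docs; }

    boolean next() { return ++pos < docs.length; }

    int doc() { return docs[pos]; }

    boolean skipTo(int target) {
        do {
            if (!next())
                return false;
        } while (target > doc());
        return true;
    }

    public static void main(String[] args) {
        TermDocsSketch td = new TermDocsSketch(new int[]{2, 5, 9, 14});
        System.out.println(td.skipTo(6));   // true: positioned on first doc >= 6
        System.out.println(td.doc());       // 9
        System.out.println(td.skipTo(20));  // false: enumeration exhausted
    }
}
```

TermEnum.skipTo() (further below) follows the same do/while contract over terms instead of document numbers.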
Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/TermEnum.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/TermEnum.cs?rev=890338&r1=890337&r2=890338&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/TermEnum.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/TermEnum.cs Mon Dec 14 14:13:03 2009
@@ -21,7 +21,7 @@
 {
 	
 	/// <summary>Abstract class for enumerating terms.
-	/// <p>Term enumerations are always ordered by Term.compareTo().  Each term in
+	/// <p/>Term enumerations are always ordered by Term.compareTo().  Each term in
 	/// the enumeration is greater than all that precede it.  
 	/// </summary>
 	
@@ -40,8 +40,8 @@
 		public abstract void  Close();
 		
 		/// <summary>Skips terms to the first beyond the current whose value is
-		/// greater or equal to <i>target</i>. <p>Returns true iff there is such
-		/// an entry.  <p>Behaves as if written: <pre>
+		/// greater or equal to <i>target</i>. <p/>Returns true iff there is such
+		/// an entry.  <p/>Behaves as if written: <pre>
 		/// public boolean skipTo(Term target) {
 		/// do {
 		/// if (!next())

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/TermPositions.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/Index/TermPositions.cs?rev=890338&r1=890337&r2=890338&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/TermPositions.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/Index/TermPositions.cs Mon Dec 14 14:13:03 2009
@@ -21,7 +21,7 @@
 {
 	
 	/// <summary> TermPositions provides an interface for enumerating the &lt;document,
-	/// frequency, &lt;position&gt;* &gt; tuples for a term.  <p> The document and
+	/// frequency, &lt;position&gt;* &gt; tuples for a term.  <p/> The document and
 	/// frequency are the same as for a TermDocs.  The positions portion lists the ordinal
 	/// positions of each occurrence of a term in a document.
 	/// 
@@ -33,7 +33,7 @@
 	{
 		/// <summary>Returns next position in the current document.  It is an error to call
 		/// this more than {@link #Freq()} times
-		/// without calling {@link #Next()}<p> This is
+		/// without calling {@link #Next()}<p/> This is
 		/// invalid until {@link #Next()} is called for
 		/// the first time.
 		/// </summary>
@@ -69,7 +69,7 @@
 		byte[] GetPayload(byte[] data, int offset);
 		
 		/// <summary> Checks if a payload can be loaded at this position.
-		/// <p>
+		/// <p/>
 		/// Payloads can only be loaded once per call to 
 		/// {@link #NextPosition()}.
 		/// 

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/FastCharStream.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/QueryParser/FastCharStream.cs?rev=890338&r1=890337&r2=890338&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/FastCharStream.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/FastCharStream.cs Mon Dec 14 14:13:03 2009
@@ -22,7 +22,7 @@
 namespace Lucene.Net.QueryParsers
 {
 	
-	/// <summary>An efficient implementation of JavaCC's CharStream interface.  <p>Note that
+	/// <summary>An efficient implementation of JavaCC's CharStream interface.  <p/>Note that
 	/// this does not do line-number counting, but instead keeps track of the
 	/// character position of the token in the input, as required by Lucene's {@link
 	/// Lucene.Net.Analysis.Token} API. 

Modified: incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/MultiFieldQueryParser.cs
URL: http://svn.apache.org/viewvc/incubator/lucene.net/trunk/C%23/src/Lucene.Net/QueryParser/MultiFieldQueryParser.cs?rev=890338&r1=890337&r2=890338&view=diff
==============================================================================
--- incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/MultiFieldQueryParser.cs (original)
+++ incubator/lucene.net/trunk/C#/src/Lucene.Net/QueryParser/MultiFieldQueryParser.cs Mon Dec 14 14:13:03 2009
@@ -41,7 +41,7 @@
 		/// <summary> Creates a MultiFieldQueryParser. Allows passing of a map with term to
 		/// Boost, and the boost to apply to each term.
 		/// 
-		/// <p>
+		/// <p/>
 		/// It will, when parse(String query) is called, construct a query like this
 		/// (assuming the query consists of two terms and you specify the two fields
 		/// <code>title</code> and <code>body</code>):
@@ -51,7 +51,7 @@
 		/// (title:term1 body:term1) (title:term2 body:term2)
 		/// </code>
 		/// 
-		/// <p>
+		/// <p/>
 		/// When setDefaultOperator(AND_OPERATOR) is set, the result will be:
 		/// </p>
 		/// 
@@ -59,7 +59,7 @@
 		/// +(title:term1 body:term1) +(title:term2 body:term2)
 		/// </code>
 		/// 
-		/// <p>
+		/// <p/>
 		/// When you pass a boost (title=>5 body=>10) you can get
 		/// </p>
 		/// 
@@ -67,7 +67,7 @@
 		/// +(title:term1^5.0 body:term1^10.0) +(title:term2^5.0 body:term2^10.0)
 		/// </code>
 		/// 
-		/// <p>
+		/// <p/>
 		/// In other words, all the query's terms must appear, but it doesn't matter
 		/// in what fields they appear.
 		/// </p>
@@ -86,7 +86,7 @@
 		/// <summary> Creates a MultiFieldQueryParser. Allows passing of a map with term to
 		/// Boost, and the boost to apply to each term.
 		/// 
-		/// <p>
+		/// <p/>
 		/// It will, when parse(String query) is called, construct a query like this
 		/// (assuming the query consists of two terms and you specify the two fields
 		/// <code>title</code> and <code>body</code>):
@@ -96,7 +96,7 @@
 		/// (title:term1 body:term1) (title:term2 body:term2)
 		/// </code>
 		/// 
-		/// <p>
+		/// <p/>
 		/// When setDefaultOperator(AND_OPERATOR) is set, the result will be:
 		/// </p>
 		/// 
@@ -104,7 +104,7 @@
 		/// +(title:term1 body:term1) +(title:term2 body:term2)
 		/// </code>
 		/// 
-		/// <p>
+		/// <p/>
 		/// When you pass a boost (title=>5 body=>10) you can get
 		/// </p>
 		/// 
@@ -112,7 +112,7 @@
 		/// +(title:term1^5.0 body:term1^10.0) +(title:term2^5.0 body:term2^10.0)
 		/// </code>
 		/// 
-		/// <p>
+		/// <p/>
 		/// In other words, all the query's terms must appear, but it doesn't matter
 		/// in what fields they appear.
 		/// </p>
@@ -124,7 +124,7 @@
 		
 		/// <summary> Creates a MultiFieldQueryParser.
 		/// 
-		/// <p>
+		/// <p/>
 		/// It will, when parse(String query) is called, construct a query like this
 		/// (assuming the query consists of two terms and you specify the two fields
 		/// <code>title</code> and <code>body</code>):
@@ -134,7 +134,7 @@
 		/// (title:term1 body:term1) (title:term2 body:term2)
 		/// </code>
 		/// 
-		/// <p>
+		/// <p/>
 		/// When setDefaultOperator(AND_OPERATOR) is set, the result will be:
 		/// </p>
 		/// 
@@ -142,7 +142,7 @@
 		/// +(title:term1 body:term1) +(title:term2 body:term2)
 		/// </code>
 		/// 
-		/// <p>
+		/// <p/>
 		/// In other words, all the query's terms must appear, but it doesn't matter
 		/// in what fields they appear.
 		/// </p>
@@ -159,7 +159,7 @@
 		
 		/// <summary> Creates a MultiFieldQueryParser.
 		/// 
-		/// <p>
+		/// <p/>
 		/// It will, when parse(String query) is called, construct a query like this
 		/// (assuming the query consists of two terms and you specify the two fields
 		/// <code>title</code> and <code>body</code>):
@@ -169,7 +169,7 @@
 		/// (title:term1 body:term1) (title:term2 body:term2)
 		/// </code>
 		/// 
-		/// <p>
+		/// <p/>
 		/// When setDefaultOperator(AND_OPERATOR) is set, the result will be:
 		/// </p>
 		/// 
@@ -177,7 +177,7 @@
 		/// +(title:term1 body:term1) +(title:term2 body:term2)
 		/// </code>
 		/// 
-		/// <p>
+		/// <p/>
 		/// In other words, all the query's terms must appear, but it doesn't matter
 		/// in what fields they appear.
 		/// </p>
@@ -298,7 +298,7 @@
 		}
 		
 		/// <summary> Parses a query which searches on the fields specified.
-		/// <p>
+		/// <p/>
 		/// If x fields are specified, this effectively constructs:
 		/// 
 		/// <pre>
@@ -331,7 +331,7 @@
 		}
 		
 		/// <summary> Parses a query which searches on the fields specified.
-		/// <p>
+		/// <p/>
 		/// If x fields are specified, this effectively constructs:
 		/// 
 		/// <pre>
@@ -377,7 +377,7 @@
 		/// <summary> Parses a query, searching on the fields specified.
 		/// Use this if you need to specify certain fields as required,
 		/// and others as prohibited.
-		/// <p><pre>
+		/// <p/><pre>
 		/// Usage:
 		/// <code>
 		/// String[] fields = {"filename", "contents", "description"};
@@ -387,7 +387,7 @@
 		/// MultiFieldQueryParser.parse("query", fields, flags, analyzer);
 		/// </code>
 		/// </pre>
-		/// <p>
+		/// <p/>
 		/// The code above would construct a query:
 		/// <pre>
 		/// <code>
@@ -420,7 +420,7 @@
 		
 		/// <summary> Parses a query, searching on the fields specified. Use this if you need
 		/// to specify certain fields as required, and others as prohibited.
-		/// <p>
+		/// <p/>
 		/// 
 		/// <pre>
 		/// Usage:
@@ -432,7 +432,7 @@
 		/// MultiFieldQueryParser.parse(&quot;query&quot;, fields, flags, analyzer);
 		/// &lt;/code&gt;
 		/// </pre>
-		/// <p>
+		/// <p/>
 		/// The code above would construct a query:
 		/// 
 		/// <pre>
@@ -480,7 +480,7 @@
 		/// <summary> Parses a query, searching on the fields specified.
 		/// Use this if you need to specify certain fields as required,
 		/// and others as prohibited.
-		/// <p><pre>
+		/// <p/><pre>
 		/// Usage:
 		/// <code>
 		/// String[] query = {"query1", "query2", "query3"};
@@ -491,7 +491,7 @@
 		/// MultiFieldQueryParser.parse(query, fields, flags, analyzer);
 		/// </code>
 		/// </pre>
-		/// <p>
+		/// <p/>
 		/// The code above would construct a query:
 		/// <pre>
 		/// <code>
@@ -524,7 +524,7 @@
 		
 		/// <summary> Parses a query, searching on the fields specified. Use this if you need
 		/// to specify certain fields as required, and others as prohibited.
-		/// <p>
+		/// <p/>
 		/// 
 		/// <pre>
 		/// Usage:
@@ -537,7 +537,7 @@
 		/// MultiFieldQueryParser.parse(query, fields, flags, analyzer);
 		/// &lt;/code&gt;
 		/// </pre>
-		/// <p>
+		/// <p/>
 		/// The code above would construct a query:
 		/// 
 		/// <pre>



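The query expansion the MultiFieldQueryParser javadoc describes — (title:term1 body:term1) (title:term2 body:term2), optionally with per-field boosts like title:term1^5.0 — can be sketched as plain string building. The `expand` helper below is hypothetical, shown only to make the documented shape concrete; it is not part of MultiFieldQueryParser and does no real parsing.

```java
public class MultiFieldExpansionSketch {
    // Builds the expanded query shape described in the javadoc.
    // boosts may be null (no ^boost suffix) or hold one entry per field.
    static String expand(String[] terms, String[] fields, double[] boosts) {
        StringBuilder sb = new StringBuilder();
        for (int t = 0; t < terms.length; t++) {
            if (t > 0) sb.append(' ');
            sb.append('(');
            for (int f = 0; f < fields.length; f++) {
                if (f > 0) sb.append(' ');
                sb.append(fields[f]).append(':').append(terms[t]);
                if (boosts != null) sb.append('^').append(boosts[f]);
            }
            sb.append(')');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] terms = {"term1", "term2"};
        String[] fields = {"title", "body"};
        System.out.println(expand(terms, fields, null));
        // (title:term1 body:term1) (title:term2 body:term2)
        System.out.println(expand(terms, fields, new double[]{5.0, 10.0}));
        // (title:term1^5.0 body:term1^10.0) (title:term2^5.0 body:term2^10.0)
    }
}
```

With setDefaultOperator(AND_OPERATOR), the real parser additionally prefixes each group with `+`, as shown in the javadoc examples.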