From: nightowl888@apache.org
To: commits@lucenenet.apache.org
Date: Tue, 06 Jun 2017 00:11:58 -0000
Message-Id: <0072e9b0ac1f4be79e582a367425a943@git.apache.org>
Subject: [25/48] lucenenet git commit: Lucene.Net.Util: Fixed XML Documentation comments, types beginning with H-Z

http://git-wip-us.apache.org/repos/asf/lucenenet/blob/268e78d4/src/Lucene.Net/Util/WAH8DocIdSet.cs
----------------------------------------------------------------------
diff --git a/src/Lucene.Net/Util/WAH8DocIdSet.cs b/src/Lucene.Net/Util/WAH8DocIdSet.cs
index 2641a41..b5abda8 100644
--- a/src/Lucene.Net/Util/WAH8DocIdSet.cs
+++ b/src/Lucene.Net/Util/WAH8DocIdSet.cs
@@ -29,48 +29,48 @@ namespace Lucene.Net.Util
     using PackedInt32s = Lucene.Net.Util.Packed.PackedInt32s;

     ///
-    /// implementation based on word-aligned hybrid encoding on
+    /// implementation based on word-aligned hybrid encoding on
     /// words of 8 bits.

-    /// this implementation doesn't support random-access but has a fast
-    /// which can advance in logarithmic time thanks to
-    /// an index.
-    ///
-    /// The compression scheme is simplistic and should work well with sparse and
+    /// This implementation doesn't support random-access but has a fast
+    /// which can advance in logarithmic time thanks to
+    /// an index.
+    /// The compression scheme is simplistic and should work well with sparse and
     /// very dense doc id sets while being only slightly larger than a
-    /// for incompressible sets (overhead<2% in the worst
-    /// case) in spite of the index.
-    ///
-    /// Format: The format is byte-aligned. An 8-bits word is either clean,
     /// meaning composed only of zeros or ones, or dirty, meaning that it contains
     /// between 1 and 7 bits set. The idea is to encode sequences of clean words
-    /// using run-length encoding and to leave sequences of dirty words as-is.
-    ///
-    /// Token       | Clean length+ | Dirty length+ | Dirty words
-    /// 1 byte      | 0-n bytes     | 0-n bytes     | 0-n bytes
-    ///
-    /// Token encodes whether clean means full of zeros or ones in the
-    /// first bit, the number of clean words minus 2 on the next 3 bits and the
-    /// number of dirty words on the last 4 bits. The higher-order bit is a
-    /// continuation bit, meaning that the number is incomplete and needs additional
-    /// bytes to be read.
-    /// Clean length+: If clean length has its higher-order bit set,
-    /// you need to read a vint, shift it by 3 bits on
-    /// the left side and add it to the 3 bits which have been read in the token.
-    /// Dirty length+ works the same way as Clean length+ but
-    /// on 4 bits and for the length of dirty words.
-    /// Dirty words are the dirty words, there are Dirty length
-    /// of them.
-    ///
-    /// this format cannot encode sequences of less than 2 clean words and 0 dirty
+    /// using run-length encoding and to leave sequences of dirty words as-is.
+    ///
+    /// Token       | Clean length+ | Dirty length+ | Dirty words
+    /// 1 byte      | 0-n bytes     | 0-n bytes     | 0-n bytes
+    ///
+    /// Token encodes whether clean means full of zeros or ones in the
+    /// first bit, the number of clean words minus 2 on the next 3 bits and the
+    /// number of dirty words on the last 4 bits. The higher-order bit is a
+    /// continuation bit, meaning that the number is incomplete and needs additional
+    /// bytes to be read.
+    /// Clean length+: If clean length has its higher-order bit set,
+    /// you need to read a vint, shift it by 3 bits on
+    /// the left side and add it to the 3 bits which have been read in the token.
+    /// Dirty length+ works the same way as Clean length+ but
+    /// on 4 bits and for the length of dirty words.
+    /// Dirty words are the dirty words, there are Dirty length
+    /// of them.
+    ///
+    /// This format cannot encode sequences of less than 2 clean words and 0 dirty
     /// word. The reason is that if you find a single clean word, you should rather
-    /// encode it as a dirty word. this takes the same space as starting a new
+    /// encode it as a dirty word. This takes the same space as starting a new
     /// sequence (since you need one byte for the token) but will be lighter to
     /// decode. There is however an exception for the first sequence. Since the first
     /// sequence may start directly with a dirty word, the clean length is encoded
-    /// directly, without subtracting 2.
-    ///
-    /// There is an additional restriction on the format: the sequence of dirty
-    /// words is not allowed to contain two consecutive clean words. this restriction
+    /// directly, without subtracting 2.
+    /// There is an additional restriction on the format: the sequence of dirty
+    /// words is not allowed to contain two consecutive clean words. This restriction
     /// exists to make sure no space is wasted and to make sure iterators can read
-    /// the next doc ID by reading at most 2 dirty words.
+    /// the next doc ID by reading at most 2 dirty words.
+    /// @lucene.experimental
+    ///
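For readers following the encoding description above, here is a small illustrative sketch of how a single token byte splits into the fields the doc comment names. It is not part of the patch; the class and method names are invented for the example, and the continuation case (where the rest of a length is read from a following vint) is only noted in a comment, assuming the layout exactly as described above.

    using System;

    internal static class Wah8TokenSketch
    {
        // Splits a WAH8 token byte into the three fields described above:
        //   bit 7   : whether "clean" words are all ones (1) or all zeros (0)
        //   bits 4-6: clean length minus 2
        //   bits 0-3: dirty length
        // If the higher-order bit of a length field is set, the value is
        // incomplete and the remainder follows as a vint in the byte stream.
        internal static void UnpackToken(byte token)
        {
            bool cleanIsAllOnes = (token & 0x80) != 0;
            int cleanLengthMinus2 = (token >> 4) & 0x07;
            int dirtyLength = token & 0x0F;

            Console.WriteLine(
                $"clean bit = {(cleanIsAllOnes ? "ones" : "zeros")}, " +
                $"clean words = {cleanLengthMinus2 + 2}, dirty words = {dirtyLength}");
        }
    }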
public sealed class WAH8DocIdSet : DocIdSet @@ -110,14 +110,14 @@ namespace Lucene.Net.Util } /// - /// Same as with the default index interval. + /// Same as with the default index interval. public static WAH8DocIdSet Intersect(ICollection docIdSets) { return Intersect(docIdSets, DEFAULT_INDEX_INTERVAL); } /// - /// Compute the intersection of the provided sets. this method is much faster than + /// Compute the intersection of the provided sets. This method is much faster than /// computing the intersection manually since it operates directly at the byte level. /// public static WAH8DocIdSet Intersect(ICollection docIdSets, int indexInterval) @@ -184,14 +184,14 @@ namespace Lucene.Net.Util } /// - /// Same as with the default index interval. + /// Same as with the default index interval. public static WAH8DocIdSet Union(ICollection docIdSets) { return Union(docIdSets, DEFAULT_INDEX_INTERVAL); } /// - /// Compute the union of the provided sets. this method is much faster than + /// Compute the union of the provided sets. This method is much faster than /// computing the union manually since it operates directly at the byte level. /// public static WAH8DocIdSet Union(ICollection docIdSets, int indexInterval) @@ -292,12 +292,12 @@ namespace Lucene.Net.Util /// /// Set the index interval. Smaller index intervals improve performance of - /// but make the - /// larger. An index interval i makes the index add an overhead - /// which is at most 4/i, but likely much less.The default index - /// interval is 8, meaning the index has an overhead of at most - /// 50%. To disable indexing, you can pass as an - /// index interval. + /// but make the + /// larger. An index interval i makes the index add an overhead + /// which is at most 4/i, but likely much less. The default index + /// interval is 8, meaning the index has an overhead of at most + /// 50%. To disable indexing, you can pass as an + /// index interval. /// public virtual object SetIndexInterval(int indexInterval) { @@ -454,7 +454,7 @@ namespace Lucene.Net.Util } /// - /// Build a new . + /// Build a new . public virtual WAH8DocIdSet Build() { if (cardinality == 0) @@ -509,7 +509,7 @@ namespace Lucene.Net.Util } /// - /// A builder for s. + /// A builder for s. public sealed class Builder : WordBuilder { private int lastDocID; @@ -554,7 +554,7 @@ namespace Lucene.Net.Util } /// - /// Add the content of the provided . + /// Add the content of the provided . public Builder Add(DocIdSetIterator disi) { for (int doc = disi.NextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = disi.NextDoc()) @@ -893,7 +893,7 @@ namespace Lucene.Net.Util } /// - /// Return the number of documents in this in constant time. + /// Return the number of documents in this in constant time. public int Cardinality() { return cardinality; http://git-wip-us.apache.org/repos/asf/lucenenet/blob/268e78d4/src/Lucene.Net/Util/WeakIdentityMap.cs ---------------------------------------------------------------------- diff --git a/src/Lucene.Net/Util/WeakIdentityMap.cs b/src/Lucene.Net/Util/WeakIdentityMap.cs index 2999d94..a1ba475 100644 --- a/src/Lucene.Net/Util/WeakIdentityMap.cs +++ b/src/Lucene.Net/Util/WeakIdentityMap.cs @@ -24,38 +24,38 @@ namespace Lucene.Net.Util */ /// - /// Implements a combination of and - /// . - /// Useful for caches that need to key off of a {@code ==} comparison - /// instead of a {@code .equals}. + /// Implements a combination of java.util.WeakHashMap and + /// java.util.IdentityHashMap. 
+    /// Useful for caches that need to key off of a == comparison
+    /// instead of a .Equals(object).
     ///
-    /// this class is not a general-purpose
+    /// This class is not a general-purpose
     /// implementation! It intentionally violates
-    /// Map's general contract, which mandates the use of the equals method
-    /// when comparing objects. this class is designed for use only in the
+    /// 's general contract, which mandates the use of the method
+    /// when comparing objects. This class is designed for use only in the
     /// rare cases wherein reference-equality semantics are required.
     ///
-    /// this implementation was forked from Apache CXF
-    /// but modified to not implement the interface and
+    /// This implementation was forked from Apache CXF
+    /// but modified to not implement the interface and
     /// without any set views on it, as those are error-prone and inefficient,
-    /// if not implemented carefully. The map only contains implementations
-    /// on the values and not-GCed keys. Lucene's implementation also supports {@code null}
+    /// if not implemented carefully. The map only contains implementations
+    /// on the values and not-GCed keys. Lucene's implementation also supports null
     /// keys, but those are never weak!
     ///
-    /// The map supports two modes of operation:
-    ///
+    /// The map supports two modes of operation:
+    ///
+    /// reapOnRead = true: This behaves identically to a java.util.WeakHashMap
+    /// in that it also cleans up the reference queue on every read operation,
+    /// freeing map entries of already GCed keys.
+    /// reapOnRead = false: This mode does not call Reap() on every read
+    /// operation. In this case, the reference queue is only cleaned up on write operations
+    /// (like Put()). This is ideal for maps with few entries where
+    /// the keys are unlikely to be garbage collected, but there are lots of read
+    /// operations. The code can still call Reap() to manually clean up the queue without
+    /// doing a write operation.
+    ///
     /// @lucene.internal
     ///
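The hunks that follow touch the factory methods and Reap(). A hedged usage sketch of the two modes described above; the generic parameters and the Put method are assumed from the Lucene.Net API rather than shown verbatim in this diff, and the class and method names of the sketch itself are invented for illustration.

    using Lucene.Net.Util;

    internal static class WeakIdentityMapModesSketch
    {
        internal static void Demo()
        {
            // reapOnRead = true: every read also prunes entries whose keys
            // have already been garbage collected.
            var eager = WeakIdentityMap<object, string>.NewHashMap(true);

            // reapOnRead = false: reads stay cheap; dead entries are only
            // purged on writes or when Reap() is called explicitly.
            var lazy = WeakIdentityMap<object, string>.NewHashMap(false);

            var key = new object();
            lazy.Put(key, "cached value");

            // ... later, when there is spare time (e.g. a background task):
            lazy.Reap(); // drop entries whose keys were garbage collected
        }
    }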
public sealed class WeakIdentityMap @@ -66,7 +66,7 @@ namespace Lucene.Net.Util private readonly bool reapOnRead; /// - /// Creates a new {@code WeakIdentityMap} based on a non-synchronized . + /// Creates a new based on a non-synchronized . /// The map cleans up the reference queue on every read operation. /// public static WeakIdentityMap NewHashMap() @@ -75,7 +75,7 @@ namespace Lucene.Net.Util } /// - /// Creates a new {@code WeakIdentityMap} based on a non-synchronized . + /// Creates a new based on a non-synchronized . /// controls if the map cleans up the reference queue on every read operation. public static WeakIdentityMap NewHashMap(bool reapOnRead) { @@ -83,7 +83,7 @@ namespace Lucene.Net.Util } /// - /// Creates a new {@code WeakIdentityMap} based on a . + /// Creates a new based on a . /// The map cleans up the reference queue on every read operation. /// public static WeakIdentityMap NewConcurrentHashMap() @@ -92,7 +92,7 @@ namespace Lucene.Net.Util } /// - /// Creates a new {@code WeakIdentityMap} based on a . + /// Creates a new based on a . /// controls if the map cleans up the reference queue on every read operation. public static WeakIdentityMap NewConcurrentHashMap(bool reapOnRead) { @@ -116,7 +116,7 @@ namespace Lucene.Net.Util } /// - /// Returns {@code true} if this map contains a mapping for the specified key. + /// Returns true if this map contains a mapping for the specified key. public bool ContainsKey(object key) { if (reapOnRead) @@ -157,7 +157,10 @@ namespace Lucene.Net.Util return backingStore[new IdentityWeakReference(key)] = value; } - public IEnumerable Keys + /// + /// Gets an object containing the keys of the . + /// + public IEnumerable Keys // LUCENENET TODO: API - change to ICollection { get { @@ -193,7 +196,10 @@ namespace Lucene.Net.Util } } - public IEnumerable Values + /// + /// Gets an object containing the values of the . + /// + public IEnumerable Values // LUCENENET TODO: API - change to ICollection { get { @@ -203,7 +209,7 @@ namespace Lucene.Net.Util } /// - /// Returns {@code true} if this map contains no key-value mappings. + /// Returns true if this map contains no key-value mappings. public bool IsEmpty { get @@ -215,8 +221,8 @@ namespace Lucene.Net.Util /// /// Removes the mapping for a key from this weak hash map if it is present. /// Returns the value to which this map previously associated the key, - /// or {@code null} if the map contained no mapping for the key. - /// A return value of {@code null} does not necessarily indicate that + /// or null if the map contained no mapping for the key. + /// A return value of null does not necessarily indicate that /// the map contained. /// public bool Remove(object key) @@ -226,9 +232,10 @@ namespace Lucene.Net.Util } /// - /// Returns the number of key-value mappings in this map. this result is a snapshot, + /// Returns the number of key-value mappings in this map. This result is a snapshot, /// and may not reflect unprocessed entries that will be removed before next /// attempted access because they are no longer referenced. + /// /// NOTE: This was size() in Lucene. /// public int Count @@ -308,9 +315,9 @@ namespace Lucene.Net.Util /// /// Returns an iterator over all values of this map. - /// this iterator may return values whose key is already + /// This iterator may return values whose key is already /// garbage collected while iterator is consumed, - /// especially if {@code reapOnRead} is {@code false}. + /// especially if is false. /// /// NOTE: This was valueIterator() in Lucene. 
     ///
@@ -324,11 +331,12 @@ namespace Lucene.Net.Util
         }

         ///
-        /// this method manually cleans up the reference queue to remove all garbage
+        /// This method manually cleans up the reference queue to remove all garbage
         /// collected key/value pairs from the map. Calling this method is not needed
-        /// if {@code reapOnRead = true}. Otherwise it might be a good idea
-        /// to call this method when there is spare time (e.g. from a background thread).
-        /// Information about the reapOnRead setting
+        /// if reapOnRead = true. Otherwise it might be a good idea
+        /// to call this method when there is spare time (e.g. from a background thread).
+        /// Information about the reapOnRead setting
+        ///
         public void Reap()
         {
             List keysToRemove = null;
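Finally, since the class keys off reference identity rather than Equals, two equal-but-distinct keys resolve to different entries. A short illustrative sketch of that behaviour; ContainsKey(object) appears in the diff above, while the generic WeakIdentityMap<TKey, TValue> surface and Put are assumed from the Lucene.Net API, and the sketch's own names are invented.

    using System;
    using Lucene.Net.Util;

    internal static class WeakIdentityMapIdentitySketch
    {
        internal static void Demo()
        {
            var map = WeakIdentityMap<string, string>.NewHashMap();

            // Equal by value, but two distinct string instances:
            string k1 = new string(new[] { 'a', 'b', 'c' });
            string k2 = new string(new[] { 'a', 'b', 'c' });

            map.Put(k1, "cached");

            Console.WriteLine(map.ContainsKey(k1)); // True  -- same reference as the stored key
            Console.WriteLine(map.ContainsKey(k2)); // False -- k2.Equals(k1), but it is a different object
        }
    }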