From: stack@apache.org
To: commits@hbase.apache.org
Reply-To: dev@hbase.apache.org
Subject: hbase git commit: HBASE-13799 javadoc how Scan gets polluted when used; if you set attributes or ask for scan metrics
Date: Fri, 29 May 2015 18:31:58 +0000 (UTC)

Repository: hbase
Updated Branches:
  refs/heads/master d86f2fa3b -> 62b5e578a


HBASE-13799 javadoc how Scan gets polluted when used; if you set attributes or ask for scan metrics

Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/62b5e578
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/62b5e578
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/62b5e578

Branch: refs/heads/master
Commit: 62b5e578a8bf413117474816a0d24efe9be4577d
Parents: d86f2fa
Author: stack
Authored: Fri May 29 11:31:33 2015 -0700
Committer: stack
Committed: Fri May 29 11:31:49 2015 -0700

----------------------------------------------------------------------
 .../org/apache/hadoop/hbase/client/Scan.java | 33 ++++++++------------
 1 file changed, 13 insertions(+), 20 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hbase/blob/62b5e578/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java
----------------------------------------------------------------------
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java
index a2e4449..a0193fb 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java
@@ -51,40 +51,33 @@ import org.apache.hadoop.hbase.util.Bytes;
  * and stopRow may be defined. If rows are not specified, the Scanner will
  * iterate over all rows.
  * <p>
- * To scan everything for each row, instantiate a Scan object.
+ * To get all columns from all rows of a Table, create an instance with no constraints; use the
+ * {@link #Scan()} constructor. To constrain the scan to specific column families,
+ * call {@link #addFamily(byte[]) addFamily} for each family to retrieve on your Scan instance.
  * <p>
- * To modify scanner caching for just this scan, use {@link #setCaching(int) setCaching}.
- * If caching is NOT set, we will use the caching value of the hosting {@link Table}.
- * In addition to row caching, it is possible to specify a
- * maximum result size, using {@link #setMaxResultSize(long)}. When both are used,
- * single server requests are limited by either number of rows or maximum result size, whichever
- * limit comes first.
- * <p>
- * To further define the scope of what to get when scanning, perform additional
- * methods as outlined below.
- * <p>
- * To get all columns from specific families, execute {@link #addFamily(byte[]) addFamily}
- * for each family to retrieve.
- * <p>
- * To get specific columns, execute {@link #addColumn(byte[], byte[]) addColumn}
+ * To get specific columns, call {@link #addColumn(byte[], byte[]) addColumn}
  * for each column to retrieve.
  * <p>
  * To only retrieve columns within a specific range of version timestamps,
- * execute {@link #setTimeRange(long, long) setTimeRange}.
+ * call {@link #setTimeRange(long, long) setTimeRange}.
  * <p>
- * To only retrieve columns with a specific timestamp, execute
+ * To only retrieve columns with a specific timestamp, call
  * {@link #setTimeStamp(long) setTimestamp}.
  * <p>
- * To limit the number of versions of each column to be returned, execute
+ * To limit the number of versions of each column to be returned, call
  * {@link #setMaxVersions(int) setMaxVersions}.
  * <p>
  * To limit the maximum number of values returned for each call to next(),
- * execute {@link #setBatch(int) setBatch}.
+ * call {@link #setBatch(int) setBatch}.
  * <p>
- * To add a filter, execute {@link #setFilter(org.apache.hadoop.hbase.filter.Filter) setFilter}.
+ * To add a filter, call {@link #setFilter(org.apache.hadoop.hbase.filter.Filter) setFilter}.
  * <p>
  * Expert: To explicitly disable server-side block caching for this scan,
  * execute {@link #setCacheBlocks(boolean)}.
+ * <p>Note: Usage alters Scan instances. Internally, attributes are updated as the Scan
+ * runs and if enabled, metrics accumulate in the Scan instance. Be aware this is the case when
+ * you go to clone a Scan instance or if you go to reuse a created Scan instance; safer is create
+ * a Scan instance per usage.
  */
 @InterfaceAudience.Public
 @InterfaceStability.Stable