From: git-site-role@apache.org
To: commits@hbase.apache.org
Reply-To: dev@hbase.apache.org
Date: Sun, 02 Apr 2017 13:02:18 -0000
In-Reply-To: <42115be86bfc420180ddb6e13499ba40@git.apache.org>
References: <42115be86bfc420180ddb6e13499ba40@git.apache.org>
Subject: [49/51] [partial] hbase-site git commit: Published site at 73e1bcd33515061be2dc2e51e6ad19d9798a8ef6.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6d254372/apidocs/org/apache/hadoop/hbase/class-use/TableName.html
----------------------------------------------------------------------
diff --git a/apidocs/org/apache/hadoop/hbase/class-use/TableName.html b/apidocs/org/apache/hadoop/hbase/class-use/TableName.html
index e42fa80..3492a80 100644
--- a/apidocs/org/apache/hadoop/hbase/class-use/TableName.html
+++ b/apidocs/org/apache/hadoop/hbase/class-use/TableName.html
@@ -409,31 +409,31 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods.
(use-table rows reordered; each line below is one table row: return type, method, description)
-TableName  BufferedMutator.getName()          Gets the fully qualified table name instance of the table that this BufferedMutator writes to.
+TableName  AsyncTableRegionLocator.getName()  Gets the fully qualified table name instance of the table whose region we want to locate.
-TableName  AsyncTableBase.getName()           Gets the fully qualified table name instance of this table.
+TableName  BufferedMutator.getName()          Gets the fully qualified table name instance of the table that this BufferedMutator writes to.
-TableName  RegionLocator.getName()            Gets the fully qualified table name instance of this table.
+TableName  Table.getName()                    Gets the fully qualified table name instance of this table.
-TableName  AsyncTableRegionLocator.getName()  Gets the fully qualified table name instance of the table whose region we want to locate.
+TableName  AsyncTableBase.getName()           Gets the fully qualified table name instance of this table.
-TableName  Table.getName()                    Gets the fully qualified table name instance of this table.
+TableName  RegionLocator.getName()            Gets the fully qualified table name instance of this table.
@@ -743,17 +743,17 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods.
-RegionLocator            Connection.getRegionLocator(TableName tableName)       Retrieve a RegionLocator implementation to inspect region information on a table.
 AsyncTableRegionLocator  AsyncConnection.getRegionLocator(TableName tableName)  Retrieve an AsyncTableRegionLocator implementation to inspect region information on a table.
+RegionLocator            Connection.getRegionLocator(TableName tableName)       Retrieve a RegionLocator implementation to inspect region information on a table.
 default Table            Connection.getTable(TableName tableName)
@@ -761,31 +761,31 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods.
-default Table                  Connection.getTable(TableName tableName, ExecutorService pool)              Retrieve a Table implementation for accessing a table.
+default AsyncTable             AsyncConnection.getTable(TableName tableName, ExecutorService pool)         Retrieve an AsyncTable implementation for accessing a table.
-default AsyncTable             AsyncConnection.getTable(TableName tableName, ExecutorService pool)         Retrieve an AsyncTable implementation for accessing a table.
+default Table                  Connection.getTable(TableName tableName, ExecutorService pool)              Retrieve a Table implementation for accessing a table.
-TableBuilder                   Connection.getTableBuilder(TableName tableName, ExecutorService pool)       Returns a TableBuilder for creating Table.
+AsyncTableBuilder<AsyncTable>  AsyncConnection.getTableBuilder(TableName tableName, ExecutorService pool)  Returns an AsyncTableBuilder for creating AsyncTable.
-AsyncTableBuilder<AsyncTable>  AsyncConnection.getTableBuilder(TableName tableName, ExecutorService pool)  Returns an AsyncTableBuilder for creating AsyncTable.
+TableBuilder                   Connection.getTableBuilder(TableName tableName, ExecutorService pool)       Returns a TableBuilder for creating Table.
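A minimal usage sketch of the Connection, Table and RegionLocator methods listed above; it is not part of the published diff, and the table name "test_table" is a placeholder.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;

public class TableNameUsage {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // "test_table" is a placeholder; substitute an existing table.
    TableName tn = TableName.valueOf("test_table");
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(tn);
         RegionLocator locator = connection.getRegionLocator(tn)) {
      // Table.getName() returns the fully qualified TableName instance.
      System.out.println("table: " + table.getName().getNameAsString());
      // RegionLocator inspects region information for the same table.
      System.out.println("regions: " + locator.getAllRegionLocations().size());
    }
  }
}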
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6d254372/apidocs/org/apache/hadoop/hbase/client/Durability.html
----------------------------------------------------------------------
diff --git a/apidocs/org/apache/hadoop/hbase/client/Durability.html b/apidocs/org/apache/hadoop/hbase/client/Durability.html
index 58db6a9..232f165 100644
--- a/apidocs/org/apache/hadoop/hbase/client/Durability.html
+++ b/apidocs/org/apache/hadoop/hbase/client/Durability.html
@@ -294,7 +294,7 @@ the order they are declared.
• values
-public static Durability[] values()
+public static Durability[] values()
 Returns an array containing the constants of this enum type, in the order they are declared. This method may be used to iterate over the constants as follows:
@@ -314,7 +314,7 @@ for (Durability c : Durability.values())
• valueOf
-public static Durability valueOf(String name)
+public static Durability valueOf(String name)
 Returns the enum constant of this type with the specified name. The string must match exactly an identifier used to declare an enum constant in this type. (Extraneous whitespace characters are not permitted.)
(The - and + signature lines render identically; only the surrounding markup changed.)
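A short sketch of the values()/valueOf() contract described above, plus the usual place a Durability constant is applied; the row, family and qualifier names are placeholders.

import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class DurabilityExample {
  public static void main(String[] args) {
    // Iterate the enum constants in declaration order, as the javadoc describes.
    for (Durability d : Durability.values()) {
      System.out.println(d.name());
    }
    // valueOf requires an exact match of the constant's identifier.
    Durability skipWal = Durability.valueOf("SKIP_WAL");

    // Durability is typically applied per mutation; names below are placeholders.
    Put put = new Put(Bytes.toBytes("row1"));
    put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
    put.setDurability(skipWal);
  }
}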
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6d254372/apidocs/org/apache/hadoop/hbase/client/class-use/Append.html
----------------------------------------------------------------------
diff --git a/apidocs/org/apache/hadoop/hbase/client/class-use/Append.html b/apidocs/org/apache/hadoop/hbase/client/class-use/Append.html
index e4a83fb..a2cc235 100644
--- a/apidocs/org/apache/hadoop/hbase/client/class-use/Append.html
+++ b/apidocs/org/apache/hadoop/hbase/client/class-use/Append.html
@@ -174,14 +174,14 @@
-CompletableFuture<Result>  AsyncTableBase.append(Append append)  Appends values to one or more columns within a single row.
+Result                     Table.append(Append append)           Appends values to one or more columns within a single row.
-Result                     Table.append(Append append)           Appends values to one or more columns within a single row.
+CompletableFuture<Result>  AsyncTableBase.append(Append append)  Appends values to one or more columns within a single row.
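A sketch of the synchronous Table.append listed above, assuming a 2.x-style client; Append.addColumn and the "test_table"/"cf" names are illustrative.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Append;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class AppendExample {
  // Appends bytes to an existing cell value in one atomic operation.
  static Result appendSuffix(Connection connection, byte[] row) throws Exception {
    try (Table table = connection.getTable(TableName.valueOf("test_table"))) {
      Append append = new Append(row);
      append.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("-suffix"));
      return table.append(append);
    }
  }
}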
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6d254372/apidocs/org/apache/hadoop/hbase/client/class-use/Consistency.html
----------------------------------------------------------------------
diff --git a/apidocs/org/apache/hadoop/hbase/client/class-use/Consistency.html b/apidocs/org/apache/hadoop/hbase/client/class-use/Consistency.html
index f1e22a2..4c3092f 100644
--- a/apidocs/org/apache/hadoop/hbase/client/class-use/Consistency.html
+++ b/apidocs/org/apache/hadoop/hbase/client/class-use/Consistency.html
@@ -146,8 +146,8 @@ the order they are declared.
-Get    Get.setConsistency(Consistency consistency)
+Scan   Scan.setConsistency(Consistency consistency)
 Query  Query.setConsistency(Consistency consistency)
@@ -156,8 +156,8 @@ the order they are declared.
-Scan   Scan.setConsistency(Consistency consistency)
+Get    Get.setConsistency(Consistency consistency)
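A sketch of the setConsistency setters listed above; TIMELINE consistency assumes the target table has region replicas enabled.

import org.apache.hadoop.hbase.client.Consistency;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ConsistencyExample {
  // Both Get and Scan inherit setConsistency from Query; TIMELINE allows
  // reads to be served by secondary region replicas.
  static Get timelineGet(byte[] row) {
    Get get = new Get(row);
    get.setConsistency(Consistency.TIMELINE);
    return get;
  }

  static Scan timelineScan() {
    Scan scan = new Scan();
    scan.setConsistency(Consistency.TIMELINE);
    scan.addFamily(Bytes.toBytes("cf"));
    return scan;
  }
}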
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6d254372/apidocs/org/apache/hadoop/hbase/client/class-use/Delete.html
----------------------------------------------------------------------
diff --git a/apidocs/org/apache/hadoop/hbase/client/class-use/Delete.html b/apidocs/org/apache/hadoop/hbase/client/class-use/Delete.html
index 64ed51a..76d7733 100644
--- a/apidocs/org/apache/hadoop/hbase/client/class-use/Delete.html
+++ b/apidocs/org/apache/hadoop/hbase/client/class-use/Delete.html
@@ -227,58 +227,58 @@
-default CompletableFuture<Boolean>  AsyncTableBase.checkAndDelete(byte[] row, byte[] family, byte[] qualifier, byte[] value, Delete delete)  Atomically checks if a row/family/qualifier value equals the expected value.
+boolean                             Table.checkAndDelete(byte[] row, byte[] family, byte[] qualifier, byte[] value, Delete delete)           Atomically checks if a row/family/qualifier value matches the expected value.
-boolean                             Table.checkAndDelete(byte[] row, byte[] family, byte[] qualifier, byte[] value, Delete delete)           Atomically checks if a row/family/qualifier value matches the expected value.
+default CompletableFuture<Boolean>  AsyncTableBase.checkAndDelete(byte[] row, byte[] family, byte[] qualifier, byte[] value, Delete delete)  Atomically checks if a row/family/qualifier value equals the expected value.
-CompletableFuture<Boolean>          AsyncTableBase.checkAndDelete(byte[] row, byte[] family, byte[] qualifier, CompareFilter.CompareOp compareOp, byte[] value, Delete delete)  Atomically checks if a row/family/qualifier value matches the expected value.
+boolean                             Table.checkAndDelete(byte[] row, byte[] family, byte[] qualifier, CompareFilter.CompareOp compareOp, byte[] value, Delete delete)           Atomically checks if a row/family/qualifier value matches the expected value.
-boolean                             Table.checkAndDelete(byte[] row, byte[] family, byte[] qualifier, CompareFilter.CompareOp compareOp, byte[] value, Delete delete)           Atomically checks if a row/family/qualifier value matches the expected value.
+CompletableFuture<Boolean>          AsyncTableBase.checkAndDelete(byte[] row, byte[] family, byte[] qualifier, CompareFilter.CompareOp compareOp, byte[] value, Delete delete)  Atomically checks if a row/family/qualifier value matches the expected value.
-CompletableFuture<Void>             AsyncTableBase.delete(Delete delete)         Deletes the specified cells/row.
+void                                Table.delete(Delete delete)                  Deletes the specified cells/row.
-void                                Table.delete(Delete delete)                  Deletes the specified cells/row.
+CompletableFuture<Void>             AsyncTableBase.delete(Delete delete)         Deletes the specified cells/row.
-List<CompletableFuture<Void>>       AsyncTableBase.delete(List<Delete> deletes)  Deletes the specified cells/rows in bulk.
+void                                Table.delete(List<Delete> deletes)           Deletes the specified cells/rows in bulk.
-void                                Table.delete(List<Delete> deletes)           Deletes the specified cells/rows in bulk.
+List<CompletableFuture<Void>>       AsyncTableBase.delete(List<Delete> deletes)  Deletes the specified cells/rows in bulk.
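A sketch of the synchronous checkAndDelete path listed above, with placeholder table, family and value names.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DeleteExample {
  // Deletes a row only if the "cf:state" cell still holds the expected value.
  static boolean deleteIfUnchanged(Connection connection, byte[] row) throws Exception {
    try (Table table = connection.getTable(TableName.valueOf("test_table"))) {
      Delete delete = new Delete(row);
      return table.checkAndDelete(row, Bytes.toBytes("cf"), Bytes.toBytes("state"),
          Bytes.toBytes("obsolete"), delete);
    }
  }
}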
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6d254372/apidocs/org/apache/hadoop/hbase/client/class-use/Durability.html
----------------------------------------------------------------------
diff --git a/apidocs/org/apache/hadoop/hbase/client/class-use/Durability.html b/apidocs/org/apache/hadoop/hbase/client/class-use/Durability.html
index 82d7a30..154b53b 100644
--- a/apidocs/org/apache/hadoop/hbase/client/class-use/Durability.html
+++ b/apidocs/org/apache/hadoop/hbase/client/class-use/Durability.html
@@ -189,8 +189,8 @@ the order they are declared.
-default CompletableFuture<Long>  AsyncTableBase.incrementColumnValue(byte[] row, byte[] family, byte[] qualifier, long amount, ...
+long                             Table.incrementColumnValue(byte[] row, byte[] family, byte[] qualifier, long amount, ...
@@ -199,8 +199,8 @@ the order they are declared.
-long                             Table.incrementColumnValue(byte[] row, byte[] family, byte[] qualifier, long amount, ...
+default CompletableFuture<Long>  AsyncTableBase.incrementColumnValue(byte[] row, byte[] family, byte[] qualifier, long amount, ...
@@ -209,13 +209,13 @@ the order they are declared.
-Append     Append.setDurability(Durability d)
 Increment  Increment.setDurability(Durability d)
+Delete     Delete.setDurability(Durability d)
 Mutation   Mutation.setDurability(Durability d)
@@ -227,8 +227,8 @@ the order they are declared.
 Put        Put.setDurability(Durability d)
-Delete     Delete.setDurability(Durability d)
+Append     Append.setDurability(Durability d)
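A sketch of Table.incrementColumnValue with an explicit Durability, again with placeholder names.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class CounterExample {
  // Atomically bumps a counter cell; SKIP_WAL trades durability for speed.
  static long bumpCounter(Connection connection, byte[] row) throws Exception {
    try (Table table = connection.getTable(TableName.valueOf("test_table"))) {
      return table.incrementColumnValue(row, Bytes.toBytes("cf"), Bytes.toBytes("hits"),
          1L, Durability.SKIP_WAL);
    }
  }
}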
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6d254372/apidocs/org/apache/hadoop/hbase/client/class-use/Get.html
----------------------------------------------------------------------
diff --git a/apidocs/org/apache/hadoop/hbase/client/class-use/Get.html b/apidocs/org/apache/hadoop/hbase/client/class-use/Get.html
index 5e80df4..9fb7f70 100644
--- a/apidocs/org/apache/hadoop/hbase/client/class-use/Get.html
+++ b/apidocs/org/apache/hadoop/hbase/client/class-use/Get.html
@@ -246,26 +246,26 @@
-default CompletableFuture<Boolean>        AsyncTableBase.exists(Get get)            Test for the existence of columns in the table, as specified by the Get.
+boolean                                   Table.exists(Get get)                     Test for the existence of columns in the table, as specified by the Get.
-boolean                                   Table.exists(Get get)                     Test for the existence of columns in the table, as specified by the Get.
+default CompletableFuture<Boolean>        AsyncTableBase.exists(Get get)            Test for the existence of columns in the table, as specified by the Get.
-CompletableFuture<Result>                 AsyncTableBase.get(Get get)               Extracts certain cells from a given row.
+Result                                    Table.get(Get get)                        Extracts certain cells from a given row.
-Result                                    Table.get(Get get)                        Extracts certain cells from a given row.
+CompletableFuture<Result>                 AsyncTableBase.get(Get get)               Extracts certain cells from a given row.
-default CompletableFuture<List<Boolean>>  AsyncTableBase.existsAll(List<Get> gets)  A simple version for batch exists.
 boolean[]                                 Table.existsAll(List<Get> gets)           Test for the existence of columns in the table, as specified by the Gets.
+default CompletableFuture<List<Boolean>>  AsyncTableBase.existsAll(List<Get> gets)  A simple version for batch exists.
-List<CompletableFuture<Result>>           AsyncTableBase.get(List<Get> gets)        Extracts certain cells from the given rows, in batch.
+Result[]                                  Table.get(List<Get> gets)                 Extracts certain cells from the given rows, in batch.
-Result[]                                  Table.get(List<Get> gets)                 Extracts certain cells from the given rows, in batch.
+List<CompletableFuture<Result>>           AsyncTableBase.get(List<Get> gets)        Extracts certain cells from the given rows, in batch.
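A sketch of the single and batch read paths listed above (exists, get(Get) and get(List<Get>)), with placeholder row keys.

import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class GetExample {
  static void readRows(Connection connection) throws Exception {
    try (Table table = connection.getTable(TableName.valueOf("test_table"))) {
      Get single = new Get(Bytes.toBytes("row1"));
      // exists() avoids transferring cell data when only presence matters.
      if (table.exists(single)) {
        Result result = table.get(single);
        byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"));
        System.out.println(Bytes.toString(value));
      }
      // Batch form: fetches several rows in one call, results in request order.
      List<Get> gets = Arrays.asList(new Get(Bytes.toBytes("row2")), new Get(Bytes.toBytes("row3")));
      Result[] results = table.get(gets);
      System.out.println("fetched " + results.length + " rows");
    }
  }
}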
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6d254372/apidocs/org/apache/hadoop/hbase/client/class-use/Increment.html
----------------------------------------------------------------------
diff --git a/apidocs/org/apache/hadoop/hbase/client/class-use/Increment.html b/apidocs/org/apache/hadoop/hbase/client/class-use/Increment.html
index 52079a5..db23811 100644
--- a/apidocs/org/apache/hadoop/hbase/client/class-use/Increment.html
+++ b/apidocs/org/apache/hadoop/hbase/client/class-use/Increment.html
@@ -182,14 +182,14 @@
-CompletableFuture<Result>  AsyncTableBase.increment(Increment increment)  Increments one or more columns within a single row.
+Result                     Table.increment(Increment increment)           Increments one or more columns within a single row.
-Result                     Table.increment(Increment increment)           Increments one or more columns within a single row.
+CompletableFuture<Result>  AsyncTableBase.increment(Increment increment)  Increments one or more columns within a single row.
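A sketch of Table.increment bumping two counters in one row atomically, with placeholder names.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class IncrementExample {
  // Increments two counters in the same row atomically and returns the new values.
  static Result bump(Connection connection, byte[] row) throws Exception {
    try (Table table = connection.getTable(TableName.valueOf("test_table"))) {
      Increment increment = new Increment(row);
      increment.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("views"), 1L);
      increment.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("clicks"), 2L);
      return table.increment(increment);
    }
  }
}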
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6d254372/apidocs/org/apache/hadoop/hbase/client/class-use/IsolationLevel.html
----------------------------------------------------------------------
diff --git a/apidocs/org/apache/hadoop/hbase/client/class-use/IsolationLevel.html b/apidocs/org/apache/hadoop/hbase/client/class-use/IsolationLevel.html
index 86f68a7..2a7b626 100644
--- a/apidocs/org/apache/hadoop/hbase/client/class-use/IsolationLevel.html
+++ b/apidocs/org/apache/hadoop/hbase/client/class-use/IsolationLevel.html
@@ -139,8 +139,8 @@ the order they are declared.
-Get    Get.setIsolationLevel(IsolationLevel level)
+Scan   Scan.setIsolationLevel(IsolationLevel level)
 Query  Query.setIsolationLevel(IsolationLevel level)
@@ -149,8 +149,8 @@ the order they are declared.
-Scan   Scan.setIsolationLevel(IsolationLevel level)
+Get    Get.setIsolationLevel(IsolationLevel level)

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6d254372/apidocs/org/apache/hadoop/hbase/client/class-use/Mutation.html
----------------------------------------------------------------------
diff --git a/apidocs/org/apache/hadoop/hbase/client/class-use/Mutation.html b/apidocs/org/apache/hadoop/hbase/client/class-use/Mutation.html
index 0408183..134b3ee 100644
--- a/apidocs/org/apache/hadoop/hbase/client/class-use/Mutation.html
+++ b/apidocs/org/apache/hadoop/hbase/client/class-use/Mutation.html
@@ -249,15 +249,15 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods.
 MutationSerialization.getDeserializer(Class<Mutation> c)
-org.apache.hadoop.mapreduce.RecordWriter<ImmutableBytesWritable,Mutation>  MultiTableOutputFormat.getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context)
 org.apache.hadoop.mapreduce.RecordWriter<KEY,Mutation>                     TableOutputFormat.getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context)  Creates a new record writer.
+org.apache.hadoop.mapreduce.RecordWriter<ImmutableBytesWritable,Mutation>  MultiTableOutputFormat.getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context)
 org.apache.hadoop.io.serializer.Serializer<Mutation>                       MutationSerialization.getSerializer(Class<Mutation> c)
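A sketch of how Mutations reach TableOutputFormat's record writer in a MapReduce job; the reducer, the job wiring and the table name are illustrative assumptions, and mapper/input setup is omitted.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class MutationOutputJob {
  // Reducer that emits one Put (a Mutation) per key; TableOutputFormat's
  // record writer sends the mutations to the target table.
  public static class PutReducer extends TableReducer<Text, IntWritable, ImmutableBytesWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      long sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      Put put = new Put(Bytes.toBytes(key.toString()));
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("sum"), Bytes.toBytes(sum));
      context.write(new ImmutableBytesWritable(put.getRow()), put);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "write-mutations");
    job.setJarByClass(MutationOutputJob.class);
    // Wires TableOutputFormat and the output table name into the job.
    TableMapReduceUtil.initTableReducerJob("test_table", PutReducer.class, job);
    // Mapper setup (input format, mapper class, map output types) omitted.
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}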
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/6d254372/apidocs/org/apache/hadoop/hbase/client/class-use/Put.html
----------------------------------------------------------------------
diff --git a/apidocs/org/apache/hadoop/hbase/client/class-use/Put.html b/apidocs/org/apache/hadoop/hbase/client/class-use/Put.html
index 0f6abbd..ca9a462 100644
--- a/apidocs/org/apache/hadoop/hbase/client/class-use/Put.html
+++ b/apidocs/org/apache/hadoop/hbase/client/class-use/Put.html
@@ -290,47 +290,47 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods.
-default CompletableFuture<Boolean>  AsyncTableBase.checkAndPut(byte[] row, byte[] family, byte[] qualifier, byte[] value, Put put)  Atomically checks if a row/family/qualifier value equals the expected value.
+boolean                             Table.checkAndPut(byte[] row, byte[] family, byte[] qualifier, byte[] value, Put put)           Atomically checks if a row/family/qualifier value matches the expected value.
-boolean                             Table.checkAndPut(byte[] row, byte[] family, byte[] qualifier, byte[] value, Put put)           Atomically checks if a row/family/qualifier value matches the expected value.
+default CompletableFuture<Boolean>  AsyncTableBase.checkAndPut(byte[] row, byte[] family, byte[] qualifier, byte[] value, Put put)  Atomically checks if a row/family/qualifier value equals the expected value.
-CompletableFuture<Boolean>          AsyncTableBase.checkAndPut(byte[] row, byte[] family, byte[] qualifier, CompareFilter.CompareOp compareOp, byte[] value, Put put)  Atomically checks if a row/family/qualifier value matches the expected value.
+boolean                             Table.checkAndPut(byte[] row, byte[] family, byte[] qualifier, CompareFilter.CompareOp compareOp, byte[] value, Put put)           Atomically checks if a row/family/qualifier value matches the expected value.
-boolean                             Table.checkAndPut(byte[] row, byte[] family, byte[] qualifier, CompareFilter.CompareOp compareOp, byte[] value, Put put)           Atomically checks if a row/family/qualifier value matches the expected value.
+CompletableFuture<Boolean>          AsyncTableBase.checkAndPut(byte[] row, byte[] family, byte[] qualifier, CompareFilter.CompareOp compareOp, byte[] value, Put put)  Atomically checks if a row/family/qualifier value matches the expected value.
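A sketch of the synchronous checkAndPut listed above: the Put is applied only if the current cell value matches the expected bytes (placeholder names).

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class CheckAndPutExample {
  // Writes the new status only if the current "cf:status" value is still "pending".
  static boolean claimIfPending(Connection connection, byte[] row) throws Exception {
    try (Table table = connection.getTable(TableName.valueOf("test_table"))) {
      Put put = new Put(row);
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("status"), Bytes.toBytes("claimed"));
      return table.checkAndPut(row, Bytes.toBytes("cf"), Bytes.toBytes("status"),
          Bytes.toBytes("pending"), put);
    }
  }
}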
@@ -353,17 +353,17 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods.
-CompletableFuture<Void>        AsyncTableBase.put(Put put)         Puts some data to the table.
 void                           Table.put(Put put)                  Puts some data in the table.
+CompletableFuture<Void>        AsyncTableBase.put(Put put)         Puts some data to the table.
 boolean                        HTableMultiplexer.put(TableName tableName, ...
@@ -398,14 +398,14 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods.
-List<CompletableFuture<Void>>  AsyncTableBase.put(List<Put> puts)  Puts some data in the table, in batch.
+void                           Table.put(List<Put> puts)           Puts some data in the table, in batch.
-void                           Table.put(List<Put> puts)           Puts some data in the table, in batch.
+List<CompletableFuture<Void>>  AsyncTableBase.put(List<Put> puts)  Puts some data in the table, in batch.
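A sketch of the single and batch Table.put calls listed above, with placeholder table, family and qualifier names.

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PutExample {
  static void writeRows(Connection connection) throws Exception {
    try (Table table = connection.getTable(TableName.valueOf("test_table"))) {
      // Single put.
      Put put = new Put(Bytes.toBytes("row1"));
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
      table.put(put);

      // Batch put: several mutations submitted in one call.
      List<Put> puts = new ArrayList<>();
      for (int i = 0; i < 10; i++) {
        Put p = new Put(Bytes.toBytes("row-" + i));
        p.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v" + i));
        puts.add(p);
      }
      table.put(puts);
    }
  }
}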