Return-Path:
X-Original-To: archive-asf-public-internal@cust-asf2.ponee.io
Delivered-To: archive-asf-public-internal@cust-asf2.ponee.io
Received: from cust-asf.ponee.io (cust-asf.ponee.io [163.172.22.183]) by cust-asf2.ponee.io (Postfix) with ESMTP id 07EDE200B53 for ; Tue, 12 Jul 2016 17:41:29 +0200 (CEST)
Received: by cust-asf.ponee.io (Postfix) id 069A1160A79; Tue, 12 Jul 2016 15:41:29 +0000 (UTC)
Delivered-To: archive-asf-public@cust-asf.ponee.io
Received: from mail.apache.org (hermes.apache.org [140.211.11.3]) by cust-asf.ponee.io (Postfix) with SMTP id 1BFCF160A56 for ; Tue, 12 Jul 2016 17:41:26 +0200 (CEST)
Received: (qmail 44165 invoked by uid 500); 12 Jul 2016 15:41:14 -0000
Mailing-List: contact commits-help@hbase.apache.org; run by ezmlm
Precedence: bulk
List-Help:
List-Unsubscribe:
List-Post:
List-Id:
Reply-To: dev@hbase.apache.org
Delivered-To: mailing list commits@hbase.apache.org
Received: (qmail 42946 invoked by uid 99); 12 Jul 2016 15:41:13 -0000
Received: from git1-us-west.apache.org (HELO git1-us-west.apache.org) (140.211.11.23) by apache.org (qpsmtpd/0.29) with ESMTP; Tue, 12 Jul 2016 15:41:13 +0000
Received: by git1-us-west.apache.org (ASF Mail Server at git1-us-west.apache.org, from userid 33) id 95D05E1021; Tue, 12 Jul 2016 15:41:13 +0000 (UTC)
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
From: busbey@apache.org
To: commits@hbase.apache.org
Date: Tue, 12 Jul 2016 15:41:33 -0000
Message-Id: <178d249dc2914f47821104dd02995dc6@git.apache.org>
In-Reply-To: <8e4bd2f26e134b0daa3779a007c3e443@git.apache.org>
References: <8e4bd2f26e134b0daa3779a007c3e443@git.apache.org>
X-Mailer: ASF-Git Admin Mailer
Subject: [21/52] [partial] hbase-site git commit: Published site at 2650711e944244b3b87e6d6805b7716b216e8786.
archived-at: Tue, 12 Jul 2016 15:41:29 -0000

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/27849820/devapidocs/org/apache/hadoop/hbase/client/class-use/Result.html
----------------------------------------------------------------------
diff --git a/devapidocs/org/apache/hadoop/hbase/client/class-use/Result.html b/devapidocs/org/apache/hadoop/hbase/client/class-use/Result.html
index fced4aa..79477f4 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/class-use/Result.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/class-use/Result.html
@@ -381,13 +381,13 @@ service.
Result
-Table.append(Append append)
+HTable.append(Append append)
Appends values to one or more columns within a single row.
Result
-HTable.append(Append append)
+Table.append(Append append)
Appends values to one or more columns within a single row.
@@ -404,16 +404,16 @@ service.
-Result
-RpcRetryingCallerWithReadReplicas.ReplicaRegionServerCallable.call(int callTimeout)
+Result[]
+ScannerCallable.call(int callTimeout)
-Result[]
-ClientSmallScanner.SmallScannerCallable.call(int timeout)
+Result
+RpcRetryingCallerWithReadReplicas.ReplicaRegionServerCallable.call(int callTimeout)
Result[]
-ScannerCallable.call(int callTimeout)
+ClientSmallScanner.SmallScannerCallable.call(int timeout)
Result[]
@@ -480,13 +480,13 @@ service.
Result
-Table.get(Get get)
+HTable.get(Get get)
Extracts certain cells from a given row.
Result
-HTable.get(Get get)
+Table.get(Get get)
Extracts certain cells from a given row.
@@ -501,13 +501,13 @@ service.
Result[]
-Table.get(List<Get> gets)
+HTable.get(List<Get> gets)
Extracts certain cells from the given rows, in batch.
Result[]
-HTable.get(List<Get> gets)
+Table.get(List<Get> gets)
Extracts certain cells from the given rows, in batch.
@@ -517,13 +517,13 @@ service.
Result
-Table.increment(Increment increment)
+HTable.increment(Increment increment)
Increments one or more columns within a single row.
Result
-HTable.increment(Increment increment)
+Table.increment(Increment increment)
Increments one or more columns within a single row.
@@ -537,11 +537,13 @@ service.
Result
-ClientSmallReversedScanner.next()
+ResultScanner.next()
+
Grab the next row's worth of values.
+ Result -ClientSmallScanner.next()  +ClientSmallReversedScanner.next()  Result @@ -549,27 +551,25 @@ service. Result -ResultScanner.next() -
Grab the next row's worth of values.
- +ClientSmallScanner.next()  Result -ClientSideRegionScanner.next()  +TableSnapshotScanner.next()  Result -TableSnapshotScanner.next()  +ClientSideRegionScanner.next()  Result[] -AbstractClientScanner.next(int nbRows) -
Get nbRows rows.
- +ResultScanner.next(int nbRows)  Result[] -ResultScanner.next(int nbRows)  +AbstractClientScanner.next(int nbRows) +
Get nbRows rows.
+ protected Result @@ -914,11 +914,9 @@ service. org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> -TableInputFormatBase.getRecordReader(org.apache.hadoop.mapred.InputSplit split, +MultiTableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, - org.apache.hadoop.mapred.Reporter reporter) -
Builds a TableRecordReader.
- + org.apache.hadoop.mapred.Reporter reporter)
  org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> @@ -928,9 +926,11 @@ service. org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> -MultiTableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, +TableInputFormatBase.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, - org.apache.hadoop.mapred.Reporter reporter)  + org.apache.hadoop.mapred.Reporter reporter) +
Builds a TableRecordReader.
+ @@ -949,28 +949,28 @@ service. void -GroupingTableMap.map(ImmutableBytesWritable key, +IdentityTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter) -
Extract the grouping columns from value to construct a new key.
+
Pass the key, value to reduce
void -RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, - Result values, +GroupingTableMap.map(ImmutableBytesWritable key, + Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, - org.apache.hadoop.mapred.Reporter reporter)  + org.apache.hadoop.mapred.Reporter reporter) +
Extract the grouping columns from value to construct a new key.
+ void -IdentityTableMap.map(ImmutableBytesWritable key, - Result value, +RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, + Result values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, - org.apache.hadoop.mapred.Reporter reporter) -
Pass the key, value to reduce
- + org.apache.hadoop.mapred.Reporter reporter)
  boolean @@ -998,28 +998,28 @@ service. void -GroupingTableMap.map(ImmutableBytesWritable key, +IdentityTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter) -
Extract the grouping columns from value to construct a new key.
+
Pass the key, value to reduce
void -RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, - Result values, +GroupingTableMap.map(ImmutableBytesWritable key, + Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, - org.apache.hadoop.mapred.Reporter reporter)  + org.apache.hadoop.mapred.Reporter reporter) +
Extract the grouping columns from value to construct a new key.
+ void -IdentityTableMap.map(ImmutableBytesWritable key, - Result value, +RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, + Result values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, - org.apache.hadoop.mapred.Reporter reporter) -
Pass the key, value to reduce
- + org.apache.hadoop.mapred.Reporter reporter)
  @@ -1049,11 +1049,11 @@ service. private Result -TableRecordReaderImpl.value  +MultithreadedTableMapper.SubMapRecordReader.value  private Result -MultithreadedTableMapper.SubMapRecordReader.value  +TableRecordReaderImpl.value  @@ -1095,27 +1095,27 @@ service. Result -TableSnapshotInputFormatImpl.RecordReader.getCurrentValue()  - - -Result TableRecordReader.getCurrentValue()
Returns the current value.
+ +Result +MultithreadedTableMapper.SubMapRecordReader.getCurrentValue()  + Result TableSnapshotInputFormat.TableSnapshotRegionRecordReader.getCurrentValue()  Result -TableRecordReaderImpl.getCurrentValue() -
Returns the current value.
- +TableSnapshotInputFormatImpl.RecordReader.getCurrentValue()  Result -MultithreadedTableMapper.SubMapRecordReader.getCurrentValue()  +TableRecordReaderImpl.getCurrentValue() +
Returns the current value.
+ @@ -1128,16 +1128,16 @@ service. org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> +TableSnapshotInputFormat.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, + org.apache.hadoop.mapreduce.TaskAttemptContext context)  + + +org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> TableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) - -org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> -TableSnapshotInputFormat.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, - org.apache.hadoop.mapreduce.TaskAttemptContext context)  - org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> MultiTableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, @@ -1226,18 +1226,18 @@ service. void -IdentityTableMapper.map(ImmutableBytesWritable key, - Result value, +RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, + Result values, org.apache.hadoop.mapreduce.Mapper.Context context) -
Pass the key, value to reduce.
+
Maps the data.
void -RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, - Result values, +IdentityTableMapper.map(ImmutableBytesWritable key, + Result value, org.apache.hadoop.mapreduce.Mapper.Context context) -
Maps the data.
+
Pass the key, value to reduce.
@@ -1477,11 +1477,11 @@ service. void -DefaultOperationQuota.addGetResult(Result result)  +NoopOperationQuota.addGetResult(Result result)  void -NoopOperationQuota.addGetResult(Result result)  +DefaultOperationQuota.addGetResult(Result result)  void @@ -1546,11 +1546,11 @@ service. void -DefaultOperationQuota.addScanResult(List<Result> results)  +NoopOperationQuota.addScanResult(List<Result> results)  void -NoopOperationQuota.addScanResult(List<Result> results)  +DefaultOperationQuota.addScanResult(List<Result> results)  void http://git-wip-us.apache.org/repos/asf/hbase-site/blob/27849820/devapidocs/org/apache/hadoop/hbase/client/class-use/ResultScanner.html ---------------------------------------------------------------------- diff --git a/devapidocs/org/apache/hadoop/hbase/client/class-use/ResultScanner.html b/devapidocs/org/apache/hadoop/hbase/client/class-use/ResultScanner.html index 84cfd1f..e57b499 100644 --- a/devapidocs/org/apache/hadoop/hbase/client/class-use/ResultScanner.html +++ b/devapidocs/org/apache/hadoop/hbase/client/class-use/ResultScanner.html @@ -218,14 +218,14 @@ service. ResultScanner -Table.getScanner(byte[] family) -
Gets a scanner on the current table for the given family.
+HTable.getScanner(byte[] family) +
The underlying HTable must not be closed.
ResultScanner -HTable.getScanner(byte[] family) -
The underlying HTable must not be closed.
+Table.getScanner(byte[] family) +
Gets a scanner on the current table for the given family.
@@ -234,16 +234,16 @@ service. ResultScanner -Table.getScanner(byte[] family, +HTable.getScanner(byte[] family, byte[] qualifier) -
Gets a scanner on the current table for the given family and qualifier.
+
The underlying HTable must not be closed.
ResultScanner -HTable.getScanner(byte[] family, +Table.getScanner(byte[] family, byte[] qualifier) -
The underlying HTable must not be closed.
+
Gets a scanner on the current table for the given family and qualifier.
@@ -253,15 +253,15 @@ service. ResultScanner -Table.getScanner(Scan scan) -
Returns a scanner on the current table as specified by the Scan - object.
+HTable.getScanner(Scan scan) +
The underlying HTable must not be closed.
ResultScanner -HTable.getScanner(Scan scan) -
The underlying HTable must not be closed.
+Table.getScanner(Scan scan) +
Returns a scanner on the current table as specified by the Scan + object.
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/27849820/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallable.html ---------------------------------------------------------------------- diff --git a/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallable.html b/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallable.html index 4c957f9..f6dc7bc 100644 --- a/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallable.html +++ b/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallable.html @@ -253,29 +253,29 @@ +T +RpcRetryingCallerImpl.callWithoutRetries(RetryingCallable<T> callable, + int callTimeout)  + + T RpcRetryingCaller.callWithoutRetries(RetryingCallable<T> callable, int callTimeout)
Call the server once only.
- + T -RpcRetryingCallerImpl.callWithoutRetries(RetryingCallable<T> callable, - int callTimeout)  +RpcRetryingCallerImpl.callWithRetries(RetryingCallable<T> callable, + int callTimeout)  - + T RpcRetryingCaller.callWithRetries(RetryingCallable<T> callable, int callTimeout)
Retries if invocation fails.
- -T -RpcRetryingCallerImpl.callWithRetries(RetryingCallable<T> callable, - int callTimeout)  - void ResultBoundedCompletionService.submit(RetryingCallable<V> task, http://git-wip-us.apache.org/repos/asf/hbase-site/blob/27849820/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallableBase.html ---------------------------------------------------------------------- diff --git a/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallableBase.html b/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallableBase.html index f869f43..df96b10 100644 --- a/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallableBase.html +++ b/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallableBase.html @@ -224,13 +224,13 @@ -FastFailInterceptorContext -FastFailInterceptorContext.prepare(RetryingCallableBase callable)  - - RetryingCallerInterceptorContext NoOpRetryingInterceptorContext.prepare(RetryingCallableBase callable)  + +FastFailInterceptorContext +FastFailInterceptorContext.prepare(RetryingCallableBase callable)  + abstract RetryingCallerInterceptorContext RetryingCallerInterceptorContext.prepare(RetryingCallableBase callable, @@ -240,13 +240,13 @@ -FastFailInterceptorContext -FastFailInterceptorContext.prepare(RetryingCallableBase callable, +RetryingCallerInterceptorContext +NoOpRetryingInterceptorContext.prepare(RetryingCallableBase callable, int tries)  -RetryingCallerInterceptorContext -NoOpRetryingInterceptorContext.prepare(RetryingCallableBase callable, +FastFailInterceptorContext +FastFailInterceptorContext.prepare(RetryingCallableBase callable, int tries)  http://git-wip-us.apache.org/repos/asf/hbase-site/blob/27849820/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallerInterceptor.html ---------------------------------------------------------------------- diff --git a/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallerInterceptor.html 
b/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallerInterceptor.html index d4a8e71..22e92a1 100644 --- a/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallerInterceptor.html +++ b/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallerInterceptor.html @@ -120,7 +120,7 @@ private RetryingCallerInterceptor -ConnectionImplementation.interceptor  +RpcRetryingCallerImpl.interceptor  private RetryingCallerInterceptor @@ -128,7 +128,7 @@ private RetryingCallerInterceptor -RpcRetryingCallerImpl.interceptor  +ConnectionImplementation.interceptor  static RetryingCallerInterceptor http://git-wip-us.apache.org/repos/asf/hbase-site/blob/27849820/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallerInterceptorContext.html ---------------------------------------------------------------------- diff --git a/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallerInterceptorContext.html b/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallerInterceptorContext.html index 24c6e4f..7a9d607 100644 --- a/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallerInterceptorContext.html +++ b/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallerInterceptorContext.html @@ -131,15 +131,15 @@ -RetryingCallerInterceptorContext -PreemptiveFastFailInterceptor.createEmptyContext()  - - abstract RetryingCallerInterceptorContext RetryingCallerInterceptor.createEmptyContext()
This returns the context object for the current call.
+ +RetryingCallerInterceptorContext +PreemptiveFastFailInterceptor.createEmptyContext()  + RetryingCallerInterceptorContext NoOpRetryableCallerInterceptor.createEmptyContext()  @@ -179,46 +179,46 @@ -void -PreemptiveFastFailInterceptor.handleFailure(RetryingCallerInterceptorContext context, - Throwable t)  - - abstract void RetryingCallerInterceptor.handleFailure(RetryingCallerInterceptorContext context, Throwable t)
Call this function in case we caught a failure during retries.
+ +void +PreemptiveFastFailInterceptor.handleFailure(RetryingCallerInterceptorContext context, + Throwable t)  + void NoOpRetryableCallerInterceptor.handleFailure(RetryingCallerInterceptorContext context, Throwable t)  -void -PreemptiveFastFailInterceptor.intercept(RetryingCallerInterceptorContext context)  - - abstract void RetryingCallerInterceptor.intercept(RetryingCallerInterceptorContext abstractRetryingCallerInterceptorContext)
Call this function alongside the actual call done on the callable.
+ +void +PreemptiveFastFailInterceptor.intercept(RetryingCallerInterceptorContext context)  + void NoOpRetryableCallerInterceptor.intercept(RetryingCallerInterceptorContext abstractRetryingCallerInterceptorContext)  -void -PreemptiveFastFailInterceptor.updateFailureInfo(RetryingCallerInterceptorContext context)  - - abstract void RetryingCallerInterceptor.updateFailureInfo(RetryingCallerInterceptorContext context)
Call this function to update at the end of the retry.
+ +void +PreemptiveFastFailInterceptor.updateFailureInfo(RetryingCallerInterceptorContext context)  + void NoOpRetryableCallerInterceptor.updateFailureInfo(RetryingCallerInterceptorContext context)