Return-Path: 
X-Original-To: archive-asf-public-internal@cust-asf2.ponee.io
Delivered-To: archive-asf-public-internal@cust-asf2.ponee.io
Received: from cust-asf.ponee.io (cust-asf.ponee.io [163.172.22.183]) by cust-asf2.ponee.io (Postfix) with ESMTP id 299A4200B5E for ; Tue, 26 Jul 2016 17:56:13 +0200 (CEST)
Received: by cust-asf.ponee.io (Postfix) id 28521160A75; Tue, 26 Jul 2016 15:56:13 +0000 (UTC)
Delivered-To: archive-asf-public@cust-asf.ponee.io
Received: from mail.apache.org (hermes.apache.org [140.211.11.3]) by cust-asf.ponee.io (Postfix) with SMTP id CAA24160A9A for ; Tue, 26 Jul 2016 17:56:10 +0200 (CEST)
Received: (qmail 15615 invoked by uid 500); 26 Jul 2016 15:56:07 -0000
Mailing-List: contact commits-help@hbase.apache.org; run by ezmlm
Precedence: bulk
List-Help: 
List-Unsubscribe: 
List-Post: 
List-Id: 
Reply-To: dev@hbase.apache.org
Delivered-To: mailing list commits@hbase.apache.org
Received: (qmail 14347 invoked by uid 99); 26 Jul 2016 15:56:06 -0000
Received: from git1-us-west.apache.org (HELO git1-us-west.apache.org) (140.211.11.23) by apache.org (qpsmtpd/0.29) with ESMTP; Tue, 26 Jul 2016 15:56:06 +0000
Received: by git1-us-west.apache.org (ASF Mail Server at git1-us-west.apache.org, from userid 33) id 536D6E08E8; Tue, 26 Jul 2016 15:56:06 +0000 (UTC)
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
From: misty@apache.org
To: commits@hbase.apache.org
Date: Tue, 26 Jul 2016 15:56:26 -0000
Message-Id: <3d82d0bc525c4013b9760f9540451296@git.apache.org>
In-Reply-To: 
References: 
X-Mailer: ASF-Git Admin Mailer
Subject: [22/52] [partial] hbase-site git commit: Published site at bcf409e11f081be077f1232c987d05fa78a1793c.
archived-at: Tue, 26 Jul 2016 15:56:13 -0000

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/56b04875/devapidocs/org/apache/hadoop/hbase/client/class-use/Result.html
----------------------------------------------------------------------
diff --git a/devapidocs/org/apache/hadoop/hbase/client/class-use/Result.html b/devapidocs/org/apache/hadoop/hbase/client/class-use/Result.html
index 8796b51..d4a6be0 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/class-use/Result.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/class-use/Result.html
@@ -397,9 +397,13 @@ service.
 Result[]
-ScannerCallable.call(int callTimeout) 
+ClientSmallScanner.SmallScannerCallable.call(int timeout) 
+Result[]
+ScannerCallableWithReplicas.call(int timeout) 
+
+
 Result
 RpcRetryingCallerWithReadReplicas.call(int operationTimeout)
@@ -407,17 +411,13 @@ service.
- we put the query into the execution pool.
-
+
 Result
 RpcRetryingCallerWithReadReplicas.ReplicaRegionServerCallable.call(int callTimeout) 
-
-Result[]
-ClientSmallScanner.SmallScannerCallable.call(int timeout) 
-
 Result[]
-ScannerCallableWithReplicas.call(int timeout) 
+ScannerCallable.call(int callTimeout) 
 (package private) Result[]
@@ -537,15 +537,15 @@ service.
 Result
-ClientSmallReversedScanner.next() 
+ClientSmallScanner.next() 
 Result
-ClientSmallScanner.next() 
+ClientAsyncPrefetchScanner.next() 
 Result
-ClientAsyncPrefetchScanner.next() 
+ClientSmallReversedScanner.next() 
 Result
@@ -555,11 +555,11 @@ service.
 Result
-ClientSideRegionScanner.next() 
+TableSnapshotScanner.next() 
 Result
-TableSnapshotScanner.next() 
+ClientSideRegionScanner.next() 
 Result[]
@@ -721,19 +721,25 @@ service.
 Result
+BaseRegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> e,
+ Append append,
+ Result result) 
+
+
+Result
 RegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> c, Append append, Result result)
Called after Append
-
+
 Result
-BaseRegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> e,
- Append append,
- Result result) 
+BaseRegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> e,
+ Increment increment,
+ Result result) 
-
+
 Result
 RegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment,
@@ -741,60 +747,54 @@ service.
Called after increment
-
+
 Result
-BaseRegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> e,
- Increment increment,
- Result result) 
+BaseRegionObserver.preAppend(ObserverContext<RegionCoprocessorEnvironment> e,
+ Append append) 
-
+
 Result
 RegionObserver.preAppend(ObserverContext<RegionCoprocessorEnvironment> c, Append append)
Called before Append.
-
+
 Result
-BaseRegionObserver.preAppend(ObserverContext<RegionCoprocessorEnvironment> e,
- Append append) 
+BaseRegionObserver.preAppendAfterRowLock(ObserverContext<RegionCoprocessorEnvironment> e,
+ Append append) 
-
+
 Result
 RegionObserver.preAppendAfterRowLock(ObserverContext<RegionCoprocessorEnvironment> c, Append append)
Called before Append but after acquiring rowlock.
-
+
 Result
-BaseRegionObserver.preAppendAfterRowLock(ObserverContext<RegionCoprocessorEnvironment> e,
- Append append) 
+BaseRegionObserver.preIncrement(ObserverContext<RegionCoprocessorEnvironment> e,
+ Increment increment) 
-
+
 Result
 RegionObserver.preIncrement(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment)
Called before Increment.
-
+
 Result
-BaseRegionObserver.preIncrement(ObserverContext<RegionCoprocessorEnvironment> e,
- Increment increment) 
+BaseRegionObserver.preIncrementAfterRowLock(ObserverContext<RegionCoprocessorEnvironment> e,
+ Increment increment) 
-
+
 Result
 RegionObserver.preIncrementAfterRowLock(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment)
Called before Increment but after acquiring rowlock.
- -Result -BaseRegionObserver.preIncrementAfterRowLock(ObserverContext<RegionCoprocessorEnvironment> e, - Increment increment)  - @@ -806,19 +806,25 @@ service. + + + + - + - + - + - - - -
ResultBaseRegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> e, + Append append, + Result result) 
Result RegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> c, Append append, Result result)
Called after Append
ResultBaseRegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> e, - Append append, - Result result) BaseRegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> e, + Increment increment, + Result result) 
Result RegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment, @@ -826,12 +832,6 @@ service.
Called after increment
ResultBaseRegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> e, - Increment increment, - Result result) 
@@ -843,6 +843,14 @@ service. + + + + - + - + - + - - - -
booleanBaseRegionObserver.postScannerNext(ObserverContext<RegionCoprocessorEnvironment> e, + InternalScanner s, + List<Result> results, + int limit, + boolean hasMore) 
boolean RegionObserver.postScannerNext(ObserverContext<RegionCoprocessorEnvironment> c, InternalScanner s, List<Result> result, @@ -851,15 +859,15 @@ service.
Called after the client asks for the next row on a scanner.
booleanBaseRegionObserver.postScannerNext(ObserverContext<RegionCoprocessorEnvironment> e, - InternalScanner s, - List<Result> results, - int limit, - boolean hasMore) BaseRegionObserver.preScannerNext(ObserverContext<RegionCoprocessorEnvironment> e, + InternalScanner s, + List<Result> results, + int limit, + boolean hasMore) 
boolean RegionObserver.preScannerNext(ObserverContext<RegionCoprocessorEnvironment> c, InternalScanner s, @@ -869,14 +877,6 @@ service.
Called before the client asks for the next row on a scanner.
booleanBaseRegionObserver.preScannerNext(ObserverContext<RegionCoprocessorEnvironment> e, - InternalScanner s, - List<Result> results, - int limit, - boolean hasMore) 
@@ -914,13 +914,13 @@ service. org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> -MultiTableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, +TableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter)  org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> -TableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, +MultiTableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter)  @@ -958,6 +958,13 @@ service. void +RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, + Result values, + org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, + org.apache.hadoop.mapred.Reporter reporter)  + + +void GroupingTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, @@ -965,13 +972,6 @@ service.
Extract the grouping columns from value to construct a new key.
- -void -RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, - Result values, - org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, - org.apache.hadoop.mapred.Reporter reporter)  - boolean TableSnapshotInputFormat.TableSnapshotRecordReader.next(ImmutableBytesWritable key, @@ -1007,6 +1007,13 @@ service. void +RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, + Result values, + org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, + org.apache.hadoop.mapred.Reporter reporter)  + + +void GroupingTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, @@ -1014,13 +1021,6 @@ service.
Extract the grouping columns from value to construct a new key.
- -void -RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, - Result values, - org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, - org.apache.hadoop.mapred.Reporter reporter)  - @@ -1049,11 +1049,11 @@ service. private Result -TableRecordReaderImpl.value  +MultithreadedTableMapper.SubMapRecordReader.value  private Result -MultithreadedTableMapper.SubMapRecordReader.value  +TableRecordReaderImpl.value  @@ -1099,23 +1099,23 @@ service. Result -TableSnapshotInputFormatImpl.RecordReader.getCurrentValue()  - - -Result TableRecordReader.getCurrentValue()
Returns the current value.
+ +Result +TableSnapshotInputFormatImpl.RecordReader.getCurrentValue()  + Result -TableRecordReaderImpl.getCurrentValue() -
Returns the current value.
- +MultithreadedTableMapper.SubMapRecordReader.getCurrentValue()  Result -MultithreadedTableMapper.SubMapRecordReader.getCurrentValue()  +TableRecordReaderImpl.getCurrentValue() +
Returns the current value.
+ @@ -1133,16 +1133,16 @@ service. org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> -TableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, +MultiTableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) - +
Builds a TableRecordReader.
org.apache.hadoop.mapreduce.RecordReader<ImmutableBytesWritable,Result> -MultiTableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, +TableInputFormatBase.createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) -
Builds a TableRecordReader.
+ @@ -1218,26 +1218,26 @@ service. void -GroupingTableMapper.map(ImmutableBytesWritable key, +IdentityTableMapper.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapreduce.Mapper.Context context) -
Extract the grouping columns from value to construct a new key.
+
Pass the key, value to reduce.
void -IdentityTableMapper.map(ImmutableBytesWritable key, - Result value, +RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, + Result values, org.apache.hadoop.mapreduce.Mapper.Context context) -
Pass the key, value to reduce.
+
Maps the data.
void -RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, - Result values, +GroupingTableMapper.map(ImmutableBytesWritable key, + Result value, org.apache.hadoop.mapreduce.Mapper.Context context) -
Maps the data.
+
Extract the grouping columns from value to construct a new key.
@@ -1477,21 +1477,21 @@ service. void -OperationQuota.addGetResult(Result result) -
Add a get result.
- +DefaultOperationQuota.addGetResult(Result result)  void -OperationQuota.AvgOperationSize.addGetResult(Result result)  +NoopOperationQuota.addGetResult(Result result)  void -NoopOperationQuota.addGetResult(Result result)  +OperationQuota.addGetResult(Result result) +
Add a get result.
+ void -DefaultOperationQuota.addGetResult(Result result)  +OperationQuota.AvgOperationSize.addGetResult(Result result)  static long @@ -1546,21 +1546,21 @@ service. void -OperationQuota.addScanResult(List<Result> results) -
Add a scan result.
- +DefaultOperationQuota.addScanResult(List<Result> results)  void -OperationQuota.AvgOperationSize.addScanResult(List<Result> results)  +NoopOperationQuota.addScanResult(List<Result> results)  void -NoopOperationQuota.addScanResult(List<Result> results)  +OperationQuota.addScanResult(List<Result> results) +
Add a scan result.
+
 void
-DefaultOperationQuota.addScanResult(List<Result> results) 
+OperationQuota.AvgOperationSize.addScanResult(List<Result> results) 
 static long

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/56b04875/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallable.html
----------------------------------------------------------------------
diff --git a/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallable.html b/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallable.html
index 4c957f9..f6dc7bc 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallable.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallable.html
@@ -253,29 +253,29 @@
+T
+RpcRetryingCallerImpl.callWithoutRetries(RetryingCallable<T> callable,
+ int callTimeout) 
+
+
 T
 RpcRetryingCaller.callWithoutRetries(RetryingCallable<T> callable, int callTimeout)
Call the server once only.
- + T -RpcRetryingCallerImpl.callWithoutRetries(RetryingCallable<T> callable, - int callTimeout)  +RpcRetryingCallerImpl.callWithRetries(RetryingCallable<T> callable, + int callTimeout)  - + T RpcRetryingCaller.callWithRetries(RetryingCallable<T> callable, int callTimeout)
Retries if invocation fails.
-
-T
-RpcRetryingCallerImpl.callWithRetries(RetryingCallable<T> callable,
- int callTimeout) 
-
 void
 ResultBoundedCompletionService.submit(RetryingCallable<V> task,

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/56b04875/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallableBase.html
----------------------------------------------------------------------
diff --git a/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallableBase.html b/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallableBase.html
index 463b0cb..fae319d 100644
--- a/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallableBase.html
+++ b/devapidocs/org/apache/hadoop/hbase/client/class-use/RetryingCallableBase.html
@@ -216,14 +216,10 @@
-RetryingCallerInterceptorContext
-NoOpRetryingInterceptorContext.prepare(RetryingCallableBase callable) 
-
-
 FastFailInterceptorContext
 FastFailInterceptorContext.prepare(RetryingCallableBase callable) 
-
+
 abstract RetryingCallerInterceptorContext
 RetryingCallerInterceptorContext.prepare(RetryingCallableBase callable)
This prepares the context object by populating it with information specific @@ -231,17 +227,16 @@ which this will be used.
- + RetryingCallerInterceptorContext -NoOpRetryingInterceptorContext.prepare(RetryingCallableBase callable, - int tries)  +NoOpRetryingInterceptorContext.prepare(RetryingCallableBase callable)  - + FastFailInterceptorContext FastFailInterceptorContext.prepare(RetryingCallableBase callable, int tries)  - + abstract RetryingCallerInterceptorContext RetryingCallerInterceptorContext.prepare(RetryingCallableBase callable, int tries) @@ -249,6 +244,11 @@ in. + +RetryingCallerInterceptorContext +NoOpRetryingInterceptorContext.prepare(RetryingCallableBase callable, + int tries)  +