From: busbey@apache.org
To: commits@hbase.apache.org
Reply-To: dev@hbase.apache.org
Date: Tue, 19 Jul 2016 15:47:05 -0000
Message-Id: <1e8f6b14cdb64e4c853ead05474737eb@git.apache.org>
In-Reply-To: <07c7522d8a5c41308b69f13e278bc3fa@git.apache.org>
References: <07c7522d8a5c41308b69f13e278bc3fa@git.apache.org>
Mailing-List: contact commits-help@hbase.apache.org; run by ezmlm
X-Mailer: ASF-Git Admin Mailer
Subject: [21/52] [partial] hbase-site git commit: Published site at 9454daf25bcc704d8403a403282c9bf0090b1101. 
archived-at: Tue, 19 Jul 2016 15:46:53 -0000 http://git-wip-us.apache.org/repos/asf/hbase-site/blob/f94f7f0f/devapidocs/org/apache/hadoop/hbase/client/class-use/Result.html ---------------------------------------------------------------------- diff --git a/devapidocs/org/apache/hadoop/hbase/client/class-use/Result.html b/devapidocs/org/apache/hadoop/hbase/client/class-use/Result.html index abb33f6..4eba670 100644 --- a/devapidocs/org/apache/hadoop/hbase/client/class-use/Result.html +++ b/devapidocs/org/apache/hadoop/hbase/client/class-use/Result.html @@ -381,13 +381,13 @@ service. Result -Table.append(Append append) +HTable.append(Append append)
Appends values to one or more columns within a single row.
Result -HTable.append(Append append) +Table.append(Append append)
Appends values to one or more columns within a single row.
@@ -396,12 +396,8 @@ service. HTableWrapper.append(Append append)  -Result -RpcRetryingCallerWithReadReplicas.call() -
- Algo: - - we put the query into the execution pool.
- +Result[] +ClientSmallScanner.SmallScannerCallable.call(int timeout)  Result[] @@ -413,11 +409,15 @@ service. Result -RpcRetryingCallerWithReadReplicas.ReplicaRegionServerCallable.call(int callTimeout)  +RpcRetryingCallerWithReadReplicas.call(int operationTimeout) +
+ Algo: + - we put the query into the execution pool.
+ -Result[] -ClientSmallScanner.SmallScannerCallable.call(int timeout)  +Result +RpcRetryingCallerWithReadReplicas.ReplicaRegionServerCallable.call(int callTimeout)  (package private) Result[] @@ -480,13 +480,13 @@ service. Result -Table.get(Get get) +HTable.get(Get get)
Extracts certain cells from a given row.
Result -HTable.get(Get get) +Table.get(Get get)
Extracts certain cells from a given row.
@@ -501,13 +501,13 @@ service. Result[] -Table.get(List<Get> gets) +HTable.get(List<Get> gets)
Extracts certain cells from the given rows, in batch.
Result[] -HTable.get(List<Get> gets) +Table.get(List<Get> gets)
Extracts certain cells from the given rows, in batch.
@@ -517,13 +517,13 @@ service. Result -Table.increment(Increment increment) +HTable.increment(Increment increment)
Increments one or more columns within a single row.
Result -HTable.increment(Increment increment) +Table.increment(Increment increment)
Increments one or more columns within a single row.
@@ -533,21 +533,21 @@ service. Result -ResultScanner.next() -
Grab the next row's worth of values.
- +ClientSmallScanner.next()  Result -ClientAsyncPrefetchScanner.next()  +ClientSimpleScanner.next()  Result -ClientSimpleScanner.next()  +ClientAsyncPrefetchScanner.next()  Result -ClientSmallScanner.next()  +ResultScanner.next() +
Grab the next row's worth of values.
+ Result @@ -555,22 +555,22 @@ service. Result -TableSnapshotScanner.next()  +ClientSideRegionScanner.next()  Result -ClientSideRegionScanner.next()  +TableSnapshotScanner.next()  Result[] -ResultScanner.next(int nbRows)  - - -Result[] AbstractClientScanner.next(int nbRows)
Get nbRows rows.
+ +Result[] +ResultScanner.next(int nbRows)  + protected Result ClientScanner.nextWithSyncCache()  @@ -721,19 +721,25 @@ service. Result +BaseRegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> e, + Append append, + Result result)  + + +Result RegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> c, Append append, Result result)
Called after Append
- + Result -BaseRegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> e, - Append append, - Result result)  +BaseRegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> e, + Increment increment, + Result result)  - + Result RegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment, @@ -741,60 +747,54 @@ service.
Called after increment
- + Result -BaseRegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> e, - Increment increment, - Result result)  +BaseRegionObserver.preAppend(ObserverContext<RegionCoprocessorEnvironment> e, + Append append)  - + Result RegionObserver.preAppend(ObserverContext<RegionCoprocessorEnvironment> c, Append append)
Called before Append.
- + Result -BaseRegionObserver.preAppend(ObserverContext<RegionCoprocessorEnvironment> e, - Append append)  +BaseRegionObserver.preAppendAfterRowLock(ObserverContext<RegionCoprocessorEnvironment> e, + Append append)  - + Result RegionObserver.preAppendAfterRowLock(ObserverContext<RegionCoprocessorEnvironment> c, Append append)
Called before Append but after acquiring rowlock.
- + Result -BaseRegionObserver.preAppendAfterRowLock(ObserverContext<RegionCoprocessorEnvironment> e, - Append append)  +BaseRegionObserver.preIncrement(ObserverContext<RegionCoprocessorEnvironment> e, + Increment increment)  - + Result RegionObserver.preIncrement(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment)
Called before Increment.
- + Result -BaseRegionObserver.preIncrement(ObserverContext<RegionCoprocessorEnvironment> e, - Increment increment)  +BaseRegionObserver.preIncrementAfterRowLock(ObserverContext<RegionCoprocessorEnvironment> e, + Increment increment)  - + Result RegionObserver.preIncrementAfterRowLock(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment)
Called before Increment but after acquiring rowlock.
- -Result -BaseRegionObserver.preIncrementAfterRowLock(ObserverContext<RegionCoprocessorEnvironment> e, - Increment increment)  - @@ -806,19 +806,25 @@ service. + + + + - + - + - + - - - -
ResultBaseRegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> e, + Append append, + Result result) 
Result RegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> c, Append append, Result result)
Called after Append
ResultBaseRegionObserver.postAppend(ObserverContext<RegionCoprocessorEnvironment> e, - Append append, - Result result) BaseRegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> e, + Increment increment, + Result result) 
Result RegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> c, Increment increment, @@ -826,12 +832,6 @@ service.
Called after increment
ResultBaseRegionObserver.postIncrement(ObserverContext<RegionCoprocessorEnvironment> e, - Increment increment, - Result result) 
@@ -843,6 +843,14 @@ service. + + + + - + - + - + - - - -
booleanBaseRegionObserver.postScannerNext(ObserverContext<RegionCoprocessorEnvironment> e, + InternalScanner s, + List<Result> results, + int limit, + boolean hasMore) 
boolean RegionObserver.postScannerNext(ObserverContext<RegionCoprocessorEnvironment> c, InternalScanner s, List<Result> result, @@ -851,15 +859,15 @@ service.
Called after the client asks for the next row on a scanner.
booleanBaseRegionObserver.postScannerNext(ObserverContext<RegionCoprocessorEnvironment> e, - InternalScanner s, - List<Result> results, - int limit, - boolean hasMore) BaseRegionObserver.preScannerNext(ObserverContext<RegionCoprocessorEnvironment> e, + InternalScanner s, + List<Result> results, + int limit, + boolean hasMore) 
boolean RegionObserver.preScannerNext(ObserverContext<RegionCoprocessorEnvironment> c, InternalScanner s, @@ -869,14 +877,6 @@ service.
Called before the client asks for the next row on a scanner.
booleanBaseRegionObserver.preScannerNext(ObserverContext<RegionCoprocessorEnvironment> e, - InternalScanner s, - List<Result> results, - int limit, - boolean hasMore) 
@@ -893,15 +893,15 @@ service. Result -TableSnapshotInputFormat.TableSnapshotRecordReader.createValue()  +TableRecordReaderImpl.createValue()  Result -TableRecordReader.createValue()  +TableSnapshotInputFormat.TableSnapshotRecordReader.createValue()  Result -TableRecordReaderImpl.createValue()  +TableRecordReader.createValue()  @@ -914,23 +914,23 @@ service. org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> -TableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, +MultiTableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.mapred.Reporter reporter)  org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> -TableInputFormatBase.getRecordReader(org.apache.hadoop.mapred.InputSplit split, +TableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, - org.apache.hadoop.mapred.Reporter reporter) -
Builds a TableRecordReader.
- + org.apache.hadoop.mapred.Reporter reporter)
  org.apache.hadoop.mapred.RecordReader<ImmutableBytesWritable,Result> -MultiTableSnapshotInputFormat.getRecordReader(org.apache.hadoop.mapred.InputSplit split, +TableInputFormatBase.getRecordReader(org.apache.hadoop.mapred.InputSplit split, org.apache.hadoop.mapred.JobConf job, - org.apache.hadoop.mapred.Reporter reporter)  + org.apache.hadoop.mapred.Reporter reporter) +
Builds a TableRecordReader.
+ @@ -949,42 +949,42 @@ service. void -IdentityTableMap.map(ImmutableBytesWritable key, +GroupingTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter) -
Pass the key, value to reduce
+
Extract the grouping columns from value to construct a new key.
void -RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, - Result values, +IdentityTableMap.map(ImmutableBytesWritable key, + Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, - org.apache.hadoop.mapred.Reporter reporter)  + org.apache.hadoop.mapred.Reporter reporter) +
Pass the key, value to reduce
+ void -GroupingTableMap.map(ImmutableBytesWritable key, - Result value, +RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, + Result values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, - org.apache.hadoop.mapred.Reporter reporter) -
Extract the grouping columns from value to construct a new key.
- + org.apache.hadoop.mapred.Reporter reporter)
  boolean -TableSnapshotInputFormat.TableSnapshotRecordReader.next(ImmutableBytesWritable key, +TableRecordReaderImpl.next(ImmutableBytesWritable key, Result value)  boolean -TableRecordReader.next(ImmutableBytesWritable key, +TableSnapshotInputFormat.TableSnapshotRecordReader.next(ImmutableBytesWritable key, Result value)  boolean -TableRecordReaderImpl.next(ImmutableBytesWritable key, +TableRecordReader.next(ImmutableBytesWritable key, Result value)  @@ -998,28 +998,28 @@ service. void -IdentityTableMap.map(ImmutableBytesWritable key, +GroupingTableMap.map(ImmutableBytesWritable key, Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, org.apache.hadoop.mapred.Reporter reporter) -
Pass the key, value to reduce
+
Extract the grouping columns from value to construct a new key.
void -RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, - Result values, +IdentityTableMap.map(ImmutableBytesWritable key, + Result value, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, - org.apache.hadoop.mapred.Reporter reporter)  + org.apache.hadoop.mapred.Reporter reporter) +
Pass the key, value to reduce
+ void -GroupingTableMap.map(ImmutableBytesWritable key, - Result value, +RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, + Result values, org.apache.hadoop.mapred.OutputCollector<ImmutableBytesWritable,Result> output, - org.apache.hadoop.mapred.Reporter reporter) -
Extract the grouping columns from value to construct a new key.
- + org.apache.hadoop.mapred.Reporter reporter)
  @@ -1049,11 +1049,11 @@ service. private Result -MultithreadedTableMapper.SubMapRecordReader.value  +TableRecordReaderImpl.value  private Result -TableRecordReaderImpl.value  +MultithreadedTableMapper.SubMapRecordReader.value  @@ -1095,25 +1095,25 @@ service. Result -TableSnapshotInputFormat.TableSnapshotRegionRecordReader.getCurrentValue()  +TableSnapshotInputFormatImpl.RecordReader.getCurrentValue()  Result -MultithreadedTableMapper.SubMapRecordReader.getCurrentValue()  +TableRecordReaderImpl.getCurrentValue() +
Returns the current value.
+ Result -TableRecordReader.getCurrentValue() -
Returns the current value.
- +TableSnapshotInputFormat.TableSnapshotRegionRecordReader.getCurrentValue()  Result -TableSnapshotInputFormatImpl.RecordReader.getCurrentValue()  +MultithreadedTableMapper.SubMapRecordReader.getCurrentValue()  Result -TableRecordReaderImpl.getCurrentValue() +TableRecordReader.getCurrentValue()
Returns the current value.
@@ -1226,18 +1226,18 @@ service. void -RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, - Result values, +GroupingTableMapper.map(ImmutableBytesWritable key, + Result value, org.apache.hadoop.mapreduce.Mapper.Context context) -
Maps the data.
+
Extract the grouping columns from value to construct a new key.
void -GroupingTableMapper.map(ImmutableBytesWritable key, - Result value, +RowCounter.RowCounterMapper.map(ImmutableBytesWritable row, + Result values, org.apache.hadoop.mapreduce.Mapper.Context context) -
Extract the grouping columns from value to construct a new key.
+
Maps the data.
@@ -1477,22 +1477,22 @@ service. void -DefaultOperationQuota.addGetResult(Result result)  - - -void NoopOperationQuota.addGetResult(Result result)  - + void OperationQuota.addGetResult(Result result)
Add a get result.
- + void OperationQuota.AvgOperationSize.addGetResult(Result result)  + +void +DefaultOperationQuota.addGetResult(Result result)  + static long QuotaUtil.calculateResultSize(Result result)  @@ -1546,22 +1546,22 @@ service. void -DefaultOperationQuota.addScanResult(List<Result> results)  - - -void NoopOperationQuota.addScanResult(List<Result> results)  - + void OperationQuota.