From: git-site-role@apache.org
To: commits@hbase.apache.org
Reply-To: dev@hbase.apache.org
Mailing-List: contact commits-help@hbase.apache.org; run by ezmlm
Date: Wed, 05 Apr 2017 07:19:27 -0000
Message-Id: <9a07e9eb74ab4b17ab1af0d27c9b8d20@git.apache.org>
In-Reply-To: <3dfe011b9b104ec3844e8cd308b23e7f@git.apache.org>
References: <3dfe011b9b104ec3844e8cd308b23e7f@git.apache.org>
X-Mailer: ASF-Git Admin Mailer
Subject: [08/51] [partial] hbase-site git commit: Published site at 910b68082c8f200f0ba6395a76b7ee1c8917e401.
archived-at: Wed, 05 Apr 2017 07:19:25 -0000

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7d957e04/devapidocs/org/apache/hadoop/hbase/util/class-use/Bytes.html
----------------------------------------------------------------------
diff --git a/devapidocs/org/apache/hadoop/hbase/util/class-use/Bytes.html b/devapidocs/org/apache/hadoop/hbase/util/class-use/Bytes.html
index 8b145a8..2e5d264 100644
--- a/devapidocs/org/apache/hadoop/hbase/util/class-use/Bytes.html
+++ b/devapidocs/org/apache/hadoop/hbase/util/class-use/Bytes.html
@@ -183,19 +183,11 @@
 private static Set<Bytes>
-HColumnDescriptor.RESERVED_KEYWORDS
-
-
-private static Set<Bytes>
 HTableDescriptor.RESERVED_KEYWORDS
-
-private Map<Bytes,Bytes>
-HColumnDescriptor.values
-
-private Map<Bytes,Bytes>
-HColumnDescriptor.values
+private static Set<Bytes>
+HColumnDescriptor.RESERVED_KEYWORDS
 private Map<Bytes,Bytes>
@@ -209,6 +201,14 @@
A map which holds the metadata information of the table.
+ +private Map<Bytes,Bytes> +HColumnDescriptor.values  + + +private Map<Bytes,Bytes> +HColumnDescriptor.values  + @@ -220,14 +220,6 @@ - - - - - - - - @@ -238,6 +230,14 @@
Getter for fetching an unmodifiable HTableDescriptor.values map.
+ + + + + + + +
Map<Bytes,Bytes>HColumnDescriptor.getValues() 
Map<Bytes,Bytes>HColumnDescriptor.getValues() 
Map<Bytes,Bytes> HTableDescriptor.getValues()
Getter for fetching an unmodifiable HTableDescriptor.values map.
Map<Bytes,Bytes>HColumnDescriptor.getValues() 
Map<Bytes,Bytes>HColumnDescriptor.getValues() 
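The hunk above reshuffles the class-use rows for HColumnDescriptor.values and the getValues() getters, whose documented contract is "Getter for fetching an unmodifiable HTableDescriptor.values map." A minimal plain-JDK sketch of that pattern, not the HBase classes themselves (ValuesHolder and the String keys are illustrative stand-ins for the real Bytes-keyed map):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class ValuesHolder {
    // A map which holds metadata, mirroring the documented shape of
    // HTableDescriptor.values / HColumnDescriptor.values.
    private final Map<String, String> values = new HashMap<>();

    public void setValue(String key, String value) {
        values.put(key, value);
    }

    // Getter for fetching an unmodifiable view of the values map:
    // callers can read, but mutation attempts throw.
    public Map<String, String> getValues() {
        return Collections.unmodifiableMap(values);
    }

    public static void main(String[] args) {
        ValuesHolder h = new ValuesHolder();
        h.setValue("COMPRESSION", "SNAPPY");
        System.out.println(h.getValues().get("COMPRESSION")); // prints SNAPPY
        try {
            h.getValues().put("x", "y");
        } catch (UnsupportedOperationException e) {
            System.out.println("read-only"); // writes through the view are rejected
        }
    }
}
```

The unmodifiable view shares storage with the internal map, so later setValue calls are visible through an already-returned view.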
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7d957e04/devapidocs/org/apache/hadoop/hbase/util/class-use/CancelableProgressable.html
----------------------------------------------------------------------
diff --git a/devapidocs/org/apache/hadoop/hbase/util/class-use/CancelableProgressable.html b/devapidocs/org/apache/hadoop/hbase/util/class-use/CancelableProgressable.html
index 36af182..8fef4bd 100644
--- a/devapidocs/org/apache/hadoop/hbase/util/class-use/CancelableProgressable.html
+++ b/devapidocs/org/apache/hadoop/hbase/util/class-use/CancelableProgressable.html
@@ -133,13 +133,13 @@
-
-
@@ -339,19 +339,19 @@
-
+
CancelableProgressable reporter)
+
Recover the lease from HDFS, retrying multiple times.
+
-
+
CancelableProgressable reporter)

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7d957e04/devapidocs/org/apache/hadoop/hbase/util/class-use/ChecksumType.html
----------------------------------------------------------------------
diff --git a/devapidocs/org/apache/hadoop/hbase/util/class-use/ChecksumType.html b/devapidocs/org/apache/hadoop/hbase/util/class-use/ChecksumType.html
index 9144ae4..b9fc0c4 100644
--- a/devapidocs/org/apache/hadoop/hbase/util/class-use/ChecksumType.html
+++ b/devapidocs/org/apache/hadoop/hbase/util/class-use/ChecksumType.html
@@ -119,13 +119,13 @@
-
-

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7d957e04/devapidocs/org/apache/hadoop/hbase/util/class-use/ObjectIntPair.html
----------------------------------------------------------------------
diff --git a/devapidocs/org/apache/hadoop/hbase/util/class-use/ObjectIntPair.html b/devapidocs/org/apache/hadoop/hbase/util/class-use/ObjectIntPair.html
index d9e351b..f1b9c32 100644
--- a/devapidocs/org/apache/hadoop/hbase/util/class-use/ObjectIntPair.html
+++ b/devapidocs/org/apache/hadoop/hbase/util/class-use/ObjectIntPair.html
@@ -136,17 +136,17 @@
-
-
-
-
-
+
+
+
+
+
voidFanOutOneBlockAsyncDFSOutput.recoverAndClose(CancelableProgressable reporter) +AsyncFSOutput.recoverAndClose(CancelableProgressable reporter)
The close method when error occurred.
voidAsyncFSOutput.recoverAndClose(CancelableProgressable reporter) +FanOutOneBlockAsyncDFSOutput.recoverAndClose(CancelableProgressable reporter)
The close method when error occurred.
voidFSMapRUtils.recoverFileLease(org.apache.hadoop.fs.FileSystem fs, +FSHDFSUtils.recoverFileLease(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path p, org.apache.hadoop.conf.Configuration conf, - CancelableProgressable reporter) 
voidFSHDFSUtils.recoverFileLease(org.apache.hadoop.fs.FileSystem fs, +FSMapRUtils.recoverFileLease(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path p, org.apache.hadoop.conf.Configuration conf, - CancelableProgressable reporter) -
Recover the lease from HDFS, retrying multiple times.
-
abstract void
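The recoverFileLease entries above document a method that recovers the HDFS lease, retrying multiple times. A generic retry-with-pause sketch of that shape in plain Java (RetryDemo, Attempt and the counts are hypothetical; the real FSHDFSUtils additionally polls isFileClosed and bounds the total wait):

```java
public class RetryDemo {
    interface Attempt {
        boolean tryOnce();
    }

    // Retry an idempotent operation until it succeeds or attempts run out,
    // pausing between tries -- the shape recoverFileLease's contract describes.
    static boolean retry(Attempt op, int maxAttempts, long pauseMillis)
            throws InterruptedException {
        for (int i = 0; i < maxAttempts; i++) {
            if (op.tryOnce()) {
                return true;
            }
            Thread.sleep(pauseMillis);
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        int[] calls = {0};
        // Simulated operation that succeeds on the third attempt.
        boolean ok = retry(() -> ++calls[0] >= 3, 5, 10L);
        System.out.println(ok + " after " + calls[0] + " attempts"); // prints true after 3 attempts
    }
}
```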
private ChecksumTypeHFileContextBuilder.checksumType +HFileContext.checksumType
the checksum type
private ChecksumTypeHFileContext.checksumType +HFileContextBuilder.checksumType
the checksum type
protected ObjectIntPair<ByteBuffer>RowIndexSeekerV1.tmpPair 
private ObjectIntPair<ByteBuffer> BufferedDataBlockEncoder.SeekerState.tmpPair 
protected ObjectIntPair<ByteBuffer> BufferedDataBlockEncoder.BufferedEncodedSeeker.tmpPair 
protected ObjectIntPair<ByteBuffer>RowIndexSeekerV1.tmpPair 
@@ -200,26 +200,26 @@ - - + - - + + ObjectIntPair<ByteBuffer> pair) +
Returns bytes from given offset till length specified, as a single ByteBuffer.
+ - + ObjectIntPair<ByteBuffer> pair) 
abstract voidByteBuff.asSubByteBuffer(int offset, +voidMultiByteBuff.asSubByteBuffer(int offset, int length, ObjectIntPair<ByteBuffer> pair)
Returns bytes from given offset till length specified, as a single ByteBuffer.
voidSingleByteBuff.asSubByteBuffer(int offset, +abstract voidByteBuff.asSubByteBuffer(int offset, int length, - ObjectIntPair<ByteBuffer> pair) 
voidMultiByteBuff.asSubByteBuffer(int offset, +SingleByteBuff.asSubByteBuffer(int offset, int length, - ObjectIntPair<ByteBuffer> pair) -
Returns bytes from given offset till length specified, as a single ByteBuffer.
-
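The asSubByteBuffer(offset, length, pair) rows above document a method that "Returns bytes from given offset till length specified, as a single ByteBuffer." A rough java.nio sketch of the single-buffer case (SubBufferDemo and its helper are illustrative names, not HBase API; the real SingleByteBuff/MultiByteBuff also handle multi-segment buffers and reuse the ObjectIntPair to avoid allocation):

```java
import java.nio.ByteBuffer;

public class SubBufferDemo {
    // Expose [offset, offset + length) of an existing buffer as one
    // ByteBuffer view, without copying the underlying bytes.
    static ByteBuffer asSubByteBuffer(ByteBuffer buf, int offset, int length) {
        ByteBuffer dup = buf.duplicate(); // independent position/limit, shared content
        dup.position(offset);
        dup.limit(offset + length);
        return dup.slice();               // zero-based view of the requested range
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.wrap("hbase-bytes".getBytes());
        ByteBuffer sub = asSubByteBuffer(buf, 6, 5);
        byte[] out = new byte[sub.remaining()];
        sub.get(out);
        System.out.println(new String(out)); // prints bytes
    }
}
```

Because duplicate() and slice() share the backing storage, this is a view, not a copy; writes through either buffer are visible in the other.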
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7d957e04/devapidocs/org/apache/hadoop/hbase/util/class-use/Order.html
----------------------------------------------------------------------
diff --git a/devapidocs/org/apache/hadoop/hbase/util/class-use/Order.html b/devapidocs/org/apache/hadoop/hbase/util/class-use/Order.html
index 69c288f..1b4d7c5 100644
--- a/devapidocs/org/apache/hadoop/hbase/util/class-use/Order.html
+++ b/devapidocs/org/apache/hadoop/hbase/util/class-use/Order.html
@@ -133,74 +133,74 @@
 Order
-Union4.getOrder()
+OrderedBytesBase.getOrder()
 Order
-Struct.getOrder()
+FixedLengthWrapper.getOrder()
 Order
-RawByte.getOrder()
+Union4.getOrder()
 Order
-Union3.getOrder()
+RawLong.getOrder()
 Order
-OrderedBytesBase.getOrder()
+Union2.getOrder()
 Order
-RawInteger.getOrder()
+RawByte.getOrder()
 Order
-TerminatedWrapper.getOrder()
+RawString.getOrder()
 Order
-FixedLengthWrapper.getOrder()
+Struct.getOrder()
 Order
-RawDouble.getOrder()
+RawShort.getOrder()
 Order
-RawString.getOrder()
+Union3.getOrder()
 Order
+PBType.getOrder()
+
+
+Order
 DataType.getOrder()
Retrieve the sort Order imposed by this data type, or null when natural ordering is not preserved.
-
-Order
-RawShort.getOrder()
-
 Order
-RawLong.getOrder()
+RawFloat.getOrder()
 Order
-RawBytes.getOrder()
+TerminatedWrapper.getOrder()
 Order
-PBType.getOrder()
+RawBytes.getOrder()
 Order
-RawFloat.getOrder()
+RawInteger.getOrder()
 Order
-Union2.getOrder()
+RawDouble.getOrder()

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/7d957e04/devapidocs/org/apache/hadoop/hbase/util/class-use/Pair.html
----------------------------------------------------------------------
diff --git a/devapidocs/org/apache/hadoop/hbase/util/class-use/Pair.html b/devapidocs/org/apache/hadoop/hbase/util/class-use/Pair.html
index 7b4d9fc..5b544b1 100644
--- a/devapidocs/org/apache/hadoop/hbase/util/class-use/Pair.html
+++ b/devapidocs/org/apache/hadoop/hbase/util/class-use/Pair.html
@@ -487,14 +487,14 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods.
 Pair<byte[][],byte[][]>
-RegionLocator.getStartEndKeys()
+HRegionLocator.getStartEndKeys()
Gets the starting and ending row keys for every region in the currently open table.
Pair<byte[][],byte[][]> -HRegionLocator.getStartEndKeys() +RegionLocator.getStartEndKeys()
Gets the starting and ending row keys for every region in the currently open table.
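getStartEndKeys() is documented above to return a Pair<byte[][],byte[][]>: parallel arrays of per-region start keys and end keys. A self-contained sketch of how a caller might locate a region from that shape (findRegion and the sample keys are hypothetical; comparison is unsigned lexicographic, as in org.apache.hadoop.hbase.util.Bytes.compareTo, and requires Java 9+ for Arrays.compareUnsigned):

```java
import java.util.Arrays;

public class StartEndKeysDemo {
    // Find the index of the region whose [start, end) range contains row.
    // An empty start key means "from the beginning of the table"; an empty
    // end key means "to the end of the table", matching HBase's convention.
    static int findRegion(byte[][] startKeys, byte[][] endKeys, byte[] row) {
        for (int i = 0; i < startKeys.length; i++) {
            boolean afterStart =
                startKeys[i].length == 0 || Arrays.compareUnsigned(row, startKeys[i]) >= 0;
            boolean beforeEnd =
                endKeys[i].length == 0 || Arrays.compareUnsigned(row, endKeys[i]) < 0;
            if (afterStart && beforeEnd) {
                return i;
            }
        }
        return -1; // not covered by any region
    }

    public static void main(String[] args) {
        // Three regions: (-inf, "g"), ["g", "p"), ["p", +inf).
        byte[][] starts = { {}, "g".getBytes(), "p".getBytes() };
        byte[][] ends = { "g".getBytes(), "p".getBytes(), {} };
        System.out.println(findRegion(starts, ends, "m".getBytes())); // prints 1
    }
}
```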
@@ -518,15 +518,15 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods. CompletableFuture<Pair<Integer,Integer>> +AsyncHBaseAdmin.getAlterStatus(TableName tableName)  + + +CompletableFuture<Pair<Integer,Integer>> AsyncAdmin.getAlterStatus(TableName tableName)
Get the status of alter command - indicates how many regions have received the updated schema Asynchronous operation.
- -CompletableFuture<Pair<Integer,Integer>> -AsyncHBaseAdmin.getAlterStatus(TableName tableName)  - (package private) CompletableFuture<Pair<HRegionInfo,ServerName>> AsyncHBaseAdmin.getRegion(byte[] regionName)  @@ -903,15 +903,6 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods. Pair<org.apache.hadoop.hbase.shaded.com.google.protobuf.Message,CellScanner> -SimpleRpcServer.call(org.apache.hadoop.hbase.shaded.com.google.protobuf.BlockingService service, - org.apache.hadoop.hbase.shaded.com.google.protobuf.Descriptors.MethodDescriptor md, - org.apache.hadoop.hbase.shaded.com.google.protobuf.Message param, - CellScanner cellScanner, - long receiveTime, - MonitoredRPCHandler status)  - - -Pair<org.apache.hadoop.hbase.shaded.com.google.protobuf.Message,CellScanner> RpcServerInterface.call(org.apache.hadoop.hbase.shaded.com.google.protobuf.BlockingService service, org.apache.hadoop.hbase.shaded.com.google.protobuf.Descriptors.MethodDescriptor md, org.apache.hadoop.hbase.shaded.com.google.protobuf.Message param, @@ -923,18 +914,16 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods. 
-
+
Pair<org.apache.hadoop.hbase.shaded.com.google.protobuf.Message,CellScanner>
-SimpleRpcServer.call(org.apache.hadoop.hbase.shaded.com.google.protobuf.BlockingService service,
+SimpleRpcServer.call(org.apache.hadoop.hbase.shaded.com.google.protobuf.BlockingService service,
 org.apache.hadoop.hbase.shaded.com.google.protobuf.Descriptors.MethodDescriptor md,
 org.apache.hadoop.hbase.shaded.com.google.protobuf.Message param,
 CellScanner cellScanner,
 long receiveTime,
- MonitoredRPCHandler status,
- long startTime,
- int timeout)
+ MonitoredRPCHandler status)
-
+
Pair<org.apache.hadoop.hbase.shaded.com.google.protobuf.Message,CellScanner>
RpcServerInterface.call(org.apache.hadoop.hbase.shaded.com.google.protobuf.BlockingService service,
 org.apache.hadoop.hbase.shaded.com.google.protobuf.Descriptors.MethodDescriptor md,
@@ -949,6 +938,17 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods.
+
+Pair<org.apache.hadoop.hbase.shaded.com.google.protobuf.Message,CellScanner>
+SimpleRpcServer.call(org.apache.hadoop.hbase.shaded.com.google.protobuf.BlockingService service,
+ org.apache.hadoop.hbase.shaded.com.google.protobuf.Descriptors.MethodDescriptor md,
+ org.apache.hadoop.hbase.shaded.com.google.protobuf.Message param,
+ CellScanner cellScanner,
+ long receiveTime,
+ MonitoredRPCHandler status,
+ long startTime,
+ int timeout)
+
Pair<org.apache.hadoop.hbase.shaded.com.google.protobuf.Message,CellScanner>
RpcServerInterface.call(RpcCall call,
@@ -1273,10 +1273,8 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods.
-protected abstract void
-TakeSnapshotHandler.snapshotRegions(List<Pair<HRegionInfo,ServerName>> regions)
-
Snapshot the specified regions
- +void +DisabledTableSnapshotHandler.snapshotRegions(List<Pair<HRegionInfo,ServerName>> regionsAndLocations)  protected void @@ -1285,8 +1283,10 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods. -void -DisabledTableSnapshotHandler.snapshotRegions(List<Pair<HRegionInfo,ServerName>> regionsAndLocations)  +protected abstract void +TakeSnapshotHandler.snapshotRegions(List<Pair<HRegionInfo,ServerName>> regions) +
Snapshot the specified regions
+ @@ -1508,10 +1508,8 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods. Pair<String,SortedSet<String>> -ReplicationQueues.claimQueue(String regionserver, - String queueId) -
Take ownership for the queue identified by queueId and belongs to a dead region server.
- +TableBasedReplicationQueuesImpl.claimQueue(String regionserver, + String queueId)  Pair<String,SortedSet<String>> @@ -1520,8 +1518,10 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods. Pair<String,SortedSet<String>> -TableBasedReplicationQueuesImpl.claimQueue(String regionserver, - String queueId)  +ReplicationQueues.claimQueue(String regionserver, + String queueId) +
Take ownership for the queue identified by queueId and belongs to a dead region server.
+ Pair<ReplicationPeerConfig,org.apache.hadoop.conf.Configuration> @@ -1552,10 +1552,8 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods. void -ReplicationQueues.addHFileRefs(String peerId, - List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) -
Add new hfile references to the queue.
- +TableBasedReplicationQueuesImpl.addHFileRefs(String peerId, + List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs)  void @@ -1564,8 +1562,10 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods. void -TableBasedReplicationQueuesImpl.addHFileRefs(String peerId, - List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs)  +ReplicationQueues.addHFileRefs(String peerId, + List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs) +
Add new hfile references to the queue.
+ @@ -1617,17 +1617,11 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods. void -ReplicationSourceManager.addHFileRefs(TableName tableName, - byte[] family, - List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs)  - - -void ReplicationSource.addHFileRefs(TableName tableName, byte[] family, List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs)  - + void ReplicationSourceInterface.addHFileRefs(TableName tableName, byte[] family, @@ -1635,6 +1629,12 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods.
Add hfile names to the queue to be replicated.
+ +void +ReplicationSourceManager.addHFileRefs(TableName tableName, + byte[] family, + List<Pair<org.apache.hadoop.fs.Path,org.apache.hadoop.fs.Path>> pairs)  + (package private) void Replication.addHFileRefsToQueue(TableName tableName,