Return-Path:
From: git-site-role@apache.org
To: commits@hbase.apache.org
Reply-To: dev@hbase.apache.org
Mailing-List: contact commits-help@hbase.apache.org; run by ezmlm
Date: Sun, 03 Sep 2017 15:14:11 -0000
Subject: [49/51] [partial] hbase-site git commit: Published site at .
X-Mailer: ASF-Git Admin Mailer
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
archived-at: Sun, 03 Sep 2017 15:13:38 -0000

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/3b220124/apidocs/index-all.html
----------------------------------------------------------------------
diff --git a/apidocs/index-all.html b/apidocs/index-all.html
index 73c20a5..d866bd8 100644
--- a/apidocs/index-all.html
+++ b/apidocs/index-all.html
@@ -453,8 +453,6 @@
Define for 'return-all-versions'.
-
ALWAYS_COPY_FILES - Static variable in class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
-
 
AND - Static variable in class org.apache.hadoop.hbase.filter.ParseConstants
AND Byte Array
@@ -865,8 +863,6 @@
 
build() - Method in class org.apache.hadoop.hbase.NamespaceDescriptor.Builder
 
-
buildClientServiceCallable(Connection, TableName, byte[], Collection<LoadIncrementalHFiles.LoadQueueItem>, boolean) - Method in class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
-
 
buildDependencyClasspath(Configuration) - Static method in class org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil
Returns a classpath string built from the content of the "tmpjars" value in conf.
@@ -899,11 +895,6 @@
Staging dir used by bulk load
-
bulkLoadPhase(Table, Connection, ExecutorService, Deque<LoadIncrementalHFiles.LoadQueueItem>, Multimap<ByteBuffer, LoadIncrementalHFiles.LoadQueueItem>, boolean, Map<LoadIncrementalHFiles.LoadQueueItem, ByteBuffer>) - Method in class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
-
-
This takes the LQI's grouped by likely regions and attempts to bulk load them.
-
BypassCoprocessorException - Exception in org.apache.hadoop.hbase.coprocessor
Thrown if a coprocessor rules that we should bypass an operation
@@ -2302,8 +2293,6 @@
CREATE_TABLE_CONF_KEY - Static variable in class org.apache.hadoop.hbase.mapreduce.ImportTsv
 
-
CREATE_TABLE_CONF_KEY - Static variable in class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
-
 
createAsyncConnection() - Static method in class org.apache.hadoop.hbase.client.ConnectionFactory
Call ConnectionFactory.createAsyncConnection(Configuration) using default HBaseConfiguration.
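The `createAsyncConnection()` entry above is the no-argument convenience form of `ConnectionFactory.createAsyncConnection(Configuration)`. A minimal sketch of how it might be used; this assumes HBase 2.x client jars on the classpath and a reachable cluster configured through `hbase-site.xml`, so it is not runnable standalone:

```java
import org.apache.hadoop.hbase.client.AsyncConnection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class AsyncConnectionSketch {
    public static void main(String[] args) throws Exception {
        // createAsyncConnection() returns a CompletableFuture<AsyncConnection>;
        // get() blocks until the connection has been established.
        try (AsyncConnection conn = ConnectionFactory.createAsyncConnection().get()) {
            // The connection is thread-safe and intended to be shared.
            System.out.println(conn.getConfiguration().get("hbase.zookeeper.quorum"));
        }
    }
}
```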
@@ -3917,21 +3906,6 @@
Conf key that enables unflushed WAL edits directly being replayed to region servers
-
doBulkLoad(Path, Admin, Table, RegionLocator) - Method in class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
-
-
Perform a bulk load of the given directory into the given pre-existing table.
-
-
doBulkLoad(Map<byte[], List<Path>>, Admin, Table, RegionLocator, boolean, boolean) - Method in class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
-
-
Perform a bulk load of the given directory into the given pre-existing table.
-
-
doBulkLoad(Path, Admin, Table, RegionLocator, boolean, boolean) - Method in class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
-
-
Perform a bulk load of the given directory into the given pre-existing table.
-
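The doBulkLoad overloads indexed above all move the output of an HFileOutputFormat job into a live table. A minimal sketch of the single-directory variant; the table name "mytable" and path "/tmp/hfile-output" are illustrative, and a running cluster plus a pre-created table are assumed:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class BulkLoadSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName name = TableName.valueOf("mytable"); // illustrative table name
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(name);
             Admin admin = conn.getAdmin();
             RegionLocator locator = conn.getRegionLocator(name)) {
            // hfofDir: directory previously written by an HFileOutputFormat job
            Path hfofDir = new Path("/tmp/hfile-output"); // illustrative path
            new LoadIncrementalHFiles(conf).doBulkLoad(hfofDir, admin, table, locator);
        }
    }
}
```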
doLoadColumnFamiliesOnDemand() - Method in class org.apache.hadoop.hbase.client.Query
Get the logical value indicating whether on-demand CF loading should be allowed.
@@ -5751,6 +5725,8 @@
getCompressionType() - Method in interface org.apache.hadoop.hbase.client.ColumnFamilyDescriptor
 
+
getCompressionType() - Method in class org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder
+
 
getCompressionType() - Method in class org.apache.hadoop.hbase.HColumnDescriptor
Deprecated.
@@ -6609,6 +6585,8 @@
 
getNameAsString() - Method in interface org.apache.hadoop.hbase.client.ColumnFamilyDescriptor
 
+
getNameAsString() - Method in class org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder
+
 
getNameAsString() - Method in class org.apache.hadoop.hbase.HColumnDescriptor
Deprecated.
@@ -7998,10 +7976,6 @@
GroupingTableMapper() - Constructor for class org.apache.hadoop.hbase.mapreduce.GroupingTableMapper
 
-
groupOrSplit(Multimap<ByteBuffer, LoadIncrementalHFiles.LoadQueueItem>, LoadIncrementalHFiles.LoadQueueItem, Table, Pair<byte[][], byte[][]>) - Method in class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
-
-
Attempt to assign the given load queue item into its target region group.
-
GZIP - Static variable in class org.apache.hadoop.hbase.util.Base64
Specify that data should be gzip-compressed.
@@ -8766,8 +8740,6 @@
 
IGNORE_MISSING_FILES - Static variable in class org.apache.hadoop.hbase.mapreduce.WALPlayer
 
-
IGNORE_UNMATCHED_CF_CONF_KEY - Static variable in class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
-
 
ImmutableBytesWritable - Class in org.apache.hadoop.hbase.io
A byte sequence that is usable as a key or value.
@@ -8939,8 +8911,6 @@
Returns the start position of the first occurrence of the specified target within array, or -1 if there is no such occurrence.
-
inferBoundaries(TreeMap<byte[], Integer>) - Static method in class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
-
 
init(String) - Method in interface org.apache.hadoop.hbase.io.crypto.KeyProvider
Initialize the key provider
@@ -10354,20 +10324,28 @@
 
loadColumnFamiliesOnDemand - Variable in class org.apache.hadoop.hbase.client.Query
 
-
loadHFileQueue(Table, Connection, Deque<LoadIncrementalHFiles.LoadQueueItem>, Pair<byte[][], byte[][]>) - Method in class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
+
LoadIncrementalHFiles - Class in org.apache.hadoop.hbase.mapreduce
-
Used by the replication sink to load the hfiles from the source cluster.
+
Deprecated.
As of release 2.0.0, this will be removed in HBase 3.0.0. Use LoadIncrementalHFiles instead.
+
-
loadHFileQueue(Table, Connection, Deque<LoadIncrementalHFiles.LoadQueueItem>, Pair<byte[][], byte[][]>, boolean) - Method in class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
+
LoadIncrementalHFiles(Configuration) - Constructor for class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
-
Used by the replication sink to load the hfiles from the source cluster.
-
-
LoadIncrementalHFiles - Class in org.apache.hadoop.hbase.mapreduce
+
Deprecated.
+  +
LoadIncrementalHFiles.LoadQueueItem - Class in org.apache.hadoop.hbase.mapreduce
-
Tool to load the output of HFileOutputFormat into an existing table.
+
Deprecated.
As of release 2.0.0, this will be removed in HBase 3.0.0. Use LoadIncrementalHFiles.LoadQueueItem instead.
+
-
LoadIncrementalHFiles(Configuration) - Constructor for class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
-
 
+
LoadQueueItem(byte[], Path) - Constructor for class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.LoadQueueItem
+
+
Deprecated.
loadValue(byte[], byte[], ByteBuffer) - Method in class org.apache.hadoop.hbase.client.Result
Loads the latest version of the specified column into the provided ByteBuffer.
@@ -10489,8 +10467,6 @@
main(String[]) - Static method in class org.apache.hadoop.hbase.mapreduce.ImportTsv
 
-
main(String[]) - Static method in class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
-
 
main(String[]) - Static method in class org.apache.hadoop.hbase.mapreduce.RowCounter
Main entry point.
@@ -10694,8 +10670,6 @@
 
MAX_BACKOFF_KEY - Static variable in class org.apache.hadoop.hbase.client.backoff.ExponentialClientBackoffPolicy
 
-
MAX_FILES_PER_REGION_PER_FAMILY - Static variable in class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
-
 
MAX_FILESIZE - Static variable in class org.apache.hadoop.hbase.client.TableDescriptorBuilder
Used by HBase Shell interface to access this metadata
@@ -11174,8 +11148,6 @@
NAME - Static variable in class org.apache.hadoop.hbase.HConstants
 
-
NAME - Static variable in class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
-
 
NAME - Static variable in class org.apache.hadoop.hbase.snapshot.ExportSnapshot
 
NAMESPACE_COL_DESC_BYTES - Static variable in class org.apache.hadoop.hbase.client.TableDescriptorBuilder
@@ -12191,24 +12163,6 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods.
PrefixFilter(byte[]) - Constructor for class org.apache.hadoop.hbase.filter.PrefixFilter
 
-
prepareHFileQueue(Path, Table, Deque<LoadIncrementalHFiles.LoadQueueItem>, boolean) - Method in class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
-
-
Prepare a collection of LoadIncrementalHFiles.LoadQueueItem from list of source hfiles contained in the passed directory and validates whether the prepared queue has all the valid table column families in it.
-
-
prepareHFileQueue(Path, Table, Deque<LoadIncrementalHFiles.LoadQueueItem>, boolean, boolean) - Method in class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
-
-
Prepare a collection of LoadIncrementalHFiles.LoadQueueItem from list of source hfiles contained in the passed directory and validates whether the prepared queue has all the valid table column families in it.
-
-
prepareHFileQueue(Map<byte[], List<Path>>, Table, Deque<LoadIncrementalHFiles.LoadQueueItem>, boolean) - Method in class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
-
-
Prepare a collection of LoadIncrementalHFiles.LoadQueueItem from list of source hfiles contained in the passed directory and validates whether the prepared queue has all the valid table column families in it.
-
prettyPrint(String) - Static method in class org.apache.hadoop.hbase.HRegionInfo
Use logging.
@@ -13902,9 +13856,9 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods.
run(String[]) - Method in class org.apache.hadoop.hbase.mapreduce.ImportTsv
 
run(String, Map<byte[], List<Path>>, TableName) - Method in class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
-
 
-
run(String[]) - Method in class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
-
 
+
+
Deprecated.
run(String[]) - Method in class org.apache.hadoop.hbase.mapreduce.RowCounter
 
run(String[]) - Method in class org.apache.hadoop.hbase.mapreduce.WALPlayer
@@ -14346,10 +14300,6 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods.  
setBody(byte[]) - Method in class org.apache.hadoop.hbase.rest.client.Response
 
-
setBulkToken(String) - Method in class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
-
-
Called from the replication sink, where it manages the bulkToken (staging directory) by itself.
-
setCacheBlocks(boolean) - Method in class org.apache.hadoop.hbase.client.Get
Set whether blocks should be cached for this Get.
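The setCacheBlocks entry above controls whether the blocks read to serve a Get are retained in the regionserver's block cache. A small sketch of the common pattern; the row key is illustrative, and HBase client jars are assumed on the classpath:

```java
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.util.Bytes;

public class GetCacheBlocksSketch {
    public static void main(String[] args) {
        // For one-off reads of cold rows, disabling block caching keeps the
        // block cache from being polluted by data that will not be read again.
        Get get = new Get(Bytes.toBytes("row-1")); // illustrative row key
        get.setCacheBlocks(false);
        System.out.println(get.getCacheBlocks()); // false
    }
}
```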
@@ -16021,8 +15971,6 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods.
Split an individual region.
-
splitStoreFile(LoadIncrementalHFiles.LoadQueueItem, Table, byte[], byte[]) - Method in class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
-
 
src - Variable in class org.apache.hadoop.hbase.types.StructIterator
 
stampSet - Variable in class org.apache.hadoop.hbase.filter.DependentColumnFilter
@@ -17253,10 +17201,6 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods.
Truncate the table but does not block and wait for it to be completely enabled.
-
tryAtomicRegionLoad(ClientServiceCallable<byte[]>, TableName, byte[], Collection<LoadIncrementalHFiles.LoadQueueItem>) - Method in class org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
-
-
Attempts to do an atomic load of many hfiles into a region.
-
ts - Variable in class org.apache.hadoop.hbase.client.Mutation
 
ts - Variable in class org.apache.hadoop.hbase.mapreduce.TsvImporterMapper
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/3b220124/apidocs/org/apache/hadoop/hbase/class-use/TableName.html
----------------------------------------------------------------------
diff --git a/apidocs/org/apache/hadoop/hbase/class-use/TableName.html b/apidocs/org/apache/hadoop/hbase/class-use/TableName.html
index 4a40934..93e143e 100644
--- a/apidocs/org/apache/hadoop/hbase/class-use/TableName.html
+++ b/apidocs/org/apache/hadoop/hbase/class-use/TableName.html
@@ -1455,28 +1455,20 @@
-protected org.apache.hadoop.hbase.client.ClientServiceCallable<byte[]>
-LoadIncrementalHFiles.buildClientServiceCallable(Connection conn,
-    TableName tableName,
-    byte[] first,
-    Collection<org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.LoadQueueItem> lqis,
-    boolean copyFile)
static void
TableInputFormat.configureSplitTable(org.apache.hadoop.mapreduce.Job job, TableName tableName)
Sets split table in map-reduce job.
- + protected void TableInputFormatBase.initializeTable(Connection connection, TableName tableName)
Allows subclasses to initialize the table information.
- + static void TableMapReduceUtil.initTableMapperJob(TableName table, Scan scan, @@ -1487,20 +1479,13 @@ Input/OutputFormats, a table indexing MapReduce job, and utility methods.
Use this before submitting a TableMap job.
- -Map<org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.LoadQueueItem,ByteBuffer> + +Map<LoadIncrementalHFiles.LoadQueueItem,ByteBuffer> LoadIncrementalHFiles.run(String dirPath, Map<byte[],List<org.apache.hadoop.fs.Path>> map, - TableName tableName)  - - -protected List<org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.LoadQueueItem> -LoadIncrementalHFiles.tryAtomicRegionLoad(org.apache.hadoop.hbase.client.ClientServiceCallable<byte[]> serviceCallable, - TableName tableName, - byte[] first, - Collection<org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.LoadQueueItem> lqis) -
Attempts to do an atomic load of many hfiles into a region.
- + TableName tableName)
+
Deprecated. 
+
http://git-wip-us.apache.org/repos/asf/hbase-site/blob/3b220124/apidocs/org/apache/hadoop/hbase/class-use/TableNotFoundException.html
----------------------------------------------------------------------
diff --git a/apidocs/org/apache/hadoop/hbase/class-use/TableNotFoundException.html b/apidocs/org/apache/hadoop/hbase/class-use/TableNotFoundException.html
index 3bbff6d..d438d45 100644
--- a/apidocs/org/apache/hadoop/hbase/class-use/TableNotFoundException.html
+++ b/apidocs/org/apache/hadoop/hbase/class-use/TableNotFoundException.html
@@ -88,13 +88,6 @@
Provides HBase Client
- -org.apache.hadoop.hbase.mapreduce - -
Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods.
Uses of TableNotFoundException in org.apache.hadoop.hbase.mapreduce

Methods in org.apache.hadoop.hbase.mapreduce that throw TableNotFoundException

Modifier and Type    Method and Description
void    LoadIncrementalHFiles.doBulkLoad(Map<byte[],List<org.apache.hadoop.fs.Path>> map, Admin admin, Table table, RegionLocator regionLocator, boolean silence, boolean copyFile)
        Perform a bulk load of the given directory into the given pre-existing table.
void    LoadIncrementalHFiles.doBulkLoad(org.apache.hadoop.fs.Path hfofDir, Admin admin, Table table, RegionLocator regionLocator)
        Perform a bulk load of the given directory into the given pre-existing table.
void    LoadIncrementalHFiles.doBulkLoad(org.apache.hadoop.fs.Path hfofDir, Admin admin, Table table, RegionLocator regionLocator, boolean silence, boolean copyFile)
        Perform a bulk load of the given directory into the given pre-existing table.