org.apache.hadoop.hbase.mapreduce.TableOutputFormat — configuration constants:

public static final String OUTPUT_CONF_PREFIX = "hbase.mapred.output."
public static final String OUTPUT_TABLE = "hbase.mapred.outputtable"
public static final String QUORUM_ADDRESS = "hbase.mapred.output.quorum"
public static final String QUORUM_PORT = "hbase.mapred.output.quorum.port"
public static final String REGION_SERVER_CLASS = "hbase.mapred.output.rs.class"

See TableOutputFormat.setConf(Configuration).
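The constant values above follow a prefix convention: the quorum, port, and region-server-class keys all live under OUTPUT_CONF_PREFIX, while OUTPUT_TABLE does not. A small self-contained check (values copied from the listing above; the class name here is illustrative):

```java
public class TableOutputFormatKeys {
    // Values copied verbatim from the TableOutputFormat constants above.
    static final String OUTPUT_CONF_PREFIX = "hbase.mapred.output.";
    static final String OUTPUT_TABLE = "hbase.mapred.outputtable";
    static final String QUORUM_ADDRESS = "hbase.mapred.output.quorum";
    static final String QUORUM_PORT = "hbase.mapred.output.quorum.port";
    static final String REGION_SERVER_CLASS = "hbase.mapred.output.rs.class";

    public static void main(String[] args) {
        // The quorum/port/rs-class keys are all under OUTPUT_CONF_PREFIX...
        System.out.println(QUORUM_ADDRESS.startsWith(OUTPUT_CONF_PREFIX)); // true
        System.out.println(QUORUM_PORT.equals(OUTPUT_CONF_PREFIX + "quorum.port")); // true
        // ...but OUTPUT_TABLE is a sibling key, not under the prefix.
        System.out.println(OUTPUT_TABLE.startsWith(OUTPUT_CONF_PREFIX)); // false
    }
}
```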
subset
    public static org.apache.hadoop.conf.Configuration subset(org.apache.hadoop.conf.Configuration srcConf, String prefix)
    (added)

isShowConfInServlet
    public static boolean isShowConfInServlet()
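A minimal sketch of what a prefix-subset operation does, using a plain Map in place of Hadoop's Configuration (the Map-based helper below is an illustration of the idea, not the HBase implementation):

```java
import java.util.HashMap;
import java.util.Map;

public class SubsetSketch {
    // Return the entries of src whose keys start with prefix,
    // with the prefix stripped from each resulting key.
    static Map<String, String> subset(Map<String, String> src, String prefix) {
        Map<String, String> out = new HashMap<>();
        for (Map.Entry<String, String> e : src.entrySet()) {
            String key = e.getKey();
            if (key.startsWith(prefix) && key.length() > prefix.length()) {
                out.put(key.substring(prefix.length()), e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> src = new HashMap<>();
        src.put("hbase.mapred.output.quorum", "zk1:2181");
        src.put("hbase.rootdir", "/hbase");
        // Only the prefixed key survives, with the prefix removed.
        System.out.println(subset(src, "hbase.mapred.output.")); // {quorum=zk1:2181}
    }
}
```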
getInt
    public static int getInt(org.apache.hadoop.conf.Configuration conf, String name, String deprecatedName, int defaultValue)

getPassword
    public static String getPassword(org.apache.hadoop.conf.Configuration conf, String alias, String defPass) throws IOException

main
    public static void main(String[] args) throws Exception
    For debugging. Dumps configurations to system output in XML format. Master and RS configurations can also be dumped.

devapidocs/org/apache/hadoop/hbase/ZNodeClearer.html:

public class ZNodeClearer extends Object
    Contains a set of methods for collaboration between the start/stop scripts and the servers. It allows the znode to be deleted immediately when the master or the region server crashes.
Method summary (+ marks entries added in this change):

+ static String parseMasterServerName(String rsZnodePath)
      See HBASE-14861.
  static String readMyEphemeralNodeOnDisk()
      Reads the content of the znode file; expects a single line.
+ private static boolean tablesOnMaster(org.apache.hadoop.conf.Configuration conf)
  static void writeMyEphemeralNodeOnDisk(String fileContent)
LOG
    private static final org.apache.commons.logging.Log LOG

ZNodeClearer
    private ZNodeClearer()

writeMyEphemeralNodeOnDisk
    public static void writeMyEphemeralNodeOnDisk(String fileContent)
    Logs errors without failing on exception.

readMyEphemeralNodeOnDisk
    public static String readMyEphemeralNodeOnDisk() throws IOException
    Reads the content of the znode file; expects a single line.
    Throws:
        IOException

getMyEphemeralNodeFileName
    public static String getMyEphemeralNodeFileName()
    Gets the name of the file used to store the znode contents.

deleteMyEphemeralNodeOnDisk
    public static void deleteMyEphemeralNodeOnDisk()
    Deletes the znode file.
parseMasterServerName
    public static String parseMasterServerName(String rsZnodePath)
    See HBASE-14861. Extracts the master ServerName from rsZnodePath.
    Example: "/hbase/rs/server.example.com,16020,1448266496481"
    Parameters:
        rsZnodePath - from HBASE_ZNODE_FILE
    Returns:
        the String representation of the ServerName, or null on failure

tablesOnMaster
    private static boolean tablesOnMaster(org.apache.hadoop.conf.Configuration conf)
    Returns:
        true if the cluster is configured with master-RS collocation
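A self-contained sketch of the extraction described above, based on the znode-path example in the javadoc (the method body is an illustration of the path format, not the HBase implementation):

```java
public class ParseMasterServerNameSketch {
    // Extract the ServerName portion ("host,port,startcode") from an
    // rs znode path such as "/hbase/rs/server.example.com,16020,1448266496481".
    static String parseMasterServerName(String rsZnodePath) {
        if (rsZnodePath == null) {
            return null;
        }
        int lastSlash = rsZnodePath.lastIndexOf('/');
        if (lastSlash < 0 || lastSlash == rsZnodePath.length() - 1) {
            return null; // not a path, or nothing after the last slash
        }
        return rsZnodePath.substring(lastSlash + 1);
    }

    public static void main(String[] args) {
        System.out.println(parseMasterServerName("/hbase/rs/server.example.com,16020,1448266496481"));
        // server.example.com,16020,1448266496481
    }
}
```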
clear
    public static boolean clear(org.apache.hadoop.conf.Configuration conf)
    Deletes the master znode if its content (a ServerName string) is the same as the one in the znode file (env: HBASE_ZNODE_FILE). In case of master-RS collocation, the ServerName string is extracted from the rsZnode path (HBASE-14861).
    Returns:
        true on successful deletion, false otherwise

devapidocs/org/apache/hadoop/hbase/classification/class-use/InterfaceStability.Unstable.html:

Package rows reordered; descriptions for reference:
  org.apache.hadoop.hbase.http - Copied from hadoop source code. See https://issues.apache.org/jira/browse/HADOOP-10232 to know why.
  org.apache.hadoop.hbase.http.lib - This package provides user-selectable (via configuration) classes that add functionality to the web UI.

devapidocs/org/apache/hadoop/hbase/classification/package-tree.html:

Annotation Type Hierarchy (reordered; new order):
  org.apache.hadoop.hbase.classification.InterfaceStability.Stable (implements java.lang.annotation.Annotation)
  org.apache.hadoop.hbase.classification.InterfaceStability.Evolving (implements java.lang.annotation.Annotation)
  org.apache.hadoop.hbase.classification.InterfaceAudience.Private (implements java.lang.annotation.Annotation)
  org.apache.hadoop.hbase.classification.InterfaceAudience.LimitedPrivate (implements java.lang.annotation.Annotation)
  org.apache.hadoop.hbase.classification.InterfaceStability.Unstable (implements java.lang.annotation.Annotation)
  org.apache.hadoop.hbase.classification.InterfaceAudience.Public (implements java.lang.annotation.Annotation)
devapidocs/org/apache/hadoop/hbase/client/package-tree.html:

Enum hierarchy reordered under java.lang.Enum<E> (implements java.lang.Comparable<T>, java.io.Serializable); new order:
  org.apache.hadoop.hbase.client.Admin.CompactType
  org.apache.hadoop.hbase.client.Consistency
  org.apache.hadoop.hbase.client.IsolationLevel
  org.apache.hadoop.hbase.client.AsyncProcess.Retry
  org.apache.hadoop.hbase.client.Durability
  org.apache.hadoop.hbase.client.TableState.State
devapidocs/org/apache/hadoop/hbase/filter/package-tree.html:

Enum hierarchy reordered under java.lang.Enum<E> (implements java.lang.Comparable<T>, java.io.Serializable); new order:
  org.apache.hadoop.hbase.filter.FuzzyRowFilter.SatisfiesCode
  org.apache.hadoop.hbase.filter.FuzzyRowFilter.Order
  org.apache.hadoop.hbase.filter.CompareFilter.CompareOp
  org.apache.hadoop.hbase.filter.FilterWrapper.FilterRowRetCode
  org.apache.hadoop.hbase.filter.BitComparator.BitwiseOp
  org.apache.hadoop.hbase.filter.RegexStringComparator.EngineType
  org.apache.hadoop.hbase.filter.Filter.ReturnCode
  org.apache.hadoop.hbase.filter.FilterList.Operator

devapidocs/org/apache/hadoop/hbase/io/BoundedByteBufferPool.html:
Method summary — visibility changed from private to package-private:

  (package private) static long subtractOneBufferFromState(long state, int capacity)
  (package private) static int toCountOfBuffers(long state)
  (package private) static long toState(int countOfBuffers, int totalCapacity)
  (package private) static int toTotalCapacity(long state)

Member details:

allocationsRef
    private final AtomicLong allocationsRef

BoundedByteBufferPool
    public BoundedByteBufferPool(int maxByteBufferSizeToCache, int initialByteBufferSize, int maxToCache)
    Parameters:
        maxByteBufferSizeToCache
        initialByteBufferSize
        maxToCache

toCountOfBuffers
    static int toCountOfBuffers(long state)

toTotalCapacity
    static int toTotalCapacity(long state)

toState
    static long toState(int countOfBuffers, int totalCapacity)

subtractOneBufferFromState
    static long subtractOneBufferFromState(long state, int capacity)

getBuffer
    public ByteBuffer getBuffer()

putBuffer
    public void putBuffer(ByteBuffer bb)
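The state methods above suggest a buffer count and a total capacity packed into one long. A self-contained sketch of such packing — the layout chosen here (count in the high 32 bits, capacity in the low 32) is an assumption for illustration, not necessarily HBase's encoding:

```java
public class PackedStateSketch {
    // Pack a buffer count and a total capacity into a single long:
    // count in the upper 32 bits, capacity in the lower 32 bits.
    static long toState(int countOfBuffers, int totalCapacity) {
        return ((long) countOfBuffers << 32) | (totalCapacity & 0xFFFFFFFFL);
    }

    static int toCountOfBuffers(long state) {
        return (int) (state >>> 32);
    }

    static int toTotalCapacity(long state) {
        return (int) state; // truncation keeps the low 32 bits
    }

    // Account for one buffer of the given capacity leaving the pool.
    static long subtractOneBufferFromState(long state, int capacity) {
        return toState(toCountOfBuffers(state) - 1, toTotalCapacity(state) - capacity);
    }

    public static void main(String[] args) {
        long s = toState(4, 64 * 1024);
        s = subtractOneBufferFromState(s, 16 * 1024);
        System.out.println(toCountOfBuffers(s) + " buffers, " + toTotalCapacity(s) + " bytes");
        // 3 buffers, 49152 bytes
    }
}
```

Packing both counters into one long lets them be updated atomically with a single compare-and-set, which is why a pool would bother with this encoding at all.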
devapidocs/org/apache/hadoop/hbase/io/hfile/package-tree.html:

Enum hierarchy reordered; new order:
  org.apache.hadoop.hbase.io.hfile.Cacheable.MemoryType
  org.apache.hadoop.hbase.io.hfile.CacheConfig.ExternalBlockCaches
  org.apache.hadoop.hbase.io.hfile.BlockType
  org.apache.hadoop.hbase.io.hfile.HFileBlock.Writer.State
  org.apache.hadoop.hbase.io.hfile.BlockType.BlockCategory
  org.apache.hadoop.hbase.io.hfile.BlockPriority

devapidocs/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.html:
initCredentialsForCluster
    public static void initCredentialsForCluster(org.apache.hadoop.mapreduce.Job job, String quorumAddress) throws IOException
    Obtains an authentication token for the specified cluster, on behalf of the current user.

convertScanToString
    static String convertScanToString(Scan scan) throws IOException
    Writes the given scan into a Base64-encoded string.
    Parameters:
        scan - The scan to write out.

convertStringToScan
    static Scan convertStringToScan(String base64) throws IOException
    Converts the given Base64 string back into a Scan instance.
    Parameters:
        base64 - The scan details.
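The Scan serialization itself needs HBase's protobuf classes, so this sketch shows only the Base64 leg of the round trip over raw bytes (class and method names are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64RoundTripSketch {
    // Encode serialized bytes (in HBase, a protobuf-serialized Scan)
    // into a string that is safe to store in a job configuration.
    static String toBase64(byte[] serialized) {
        return Base64.getEncoder().encodeToString(serialized);
    }

    // Decode the configuration string back into the original bytes.
    static byte[] fromBase64(String base64) {
        return Base64.getDecoder().decode(base64);
    }

    public static void main(String[] args) {
        byte[] original = "scan-bytes".getBytes(StandardCharsets.UTF_8);
        byte[] restored = fromBase64(toBase64(original));
        System.out.println(new String(restored, StandardCharsets.UTF_8)); // scan-bytes
    }
}
```

Base64 matters here because job configuration values are strings; arbitrary serialized bytes must survive the trip through the configuration unchanged.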
initTableReducerJob — overloads:
    public static void initTableReducerJob(String table, Class<? extends TableReducer> reducer, org.apache.hadoop.mapreduce.Job job) throws IOException
    public static void initTableReducerJob(String table, Class<? extends TableReducer> reducer, org.apache.hadoop.mapreduce.Job job, Class partitioner) throws IOException
    public static void initTableReducerJob(String table, Class<? extends TableReducer> reducer, org.apache.hadoop.mapreduce.Job job, Class partitioner, …) throws IOException
    public static void initTableReducerJob(String table, Class<? extends TableReducer> reducer, org.apache.hadoop.mapreduce.Job job, Class partitioner, …) throws IOException

limitNumReduceTasks
    public static void limitNumReduceTasks(String table, org.apache.hadoop.mapreduce.Job job) throws IOException
    Ensures that the given number of reduce tasks for the given job …

setNumReduceTasks
    public static void setNumReduceTasks(String table, org.apache.hadoop.mapreduce.Job job) throws IOException
    Sets the number of reduce tasks for the given job configuration to the …

setScannerCaching
    public static void setScannerCaching(org.apache.hadoop.mapreduce.Job job, int batchSize)
    Sets the number of rows to return and cache with each scanner iteration. Higher caching values will enable faster mapreduce jobs at the expense of …

addHBaseDependencyJars
    public static void addHBaseDependencyJars(org.apache.hadoop.conf.Configuration conf) throws IOException
    Adds HBase and its dependencies (only) to the job configuration.

buildDependencyClasspath
    public static String buildDependencyClasspath(org.apache.hadoop.conf.Configuration conf)
    Returns a classpath string built from the content of the "tmpjars" value in conf. Also exposed to shell scripts via `bin/hbase mapredcp`.

addDependencyJars
    public static void addDependencyJars(org.apache.hadoop.mapreduce.Job job) throws IOException
    Adds the HBase dependency jars as well as jars for any of the configured job classes to the job configuration, so that JobClient will ship them …

addDependencyJars
    public static void addDependencyJars(org.apache.hadoop.conf.Configuration conf, Class<?>... classes) throws IOException
    Adds the jars containing the given classes to the job's configuration …

findOrCreateJar
    private static org.apache.hadoop.fs.Path findOrCreateJar(Class<?> my_class, org.apache.hadoop.fs.FileSystem fs, Map<String,String> packagedClasses) throws IOException

updateMap
    private static void updateMap(String jar, Map<String,String> packagedClasses) throws IOException
    Adds entries to packagedClasses corresponding to class files …

findContainingJar
    private static String findContainingJar(Class<?> my_class, Map<String,String> packagedClasses) throws IOException
    Finds a jar that contains a class of the same name, if any. It will return …

getJar
    private static String getJar(Class<?> my_class)
    Invokes 'getJar' on a custom JarFinder implementation. Useful for some job configuration contexts (HBASE-8140) and also for testing on MRv2. Checks if we have HADOOP-9426.

devapidocs/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.TableRecordWriter.html:
protected class TableOutputFormat.TableRecordWriter extends org.apache.hadoop.mapreduce.RecordWriter<KEY,Mutation>
    Writes the reducer output to an HBase table.

connection
    private Connection connection

mutator
    private BufferedMutator mutator

TableOutputFormat.TableRecordWriter
    public TableOutputFormat.TableRecordWriter() throws IOException
    Throws:
        IOException

close
    public void close(org.apache.hadoop.mapreduce.TaskAttemptContext context) throws IOException
    Closes the writer; in this case, flushes table commits.
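The TableRecordWriter above follows a buffer-then-flush-on-close pattern: writes go to a BufferedMutator, and close() flushes the outstanding table commits. A minimal self-contained sketch of that pattern (plain-Java stand-ins for RecordWriter and BufferedMutator; all names here are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class BufferedWriterSketch {
    // Stand-in for BufferedMutator: collects mutations, flushes in batches.
    static class FakeMutator {
        final List<String> buffered = new ArrayList<>();
        final List<String> flushed = new ArrayList<>();

        void mutate(String mutation) {
            buffered.add(mutation);
        }

        void flush() {
            flushed.addAll(buffered);
            buffered.clear();
        }
    }

    // Stand-in for TableRecordWriter: writes go to the mutator,
    // and close() flushes outstanding mutations, as the javadoc describes.
    static class Writer {
        final FakeMutator mutator = new FakeMutator();

        void write(String mutation) {
            mutator.mutate(mutation);
        }

        void close() {
            mutator.flush();
        }
    }

    public static void main(String[] args) {
        Writer w = new Writer();
        w.write("put:row1");
        w.write("put:row2");
        w.close(); // nothing reaches the table until the flush on close
        System.out.println(w.mutator.flushed.size()); // 2
    }
}
```

Forgetting to close such a writer is the classic failure mode of this pattern: buffered mutations are simply dropped, which is why the framework calls close() for each task attempt.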