From: holden@apache.org
To: commits@spark.apache.org
Date: Tue, 17 Oct 2017 20:17:04 -0000
Message-Id: <3ce0812b854948bcbe96443b50f22b7e@git.apache.org>
Mailing-List: contact commits-help@spark.apache.org; run by ezmlm
Subject: [31/51] [partial] spark-website git commit: Add 2.1.2 docs
archived-at: Tue, 17 Oct 2017 20:16:46 -0000

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/index-all.html
----------------------------------------------------------------------
diff --git a/site/docs/2.1.2/api/java/index-all.html b/site/docs/2.1.2/api/java/index-all.html
new file mode 100644
index 0000000..67505d4
--- /dev/null
+++ b/site/docs/2.1.2/api/java/index-all.html
@@ -0,0 +1,46481 @@

Index (Spark 2.1.2 JavaDoc)
$ A B C D E F G H I J K L M N O P Q R S T U V W X Y Z _ 

$

+
+
$colon$bslash(B, Function2<A, B, B>) - Static method in class org.apache.spark.sql.types.StructType
+
 
+
$colon$plus(B, CanBuildFrom<Repr, B, That>) - Static method in class org.apache.spark.sql.types.StructType
+
 
+
$div$colon(B, Function2<B, A, B>) - Static method in class org.apache.spark.sql.types.StructType
+
 
+
$greater(A) - Static method in class org.apache.spark.sql.types.Decimal
+
 
+
$greater(A) - Static method in class org.apache.spark.storage.RDDInfo
+
 
+
$greater$eq(A) - Static method in class org.apache.spark.sql.types.Decimal
+
 
+
$greater$eq(A) - Static method in class org.apache.spark.storage.RDDInfo
+
 
+
$less(A) - Static method in class org.apache.spark.sql.types.Decimal
+
 
+
$less(A) - Static method in class org.apache.spark.storage.RDDInfo
+
 
+
$less$eq(A) - Static method in class org.apache.spark.sql.types.Decimal
+
 
+
$less$eq(A) - Static method in class org.apache.spark.storage.RDDInfo
+
 
+
$minus$greater(T) - Static method in class org.apache.spark.ml.param.DoubleParam
+
 
+
$minus$greater(T) - Static method in class org.apache.spark.ml.param.FloatParam
+
 
+
$plus$colon(B, CanBuildFrom<Repr, B, That>) - Static method in class org.apache.spark.sql.types.StructType
+
 
+
$plus$eq(T) - Static method in class org.apache.spark.Accumulator
+
+
Deprecated.
+
$plus$plus(RDD<T>) - Static method in class org.apache.spark.api.r.RRDD
+
 
+
$plus$plus(RDD<T>) - Static method in class org.apache.spark.graphx.EdgeRDD
+
 
+
$plus$plus(RDD<T>) - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
+
 
+
$plus$plus(RDD<T>) - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
+
 
+
$plus$plus(RDD<T>) - Static method in class org.apache.spark.graphx.VertexRDD
+
 
+
$plus$plus(RDD<T>) - Static method in class org.apache.spark.rdd.HadoopRDD
+
 
+
$plus$plus(RDD<T>) - Static method in class org.apache.spark.rdd.JdbcRDD
+
 
+
$plus$plus(RDD<T>) - Static method in class org.apache.spark.rdd.NewHadoopRDD
+
 
+
$plus$plus(RDD<T>) - Static method in class org.apache.spark.rdd.PartitionPruningRDD
+
 
+
$plus$plus(RDD<T>) - Static method in class org.apache.spark.rdd.UnionRDD
+
 
+
$plus$plus(GenTraversableOnce<B>, CanBuildFrom<Repr, B, That>) - Static method in class org.apache.spark.sql.types.StructType
+
 
+
$plus$plus$colon(TraversableOnce<B>, CanBuildFrom<Repr, B, That>) - Static method in class org.apache.spark.sql.types.StructType
+
 
+
$plus$plus$colon(Traversable<B>, CanBuildFrom<Repr, B, That>) - Static method in class org.apache.spark.sql.types.StructType
+
 
+
$plus$plus$eq(R) - Static method in class org.apache.spark.Accumulator
+
+
Deprecated.
+

A

+
+
abortJob(JobContext) - Method in class org.apache.spark.internal.io.FileCommitProtocol
+
+
Aborts a job after the writes fail.
+
+
abortJob(JobContext) - Method in class org.apache.spark.internal.io.HadoopMapReduceCommitProtocol
+
 
+
abortTask(TaskAttemptContext) - Method in class org.apache.spark.internal.io.FileCommitProtocol
+
+
Aborts a task after the writes have failed.
+
+
abortTask(TaskAttemptContext) - Method in class org.apache.spark.internal.io.HadoopMapReduceCommitProtocol
+
 
+
abs(Column) - Static method in class org.apache.spark.sql.functions
+
+
Computes the absolute value.
+
+
abs() - Method in class org.apache.spark.sql.types.Decimal
+
 
+
absent() - Static method in class org.apache.spark.api.java.Optional
+
 
+
AbsoluteError - Class in org.apache.spark.mllib.tree.loss
+
+
:: DeveloperApi :: + Class for absolute error loss calculation (for regression).
+
+
AbsoluteError() - Constructor for class org.apache.spark.mllib.tree.loss.AbsoluteError
+
 
+
accept(Parsers) - Static method in class org.apache.spark.ml.feature.RFormulaParser
+
 
+
accept(ES, Function1<ES, List<Object>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
+
 
+
accept(String, PartialFunction<Object, U>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
+
 
+
acceptIf(Function1<Object, Object>, Function1<Object, String>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
+
 
+
acceptMatch(String, PartialFunction<Object, U>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
+
 
+
acceptSeq(ES, Function1<ES, Iterable<Object>>) - Static method in class org.apache.spark.ml.feature.RFormulaParser
+
 
+
acceptsType(DataType) - Method in class org.apache.spark.sql.types.ObjectType
+
 
+
accId() - Method in class org.apache.spark.CleanAccum
+
 
+
Accumulable<R,T> - Class in org.apache.spark
+
+
Deprecated. +
use AccumulatorV2. Since 2.0.0.
+
+
+
Accumulable(R, AccumulableParam<R, T>) - Constructor for class org.apache.spark.Accumulable
+
+
Deprecated.
+
accumulable(T, AccumulableParam<T, R>) - Method in class org.apache.spark.api.java.JavaSparkContext
+
+
Deprecated. +
use AccumulatorV2. Since 2.0.0.
+
+
+
accumulable(T, String, AccumulableParam<T, R>) - Method in class org.apache.spark.api.java.JavaSparkContext
+
+
Deprecated. +
use AccumulatorV2. Since 2.0.0.
+
+
+
accumulable(R, AccumulableParam<R, T>) - Method in class org.apache.spark.SparkContext
+
+
Deprecated. +
use AccumulatorV2. Since 2.0.0.
+
+
+
accumulable(R, String, AccumulableParam<R, T>) - Method in class org.apache.spark.SparkContext
+
+
Deprecated. +
use AccumulatorV2. Since 2.0.0.
+
+
+
accumulableCollection(R, Function1<R, Growable<T>>, ClassTag<R>) - Method in class org.apache.spark.SparkContext
+
+
Deprecated. +
use AccumulatorV2. Since 2.0.0.
+
+
+
AccumulableInfo - Class in org.apache.spark.scheduler
+
+
:: DeveloperApi :: + Information about an Accumulable modified during a task or stage.
+
+
AccumulableInfo - Class in org.apache.spark.status.api.v1
+
 
+
accumulableInfoFromJson(JsonAST.JValue) - Static method in class org.apache.spark.util.JsonProtocol
+
 
+
accumulableInfoToJson(AccumulableInfo) - Static method in class org.apache.spark.util.JsonProtocol
+
 
+
AccumulableParam<R,T> - Interface in org.apache.spark
+
+
Deprecated. +
use AccumulatorV2. Since 2.0.0.
+
+
+
accumulables() - Method in class org.apache.spark.scheduler.StageInfo
+
+
Terminal values of accumulables updated during this stage, including all the user-defined + accumulators.
+
+
accumulables() - Method in class org.apache.spark.scheduler.TaskInfo
+
+
Intermediate updates to accumulables during this task.
+
+
accumulables() - Method in class org.apache.spark.ui.jobs.UIData.StageUIData
+
 
+
accumulablesToJson(Traversable<AccumulableInfo>) - Static method in class org.apache.spark.util.JsonProtocol
+
 
+
Accumulator<T> - Class in org.apache.spark
+
+
Deprecated. +
use AccumulatorV2. Since 2.0.0.
+
+
+
accumulator(int) - Method in class org.apache.spark.api.java.JavaSparkContext
+
+
Deprecated. +
use sc().longAccumulator(). Since 2.0.0.
+
+
+
accumulator(int, String) - Method in class org.apache.spark.api.java.JavaSparkContext
+
+
Deprecated. +
use sc().longAccumulator(String). Since 2.0.0.
+
+
+
accumulator(double) - Method in class org.apache.spark.api.java.JavaSparkContext
+
+
Deprecated. +
use sc().doubleAccumulator(). Since 2.0.0.
+
+
+
accumulator(double, String) - Method in class org.apache.spark.api.java.JavaSparkContext
+
+
Deprecated. +
use sc().doubleAccumulator(String). Since 2.0.0.
+
+
+
accumulator(T, AccumulatorParam<T>) - Method in class org.apache.spark.api.java.JavaSparkContext
+
+
Deprecated. +
use AccumulatorV2. Since 2.0.0.
+
+
+
accumulator(T, String, AccumulatorParam<T>) - Method in class org.apache.spark.api.java.JavaSparkContext
+
+
Deprecated. +
use AccumulatorV2. Since 2.0.0.
+
+
+
accumulator(T, AccumulatorParam<T>) - Method in class org.apache.spark.SparkContext
+
+
Deprecated. +
use AccumulatorV2. Since 2.0.0.
+
+
+
accumulator(T, String, AccumulatorParam<T>) - Method in class org.apache.spark.SparkContext
+
+
Deprecated. +
use AccumulatorV2. Since 2.0.0.
+
+
+
AccumulatorContext - Class in org.apache.spark.util
+
+
An internal class used to track accumulators by Spark itself.
+
+
AccumulatorContext() - Constructor for class org.apache.spark.util.AccumulatorContext
+
 
+
AccumulatorParam<T> - Interface in org.apache.spark
+
+
Deprecated. +
use AccumulatorV2. Since 2.0.0.
+
+
+
AccumulatorParam.DoubleAccumulatorParam$ - Class in org.apache.spark
+
+
Deprecated. +
use AccumulatorV2. Since 2.0.0.
+
+
+
AccumulatorParam.DoubleAccumulatorParam$() - Constructor for class org.apache.spark.AccumulatorParam.DoubleAccumulatorParam$
+
+
Deprecated.
+
AccumulatorParam.FloatAccumulatorParam$ - Class in org.apache.spark
+
+
Deprecated. +
use AccumulatorV2. Since 2.0.0.
+
+
+
AccumulatorParam.FloatAccumulatorParam$() - Constructor for class org.apache.spark.AccumulatorParam.FloatAccumulatorParam$
+
+
Deprecated.
+
AccumulatorParam.IntAccumulatorParam$ - Class in org.apache.spark
+
+
Deprecated. +
use AccumulatorV2. Since 2.0.0.
+
+
+
AccumulatorParam.IntAccumulatorParam$() - Constructor for class org.apache.spark.AccumulatorParam.IntAccumulatorParam$
+
+
Deprecated.
+
AccumulatorParam.LongAccumulatorParam$ - Class in org.apache.spark
+
+
Deprecated. +
use AccumulatorV2. Since 2.0.0.
+
+
+
AccumulatorParam.LongAccumulatorParam$() - Constructor for class org.apache.spark.AccumulatorParam.LongAccumulatorParam$
+
+
Deprecated.
+
AccumulatorParam.StringAccumulatorParam$ - Class in org.apache.spark
+
+
Deprecated. +
use AccumulatorV2. Since 2.0.0.
+
+
+
AccumulatorParam.StringAccumulatorParam$() - Constructor for class org.apache.spark.AccumulatorParam.StringAccumulatorParam$
+
+
Deprecated.
+
accumulatorUpdates() - Method in class org.apache.spark.status.api.v1.StageData
+
 
+
accumulatorUpdates() - Method in class org.apache.spark.status.api.v1.TaskData
+
 
+
AccumulatorV2<IN,OUT> - Class in org.apache.spark.util
+
+
The base class for accumulators, which can accumulate inputs of type IN and produce output of type OUT.
+
+
AccumulatorV2() - Constructor for class org.apache.spark.util.AccumulatorV2
+
 
+
accumUpdates() - Method in class org.apache.spark.ExceptionFailure
+
 
+
accumUpdates() - Method in class org.apache.spark.scheduler.SparkListenerExecutorMetricsUpdate
+
 
+
accuracy() - Method in class org.apache.spark.mllib.evaluation.MulticlassMetrics
+
+
Returns accuracy (the number of correctly classified instances divided by the total number of instances).
+
+
accuracy() - Method in class org.apache.spark.mllib.evaluation.MultilabelMetrics
+
+
Returns accuracy
+
+
acos(Column) - Static method in class org.apache.spark.sql.functions
+
+
Computes the inverse cosine of the given value; the returned angle is in the range 0.0 through pi.
+
+
acos(String) - Static method in class org.apache.spark.sql.functions
+
+
Computes the inverse cosine of the given column; the returned angle is in the range 0.0 through pi.
+
+
active() - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
+
+
Returns a list of active queries associated with this SQLContext
+
+
active() - Method in class org.apache.spark.streaming.scheduler.ReceiverInfo
+
 
+
ACTIVE() - Static method in class org.apache.spark.streaming.scheduler.ReceiverState
+
 
+
activeJobs() - Method in class org.apache.spark.ui.jobs.JobProgressListener
+
 
+
activeStages() - Method in class org.apache.spark.ui.jobs.JobProgressListener
+
 
+
activeStorageStatusList() - Method in class org.apache.spark.ui.exec.ExecutorsListener
+
 
+
activeStorageStatusList() - Method in class org.apache.spark.ui.storage.StorageListener
+
 
+
activeTasks() - Method in class org.apache.spark.status.api.v1.ExecutorSummary
+
 
+
add(T) - Method in class org.apache.spark.Accumulable
+
+
Deprecated.
+
Add more data to this accumulator / accumulable
+
+
add(T) - Static method in class org.apache.spark.Accumulator
+
+
Deprecated.
+
add(org.apache.spark.ml.feature.Instance) - Method in class org.apache.spark.ml.classification.LogisticAggregator
+
+
Add a new training instance to this LogisticAggregator, and update the loss and gradient + of the objective function.
+
+
add(AFTPoint) - Method in class org.apache.spark.ml.regression.AFTAggregator
+
+
Add a new training data point to this AFTAggregator, and update the loss and gradient of the objective function.
+
+
add(org.apache.spark.ml.feature.Instance) - Method in class org.apache.spark.ml.regression.LeastSquaresAggregator
+
+
Add a new training instance to this LeastSquaresAggregator, and update the loss and gradient + of the objective function.
+
+
add(double[], MultivariateGaussian[], ExpectationSum, Vector<Object>) - Static method in class org.apache.spark.mllib.clustering.ExpectationSum
+
 
+
add(Vector) - Method in class org.apache.spark.mllib.feature.IDF.DocumentFrequencyAggregator
+
+
Adds a new document.
+
+
add(BlockMatrix) - Method in class org.apache.spark.mllib.linalg.distributed.BlockMatrix
+
+
Adds the given block matrix other to this block matrix: this + other.
+
+
add(Vector) - Method in class org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
+
+
Add a new sample to this summarizer, and update the statistical summary.
+
+
add(StructField) - Method in class org.apache.spark.sql.types.StructType
+
+
Creates a new StructType by adding a new field.
+
+
add(String, DataType) - Method in class org.apache.spark.sql.types.StructType
+
+
Creates a new StructType by adding a new nullable field with no metadata.
+
+
add(String, DataType, boolean) - Method in class org.apache.spark.sql.types.StructType
+
+
Creates a new StructType by adding a new field with no metadata.
+
+
add(String, DataType, boolean, Metadata) - Method in class org.apache.spark.sql.types.StructType
+
+
Creates a new StructType by adding a new field and specifying metadata.
+
+
add(String, DataType, boolean, String) - Method in class org.apache.spark.sql.types.StructType
+
+
Creates a new StructType by adding a new field and specifying metadata.
+
+
add(String, String) - Method in class org.apache.spark.sql.types.StructType
+
+
Creates a new StructType by adding a new nullable field with no metadata where the + dataType is specified as a String.
+
+
add(String, String, boolean) - Method in class org.apache.spark.sql.types.StructType
+
+
Creates a new StructType by adding a new field with no metadata where the + dataType is specified as a String.
+
+
add(String, String, boolean, Metadata) - Method in class org.apache.spark.sql.types.StructType
+
+
Creates a new StructType by adding a new field and specifying metadata where the + dataType is specified as a String.
+
+
add(String, String, boolean, String) - Method in class org.apache.spark.sql.types.StructType
+
+
Creates a new StructType by adding a new field and specifying metadata where the + dataType is specified as a String.
+
+
add(long, long) - Static method in class org.apache.spark.streaming.util.RawTextHelper
+
 
+
add(IN) - Method in class org.apache.spark.util.AccumulatorV2
+
+
Takes the inputs and accumulates.
+
+
add(T) - Method in class org.apache.spark.util.CollectionAccumulator
+
 
+
add(Double) - Method in class org.apache.spark.util.DoubleAccumulator
+
+
Adds v to the accumulator, i.e.
+
+
add(double) - Method in class org.apache.spark.util.DoubleAccumulator
+
+
Adds v to the accumulator, i.e.
+
+
add(T) - Method in class org.apache.spark.util.LegacyAccumulatorWrapper
+
 
+
add(Long) - Method in class org.apache.spark.util.LongAccumulator
+
+
Adds v to the accumulator, i.e.
+
+
add(long) - Method in class org.apache.spark.util.LongAccumulator
+
+
Adds v to the accumulator, i.e.
+
+
add(Object) - Method in class org.apache.spark.util.sketch.CountMinSketch
+
+
Increments item's count by one.
+
+
add(Object, long) - Method in class org.apache.spark.util.sketch.CountMinSketch
+
+
Increments item's count by count.
+
+
add_months(Column, int) - Static method in class org.apache.spark.sql.functions
+
+
Returns the date that is numMonths after startDate.
+
+
addAccumulator(R, T) - Method in interface org.apache.spark.AccumulableParam
+
+
Deprecated.
+
Add additional data to the accumulator value.
+
+
addAccumulator(T, T) - Method in interface org.apache.spark.AccumulatorParam
+
+
Deprecated.
+
addAppArgs(String...) - Method in class org.apache.spark.launcher.SparkLauncher
+
+
Adds command line arguments for the application.
+
+
addBinary(byte[]) - Method in class org.apache.spark.util.sketch.CountMinSketch
+
+
Increments item's count by one.
+
+
addBinary(byte[], long) - Method in class org.apache.spark.util.sketch.CountMinSketch
+
+
Increments item's count by count.
+
+
addFile(String) - Method in class org.apache.spark.api.java.JavaSparkContext
+
+
Add a file to be downloaded with this Spark job on every node.
+
+
addFile(String, boolean) - Method in class org.apache.spark.api.java.JavaSparkContext
+
+
Add a file to be downloaded with this Spark job on every node.
+
+
addFile(String) - Method in class org.apache.spark.launcher.SparkLauncher
+
+
Adds a file to be submitted with the application.
+
+
addFile(String) - Method in class org.apache.spark.SparkContext
+
+
Add a file to be downloaded with this Spark job on every node.
+
+
addFile(String, boolean) - Method in class org.apache.spark.SparkContext
+
+
Add a file to be downloaded with this Spark job on every node.
+
+
addFilters(Seq<ServletContextHandler>, SparkConf) - Static method in class org.apache.spark.ui.JettyUtils
+
+
Add filters, if any, to the given list of ServletContextHandlers
+
+
addGrid(Param<T>, Iterable<T>) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
+
+
Adds a param with multiple values (overwrites if the input param exists).
+
+
addGrid(DoubleParam, double[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
+
+
Adds a double param with multiple values.
+
+
addGrid(IntParam, int[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
+
+
Adds an int param with multiple values.
+
+
addGrid(FloatParam, float[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
+
+
Adds a float param with multiple values.
+
+
addGrid(LongParam, long[]) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
+
+
Adds a long param with multiple values.
+
+
addGrid(BooleanParam) - Method in class org.apache.spark.ml.tuning.ParamGridBuilder
+
+
Adds a boolean param with true and false.
+
+
addInPlace(R, R) - Method in interface org.apache.spark.AccumulableParam
+
+
Deprecated.
+
Merge two accumulated values together.
+
+
addInPlace(double, double) - Method in class org.apache.spark.AccumulatorParam.DoubleAccumulatorParam$
+
+
Deprecated.
+
addInPlace(float, float) - Method in class org.apache.spark.AccumulatorParam.FloatAccumulatorParam$
+
+
Deprecated.
+
addInPlace(int, int) - Method in class org.apache.spark.AccumulatorParam.IntAccumulatorParam$
+
+
Deprecated.
+
addInPlace(long, long) - Method in class org.apache.spark.AccumulatorParam.LongAccumulatorParam$
+
+
Deprecated.
+
addInPlace(String, String) - Method in class org.apache.spark.AccumulatorParam.StringAccumulatorParam$
+
+
Deprecated.
+
addJar(String) - Method in class org.apache.spark.api.java.JavaSparkContext
+
+
Adds a JAR dependency for all tasks to be executed on this SparkContext in the future.
+
+
addJar(String) - Method in class org.apache.spark.launcher.SparkLauncher
+
+
Adds a jar file to be submitted with the application.
+
+
addJar(String) - Method in class org.apache.spark.SparkContext
+
+
Adds a JAR dependency for all tasks to be executed on this SparkContext in the future.
+
+
addListener(SparkAppHandle.Listener) - Method in interface org.apache.spark.launcher.SparkAppHandle
+
+
Adds a listener to be notified of changes to the handle's information.
+
+
addListener(StreamingQueryListener) - Method in class org.apache.spark.sql.streaming.StreamingQueryManager
+
+
Register a StreamingQueryListener to receive up-calls for life cycle events of + StreamingQuery.
+
+
addLocalConfiguration(String, int, int, int, JobConf) - Static method in class org.apache.spark.rdd.HadoopRDD
+
+
Add Hadoop configuration specific to a single partition and attempt.
+
+
addLong(long) - Method in class org.apache.spark.util.sketch.CountMinSketch
+
+
Increments item's count by one.
+
+
addLong(long, long) - Method in class org.apache.spark.util.sketch.CountMinSketch
+
+
Increments item's count by count.
+
+
addPartToPGroup(Partition, PartitionGroup) - Method in class org.apache.spark.rdd.DefaultPartitionCoalescer
+
 
+
addPyFile(String) - Method in class org.apache.spark.launcher.SparkLauncher
+
+
Adds a python file / zip / egg to be submitted with the application.
+
+
address() - Method in class org.apache.spark.status.api.v1.RDDDataDistribution
+
 
+
addShutdownHook(Function0<BoxedUnit>) - Static method in class org.apache.spark.util.ShutdownHookManager
+
+
Adds a shutdown hook with default priority.
+
+
addShutdownHook(int, Function0<BoxedUnit>) - Static method in class org.apache.spark.util.ShutdownHookManager
+
+
Adds a shutdown hook with the given priority.
+
+
addSparkArg(String) - Method in class org.apache.spark.launcher.SparkLauncher
+
+
Adds a no-value argument to the Spark invocation.
+
+
addSparkArg(String, String) - Method in class org.apache.spark.launcher.SparkLauncher
+
+
Adds an argument with a value to the Spark invocation.
+
+
addSparkListener(org.apache.spark.scheduler.SparkListenerInterface) - Method in class org.apache.spark.SparkContext
+
+
:: DeveloperApi :: + Register a listener to receive up-calls from events that happen during execution.
+
+
addStreamingListener(StreamingListener) - Method in class org.apache.spark.streaming.api.java.JavaStreamingContext
+
+
Add a StreamingListener object for + receiving system events related to streaming.
+
+
addStreamingListener(StreamingListener) - Method in class org.apache.spark.streaming.StreamingContext
+
+
Add a StreamingListener object for + receiving system events related to streaming.
+
+
addString(StringBuilder, String, String, String) - Static method in class org.apache.spark.sql.types.StructType
+
 
+
addString(StringBuilder, String) - Static method in class org.apache.spark.sql.types.StructType
+
 
+
addString(StringBuilder) - Static method in class org.apache.spark.sql.types.StructType
+
 
+
addString(String) - Method in class org.apache.spark.util.sketch.CountMinSketch
+
+
Increments item's count by one.
+
+
addString(String, long) - Method in class org.apache.spark.util.sketch.CountMinSketch
+
+
Increments item's count by count.
+
+
addSuppressed(Throwable) - Static method in exception org.apache.spark.sql.AnalysisException
+
 
+
addTaskCompletionListener(TaskCompletionListener) - Method in class org.apache.spark.TaskContext
+
+
Adds a (Java friendly) listener to be executed on task completion.
+
+
addTaskCompletionListener(Function1<TaskContext, BoxedUnit>) - Method in class org.apache.spark.TaskContext
+
+
Adds a listener in the form of a Scala closure to be executed on task completion.
+
+
addTaskFailureListener(TaskFailureListener) - Method in class org.apache.spark.TaskContext
+
+
Adds a listener to be executed on task failure.
+
+
addTaskFailureListener(Function2<TaskContext, Throwable, BoxedUnit>) - Method in class org.apache.spark.TaskContext
+
+
Adds a listener to be executed on task failure.
+
+
AFTAggregator - Class in org.apache.spark.ml.regression
+
+
AFTAggregator computes the gradient and loss for an AFT loss function, as used in AFT survival regression, for samples in sparse or dense vector format in an online fashion.
+
+
AFTAggregator(Broadcast<DenseVector<Object>>, boolean, Broadcast<double[]>) - Constructor for class org.apache.spark.ml.regression.AFTAggregator
+
 
+
AFTCostFun - Class in org.apache.spark.ml.regression
+
+
AFTCostFun implements Breeze's DiffFunction[T] for AFT cost.
+
+
AFTCostFun(RDD<AFTPoint>, boolean, Broadcast<double[]>, int) - Constructor for class org.apache.spark.ml.regression.AFTCostFun
+
 
+
AFTSurvivalRegression - Class in org.apache.spark.ml.regression
+
+
:: Experimental :: + Fit a parametric survival regression model named accelerated failure time (AFT) model + (see + Accelerated failure time model (Wikipedia)) + based on the Weibull distribution of the survival time.
+
+
AFTSurvivalRegression(String) - Constructor for class org.apache.spark.ml.regression.AFTSurvivalRegression
+
 
+
AFTSurvivalRegression() - Constructor for class org.apache.spark.ml.regression.AFTSurvivalRegression
+
 
+
AFTSurvivalRegressionModel - Class in org.apache.spark.ml.regression
+
+
:: Experimental :: + Model produced by AFTSurvivalRegression.
+
+
agg(Column, Column...) - Method in class org.apache.spark.sql.Dataset
+
+
Aggregates on the entire Dataset without groups.
+
+
agg(Tuple2<String, String>, Seq<Tuple2<String, String>>) - Method in class org.apache.spark.sql.Dataset
+
+
(Scala-specific) Aggregates on the entire Dataset without groups.
+
+
agg(Map<String, String>) - Method in class org.apache.spark.sql.Dataset
+
+
(Scala-specific) Aggregates on the entire Dataset without groups.
+
+
agg(Map<String, String>) - Method in class org.apache.spark.sql.Dataset
+
+
(Java-specific) Aggregates on the entire Dataset without groups.
+
+
agg(Column, Seq<Column>) - Method in class org.apache.spark.sql.Dataset
+
+
Aggregates on the entire Dataset without groups.
+
+
agg(TypedColumn<V, U1>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
+
+
Computes the given aggregation, returning a Dataset of tuples for each unique key + and the result of computing this aggregation over all elements in the group.
+
+
agg(TypedColumn<V, U1>, TypedColumn<V, U2>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
+
+
Computes the given aggregations, returning a Dataset of tuples for each unique key + and the result of computing these aggregations over all elements in the group.
+
+
agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
+
+
Computes the given aggregations, returning a Dataset of tuples for each unique key + and the result of computing these aggregations over all elements in the group.
+
+
agg(TypedColumn<V, U1>, TypedColumn<V, U2>, TypedColumn<V, U3>, TypedColumn<V, U4>) - Method in class org.apache.spark.sql.KeyValueGroupedDataset
+
+
Computes the given aggregations, returning a Dataset of tuples for each unique key + and the result of computing these aggregations over all elements in the group.
+
+
agg(Column, Column...) - Method in class org.apache.spark.sql.RelationalGroupedDataset
+
+
Compute aggregates by specifying a series of aggregate columns.
+
+
agg(Tuple2<String, String>, Seq<Tuple2<String, String>>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
+
+
(Scala-specific) Compute aggregates by specifying the column names and + aggregate methods.
+
+
agg(Map<String, String>) - Method in class org.apache.spark.sql.RelationalGroupedDataset
+
+
(Scala-specific) Compute aggregates by specifying a map from column name to + aggregate methods.
+
+