From: holden@apache.org
To: commits@spark.apache.org
Date: Tue, 17 Oct 2017 20:13:08 -0000
Subject: [35/51] [partial] spark-website git commit: Add 2.1.2 docs

http://git-wip-us.apache.org/repos/asf/spark-website/blob/0b563c84/site/docs/2.1.2/api/R/window.html
----------------------------------------------------------------------
diff --git a/site/docs/2.1.2/api/R/window.html b/site/docs/2.1.2/api/R/window.html
window {SparkR}    R Documentation
+ +

window

+ +

Description

+ +

Bucketize rows into one or more time windows given a timestamp specifying column. Window +starts are inclusive but the window ends are exclusive, e.g. 12:05 will be in the window +[12:05,12:10) but not in [12:00,12:05). Windows can support microsecond precision. Windows in +the order of months are not supported. +

+ + +

Usage

+ +
+window(x, ...)
+
+## S4 method for signature 'Column'
+window(x, windowDuration, slideDuration = NULL,
+  startTime = NULL)
+
+ + +

Arguments

x: a time Column. Must be of TimestampType.

...: further arguments to be passed to or from other methods.

windowDuration: a string specifying the width of the window, e.g. '1 second',
'1 day 12 hours', '2 minutes'. Valid interval strings are 'week', 'day',
'hour', 'minute', 'second', 'millisecond', 'microsecond'. Note that the
duration is a fixed length of time, and does not vary over time according to
a calendar. For example, '1 day' always means 86,400,000 milliseconds, not a
calendar day.

slideDuration: a string specifying the sliding interval of the window. Same
format as windowDuration. A new window will be generated every slideDuration.
Must be less than or equal to the windowDuration. This duration is likewise
absolute, and does not vary according to a calendar.

startTime: the offset with respect to 1970-01-01 00:00:00 UTC with which to
start window intervals. For example, in order to have hourly tumbling windows
that start 15 minutes past the hour, e.g. 12:15-13:15, 13:15-14:15... provide
startTime as "15 minutes".

Value

+ +

An output column of struct called 'window' by default with the nested columns 'start' +and 'end'. +

+ + +

Note

+ +

window since 2.0.0 +

+ + +

See Also

+ +

Other datetime_funcs: add_months, +date_add, date_format, +date_sub, datediff, +dayofmonth, dayofyear, +from_unixtime, +from_utc_timestamp, hour, +last_day, minute, +months_between, month, +next_day, quarter, +second, to_date, +to_utc_timestamp, +unix_timestamp, weekofyear, +year +

+ + +

Examples

+ +
## Not run: 
+##D   # One-minute windows every 15 seconds, starting 10 seconds after the minute, e.g. 09:00:10-09:01:10,
+##D   # 09:00:25-09:01:25, 09:00:40-09:01:40, ...
+##D   window(df$time, "1 minute", "15 seconds", "10 seconds")
+##D 
+##D   # One-minute tumbling windows starting 15 seconds after the minute, e.g. 09:00:15-09:01:15,
+##D   # 09:01:15-09:02:15, ...
+##D   window(df$time, "1 minute", startTime = "15 seconds")
+##D 
+##D   # Thirty-second windows every 10 seconds, e.g. 09:00:00-09:00:30, 09:00:10-09:00:40, ...
+##D   window(df$time, "30 seconds", "10 seconds")
+## End(Not run)
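In practice, window() is most often paired with groupBy() and agg() to aggregate per time bucket. The sketch below is an added illustration, not part of the original page; df, its TimestampType column time, and numeric column value are assumed names.

# Hypothetical sketch: average "value" over 10-minute windows sliding every 5 minutes.
windowed <- agg(groupBy(df, window(df$time, "10 minutes", "5 minutes")),
                avg_value = avg(df$value))
head(windowed)   # one row per time window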
+
+ + +
[Package SparkR version 2.1.2 Index]
http://git-wip-us.apache.org/repos/asf/spark-website/blob/0b563c84/site/docs/2.1.2/api/R/windowOrderBy.html
----------------------------------------------------------------------
diff --git a/site/docs/2.1.2/api/R/windowOrderBy.html b/site/docs/2.1.2/api/R/windowOrderBy.html
windowOrderBy {SparkR}    R Documentation
+ +

windowOrderBy

+ +

Description

+ +

Creates a WindowSpec with the ordering defined. +

+ + +

Usage

+ +
+windowOrderBy(col, ...)
+
+## S4 method for signature 'character'
+windowOrderBy(col, ...)
+
+## S4 method for signature 'Column'
+windowOrderBy(col, ...)
+
+ + +

Arguments

+ + + + + + +
col +

A column name or Column by which rows are ordered within +windows.

+
... +

Optional column names or Columns in addition to col, by +which rows are ordered within windows.

+
+ + +

Note

+ +

windowOrderBy(character) since 2.0.0 +

+

windowOrderBy(Column) since 2.0.0 +

+ + +

Examples

+ +
## Not run: 
+##D   ws <- windowOrderBy("key1", "key2")
+##D   df1 <- select(df, over(lead("value", 1), ws))
+##D 
+##D   ws <- windowOrderBy(df$key1, df$key2)
+##D   df1 <- select(df, over(lead("value", 1), ws))
+## End(Not run)
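The same WindowSpec also drives ranking functions such as rank(); a minimal added sketch (df and its column value are assumptions, not part of this page):

# Hypothetical sketch: rank rows by value within the ordering defined by ws.
ws <- windowOrderBy(df$value)
ranked <- select(df, df$value, alias(over(rank(), ws), "value_rank"))
head(ranked)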
+
+ + +
[Package SparkR version 2.1.2 Index]
http://git-wip-us.apache.org/repos/asf/spark-website/blob/0b563c84/site/docs/2.1.2/api/R/windowPartitionBy.html
----------------------------------------------------------------------
diff --git a/site/docs/2.1.2/api/R/windowPartitionBy.html b/site/docs/2.1.2/api/R/windowPartitionBy.html
windowPartitionBy {SparkR}    R Documentation
+ +

windowPartitionBy

+ +

Description

+ +

Creates a WindowSpec with the partitioning defined. +

+ + +

Usage

+ +
+windowPartitionBy(col, ...)
+
+## S4 method for signature 'character'
+windowPartitionBy(col, ...)
+
+## S4 method for signature 'Column'
+windowPartitionBy(col, ...)
+
+ + +

Arguments

+ + + + + + +
col +

A column name or Column by which rows are partitioned to +windows.

+
... +

Optional column names or Columns in addition to col, by +which rows are partitioned to windows.

+
+ + +

Note

+ +

windowPartitionBy(character) since 2.0.0 +

+

windowPartitionBy(Column) since 2.0.0 +

+ + +

Examples

+ +
## Not run: 
+##D   ws <- orderBy(windowPartitionBy("key1", "key2"), "key3")
+##D   df1 <- select(df, over(lead("value", 1), ws))
+##D 
+##D   ws <- orderBy(windowPartitionBy(df$key1, df$key2), df$key3)
+##D   df1 <- select(df, over(lead("value", 1), ws))
+## End(Not run)
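A partitioned spec combined with an ordering gives per-group running aggregates; an added sketch under assumed column names key, ts, and value:

# Hypothetical sketch: cumulative sum of value within each key, ordered by ts.
ws <- orderBy(windowPartitionBy(df$key), df$ts)
running <- select(df, df$key, df$ts, alias(over(sum(df$value), ws), "running_total"))
head(running)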
+
+ + +
[Package SparkR version 2.1.2 Index]
http://git-wip-us.apache.org/repos/asf/spark-website/blob/0b563c84/site/docs/2.1.2/api/R/with.html
----------------------------------------------------------------------
diff --git a/site/docs/2.1.2/api/R/with.html b/site/docs/2.1.2/api/R/with.html
with {SparkR}    R Documentation
+ +

Evaluate an R expression in an environment constructed from a SparkDataFrame

+ +

Description

+ +

Evaluate an R expression in an environment constructed from a SparkDataFrame.
with() allows access to columns of a SparkDataFrame by simply referring to
their name. It appends every column of a SparkDataFrame into a new
environment. Then, the given expression is evaluated in this new
environment.

+ + +

Usage

+ +
+with(data, expr, ...)
+
+## S4 method for signature 'SparkDataFrame'
+with(data, expr, ...)
+
+ + +

Arguments

+ + + + + + + + +
data +

(SparkDataFrame) SparkDataFrame to use for constructing an environment.

+
expr +

(expression) Expression to evaluate.

+
... +

arguments to be passed to future methods.

+
+ + +

Note

+ +

with since 1.6.0 +

+ + +

See Also

+ +

attach +

+

Other SparkDataFrame functions: SparkDataFrame-class, +agg, arrange, +as.data.frame, attach, +cache, coalesce, +collect, colnames, +coltypes, +createOrReplaceTempView, +crossJoin, dapplyCollect, +dapply, describe, +dim, distinct, +dropDuplicates, dropna, +drop, dtypes, +except, explain, +filter, first, +gapplyCollect, gapply, +getNumPartitions, group_by, +head, histogram, +insertInto, intersect, +isLocal, join, +limit, merge, +mutate, ncol, +nrow, persist, +printSchema, randomSplit, +rbind, registerTempTable, +rename, repartition, +sample, saveAsTable, +schema, selectExpr, +select, showDF, +show, storageLevel, +str, subset, +take, union, +unpersist, withColumn, +write.df, write.jdbc, +write.json, write.orc, +write.parquet, write.text +

+ + +

Examples

+ +
## Not run: 
+##D with(irisDf, nrow(Sepal_Width))
+## End(Not run)
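Because the columns are in scope inside the expression, with() can also build Column expressions directly; an added sketch reusing the irisDf example above (the derived column name is illustrative):

# Hypothetical sketch: columns are visible by name inside the expression.
ratio <- with(irisDf, Sepal_Length / Sepal_Width)   # returns a Column
head(select(irisDf, alias(ratio, "ratio")))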
+
+ + +
[Package SparkR version 2.1.2 Index]
http://git-wip-us.apache.org/repos/asf/spark-website/blob/0b563c84/site/docs/2.1.2/api/R/withColumn.html
----------------------------------------------------------------------
diff --git a/site/docs/2.1.2/api/R/withColumn.html b/site/docs/2.1.2/api/R/withColumn.html
withColumn {SparkR}    R Documentation
+ +

WithColumn

+ +

Description

+ +

Return a new SparkDataFrame by adding a column or replacing the existing column +that has the same name. +

+ + +

Usage

+ +
+withColumn(x, colName, col)
+
+## S4 method for signature 'SparkDataFrame,character'
+withColumn(x, colName, col)
+
+ + +

Arguments

+ + + + + + + + +
x +

a SparkDataFrame.

+
colName +

a column name.

+
col +

a Column expression, or an atomic vector of length 1 to be used as a literal value.

+
+ + +

Value

+ +

A SparkDataFrame with the new column added or the existing column replaced. +

+ + +

Note

+ +

withColumn since 1.4.0 +

+ + +

See Also

+ +

rename, mutate, subset

+

Other SparkDataFrame functions: SparkDataFrame-class, +agg, arrange, +as.data.frame, attach, +cache, coalesce, +collect, colnames, +coltypes, +createOrReplaceTempView, +crossJoin, dapplyCollect, +dapply, describe, +dim, distinct, +dropDuplicates, dropna, +drop, dtypes, +except, explain, +filter, first, +gapplyCollect, gapply, +getNumPartitions, group_by, +head, histogram, +insertInto, intersect, +isLocal, join, +limit, merge, +mutate, ncol, +nrow, persist, +printSchema, randomSplit, +rbind, registerTempTable, +rename, repartition, +sample, saveAsTable, +schema, selectExpr, +select, showDF, +show, storageLevel, +str, subset, +take, union, +unpersist, with, +write.df, write.jdbc, +write.json, write.orc, +write.parquet, write.text +

+ + +

Examples

+ +
## Not run: 
+##D sparkR.session()
+##D path <- "path/to/file.json"
+##D df <- read.json(path)
+##D newDF <- withColumn(df, "newCol", df$col1 * 5)
+##D # Replace an existing column
+##D newDF2 <- withColumn(newDF, "newCol", newDF$col1)
+##D newDF3 <- withColumn(newDF, "newCol", 42)
+##D # Use extract operator to set an existing or new column
+##D df[["age"]] <- 23
+##D df[[2]] <- df$col1
+##D df[[2]] <- NULL # drop column
+## End(Not run)
+
+ + +
[Package SparkR version 2.1.2 Index]
http://git-wip-us.apache.org/repos/asf/spark-website/blob/0b563c84/site/docs/2.1.2/api/R/write.df.html
----------------------------------------------------------------------
diff --git a/site/docs/2.1.2/api/R/write.df.html b/site/docs/2.1.2/api/R/write.df.html
write.df {SparkR}    R Documentation
+ +

Save the contents of SparkDataFrame to a data source.

+ +

Description

+ +

The data source is specified by the source and a set of options (...). +If source is not specified, the default data source configured by +spark.sql.sources.default will be used. +

+ + +

Usage

+ +
+write.df(df, path = NULL, ...)
+
+saveDF(df, path, source = NULL, mode = "error", ...)
+
+write.df(df, path = NULL, ...)
+
+## S4 method for signature 'SparkDataFrame'
+write.df(df, path = NULL, source = NULL,
+  mode = "error", ...)
+
+## S4 method for signature 'SparkDataFrame,character'
+saveDF(df, path, source = NULL,
+  mode = "error", ...)
+
+ + +

Arguments

+ + + + + + + + + + + + +
df +

a SparkDataFrame.

+
path +

the path where the contents of the SparkDataFrame are saved.

+
... +

additional argument(s) passed to the method.

+
source +

a name for external data source.

+
mode +

the save mode: one of 'append', 'overwrite', 'error', or 'ignore' (defaults to 'error').

+
+ + +

Details

+ +

Additionally, mode is used to specify the behavior of the save operation when data already +exists in the data source. There are four modes: +

  • append: Contents of this SparkDataFrame are expected to be appended to existing data.

  • overwrite: Existing data is expected to be overwritten by the contents of this SparkDataFrame.

  • error: An exception is expected to be thrown.

  • ignore: The save operation is expected to not save the contents of the SparkDataFrame and to not change the existing data.

Note

+ +

write.df since 1.4.0 +

+

saveDF since 1.4.0 +

+ + +

See Also

+ +

Other SparkDataFrame functions: SparkDataFrame-class, +agg, arrange, +as.data.frame, attach, +cache, coalesce, +collect, colnames, +coltypes, +createOrReplaceTempView, +crossJoin, dapplyCollect, +dapply, describe, +dim, distinct, +dropDuplicates, dropna, +drop, dtypes, +except, explain, +filter, first, +gapplyCollect, gapply, +getNumPartitions, group_by, +head, histogram, +insertInto, intersect, +isLocal, join, +limit, merge, +mutate, ncol, +nrow, persist, +printSchema, randomSplit, +rbind, registerTempTable, +rename, repartition, +sample, saveAsTable, +schema, selectExpr, +select, showDF, +show, storageLevel, +str, subset, +take, union, +unpersist, withColumn, +with, write.jdbc, +write.json, write.orc, +write.parquet, write.text +

+ + +

Examples

+ +
## Not run: 
+##D sparkR.session()
+##D path <- "path/to/file.json"
+##D df <- read.json(path)
+##D write.df(df, "myfile", "parquet", "overwrite")
+##D saveDF(df, parquetPath2, "parquet", mode = saveMode, mergeSchema = mergeSchema)
+## End(Not run)
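To make the save modes above concrete, an added sketch of appending to an existing path and reading it back (the path and options are illustrative, not from this page):

# Hypothetical sketch: append a second batch to the same Parquet path, then reload it.
write.df(df, path = "myfile", source = "parquet", mode = "append")
df2 <- read.df("myfile", source = "parquet")
nrow(df2)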
+
+ + +
[Package SparkR version 2.1.2 Index]
http://git-wip-us.apache.org/repos/asf/spark-website/blob/0b563c84/site/docs/2.1.2/api/R/write.jdbc.html
----------------------------------------------------------------------
diff --git a/site/docs/2.1.2/api/R/write.jdbc.html b/site/docs/2.1.2/api/R/write.jdbc.html
write.jdbc {SparkR}    R Documentation
+ +

Save the content of SparkDataFrame to an external database table via JDBC.

+ +

Description

+ +

Save the content of the SparkDataFrame to an external database table via JDBC. Additional JDBC +database connection properties can be set (...) +

+ + +

Usage

+ +
+write.jdbc(x, url, tableName, mode = "error", ...)
+
+## S4 method for signature 'SparkDataFrame,character,character'
+write.jdbc(x, url, tableName,
+  mode = "error", ...)
+
+ + +

Arguments

+ + + + + + + + + + + + +
x +

a SparkDataFrame.

+
url +

JDBC database url of the form jdbc:subprotocol:subname.

+
tableName +

the name of the table in the external database.

+
mode +

the save mode: one of 'append', 'overwrite', 'error', or 'ignore' (defaults to 'error').

+
... +

additional JDBC database connection properties.

+
+ + +

Details

+ +

Also, mode is used to specify the behavior of the save operation when +data already exists in the data source. There are four modes: +

  • append: Contents of this SparkDataFrame are expected to be appended to existing data.

  • overwrite: Existing data is expected to be overwritten by the contents of this SparkDataFrame.

  • error: An exception is expected to be thrown.

  • ignore: The save operation is expected to not save the contents of the SparkDataFrame and to not change the existing data.

Note

+ +

write.jdbc since 2.0.0 +

+ + +

See Also

+ +

Other SparkDataFrame functions: SparkDataFrame-class, +agg, arrange, +as.data.frame, attach, +cache, coalesce, +collect, colnames, +coltypes, +createOrReplaceTempView, +crossJoin, dapplyCollect, +dapply, describe, +dim, distinct, +dropDuplicates, dropna, +drop, dtypes, +except, explain, +filter, first, +gapplyCollect, gapply, +getNumPartitions, group_by, +head, histogram, +insertInto, intersect, +isLocal, join, +limit, merge, +mutate, ncol, +nrow, persist, +printSchema, randomSplit, +rbind, registerTempTable, +rename, repartition, +sample, saveAsTable, +schema, selectExpr, +select, showDF, +show, storageLevel, +str, subset, +take, union, +unpersist, withColumn, +with, write.df, +write.json, write.orc, +write.parquet, write.text +

+ + +

Examples

+ +
## Not run: 
+##D sparkR.session()
+##D jdbcUrl <- "jdbc:mysql://localhost:3306/databasename"
+##D write.jdbc(df, jdbcUrl, "table", user = "username", password = "password")
+## End(Not run)
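The natural counterpart is read.jdbc(); an added sketch reusing the connection details above (credentials and table name are placeholders):

# Hypothetical sketch: read the table written above back into a SparkDataFrame.
df2 <- read.jdbc(jdbcUrl, "table", user = "username", password = "password")
head(df2)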
+
+ + +
[Package SparkR version 2.1.2 Index]
http://git-wip-us.apache.org/repos/asf/spark-website/blob/0b563c84/site/docs/2.1.2/api/R/write.json.html
----------------------------------------------------------------------
diff --git a/site/docs/2.1.2/api/R/write.json.html b/site/docs/2.1.2/api/R/write.json.html
write.json {SparkR}    R Documentation
+ +

Save the contents of SparkDataFrame as a JSON file

+ +

Description

+ +

Save the contents of a SparkDataFrame as a JSON file ( +JSON Lines text format or newline-delimited JSON). Files written out +with this method can be read back in as a SparkDataFrame using read.json(). +

+ + +

Usage

+ +
+write.json(x, path, ...)
+
+## S4 method for signature 'SparkDataFrame,character'
+write.json(x, path, mode = "error", ...)
+
+ + +

Arguments

+ + + + + + + + + + +
x +

A SparkDataFrame

+
path +

The directory where the file is saved

+
... +

additional argument(s) passed to the method.

+
mode +

the save mode: one of 'append', 'overwrite', 'error', or 'ignore' (defaults to 'error').

+
+ + +

Note

+ +

write.json since 1.6.0 +

+ + +

See Also

+ +

Other SparkDataFrame functions: SparkDataFrame-class, +agg, arrange, +as.data.frame, attach, +cache, coalesce, +collect, colnames, +coltypes, +createOrReplaceTempView, +crossJoin, dapplyCollect, +dapply, describe, +dim, distinct, +dropDuplicates, dropna, +drop, dtypes, +except, explain, +filter, first, +gapplyCollect, gapply, +getNumPartitions, group_by, +head, histogram, +insertInto, intersect, +isLocal, join, +limit, merge, +mutate, ncol, +nrow, persist, +printSchema, randomSplit, +rbind, registerTempTable, +rename, repartition, +sample, saveAsTable, +schema, selectExpr, +select, showDF, +show, storageLevel, +str, subset, +take, union, +unpersist, withColumn, +with, write.df, +write.jdbc, write.orc, +write.parquet, write.text +

+ + +

Examples

+ +
## Not run: 
+##D sparkR.session()
+##D path <- "path/to/file.json"
+##D df <- read.json(path)
+##D write.json(df, "/tmp/sparkr-tmp/")
+## End(Not run)
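As the description notes, the output can be read back with read.json(); an added sketch that also passes the mode argument explicitly (paths are illustrative):

# Hypothetical sketch: overwrite any previous output, then reload it.
write.json(df, "/tmp/sparkr-tmp/", mode = "overwrite")
df2 <- read.json("/tmp/sparkr-tmp/")
printSchema(df2)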
+
+ + +
[Package SparkR version 2.1.2 Index]
http://git-wip-us.apache.org/repos/asf/spark-website/blob/0b563c84/site/docs/2.1.2/api/R/write.ml.html
----------------------------------------------------------------------
diff --git a/site/docs/2.1.2/api/R/write.ml.html b/site/docs/2.1.2/api/R/write.ml.html
write.ml {SparkR}    R Documentation
+ +

Saves the MLlib model to the input path

+ +

Description

+ +

Saves the MLlib model to the input path. For more information, see the specific +MLlib model below. +

+ + +

Usage

+ +
+write.ml(object, path, ...)
+
+ + +

Arguments

+ + + + + + + + +
object +

a fitted ML model object.

+
path +

the directory where the model is saved.

+
... +

additional argument(s) passed to the method.

+
+ + +

See Also

+ +

spark.glm, glm,
spark.als, spark.gaussianMixture, spark.gbt, spark.isoreg,
spark.kmeans,
spark.lda, spark.logit, spark.mlp, spark.naiveBayes,
spark.randomForest, spark.survreg,
read.ml
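For context, a typical save/load cycle with one of the models above might look like the added sketch below; the training data, formula, and paths are assumptions, not part of this page:

# Hypothetical sketch: fit a model, persist it, then load it back with read.ml().
model <- spark.glm(training, label ~ feature1 + feature2, family = "gaussian")
write.ml(model, "/tmp/glm-model")
model2 <- read.ml("/tmp/glm-model")
summary(model2)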

+ +
[Package SparkR version 2.1.2 Index]
http://git-wip-us.apache.org/repos/asf/spark-website/blob/0b563c84/site/docs/2.1.2/api/R/write.orc.html
----------------------------------------------------------------------
diff --git a/site/docs/2.1.2/api/R/write.orc.html b/site/docs/2.1.2/api/R/write.orc.html
write.orc {SparkR}    R Documentation
+ +

Save the contents of SparkDataFrame as an ORC file, preserving the schema.

+ +

Description

+ +

Save the contents of a SparkDataFrame as an ORC file, preserving the schema. Files written out +with this method can be read back in as a SparkDataFrame using read.orc(). +

+ + +

Usage

+ +
+write.orc(x, path, ...)
+
+## S4 method for signature 'SparkDataFrame,character'
+write.orc(x, path, mode = "error", ...)
+
+ + +

Arguments

+ + + + + + + + + + +
x +

A SparkDataFrame

+
path +

The directory where the file is saved

+
... +

additional argument(s) passed to the method.

+
mode +

the save mode: one of 'append', 'overwrite', 'error', or 'ignore' (defaults to 'error').

+
+ + +

Note

+ +

write.orc since 2.0.0 +

+ + +

See Also

+ +

Other SparkDataFrame functions: SparkDataFrame-class, +agg, arrange, +as.data.frame, attach, +cache, coalesce, +collect, colnames, +coltypes, +createOrReplaceTempView, +crossJoin, dapplyCollect, +dapply, describe, +dim, distinct, +dropDuplicates, dropna, +drop, dtypes, +except, explain, +filter, first, +gapplyCollect, gapply, +getNumPartitions, group_by, +head, histogram, +insertInto, intersect, +isLocal, join, +limit, merge, +mutate, ncol, +nrow, persist, +printSchema, randomSplit, +rbind, registerTempTable, +rename, repartition, +sample, saveAsTable, +schema, selectExpr, +select, showDF, +show, storageLevel, +str, subset, +take, union, +unpersist, withColumn, +with, write.df, +write.jdbc, write.json, +write.parquet, write.text +

+ + +

Examples

+ +
## Not run: 
+##D sparkR.session()
+##D path <- "path/to/file.json"
+##D df <- read.json(path)
+##D write.orc(df, "/tmp/sparkr-tmp1/")
+## End(Not run)
+
+ + +
[Package SparkR version 2.1.2 Index]
http://git-wip-us.apache.org/repos/asf/spark-website/blob/0b563c84/site/docs/2.1.2/api/R/write.parquet.html
----------------------------------------------------------------------
diff --git a/site/docs/2.1.2/api/R/write.parquet.html b/site/docs/2.1.2/api/R/write.parquet.html
write.parquet {SparkR}    R Documentation
+ +

Save the contents of SparkDataFrame as a Parquet file, preserving the schema.

+ +

Description

+ +

Save the contents of a SparkDataFrame as a Parquet file, preserving the schema. Files written out +with this method can be read back in as a SparkDataFrame using read.parquet(). +

+ + +

Usage

+ +
+write.parquet(x, path, ...)
+
+saveAsParquetFile(x, path)
+
+## S4 method for signature 'SparkDataFrame,character'
+write.parquet(x, path, mode = "error",
+  ...)
+
+## S4 method for signature 'SparkDataFrame,character'
+saveAsParquetFile(x, path)
+
+ + +

Arguments

+ + + + + + + + + + +
x +

A SparkDataFrame

+
path +

The directory where the file is saved

+
... +

additional argument(s) passed to the method.

+
mode +

the save mode: one of 'append', 'overwrite', 'error', or 'ignore' (defaults to 'error').

+
+ + +

Note

+ +

write.parquet since 1.6.0 +

+

saveAsParquetFile since 1.4.0 +

+ + +

See Also

+ +

Other SparkDataFrame functions: SparkDataFrame-class, +agg, arrange, +as.data.frame, attach, +cache, coalesce, +collect, colnames, +coltypes, +createOrReplaceTempView, +crossJoin, dapplyCollect, +dapply, describe, +dim, distinct, +dropDuplicates, dropna, +drop, dtypes, +except, explain, +filter, first, +gapplyCollect, gapply, +getNumPartitions, group_by, +head, histogram, +insertInto, intersect, +isLocal, join, +limit, merge, +mutate, ncol, +nrow, persist, +printSchema, randomSplit, +rbind, registerTempTable, +rename, repartition, +sample, saveAsTable, +schema, selectExpr, +select, showDF, +show, storageLevel, +str, subset, +take, union, +unpersist, withColumn, +with, write.df, +write.jdbc, write.json, +write.orc, write.text +

+ + +

Examples

+ +
## Not run: 
+##D sparkR.session()
+##D path <- "path/to/file.json"
+##D df <- read.json(path)
+##D write.parquet(df, "/tmp/sparkr-tmp1/")
+##D saveAsParquetFile(df, "/tmp/sparkr-tmp2/")
+## End(Not run)
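An added sketch of the round trip mentioned in the description (paths as in the example above):

# Hypothetical sketch: read the Parquet output back; the schema is preserved.
df2 <- read.parquet("/tmp/sparkr-tmp1/")
printSchema(df2)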
+
+ + +
[Package SparkR version 2.1.2 Index]
http://git-wip-us.apache.org/repos/asf/spark-website/blob/0b563c84/site/docs/2.1.2/api/R/write.text.html
----------------------------------------------------------------------
diff --git a/site/docs/2.1.2/api/R/write.text.html b/site/docs/2.1.2/api/R/write.text.html
write.text {SparkR}    R Documentation
+ +

Save the content of SparkDataFrame in a text file at the specified path.

+ +

Description

+ +

Save the content of the SparkDataFrame in a text file at the specified path. +The SparkDataFrame must have only one column of string type with the name "value". +Each row becomes a new line in the output file. +

+ + +

Usage

+ +
+write.text(x, path, ...)
+
+## S4 method for signature 'SparkDataFrame,character'
+write.text(x, path, mode = "error", ...)
+
+ + +

Arguments

+ + + + + + + + + + +
x +

A SparkDataFrame

+
path +

The directory where the file is saved

+
... +

additional argument(s) passed to the method.

+
mode +

the save mode: one of 'append', 'overwrite', 'error', or 'ignore' (defaults to 'error').

+
+ + +

Note

+ +

write.text since 2.0.0 +

+ + +

See Also

+ +

Other SparkDataFrame functions: SparkDataFrame-class, +agg, arrange, +as.data.frame, attach, +cache, coalesce, +collect, colnames, +coltypes, +createOrReplaceTempView, +crossJoin, dapplyCollect, +dapply, describe, +dim, distinct, +dropDuplicates, dropna, +drop, dtypes, +except, explain, +filter, first, +gapplyCollect, gapply, +getNumPartitions, group_by, +head, histogram, +insertInto, intersect, +isLocal, join, +limit, merge, +mutate, ncol, +nrow, persist, +printSchema, randomSplit, +rbind, registerTempTable, +rename, repartition, +sample, saveAsTable, +schema, selectExpr, +select, showDF, +show, storageLevel, +str, subset, +take, union, +unpersist, withColumn, +with, write.df, +write.jdbc, write.json, +write.orc, write.parquet +

+ + +

Examples

+ +
## Not run: 
+##D sparkR.session()
+##D path <- "path/to/file.txt"
+##D df <- read.text(path)
+##D write.text(df, "/tmp/sparkr-tmp/")
+## End(Not run)
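Because write.text() requires a single string column named "value", other SparkDataFrames are usually projected first; an added sketch (the source column col1 and output path are assumptions):

# Hypothetical sketch: cast one column to string, name it "value", then write it out.
textDF <- select(df, alias(cast(df$col1, "string"), "value"))
write.text(textDF, "/tmp/sparkr-text/")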
+
+ + +
[Package SparkR version 2.1.2 Index]
http://git-wip-us.apache.org/repos/asf/spark-website/blob/0b563c84/site/docs/2.1.2/api/R/year.html
----------------------------------------------------------------------
diff --git a/site/docs/2.1.2/api/R/year.html b/site/docs/2.1.2/api/R/year.html
year {SparkR}    R Documentation
+ +

year

+ +

Description

+ +

Extracts the year as an integer from a given date/timestamp/string. +

+ + +

Usage

+ +
+year(x)
+
+## S4 method for signature 'Column'
+year(x)
+
+ + +

Arguments

+ + + + +
x +

Column to compute on.

+
+ + +

Note

+ +

year since 1.5.0 +

+ + +

See Also

+ +

Other datetime_funcs: add_months, +date_add, date_format, +date_sub, datediff, +dayofmonth, dayofyear, +from_unixtime, +from_utc_timestamp, hour, +last_day, minute, +months_between, month, +next_day, quarter, +second, to_date, +to_utc_timestamp, +unix_timestamp, weekofyear, +window +

+ + +

Examples

+ +
## Not run: year(df$c)
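A slightly fuller added sketch of how the extracted year is typically used (df and its timestamp column c are carried over from the example above; the literal year is illustrative):

# Hypothetical sketch: add a "year" column and filter on it.
df2 <- withColumn(df, "year", year(df$c))
head(filter(df2, df2$year == 2017))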
+
+ + +
[Package SparkR version 2.1.2 Index]