From: marmbrus@apache.org
To: commits@spark.apache.org
Date: Tue, 11 Jul 2017 22:36:46 -0000
Subject: [41/56] [partial] spark-website git commit: Add Spark 2.2.0 docs

http://git-wip-us.apache.org/repos/asf/spark-website/blob/f7ec1155/site/docs/2.2.0/api/R/dayofyear.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.0/api/R/dayofyear.html b/site/docs/2.2.0/api/R/dayofyear.html
new file mode 100644
index 0000000..a6da86e
--- /dev/null
+++ b/site/docs/2.2.0/api/R/dayofyear.html
@@ -0,0 +1,118 @@
+
+R: dayofyear
dayofyear {SparkR}    R Documentation
+ +

dayofyear

+ +

Description

+ +

Extracts the day of the year as an integer from a given date/timestamp/string. +

+ + +

Usage

+ +
+## S4 method for signature 'Column'
+dayofyear(x)
+
+dayofyear(x)
+
+ + +

Arguments

+ + + + +
x +

Column to compute on.

+
+ + +

Note

+ +

dayofyear since 1.5.0 +

+ + +

See Also

+ +

Other datetime_funcs: add_months, +add_months, +add_months,Column,numeric-method; +date_add, date_add, +date_add,Column,numeric-method; +date_format, date_format, +date_format,Column,character-method; +date_sub, date_sub, +date_sub,Column,numeric-method; +datediff, datediff, +datediff,Column-method; +dayofmonth, dayofmonth, +dayofmonth,Column-method; +from_unixtime, from_unixtime, +from_unixtime,Column-method; +from_utc_timestamp, +from_utc_timestamp, +from_utc_timestamp,Column,character-method; +hour, hour, +hour,Column-method; last_day, +last_day, +last_day,Column-method; +minute, minute, +minute,Column-method; +months_between, +months_between, +months_between,Column-method; +month, month, +month,Column-method; +next_day, next_day, +next_day,Column,character-method; +quarter, quarter, +quarter,Column-method; +second, second, +second,Column-method; +to_date, to_date, +to_date, +to_date,Column,character-method, +to_date,Column,missing-method; +to_timestamp, to_timestamp, +to_timestamp, +to_timestamp,Column,character-method, +to_timestamp,Column,missing-method; +to_utc_timestamp, +to_utc_timestamp, +to_utc_timestamp,Column,character-method; +unix_timestamp, +unix_timestamp, +unix_timestamp, +unix_timestamp, +unix_timestamp,Column,character-method, +unix_timestamp,Column,missing-method, +unix_timestamp,missing,missing-method; +weekofyear, weekofyear, +weekofyear,Column-method; +window, window, +window,Column-method; year, +year, year,Column-method +

+ + +

Examples

+ +
## Not run: dayofyear(df$c)
+
+ + +
[Package SparkR version 2.2.0 Index]
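An illustrative SparkR sketch of dayofyear, assuming an active session; the date column "d" is hypothetical:

    library(SparkR)
    sparkR.session()
    # Hypothetical one-column SparkDataFrame with dates.
    df <- createDataFrame(data.frame(d = as.Date(c("2017-01-01", "2017-07-11"))))
    # dayofyear() extracts the day of the year as an integer (here 1 and 192).
    head(select(df, dayofyear(df$d)))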
+
http://git-wip-us.apache.org/repos/asf/spark-website/blob/f7ec1155/site/docs/2.2.0/api/R/decode.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.0/api/R/decode.html b/site/docs/2.2.0/api/R/decode.html
new file mode 100644
index 0000000..f5500b1
--- /dev/null
+++ b/site/docs/2.2.0/api/R/decode.html
@@ -0,0 +1,119 @@
+
+R: decode
decode {SparkR}    R Documentation
+ +

decode

+ +

Description

+ +

Decodes the first argument from a binary value into a string using the provided character set
+(one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16').
+

+ + +

Usage

+ +
+## S4 method for signature 'Column,character'
+decode(x, charset)
+
+decode(x, charset)
+
+ + +

Arguments

+ + + + + + +
x +

Column to compute on.

+
charset +

Character set to use

+
+ + +

Note

+ +

decode since 1.6.0 +

+ + +

See Also

+ +

Other string_funcs: ascii, +ascii, ascii,Column-method; +base64, base64, +base64,Column-method; +concat_ws, concat_ws, +concat_ws,character,Column-method; +concat, concat, +concat,Column-method; encode, +encode, +encode,Column,character-method; +format_number, format_number, +format_number,Column,numeric-method; +format_string, format_string, +format_string,character,Column-method; +initcap, initcap, +initcap,Column-method; instr, +instr, +instr,Column,character-method; +length, length,Column-method; +levenshtein, levenshtein, +levenshtein,Column-method; +locate, locate, +locate,character,Column-method; +lower, lower, +lower,Column-method; lpad, +lpad, +lpad,Column,numeric,character-method; +ltrim, ltrim, +ltrim,Column-method; +regexp_extract, +regexp_extract, +regexp_extract,Column,character,numeric-method; +regexp_replace, +regexp_replace, +regexp_replace,Column,character,character-method; +reverse, reverse, +reverse,Column-method; rpad, +rpad, +rpad,Column,numeric,character-method; +rtrim, rtrim, +rtrim,Column-method; soundex, +soundex, +soundex,Column-method; +substring_index, +substring_index, +substring_index,Column,character,numeric-method; +translate, translate, +translate,Column,character,character-method; +trim, trim, +trim,Column-method; unbase64, +unbase64, +unbase64,Column-method; +upper, upper, +upper,Column-method +

+ + +

Examples

+ +
## Not run: decode(df$c, "UTF-8")
+
+ + +
[Package SparkR version 2.2.0 Index]
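An illustrative SparkR sketch pairing encode and decode, assuming an active session; the column names are hypothetical:

    library(SparkR)
    sparkR.session()
    df <- createDataFrame(data.frame(c = c("hello", "world"), stringsAsFactors = FALSE))
    # encode(): string -> binary in the given charset.
    bin <- select(df, alias(encode(df$c, "UTF-8"), "b"))
    # decode(): binary -> string, recovering the original values.
    head(select(bin, decode(bin$b, "UTF-8")))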
+
http://git-wip-us.apache.org/repos/asf/spark-website/blob/f7ec1155/site/docs/2.2.0/api/R/dense_rank.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.0/api/R/dense_rank.html b/site/docs/2.2.0/api/R/dense_rank.html
new file mode 100644
index 0000000..4fda62f
--- /dev/null
+++ b/site/docs/2.2.0/api/R/dense_rank.html
@@ -0,0 +1,91 @@
+
+R: dense_rank
dense_rank {SparkR}    R Documentation
+ +

dense_rank

+ +

Description

+ +

Window function: returns the rank of rows within a window partition, without any gaps.
+The difference between rank and dense_rank is that dense_rank leaves no gaps in the ranking
+sequence when there are ties. That is, if you were ranking a competition using dense_rank
+and had three people tie for second place, you would say that all three were in second
+place and that the next person came in third. rank, by contrast, would assign sequential
+numbers, so the person who came after the ties would register as coming in fifth.
+

+ + +

Usage

+ +
+## S4 method for signature 'missing'
+dense_rank()
+
+dense_rank(x = "missing")
+
+ + +

Arguments

+ + + + +
x +

empty. Should be used with no argument.

+
+ + +

Details

+ +

This is equivalent to the DENSE_RANK function in SQL. +

+ + +

Note

+ +

dense_rank since 1.6.0 +

+ + +

See Also

+ +

Other window_funcs: cume_dist, +cume_dist, +cume_dist,missing-method; +lag, lag, +lag,characterOrColumn-method; +lead, lead, +lead,characterOrColumn,numeric-method; +ntile, ntile, +ntile,numeric-method; +percent_rank, percent_rank, +percent_rank,missing-method; +rank, rank, +rank, rank,ANY-method, +rank,missing-method; +row_number, row_number, +row_number,missing-method +

+ + +

Examples

+ +
## Not run: 
+##D   df <- createDataFrame(mtcars)
+##D   ws <- orderBy(windowPartitionBy("am"), "hp")
+##D   out <- select(df, over(dense_rank(), ws), df$hp, df$am)
+## End(Not run)
+
+ + +
[Package SparkR version 2.2.0 Index]
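An illustrative SparkR sketch contrasting rank and dense_rank over the same window, extending the mtcars example above; assumes an active session:

    library(SparkR)
    sparkR.session()
    df <- createDataFrame(mtcars)
    # Rows ordered by hp within each value of am.
    ws <- orderBy(windowPartitionBy("am"), "hp")
    # dense_rank() leaves no gaps after ties; rank() skips the tied positions instead.
    head(select(df, df$hp, df$am,
                alias(over(dense_rank(), ws), "dense_rank"),
                alias(over(rank(), ws), "rank")))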
+
http://git-wip-us.apache.org/repos/asf/spark-website/blob/f7ec1155/site/docs/2.2.0/api/R/dim.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.0/api/R/dim.html b/site/docs/2.2.0/api/R/dim.html
new file mode 100644
index 0000000..254752b
--- /dev/null
+++ b/site/docs/2.2.0/api/R/dim.html
@@ -0,0 +1,282 @@
+
+R: Returns the dimensions of SparkDataFrame
dim {SparkR}    R Documentation
+ +

Returns the dimensions of SparkDataFrame

+ +

Description

+ +

Returns the dimensions (number of rows and columns) of a SparkDataFrame.
+

+ + +

Usage

+ +
+## S4 method for signature 'SparkDataFrame'
+dim(x)
+
+ + +

Arguments

+ + + + +
x +

a SparkDataFrame

+
+ + +

Note

+ +

dim since 1.5.0 +

+ + +

See Also

+ +

Other SparkDataFrame functions: $, +$,SparkDataFrame-method, $<-, +$<-,SparkDataFrame-method, +select, select, +select,SparkDataFrame,Column-method, +select,SparkDataFrame,character-method, +select,SparkDataFrame,list-method; +SparkDataFrame-class; [, +[,SparkDataFrame-method, [[, +[[,SparkDataFrame,numericOrcharacter-method, +[[<-, +[[<-,SparkDataFrame,numericOrcharacter-method, +subset, subset, +subset,SparkDataFrame-method; +agg, agg, agg, +agg,GroupedData-method, +agg,SparkDataFrame-method, +summarize, summarize, +summarize, +summarize,GroupedData-method, +summarize,SparkDataFrame-method; +arrange, arrange, +arrange, +arrange,SparkDataFrame,Column-method, +arrange,SparkDataFrame,character-method, +orderBy,SparkDataFrame,characterOrColumn-method; +as.data.frame, +as.data.frame,SparkDataFrame-method; +attach, +attach,SparkDataFrame-method; +cache, cache, +cache,SparkDataFrame-method; +checkpoint, checkpoint, +checkpoint,SparkDataFrame-method; +coalesce, coalesce, +coalesce, +coalesce,Column-method, +coalesce,SparkDataFrame-method; +collect, collect, +collect,SparkDataFrame-method; +colnames, colnames, +colnames,SparkDataFrame-method, +colnames<-, colnames<-, +colnames<-,SparkDataFrame-method, +columns, columns, +columns,SparkDataFrame-method, +names, +names,SparkDataFrame-method, +names<-, +names<-,SparkDataFrame-method; +coltypes, coltypes, +coltypes,SparkDataFrame-method, +coltypes<-, coltypes<-, +coltypes<-,SparkDataFrame,character-method; +count,SparkDataFrame-method, +nrow, nrow, +nrow,SparkDataFrame-method; +createOrReplaceTempView, +createOrReplaceTempView, +createOrReplaceTempView,SparkDataFrame,character-method; +crossJoin, +crossJoin,SparkDataFrame,SparkDataFrame-method; +dapplyCollect, dapplyCollect, +dapplyCollect,SparkDataFrame,function-method; +dapply, dapply, +dapply,SparkDataFrame,function,structType-method; +describe, describe, +describe, +describe,SparkDataFrame,ANY-method, +describe,SparkDataFrame,character-method, +describe,SparkDataFrame-method, +summary, summary, +summary,SparkDataFrame-method; +distinct, distinct, +distinct,SparkDataFrame-method, +unique, +unique,SparkDataFrame-method; +dropDuplicates, +dropDuplicates, +dropDuplicates,SparkDataFrame-method; +dropna, dropna, +dropna,SparkDataFrame-method, +fillna, fillna, +fillna,SparkDataFrame-method, +na.omit, na.omit, +na.omit,SparkDataFrame-method; +drop, drop, +drop, drop,ANY-method, +drop,SparkDataFrame-method; +dtypes, dtypes, +dtypes,SparkDataFrame-method; +except, except, +except,SparkDataFrame,SparkDataFrame-method; +explain, explain, +explain, +explain,SparkDataFrame-method, +explain,StreamingQuery-method; +filter, filter, +filter,SparkDataFrame,characterOrColumn-method, +where, where, +where,SparkDataFrame,characterOrColumn-method; +first, first, +first, +first,SparkDataFrame-method, +first,characterOrColumn-method; +gapplyCollect, gapplyCollect, +gapplyCollect, +gapplyCollect,GroupedData-method, +gapplyCollect,SparkDataFrame-method; +gapply, gapply, +gapply, +gapply,GroupedData-method, +gapply,SparkDataFrame-method; +getNumPartitions, +getNumPartitions,SparkDataFrame-method; +groupBy, groupBy, +groupBy,SparkDataFrame-method, +group_by, group_by, +group_by,SparkDataFrame-method; +head, +head,SparkDataFrame-method; +hint, hint, +hint,SparkDataFrame,character-method; +histogram, +histogram,SparkDataFrame,characterOrColumn-method; +insertInto, insertInto, +insertInto,SparkDataFrame,character-method; +intersect, intersect, +intersect,SparkDataFrame,SparkDataFrame-method; +isLocal, isLocal, 
+isLocal,SparkDataFrame-method; +isStreaming, isStreaming, +isStreaming,SparkDataFrame-method; +join, +join,SparkDataFrame,SparkDataFrame-method; +limit, limit, +limit,SparkDataFrame,numeric-method; +merge, merge, +merge,SparkDataFrame,SparkDataFrame-method; +mutate, mutate, +mutate,SparkDataFrame-method, +transform, transform, +transform,SparkDataFrame-method; +ncol, +ncol,SparkDataFrame-method; +persist, persist, +persist,SparkDataFrame,character-method; +printSchema, printSchema, +printSchema,SparkDataFrame-method; +randomSplit, randomSplit, +randomSplit,SparkDataFrame,numeric-method; +rbind, rbind, +rbind,SparkDataFrame-method; +registerTempTable, +registerTempTable, +registerTempTable,SparkDataFrame,character-method; +rename, rename, +rename,SparkDataFrame-method, +withColumnRenamed, +withColumnRenamed, +withColumnRenamed,SparkDataFrame,character,character-method; +repartition, repartition, +repartition,SparkDataFrame-method; +sample, sample, +sample,SparkDataFrame,logical,numeric-method, +sample_frac, sample_frac, +sample_frac,SparkDataFrame,logical,numeric-method; +saveAsParquetFile, +saveAsParquetFile, +saveAsParquetFile,SparkDataFrame,character-method, +write.parquet, write.parquet, +write.parquet,SparkDataFrame,character-method; +saveAsTable, saveAsTable, +saveAsTable,SparkDataFrame,character-method; +saveDF, saveDF, +saveDF,SparkDataFrame,character-method, +write.df, write.df, +write.df, +write.df,SparkDataFrame-method; +schema, schema, +schema,SparkDataFrame-method; +selectExpr, selectExpr, +selectExpr,SparkDataFrame,character-method; +showDF, showDF, +showDF,SparkDataFrame-method; +show, show, +show,Column-method, +show,GroupedData-method, +show,SparkDataFrame-method, +show,StreamingQuery-method, +show,WindowSpec-method; +storageLevel, +storageLevel,SparkDataFrame-method; +str, +str,SparkDataFrame-method; +take, take, +take,SparkDataFrame,numeric-method; +toJSON, +toJSON,SparkDataFrame-method; +union, union, +union,SparkDataFrame,SparkDataFrame-method, +unionAll, unionAll, +unionAll,SparkDataFrame,SparkDataFrame-method; +unpersist, unpersist, +unpersist,SparkDataFrame-method; +withColumn, withColumn, +withColumn,SparkDataFrame,character-method; +with, +with,SparkDataFrame-method; +write.jdbc, write.jdbc, +write.jdbc,SparkDataFrame,character,character-method; +write.json, write.json, +write.json,SparkDataFrame,character-method; +write.orc, write.orc, +write.orc,SparkDataFrame,character-method; +write.stream, write.stream, +write.stream,SparkDataFrame-method; +write.text, write.text, +write.text,SparkDataFrame,character-method +

+ + +

Examples

+ +
## Not run: 
+##D sparkR.session()
+##D path <- "path/to/file.json"
+##D df <- read.json(path)
+##D dim(df)
+## End(Not run)
+
+ + +
[Package SparkR version 2.2.0 Index]
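An illustrative SparkR sketch of dim using R's built-in faithful data set; assumes an active session:

    library(SparkR)
    sparkR.session()
    df <- createDataFrame(faithful)
    # A length-two vector: number of rows, then number of columns.
    dim(df)   # c(272, 2) for faithful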
+
http://git-wip-us.apache.org/repos/asf/spark-website/blob/f7ec1155/site/docs/2.2.0/api/R/distinct.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.0/api/R/distinct.html b/site/docs/2.2.0/api/R/distinct.html
new file mode 100644
index 0000000..e6fc7fa
--- /dev/null
+++ b/site/docs/2.2.0/api/R/distinct.html
@@ -0,0 +1,287 @@
+
+R: Distinct
distinct {SparkR}    R Documentation
+ +

Distinct

+ +

Description

+ +

Returns a new SparkDataFrame containing the distinct rows in this SparkDataFrame.
+

+ + +

Usage

+ +
+## S4 method for signature 'SparkDataFrame'
+distinct(x)
+
+## S4 method for signature 'SparkDataFrame'
+unique(x)
+
+distinct(x)
+
+ + +

Arguments

+ + + + +
x +

A SparkDataFrame

+
+ + +

Note

+ +

distinct since 1.4.0 +

+

unique since 1.5.0 +

+ + +

See Also

+ +

Other SparkDataFrame functions: $, +$,SparkDataFrame-method, $<-, +$<-,SparkDataFrame-method, +select, select, +select,SparkDataFrame,Column-method, +select,SparkDataFrame,character-method, +select,SparkDataFrame,list-method; +SparkDataFrame-class; [, +[,SparkDataFrame-method, [[, +[[,SparkDataFrame,numericOrcharacter-method, +[[<-, +[[<-,SparkDataFrame,numericOrcharacter-method, +subset, subset, +subset,SparkDataFrame-method; +agg, agg, agg, +agg,GroupedData-method, +agg,SparkDataFrame-method, +summarize, summarize, +summarize, +summarize,GroupedData-method, +summarize,SparkDataFrame-method; +arrange, arrange, +arrange, +arrange,SparkDataFrame,Column-method, +arrange,SparkDataFrame,character-method, +orderBy,SparkDataFrame,characterOrColumn-method; +as.data.frame, +as.data.frame,SparkDataFrame-method; +attach, +attach,SparkDataFrame-method; +cache, cache, +cache,SparkDataFrame-method; +checkpoint, checkpoint, +checkpoint,SparkDataFrame-method; +coalesce, coalesce, +coalesce, +coalesce,Column-method, +coalesce,SparkDataFrame-method; +collect, collect, +collect,SparkDataFrame-method; +colnames, colnames, +colnames,SparkDataFrame-method, +colnames<-, colnames<-, +colnames<-,SparkDataFrame-method, +columns, columns, +columns,SparkDataFrame-method, +names, +names,SparkDataFrame-method, +names<-, +names<-,SparkDataFrame-method; +coltypes, coltypes, +coltypes,SparkDataFrame-method, +coltypes<-, coltypes<-, +coltypes<-,SparkDataFrame,character-method; +count,SparkDataFrame-method, +nrow, nrow, +nrow,SparkDataFrame-method; +createOrReplaceTempView, +createOrReplaceTempView, +createOrReplaceTempView,SparkDataFrame,character-method; +crossJoin, +crossJoin,SparkDataFrame,SparkDataFrame-method; +dapplyCollect, dapplyCollect, +dapplyCollect,SparkDataFrame,function-method; +dapply, dapply, +dapply,SparkDataFrame,function,structType-method; +describe, describe, +describe, +describe,SparkDataFrame,ANY-method, +describe,SparkDataFrame,character-method, +describe,SparkDataFrame-method, +summary, summary, +summary,SparkDataFrame-method; +dim, +dim,SparkDataFrame-method; +dropDuplicates, +dropDuplicates, +dropDuplicates,SparkDataFrame-method; +dropna, dropna, +dropna,SparkDataFrame-method, +fillna, fillna, +fillna,SparkDataFrame-method, +na.omit, na.omit, +na.omit,SparkDataFrame-method; +drop, drop, +drop, drop,ANY-method, +drop,SparkDataFrame-method; +dtypes, dtypes, +dtypes,SparkDataFrame-method; +except, except, +except,SparkDataFrame,SparkDataFrame-method; +explain, explain, +explain, +explain,SparkDataFrame-method, +explain,StreamingQuery-method; +filter, filter, +filter,SparkDataFrame,characterOrColumn-method, +where, where, +where,SparkDataFrame,characterOrColumn-method; +first, first, +first, +first,SparkDataFrame-method, +first,characterOrColumn-method; +gapplyCollect, gapplyCollect, +gapplyCollect, +gapplyCollect,GroupedData-method, +gapplyCollect,SparkDataFrame-method; +gapply, gapply, +gapply, +gapply,GroupedData-method, +gapply,SparkDataFrame-method; +getNumPartitions, +getNumPartitions,SparkDataFrame-method; +groupBy, groupBy, +groupBy,SparkDataFrame-method, +group_by, group_by, +group_by,SparkDataFrame-method; +head, +head,SparkDataFrame-method; +hint, hint, +hint,SparkDataFrame,character-method; +histogram, +histogram,SparkDataFrame,characterOrColumn-method; +insertInto, insertInto, +insertInto,SparkDataFrame,character-method; +intersect, intersect, +intersect,SparkDataFrame,SparkDataFrame-method; +isLocal, isLocal, +isLocal,SparkDataFrame-method; +isStreaming, isStreaming, 
+isStreaming,SparkDataFrame-method; +join, +join,SparkDataFrame,SparkDataFrame-method; +limit, limit, +limit,SparkDataFrame,numeric-method; +merge, merge, +merge,SparkDataFrame,SparkDataFrame-method; +mutate, mutate, +mutate,SparkDataFrame-method, +transform, transform, +transform,SparkDataFrame-method; +ncol, +ncol,SparkDataFrame-method; +persist, persist, +persist,SparkDataFrame,character-method; +printSchema, printSchema, +printSchema,SparkDataFrame-method; +randomSplit, randomSplit, +randomSplit,SparkDataFrame,numeric-method; +rbind, rbind, +rbind,SparkDataFrame-method; +registerTempTable, +registerTempTable, +registerTempTable,SparkDataFrame,character-method; +rename, rename, +rename,SparkDataFrame-method, +withColumnRenamed, +withColumnRenamed, +withColumnRenamed,SparkDataFrame,character,character-method; +repartition, repartition, +repartition,SparkDataFrame-method; +sample, sample, +sample,SparkDataFrame,logical,numeric-method, +sample_frac, sample_frac, +sample_frac,SparkDataFrame,logical,numeric-method; +saveAsParquetFile, +saveAsParquetFile, +saveAsParquetFile,SparkDataFrame,character-method, +write.parquet, write.parquet, +write.parquet,SparkDataFrame,character-method; +saveAsTable, saveAsTable, +saveAsTable,SparkDataFrame,character-method; +saveDF, saveDF, +saveDF,SparkDataFrame,character-method, +write.df, write.df, +write.df, +write.df,SparkDataFrame-method; +schema, schema, +schema,SparkDataFrame-method; +selectExpr, selectExpr, +selectExpr,SparkDataFrame,character-method; +showDF, showDF, +showDF,SparkDataFrame-method; +show, show, +show,Column-method, +show,GroupedData-method, +show,SparkDataFrame-method, +show,StreamingQuery-method, +show,WindowSpec-method; +storageLevel, +storageLevel,SparkDataFrame-method; +str, +str,SparkDataFrame-method; +take, take, +take,SparkDataFrame,numeric-method; +toJSON, +toJSON,SparkDataFrame-method; +union, union, +union,SparkDataFrame,SparkDataFrame-method, +unionAll, unionAll, +unionAll,SparkDataFrame,SparkDataFrame-method; +unpersist, unpersist, +unpersist,SparkDataFrame-method; +withColumn, withColumn, +withColumn,SparkDataFrame,character-method; +with, +with,SparkDataFrame-method; +write.jdbc, write.jdbc, +write.jdbc,SparkDataFrame,character,character-method; +write.json, write.json, +write.json,SparkDataFrame,character-method; +write.orc, write.orc, +write.orc,SparkDataFrame,character-method; +write.stream, write.stream, +write.stream,SparkDataFrame-method; +write.text, write.text, +write.text,SparkDataFrame,character-method +

+ + +

Examples

+ +
## Not run: 
+##D sparkR.session()
+##D path <- "path/to/file.json"
+##D df <- read.json(path)
+##D distinctDF <- distinct(df)
+## End(Not run)
+
+ + +
[Package SparkR version 2.2.0 Index]
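An illustrative SparkR sketch of distinct and its alias unique on a small SparkDataFrame with a duplicated row; assumes an active session and hypothetical column names:

    library(SparkR)
    sparkR.session()
    df <- createDataFrame(data.frame(x = c(1, 1, 2),
                                     y = c("a", "a", "b"),
                                     stringsAsFactors = FALSE))
    # Both drop the duplicated (1, "a") row, leaving two rows.
    count(distinct(df))
    count(unique(df))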
+
http://git-wip-us.apache.org/repos/asf/spark-website/blob/f7ec1155/site/docs/2.2.0/api/R/drop.html
----------------------------------------------------------------------
diff --git a/site/docs/2.2.0/api/R/drop.html b/site/docs/2.2.0/api/R/drop.html
new file mode 100644
index 0000000..0834ec4
--- /dev/null
+++ b/site/docs/2.2.0/api/R/drop.html
@@ -0,0 +1,303 @@
+
+R: drop
drop {SparkR}    R Documentation
+ +

drop

+ +

Description

+ +

Returns a new SparkDataFrame with the specified columns dropped.
+This is a no-op if the schema doesn't contain the given column name(s).
+

+ + +

Usage

+ +
+## S4 method for signature 'SparkDataFrame'
+drop(x, col)
+
+## S4 method for signature 'ANY'
+drop(x)
+
+drop(x, ...)
+
+ + +

Arguments

+ + + + + + + + +
x +

a SparkDataFrame.

+
col +

a character vector of column names or a Column.

+
... +

further arguments to be passed to or from other methods.

+
+ + +

Value

+ +

A SparkDataFrame. +

+ + +

Note

+ +

drop since 2.0.0 +

+ + +

See Also

+ +

Other SparkDataFrame functions: $, +$,SparkDataFrame-method, $<-, +$<-,SparkDataFrame-method, +select, select, +select,SparkDataFrame,Column-method, +select,SparkDataFrame,character-method, +select,SparkDataFrame,list-method; +SparkDataFrame-class; [, +[,SparkDataFrame-method, [[, +[[,SparkDataFrame,numericOrcharacter-method, +[[<-, +[[<-,SparkDataFrame,numericOrcharacter-method, +subset, subset, +subset,SparkDataFrame-method; +agg, agg, agg, +agg,GroupedData-method, +agg,SparkDataFrame-method, +summarize, summarize, +summarize, +summarize,GroupedData-method, +summarize,SparkDataFrame-method; +arrange, arrange, +arrange, +arrange,SparkDataFrame,Column-method, +arrange,SparkDataFrame,character-method, +orderBy,SparkDataFrame,characterOrColumn-method; +as.data.frame, +as.data.frame,SparkDataFrame-method; +attach, +attach,SparkDataFrame-method; +cache, cache, +cache,SparkDataFrame-method; +checkpoint, checkpoint, +checkpoint,SparkDataFrame-method; +coalesce, coalesce, +coalesce, +coalesce,Column-method, +coalesce,SparkDataFrame-method; +collect, collect, +collect,SparkDataFrame-method; +colnames, colnames, +colnames,SparkDataFrame-method, +colnames<-, colnames<-, +colnames<-,SparkDataFrame-method, +columns, columns, +columns,SparkDataFrame-method, +names, +names,SparkDataFrame-method, +names<-, +names<-,SparkDataFrame-method; +coltypes, coltypes, +coltypes,SparkDataFrame-method, +coltypes<-, coltypes<-, +coltypes<-,SparkDataFrame,character-method; +count,SparkDataFrame-method, +nrow, nrow, +nrow,SparkDataFrame-method; +createOrReplaceTempView, +createOrReplaceTempView, +createOrReplaceTempView,SparkDataFrame,character-method; +crossJoin, +crossJoin,SparkDataFrame,SparkDataFrame-method; +dapplyCollect, dapplyCollect, +dapplyCollect,SparkDataFrame,function-method; +dapply, dapply, +dapply,SparkDataFrame,function,structType-method; +describe, describe, +describe, +describe,SparkDataFrame,ANY-method, +describe,SparkDataFrame,character-method, +describe,SparkDataFrame-method, +summary, summary, +summary,SparkDataFrame-method; +dim, +dim,SparkDataFrame-method; +distinct, distinct, +distinct,SparkDataFrame-method, +unique, +unique,SparkDataFrame-method; +dropDuplicates, +dropDuplicates, +dropDuplicates,SparkDataFrame-method; +dropna, dropna, +dropna,SparkDataFrame-method, +fillna, fillna, +fillna,SparkDataFrame-method, +na.omit, na.omit, +na.omit,SparkDataFrame-method; +dtypes, dtypes, +dtypes,SparkDataFrame-method; +except, except, +except,SparkDataFrame,SparkDataFrame-method; +explain, explain, +explain, +explain,SparkDataFrame-method, +explain,StreamingQuery-method; +filter, filter, +filter,SparkDataFrame,characterOrColumn-method, +where, where, +where,SparkDataFrame,characterOrColumn-method; +first, first, +first, +first,SparkDataFrame-method, +first,characterOrColumn-method; +gapplyCollect, gapplyCollect, +gapplyCollect, +gapplyCollect,GroupedData-method, +gapplyCollect,SparkDataFrame-method; +gapply, gapply, +gapply, +gapply,GroupedData-method, +gapply,SparkDataFrame-method; +getNumPartitions, +getNumPartitions,SparkDataFrame-method; +groupBy, groupBy, +groupBy,SparkDataFrame-method, +group_by, group_by, +group_by,SparkDataFrame-method; +head, +head,SparkDataFrame-method; +hint, hint, +hint,SparkDataFrame,character-method; +histogram, +histogram,SparkDataFrame,characterOrColumn-method; +insertInto, insertInto, +insertInto,SparkDataFrame,character-method; +intersect, intersect, +intersect,SparkDataFrame,SparkDataFrame-method; +isLocal, isLocal, +isLocal,SparkDataFrame-method; +isStreaming, 
isStreaming, +isStreaming,SparkDataFrame-method; +join, +join,SparkDataFrame,SparkDataFrame-method; +limit, limit, +limit,SparkDataFrame,numeric-method; +merge, merge, +merge,SparkDataFrame,SparkDataFrame-method; +mutate, mutate, +mutate,SparkDataFrame-method, +transform, transform, +transform,SparkDataFrame-method; +ncol, +ncol,SparkDataFrame-method; +persist, persist, +persist,SparkDataFrame,character-method; +printSchema, printSchema, +printSchema,SparkDataFrame-method; +randomSplit, randomSplit, +randomSplit,SparkDataFrame,numeric-method; +rbind, rbind, +rbind,SparkDataFrame-method; +registerTempTable, +registerTempTable, +registerTempTable,SparkDataFrame,character-method; +rename, rename, +rename,SparkDataFrame-method, +withColumnRenamed, +withColumnRenamed, +withColumnRenamed,SparkDataFrame,character,character-method; +repartition, repartition, +repartition,SparkDataFrame-method; +sample, sample, +sample,SparkDataFrame,logical,numeric-method, +sample_frac, sample_frac, +sample_frac,SparkDataFrame,logical,numeric-method; +saveAsParquetFile, +saveAsParquetFile, +saveAsParquetFile,SparkDataFrame,character-method, +write.parquet, write.parquet, +write.parquet,SparkDataFrame,character-method; +saveAsTable, saveAsTable, +saveAsTable,SparkDataFrame,character-method; +saveDF, saveDF, +saveDF,SparkDataFrame,character-method, +write.df, write.df, +write.df, +write.df,SparkDataFrame-method; +schema, schema, +schema,SparkDataFrame-method; +selectExpr, selectExpr, +selectExpr,SparkDataFrame,character-method; +showDF, showDF, +showDF,SparkDataFrame-method; +show, show, +show,Column-method, +show,GroupedData-method, +show,SparkDataFrame-method, +show,StreamingQuery-method, +show,WindowSpec-method; +storageLevel, +storageLevel,SparkDataFrame-method; +str, +str,SparkDataFrame-method; +take, take, +take,SparkDataFrame,numeric-method; +toJSON, +toJSON,SparkDataFrame-method; +union, union, +union,SparkDataFrame,SparkDataFrame-method, +unionAll, unionAll, +unionAll,SparkDataFrame,SparkDataFrame-method; +unpersist, unpersist, +unpersist,SparkDataFrame-method; +withColumn, withColumn, +withColumn,SparkDataFrame,character-method; +with, +with,SparkDataFrame-method; +write.jdbc, write.jdbc, +write.jdbc,SparkDataFrame,character,character-method; +write.json, write.json, +write.json,SparkDataFrame,character-method; +write.orc, write.orc, +write.orc,SparkDataFrame,character-method; +write.stream, write.stream, +write.stream,SparkDataFrame-method; +write.text, write.text, +write.text,SparkDataFrame,character-method +

+ + +

Examples

+ +
## Not run: 
+##D sparkR.session()
+##D path <- "path/to/file.json"
+##D df <- read.json(path)
+##D drop(df, "col1")
+##D drop(df, c("col1", "col2"))
+##D drop(df, df$col1)
+## End(Not run)
+
+ + +
[Package SparkR version 2.2.0 Index]
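An illustrative SparkR sketch of the drop variants documented above, using mtcars; assumes an active session, and "no_such_col" is a deliberately absent name that shows the no-op behaviour:

    library(SparkR)
    sparkR.session()
    df <- createDataFrame(mtcars)
    columns(drop(df, "mpg"))             # drop one column by name
    columns(drop(df, c("mpg", "cyl")))   # drop several columns by name
    columns(drop(df, df$gear))           # drop by a Column reference
    columns(drop(df, "no_such_col"))     # unknown name: no-op, columns unchanged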