spark-commits mailing list archives

Subject spark git commit: [MINOR][DOC] Add more built-in sources in
Date Tue, 18 Oct 2016 20:38:41 GMT
Repository: spark
Updated Branches:
  refs/heads/branch-2.0 26e978a93 -> 6ef923137

[MINOR][DOC] Add more built-in sources in

## What changes were proposed in this pull request?
Add more built-in sources in

## How was this patch tested?

Author: Weiqing Yang <>

Closes #15522 from weiqingy/dsDoc.

(cherry picked from commit 20dd11096cfda51e47b9dbe3b715a12ccbb4ce1d)
Signed-off-by: Reynold Xin <>


Branch: refs/heads/branch-2.0
Commit: 6ef9231377c7cce949dc7a988bb9d7a5cb3e458d
Parents: 26e978a
Author: Weiqing Yang <>
Authored: Tue Oct 18 13:38:14 2016 -0700
Committer: Reynold Xin <>
Committed: Tue Oct 18 13:38:50 2016 -0700

 docs/ | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/docs/ b/docs/
index 0bd0093..0a6bdb6 100644
--- a/docs/
+++ b/docs/
@@ -387,8 +387,8 @@ In the simplest form, the default data source (`parquet` unless otherwise
 You can also manually specify the data source that will be used along with any extra options
 that you would like to pass to the data source. Data sources are specified by their fully
 qualified name (i.e., `org.apache.spark.sql.parquet`), but for built-in sources you can also use their short
-names (`json`, `parquet`, `jdbc`). DataFrames loaded from any data source type can be converted into other types
-using this syntax.
+names (`json`, `parquet`, `jdbc`, `orc`, `libsvm`, `csv`, `text`). DataFrames loaded from any data
+source type can be converted into other types using this syntax.
 <div class="codetabs">
 <div data-lang="scala"  markdown="1">

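For context, the syntax the amended passage refers to is Spark SQL's `format(...)` option on the read/write APIs. A minimal sketch, assuming a `SparkSession` named `spark` and hypothetical input/output paths:

```scala
// Manually specify a built-in source by its short name ("json" here is one
// of the short names the amended doc lists: json, parquet, jdbc, orc,
// libsvm, csv, text). The input path is hypothetical.
val peopleDF = spark.read.format("json").load("examples/people.json")

// A DataFrame loaded from one source type can be written out as another --
// the "converted into other types" conversion the doc text mentions.
peopleDF.select("name", "age").write.format("parquet").save("namesAndAges.parquet")
```

The fully qualified form (e.g. `format("org.apache.spark.sql.parquet")`) behaves the same; the short names are simply aliases registered for the built-in sources.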