spark-commits mailing list archives

From pwend...@apache.org
Subject git commit: Fix two download suggestions in the docs:
Date Tue, 06 May 2014 19:08:00 GMT
Repository: spark
Updated Branches:
  refs/heads/branch-1.0 0c3e4150f -> 1083f2bde


Fix two download suggestions in the docs:

1) On the quick start page provide a direct link to the downloads (suggested by @pbailis).
2) On the index page, don't suggest users always have to build Spark, since many won't.

Author: Patrick Wendell <pwendell@gmail.com>

Closes #662 from pwendell/quick-start and squashes the following commits:

0622f27 [Patrick Wendell] Fix two download suggestions in the docs:
(cherry picked from commit 7b978c1ac59718b85e512c46105b6af641afc3dc)

Signed-off-by: Patrick Wendell <pwendell@gmail.com>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/1083f2bd
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/1083f2bd
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/1083f2bd

Branch: refs/heads/branch-1.0
Commit: 1083f2bde81ff6185c42427d3390c4e07c215771
Parents: 0c3e415
Author: Patrick Wendell <pwendell@gmail.com>
Authored: Tue May 6 12:07:46 2014 -0700
Committer: Patrick Wendell <pwendell@gmail.com>
Committed: Tue May 6 12:07:56 2014 -0700

----------------------------------------------------------------------
 docs/index.md       | 36 ++++++++++--------------------------
 docs/quick-start.md |  8 +++-----
 2 files changed, 13 insertions(+), 31 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/1083f2bd/docs/index.md
----------------------------------------------------------------------
diff --git a/docs/index.md b/docs/index.md
index 2daa208..e364771 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -9,17 +9,18 @@ It also supports a rich set of higher-level tools including [Shark](http://shark
 
 # Downloading
 
-Get Spark by visiting the [downloads page](http://spark.apache.org/downloads.html) of the Apache Spark site. This documentation is for Spark version {{site.SPARK_VERSION}}.
+Get Spark by visiting the [downloads page](http://spark.apache.org/downloads.html) of the
+Apache Spark site. This documentation is for Spark version {{site.SPARK_VERSION}}. The downloads page
+contains Spark packages for many popular HDFS versions. If you'd like to build Spark from
+scratch, visit the [building with Maven](building-with-maven.html) page.
 
-Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS). All you need to run it is to have `java` to installed on your system `PATH`, or the `JAVA_HOME` environment variable pointing to a Java installation.
+Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS). All you need to run it is
+to have `java` to installed on your system `PATH`, or the `JAVA_HOME` environment variable
+pointing to a Java installation.
 
-# Building
-
-Spark uses [Simple Build Tool](http://www.scala-sbt.org), which is bundled with it. To compile the code, go into the top-level Spark directory and run
-
-    sbt/sbt assembly
-
-For its Scala API, Spark {{site.SPARK_VERSION}} depends on Scala {{site.SCALA_BINARY_VERSION}}. If you write applications in Scala, you will need to use a compatible Scala version (e.g. {{site.SCALA_BINARY_VERSION}}.X) -- newer major versions may not work. You can get the right version of Scala from [scala-lang.org](http://www.scala-lang.org/download/).
+For its Scala API, Spark {{site.SPARK_VERSION}} depends on Scala {{site.SCALA_BINARY_VERSION}}.
+If you write applications in Scala, you will need to use a compatible Scala version
+(e.g. {{site.SCALA_BINARY_VERSION}}.X) -- newer major versions may not work. You can get the
+right version of Scala from [scala-lang.org](http://www.scala-lang.org/download/).
 
 # Running the Examples and Shell
 
@@ -50,23 +51,6 @@ options for deployment:
 * [Apache Mesos](running-on-mesos.html)
 * [Hadoop YARN](running-on-yarn.html)
 
-# A Note About Hadoop Versions
-
-Spark uses the Hadoop-client library to talk to HDFS and other Hadoop-supported
-storage systems. Because the HDFS protocol has changed in different versions of
-Hadoop, you must build Spark against the same version that your cluster uses.
-By default, Spark links to Hadoop 1.0.4. You can change this by setting the
-`SPARK_HADOOP_VERSION` variable when compiling:
-
-    SPARK_HADOOP_VERSION=2.2.0 sbt/sbt assembly
-
-In addition, if you wish to run Spark on [YARN](running-on-yarn.html), set
-`SPARK_YARN` to `true`:
-
-    SPARK_HADOOP_VERSION=2.0.5-alpha SPARK_YARN=true sbt/sbt assembly
-
-Note that on Windows, you need to set the environment variables on separate lines, e.g., `set SPARK_HADOOP_VERSION=1.2.1`.
-
 # Where to Go from Here
 
 **Programming guides:**
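As an illustrative aside (not part of the patch above), the runtime prerequisite the revised index.md states -- `java` on the system `PATH`, or `JAVA_HOME` pointing at a Java installation -- can be sketched as a small POSIX-sh check:

```shell
#!/bin/sh
# Report how (or whether) a Java installation is visible, per the
# prerequisite described in docs/index.md: either `java` resolves on
# PATH, or JAVA_HOME points at an installation with bin/java.
if command -v java >/dev/null 2>&1; then
  echo "ok: java on PATH"
elif [ -n "${JAVA_HOME:-}" ] && [ -x "${JAVA_HOME}/bin/java" ]; then
  echo "ok: java via JAVA_HOME"
else
  echo "missing: install Java or set JAVA_HOME"
fi
```

The same logic works on both branches the docs mention (UNIX-like systems; on Windows it would be adapted to `where java` and `%JAVA_HOME%`).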

http://git-wip-us.apache.org/repos/asf/spark/blob/1083f2bd/docs/quick-start.md
----------------------------------------------------------------------
diff --git a/docs/quick-start.md b/docs/quick-start.md
index 64996b5..478b790 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -9,11 +9,9 @@ title: Quick Start
 This tutorial provides a quick introduction to using Spark. We will first introduce the API through Spark's interactive Scala shell (don't worry if you don't know Scala -- you will not need much for this), then show how to write standalone applications in Scala, Java, and Python.
 See the [programming guide](scala-programming-guide.html) for a more complete reference.
 
-To follow along with this guide, you only need to have successfully built Spark on one machine. Simply go into your Spark directory and run:
-
-{% highlight bash %}
-$ sbt/sbt assembly
-{% endhighlight %}
+To follow along with this guide, first download a packaged release of Spark from the
+[Spark website](http://spark.apache.org/downloads.html). Since we won't be using HDFS,
+you can download a package for any version of Hadoop.
 
 # Interactive Analysis with the Spark Shell
 
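The new quick-start text sends readers to a prebuilt package instead of a source build. As a hedged sketch of that workflow -- the version number and Hadoop label below are placeholders for illustration, not values taken from this commit -- prebuilt packages on the downloads page are conventionally named and used like this:

```shell
#!/bin/sh
# Illustrative only: SPARK_VERSION and HADOOP_LABEL are placeholder
# values; pick the actual package on the downloads page.
SPARK_VERSION="1.0.0"
HADOOP_LABEL="hadoop2"
PACKAGE="spark-${SPARK_VERSION}-bin-${HADOOP_LABEL}.tgz"
echo "$PACKAGE"
# After downloading, a reader would unpack it and start the shell:
#   tar -xzf "$PACKAGE"
#   cd "${PACKAGE%.tgz}"
#   ./bin/spark-shell
```

Since the guide avoids HDFS, any Hadoop label works for following along, which is exactly the point of the doc change.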

