spark-commits mailing list archives

From pwend...@apache.org
Subject [3/5] [SPARK-1566] consolidate programming guide, and general doc updates
Date Fri, 30 May 2014 07:34:49 GMT
http://git-wip-us.apache.org/repos/asf/spark/blob/c8bf4131/docs/graphx-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/graphx-programming-guide.md b/docs/graphx-programming-guide.md
index 42ab27b..fdb9f98 100644
--- a/docs/graphx-programming-guide.md
+++ b/docs/graphx-programming-guide.md
@@ -10,7 +10,7 @@ title: GraphX Programming Guide
   <img src="img/graphx_logo.png"
        title="GraphX Logo"
        alt="GraphX"
-       width="65%" />
+       width="60%" />
   <!-- Images are downsized intentionally to improve quality on retina displays -->
 </p>
 
@@ -25,6 +25,8 @@ operators (e.g., [subgraph](#structural_operators), [joinVertices](#join_operato
 addition, GraphX includes a growing collection of graph [algorithms](#graph_algorithms) and
 [builders](#graph_builders) to simplify graph analytics tasks.
 
+**GraphX is currently an alpha component. While we will minimize API changes, some APIs may change in future releases.**
+
 ## Background on Graph-Parallel Computation
 
 From social networks to language modeling, the growing scale and importance of
@@ -86,7 +88,7 @@ support the [Bagel API](api/scala/index.html#org.apache.spark.bagel.package) and
 [Bagel programming guide](bagel-programming-guide.html). However, we encourage Bagel users to
 explore the new GraphX API and comment on issues that may complicate the transition from Bagel.
 
-## Upgrade Guide from Spark 0.9.1
+## Migrating from Spark 0.9.1
 
 GraphX in Spark {{site.SPARK_VERSION}} contains one user-facing interface change from Spark 0.9.1. [`EdgeRDD`][EdgeRDD] may now store adjacent vertex attributes to construct the triplets, so it has gained a type parameter. The edges of a graph of type `Graph[VD, ED]` are of type `EdgeRDD[ED, VD]` rather than `EdgeRDD[ED]`.
 
@@ -690,7 +692,7 @@ class GraphOps[VD, ED] {
 
 In Spark, RDDs are not persisted in memory by default. To avoid recomputation, they must be explicitly cached when using them multiple times (see the [Spark Programming Guide][RDD Persistence]). Graphs in GraphX behave the same way. **When using a graph multiple times, make sure to call [`Graph.cache()`][Graph.cache] on it first.**
 
-[RDD Persistence]: scala-programming-guide.html#rdd-persistence
+[RDD Persistence]: programming-guide.html#rdd-persistence
 [Graph.cache]: api/scala/index.html#org.apache.spark.graphx.Graph@cache():Graph[VD,ED]
 
 In iterative computations, *uncaching* may also be necessary for best performance. By default, cached RDDs and graphs will remain in memory until memory pressure forces them to be evicted in LRU order. For iterative computation, intermediate results from previous iterations will fill up the cache. Though they will eventually be evicted, the unnecessary data stored in memory will slow down garbage collection. It would be more efficient to uncache intermediate results as soon as they are no longer necessary. This involves materializing (caching and forcing) a graph or RDD every iteration, uncaching all other datasets, and only using the materialized dataset in future iterations. However, because graphs are composed of multiple RDDs, it can be difficult to unpersist them correctly. **For iterative computation we recommend using the Pregel API, which correctly unpersists intermediate results.**
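The materialize-then-unpersist pattern the paragraph above describes can be sketched in plain Python, using a hypothetical `Dataset` mock in place of a real Spark RDD or Graph (real code would call `.cache()`, force materialization with an action such as `.count()`, then `.unpersist()` the previous result):

```python
# Sketch only: `Dataset` is a minimal stand-in for a Spark RDD/Graph that
# merely tracks whether its data is "cached". Not Spark's actual API.

class Dataset:
    def __init__(self, values):
        self.values = values
        self.cached = False

    def cache(self):
        self.cached = True
        return self

    def unpersist(self):
        self.cached = False
        return self

    def map(self, f):
        return Dataset([f(v) for v in self.values])

def iterate(initial, step, num_iterations):
    """Each iteration caches the new result first, then releases the old one."""
    current = initial.cache()
    for _ in range(num_iterations):
        new = step(current).cache()  # materialize the new result...
        current.unpersist()          # ...then drop the previous iteration
        current = new
    return current

result = iterate(Dataset([1, 2, 3]), lambda d: d.map(lambda x: x * 2), 3)
print(result.values)  # [8, 16, 24]
```

The ordering is the whole point: unpersisting before the new dataset is materialized would force recomputation from the released data, which is exactly the bookkeeping the Pregel API handles for you.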

http://git-wip-us.apache.org/repos/asf/spark/blob/c8bf4131/docs/hadoop-third-party-distributions.md
----------------------------------------------------------------------
diff --git a/docs/hadoop-third-party-distributions.md b/docs/hadoop-third-party-distributions.md
index a0aeab5..32403bc 100644
--- a/docs/hadoop-third-party-distributions.md
+++ b/docs/hadoop-third-party-distributions.md
@@ -1,6 +1,6 @@
 ---
 layout: global
-title: Running with Cloudera and HortonWorks
+title: Third-Party Hadoop Distributions
 ---
 
 Spark can run against all versions of Cloudera's Distribution Including Apache Hadoop (CDH) and

http://git-wip-us.apache.org/repos/asf/spark/blob/c8bf4131/docs/index.md
----------------------------------------------------------------------
diff --git a/docs/index.md b/docs/index.md
index c9b1037..1a4ff3d 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -4,23 +4,23 @@ title: Spark Overview
 ---
 
 Apache Spark is a fast and general-purpose cluster computing system.
-It provides high-level APIs in [Scala](scala-programming-guide.html), [Java](java-programming-guide.html), and [Python](python-programming-guide.html) that make parallel jobs easy to write, and an optimized engine that supports general computation graphs.
-It also supports a rich set of higher-level tools including [Shark](http://shark.cs.berkeley.edu) (Hive on Spark), [MLlib](mllib-guide.html) for machine learning, [GraphX](graphx-programming-guide.html) for graph processing, and [Spark Streaming](streaming-programming-guide.html).
+It provides high-level APIs in Java, Scala and Python,
+and an optimized engine that supports general execution graphs.
+It also supports a rich set of higher-level tools including [Shark](http://shark.cs.berkeley.edu) (Hive on Spark), [Spark SQL](sql-programming-guide.html) for structured data, [MLlib](mllib-guide.html) for machine learning, [GraphX](graphx-programming-guide.html) for graph processing, and [Spark Streaming](streaming-programming-guide.html).
 
 # Downloading
 
-Get Spark by visiting the [downloads page](http://spark.apache.org/downloads.html) of the Apache Spark site. This documentation is for Spark version {{site.SPARK_VERSION}}. The downloads page
+Get Spark from the [downloads page](http://spark.apache.org/downloads.html) of the project website. This documentation is for Spark version {{site.SPARK_VERSION}}. The downloads page
 contains Spark packages for many popular HDFS versions. If you'd like to build Spark from
-scratch, visit the [building with Maven](building-with-maven.html) page.
+scratch, visit [building Spark with Maven](building-with-maven.html).
 
-Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS). All you need to run it is
-to have `java` to installed on your system `PATH`, or the `JAVA_HOME` environment variable
-pointing to a Java installation.
+Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS). It's easy to run
+locally on one machine --- all you need is to have `java` installed on your system `PATH`,
+or the `JAVA_HOME` environment variable pointing to a Java installation.
 
-For its Scala API, Spark {{site.SPARK_VERSION}} depends on Scala {{site.SCALA_BINARY_VERSION}}.
-If you write applications in Scala, you will need to use a compatible Scala version
-(e.g. {{site.SCALA_BINARY_VERSION}}.X) -- newer major versions may not work. You can get the
-right version of Scala from [scala-lang.org](http://www.scala-lang.org/download/).
+Spark runs on Java 6+ and Python 2.6+. For the Scala API, Spark {{site.SPARK_VERSION}} uses
+Scala {{site.SCALA_BINARY_VERSION}}. You will need to use a compatible Scala version
+({{site.SCALA_BINARY_VERSION}}.x).
 
 # Running the Examples and Shell
 
@@ -28,24 +28,23 @@ Spark comes with several sample programs.  Scala, Java and Python examples are i
 `examples/src/main` directory. To run one of the Java or Scala sample programs, use
 `bin/run-example <class> [params]` in the top-level Spark directory. (Behind the scenes, this
 invokes the more general
-[Spark submit script](cluster-overview.html#launching-applications-with-spark-submit) for
+[`spark-submit` script](submitting-applications.html) for
 launching applications). For example,
 
     ./bin/run-example SparkPi 10
 
-You can also run Spark interactively through modified versions of the Scala shell. This is a
+You can also run Spark interactively through a modified version of the Scala shell. This is a
 great way to learn the framework.
 
     ./bin/spark-shell --master local[2]
 
 The `--master` option specifies the
-[master URL for a distributed cluster](scala-programming-guide.html#master-urls), or `local` to run
+[master URL for a distributed cluster](submitting-applications.html#master-urls), or `local` to run
 locally with one thread, or `local[N]` to run locally with N threads. You should start by using
 `local` for testing. For a full list of options, run Spark shell with the `--help` option.
 
-Spark also provides a Python interface. To run Spark interactively in a Python interpreter, use
-`bin/pyspark`. As in Spark shell, you can also pass in the `--master` option to configure your
-master URL.
+Spark also provides a Python API. To run Spark interactively in a Python interpreter, use
+`bin/pyspark`:
 
     ./bin/pyspark --master local[2]
 
@@ -66,17 +65,17 @@ options for deployment:
 
 # Where to Go from Here
 
-**Programming guides:**
+**Programming Guides:**
 
 * [Quick Start](quick-start.html): a quick introduction to the Spark API; start here!
-* [Spark Programming Guide](scala-programming-guide.html): an overview of Spark concepts, and details on the Scala API
-  * [Java Programming Guide](java-programming-guide.html): using Spark from Java
-  * [Python Programming Guide](python-programming-guide.html): using Spark from Python
-* [Spark Streaming](streaming-programming-guide.html): Spark's API for processing data streams
-* [Spark SQL](sql-programming-guide.html): Support for running relational queries on Spark
-* [MLlib (Machine Learning)](mllib-guide.html): Spark's built-in machine learning library
-* [Bagel (Pregel on Spark)](bagel-programming-guide.html): simple graph processing model
-* [GraphX (Graphs on Spark)](graphx-programming-guide.html): Spark's new API for graphs
+* [Spark Programming Guide](programming-guide.html): detailed overview of Spark
+  in all supported languages (Scala, Java, Python)
+* Modules built on Spark:
+  * [Spark Streaming](streaming-programming-guide.html): processing real-time data streams
+  * [Spark SQL](sql-programming-guide.html): support for structured data and relational queries
+  * [MLlib](mllib-guide.html): built-in machine learning library
+  * [GraphX](graphx-programming-guide.html): Spark's new API for graph processing
+  * [Bagel (Pregel on Spark)](bagel-programming-guide.html): older, simple graph processing model
 
 **API Docs:**
 
@@ -84,26 +83,30 @@ options for deployment:
 * [Spark Java API (Javadoc)](api/java/index.html)
 * [Spark Python API (Epydoc)](api/python/index.html)
 
-**Deployment guides:**
+**Deployment Guides:**
 
 * [Cluster Overview](cluster-overview.html): overview of concepts and components when running on a cluster
-* [Amazon EC2](ec2-scripts.html): scripts that let you launch a cluster on EC2 in about 5 minutes
-* [Standalone Deploy Mode](spark-standalone.html): launch a standalone cluster quickly without a third-party cluster manager
-* [Mesos](running-on-mesos.html): deploy a private cluster using
-    [Apache Mesos](http://mesos.apache.org)
-* [YARN](running-on-yarn.html): deploy Spark on top of Hadoop NextGen (YARN)
+* [Submitting Applications](submitting-applications.html): packaging and deploying applications
+* Deployment modes:
+  * [Amazon EC2](ec2-scripts.html): scripts that let you launch a cluster on EC2 in about 5 minutes
+  * [Standalone Deploy Mode](spark-standalone.html): launch a standalone cluster quickly without a third-party cluster manager
+  * [Mesos](running-on-mesos.html): deploy a private cluster using
+      [Apache Mesos](http://mesos.apache.org)
+  * [YARN](running-on-yarn.html): deploy Spark on top of Hadoop NextGen (YARN)
 
-**Other documents:**
+**Other Documents:**
 
 * [Configuration](configuration.html): customize Spark via its configuration system
+* [Monitoring](monitoring.html): track the behavior of your applications
 * [Tuning Guide](tuning.html): best practices to optimize performance and memory use
+* [Job Scheduling](job-scheduling.html): scheduling resources across and within Spark applications
 * [Security](security.html): Spark security support
 * [Hardware Provisioning](hardware-provisioning.html): recommendations for cluster hardware
-* [Job Scheduling](job-scheduling.html): scheduling resources across and within Spark applications
+* [3<sup>rd</sup> Party Hadoop Distributions](hadoop-third-party-distributions.html): using common Hadoop distributions
 * [Building Spark with Maven](building-with-maven.html): build Spark using the Maven system
 * [Contributing to Spark](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark)
 
-**External resources:**
+**External Resources:**
 
 * [Spark Homepage](http://spark.apache.org)
 * [Shark](http://shark.cs.berkeley.edu): Apache Hive over Spark
@@ -112,9 +115,9 @@ options for deployment:
   exercises about Spark, Shark, Spark Streaming, Mesos, and more. [Videos](http://ampcamp.berkeley.edu/3/),
  [slides](http://ampcamp.berkeley.edu/3/) and [exercises](http://ampcamp.berkeley.edu/3/exercises/) are
   available online for free.
-* [Code Examples](http://spark.apache.org/examples.html): more are also available in the [examples subfolder](https://github.com/apache/spark/tree/master/examples/src/main/scala/org/apache/spark/) of Spark
-* [Paper Describing Spark](http://www.cs.berkeley.edu/~matei/papers/2012/nsdi_spark.pdf)
-* [Paper Describing Spark Streaming](http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-259.pdf)
+* [Code Examples](http://spark.apache.org/examples.html): more are also available in the `examples` subfolder of Spark ([Scala]({{site.SPARK_GITHUB_URL}}/tree/master/examples/src/main/scala/org/apache/spark/examples),
+ [Java]({{site.SPARK_GITHUB_URL}}/tree/master/examples/src/main/java/org/apache/spark/examples),
+ [Python]({{site.SPARK_GITHUB_URL}}/tree/master/examples/src/main/python))
 
 # Community
 

http://git-wip-us.apache.org/repos/asf/spark/blob/c8bf4131/docs/java-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/java-programming-guide.md b/docs/java-programming-guide.md
index 943fdd9..bb53958 100644
--- a/docs/java-programming-guide.md
+++ b/docs/java-programming-guide.md
@@ -1,218 +1,7 @@
 ---
 layout: global
 title: Java Programming Guide
+redirect: programming-guide.html
 ---
 
-The Spark Java API exposes all the Spark features available in the Scala version to Java.
-To learn the basics of Spark, we recommend reading through the
-[Scala programming guide](scala-programming-guide.html) first; it should be
-easy to follow even if you don't know Scala.
-This guide will show how to use the Spark features described there in Java.
-
-The Spark Java API is defined in the
-[`org.apache.spark.api.java`](api/java/index.html?org/apache/spark/api/java/package-summary.html) package, and includes
-a [`JavaSparkContext`](api/java/index.html?org/apache/spark/api/java/JavaSparkContext.html) for
-initializing Spark and [`JavaRDD`](api/java/index.html?org/apache/spark/api/java/JavaRDD.html) classes,
-which support the same methods as their Scala counterparts but take Java functions and return
-Java data and collection types. The main differences have to do with passing functions to RDD
-operations (e.g. map) and handling RDDs of different types, as discussed next.
-
-# Key Differences in the Java API
-
-There are a few key differences between the Java and Scala APIs:
-
-* Java does not support anonymous or first-class functions, so functions are passed
-  using anonymous classes that implement the
-  [`org.apache.spark.api.java.function.Function`](api/java/index.html?org/apache/spark/api/java/function/Function.html),
-  [`Function2`](api/java/index.html?org/apache/spark/api/java/function/Function2.html), etc.
-  interfaces.
-* To maintain type safety, the Java API defines specialized Function and RDD
-  classes for key-value pairs and doubles. For example, 
-  [`JavaPairRDD`](api/java/index.html?org/apache/spark/api/java/JavaPairRDD.html)
-  stores key-value pairs.
-* Some methods are defined on the basis of the passed function's return type.
-  For example `mapToPair()` returns
-  [`JavaPairRDD`](api/java/index.html?org/apache/spark/api/java/JavaPairRDD.html),
-  and `mapToDouble()` returns
-  [`JavaDoubleRDD`](api/java/index.html?org/apache/spark/api/java/JavaDoubleRDD.html).
-* RDD methods like `collect()` and `countByKey()` return Java collections types,
-  such as `java.util.List` and `java.util.Map`.
-* Key-value pairs, which are simply written as `(key, value)` in Scala, are represented
-  by the `scala.Tuple2` class, and need to be created using `new Tuple2<K, V>(key, value)`.
-
-## RDD Classes
-
-Spark defines additional operations on RDDs of key-value pairs and doubles, such
-as `reduceByKey`, `join`, and `stdev`.
-
-In the Scala API, these methods are automatically added using Scala's
-[implicit conversions](http://www.scala-lang.org/node/130) mechanism.
-
-In the Java API, the extra methods are defined in the
-[`JavaPairRDD`](api/java/index.html?org/apache/spark/api/java/JavaPairRDD.html)
-and [`JavaDoubleRDD`](api/java/index.html?org/apache/spark/api/java/JavaDoubleRDD.html)
-classes.  RDD methods like `map` are overloaded by specialized `PairFunction`
-and `DoubleFunction` classes, allowing them to return RDDs of the appropriate
-types.  Common methods like `filter` and `sample` are implemented by
-each specialized RDD class, so filtering a `PairRDD` returns a new `PairRDD`,
-etc (this achieves the "same-result-type" principle used by the [Scala collections
-framework](http://docs.scala-lang.org/overviews/core/architecture-of-scala-collections.html)).
-
-## Function Interfaces
-
-The following table lists the function interfaces used by the Java API, located in the
-[`org.apache.spark.api.java.function`](api/java/index.html?org/apache/spark/api/java/function/package-summary.html)
-package. Each interface has a single abstract method, `call()`.
-
-<table class="table">
-<tr><th>Class</th><th>Function Type</th></tr>
-
-<tr><td>Function&lt;T, R&gt;</td><td>T =&gt; R </td></tr>
-<tr><td>DoubleFunction&lt;T&gt;</td><td>T =&gt; Double </td></tr>
-<tr><td>PairFunction&lt;T, K, V&gt;</td><td>T =&gt; Tuple2&lt;K, V&gt; </td></tr>
-
-<tr><td>FlatMapFunction&lt;T, R&gt;</td><td>T =&gt; Iterable&lt;R&gt; </td></tr>
-<tr><td>DoubleFlatMapFunction&lt;T&gt;</td><td>T =&gt; Iterable&lt;Double&gt; </td></tr>
-<tr><td>PairFlatMapFunction&lt;T, K, V&gt;</td><td>T =&gt; Iterable&lt;Tuple2&lt;K, V&gt;&gt; </td></tr>
-
-<tr><td>Function2&lt;T1, T2, R&gt;</td><td>T1, T2 =&gt; R (function of two arguments)</td></tr>
-</table>
-
-## Storage Levels
-
-RDD [storage level](scala-programming-guide.html#rdd-persistence) constants, such as `MEMORY_AND_DISK`, are
-declared in the [org.apache.spark.api.java.StorageLevels](api/java/index.html?org/apache/spark/api/java/StorageLevels.html) class. To
-define your own storage level, you can use StorageLevels.create(...). 
-
-# Other Features
-
-The Java API supports other Spark features, including
-[accumulators](scala-programming-guide.html#accumulators),
-[broadcast variables](scala-programming-guide.html#broadcast-variables), and
-[caching](scala-programming-guide.html#rdd-persistence).
-
-# Upgrading From Pre-1.0 Versions of Spark
-
-In version 1.0 of Spark the Java API was refactored to better support Java 8
-lambda expressions. Users upgrading from older versions of Spark should note
-the following changes:
-
-* All `org.apache.spark.api.java.function.*` have been changed from abstract
-  classes to interfaces. This means that concrete implementations of these 
-  `Function` classes will need to use `implements` rather than `extends`.
-* Certain transformation functions now have multiple versions depending
-  on the return type. In Spark core, the map functions (`map`, `flatMap`, and
-  `mapPartitions`) have type-specific versions, e.g.
-  [`mapToPair`](api/java/org/apache/spark/api/java/JavaRDDLike.html#mapToPair(org.apache.spark.api.java.function.PairFunction))
-  and [`mapToDouble`](api/java/org/apache/spark/api/java/JavaRDDLike.html#mapToDouble(org.apache.spark.api.java.function.DoubleFunction)).
-  Spark Streaming also uses the same approach, e.g. [`transformToPair`](api/java/org/apache/spark/streaming/api/java/JavaDStreamLike.html#transformToPair(org.apache.spark.api.java.function.Function)).
-
-# Example
-
-As an example, we will implement word count using the Java API.
-
-{% highlight java %}
-import org.apache.spark.api.java.*;
-import org.apache.spark.api.java.function.*;
-
-JavaSparkContext jsc = new JavaSparkContext(...);
-JavaRDD<String> lines = jsc.textFile("hdfs://...");
-JavaRDD<String> words = lines.flatMap(
-  new FlatMapFunction<String, String>() {
-    @Override public Iterable<String> call(String s) {
-      return Arrays.asList(s.split(" "));
-    }
-  }
-);
-{% endhighlight %}
-
-The word count program starts by creating a `JavaSparkContext`, which accepts
-the same parameters as its Scala counterpart.  `JavaSparkContext` supports the
-same data loading methods as the regular `SparkContext`; here, `textFile`
-loads lines from text files stored in HDFS.
-
-To split the lines into words, we use `flatMap` to split each line on
-whitespace.  `flatMap` is passed a `FlatMapFunction` that accepts a string and
-returns an `java.lang.Iterable` of strings.
-
-Here, the `FlatMapFunction` was created inline; another option is to subclass
-`FlatMapFunction` and pass an instance to `flatMap`:
-
-{% highlight java %}
-class Split extends FlatMapFunction<String, String> {
-  @Override public Iterable<String> call(String s) {
-    return Arrays.asList(s.split(" "));
-  }
-}
-JavaRDD<String> words = lines.flatMap(new Split());
-{% endhighlight %}
-
-Java 8+ users can also write the above `FlatMapFunction` in a more concise way using 
-a lambda expression:
-
-{% highlight java %}
-JavaRDD<String> words = lines.flatMap(s -> Arrays.asList(s.split(" ")));
-{% endhighlight %}
-
-This lambda syntax can be applied to all anonymous classes in Java 8.
-
-Continuing with the word count example, we map each word to a `(word, 1)` pair:
-
-{% highlight java %}
-import scala.Tuple2;
-JavaPairRDD<String, Integer> ones = words.mapToPair(
-  new PairFunction<String, String, Integer>() {
-    @Override public Tuple2<String, Integer> call(String s) {
-      return new Tuple2<String, Integer>(s, 1);
-    }
-  }
-);
-{% endhighlight %}
-
-Note that `mapToPair` was passed a `PairFunction<String, String, Integer>` and
-returned a `JavaPairRDD<String, Integer>`.
-
-To finish the word count program, we will use `reduceByKey` to count the
-occurrences of each word:
-
-{% highlight java %}
-JavaPairRDD<String, Integer> counts = ones.reduceByKey(
-  new Function2<Integer, Integer, Integer>() {
-    @Override public Integer call(Integer i1, Integer i2) {
-      return i1 + i2;
-    }
-  }
-);
-{% endhighlight %}
-
-Here, `reduceByKey` is passed a `Function2`, which implements a function with
-two arguments.  The resulting `JavaPairRDD` contains `(word, count)` pairs.
-
-In this example, we explicitly showed each intermediate RDD.  It is also
-possible to chain the RDD transformations, so the word count example could also
-be written as:
-
-{% highlight java %}
-JavaPairRDD<String, Integer> counts = lines.flatMapToPair(
-    ...
-  ).map(
-    ...
-  ).reduceByKey(
-    ...
-  );
-{% endhighlight %}
-
-There is no performance difference between these approaches; the choice is
-just a matter of style.
-
-# API Docs
-
-[API documentation](api/java/index.html) for Spark in Java is available in Javadoc format.
-
-# Where to Go from Here
-
-Spark includes several sample programs using the Java API in
-[`examples/src/main/java`](https://github.com/apache/spark/tree/master/examples/src/main/java/org/apache/spark/examples). You can run them by passing the class name to the
-`bin/run-example` script included in Spark; for example:
-
-    ./bin/run-example JavaWordCount README.md
+This document has been merged into the [Spark programming guide](programming-guide.html).

http://git-wip-us.apache.org/repos/asf/spark/blob/c8bf4131/docs/js/api-docs.js
----------------------------------------------------------------------
diff --git a/docs/js/api-docs.js b/docs/js/api-docs.js
index 1414b6d..ce89d89 100644
--- a/docs/js/api-docs.js
+++ b/docs/js/api-docs.js
@@ -1,10 +1,27 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
 /* Dynamically injected post-processing code for the API docs */
 
 $(document).ready(function() {
   var annotations = $("dt:contains('Annotations')").next("dd").children("span.name");
-  addBadges(annotations, "AlphaComponent", ":: AlphaComponent ::", "<span class='alphaComponent badge'>Alpha Component</span>");
-  addBadges(annotations, "DeveloperApi", ":: DeveloperApi ::", "<span class='developer badge'>Developer API</span>");
-  addBadges(annotations, "Experimental", ":: Experimental ::", "<span class='experimental badge'>Experimental</span>");
+  addBadges(annotations, "AlphaComponent", ":: AlphaComponent ::", '<span class="alphaComponent badge">Alpha Component</span>');
+  addBadges(annotations, "DeveloperApi", ":: DeveloperApi ::", '<span class="developer badge">Developer API</span>');
+  addBadges(annotations, "Experimental", ":: Experimental ::", '<span class="experimental badge">Experimental</span>');
 });
 
 function addBadges(allAnnotations, name, tag, html) {

http://git-wip-us.apache.org/repos/asf/spark/blob/c8bf4131/docs/js/main.js
----------------------------------------------------------------------
diff --git a/docs/js/main.js b/docs/js/main.js
index 5905546..f1a90e4 100755
--- a/docs/js/main.js
+++ b/docs/js/main.js
@@ -1,3 +1,23 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/* Custom JavaScript code in the MarkDown docs */
+
+// Enable language-specific code tabs
 function codeTabs() {
   var counter = 0;
   var langImages = {
@@ -62,6 +82,7 @@ function makeCollapsable(elt, accordionClass, accordionBodyId, title) {
   );
 }
 
+// Enable "view solution" sections (for exercises)
 function viewSolution() {
   var counter = 0
   $("div.solution").each(function() {

http://git-wip-us.apache.org/repos/asf/spark/blob/c8bf4131/docs/mllib-guide.md
----------------------------------------------------------------------
diff --git a/docs/mllib-guide.md b/docs/mllib-guide.md
index 640ca83..95ee6bc 100644
--- a/docs/mllib-guide.md
+++ b/docs/mllib-guide.md
@@ -31,7 +31,7 @@ MLlib is a new component under active development.
 The APIs marked `Experimental`/`DeveloperApi` may change in future releases, 
 and we will provide migration guide between releases.
 
-## Dependencies
+# Dependencies
 
 MLlib uses linear algebra packages [Breeze](http://www.scalanlp.org/), which depends on
 [netlib-java](https://github.com/fommil/netlib-java), and
@@ -50,9 +50,9 @@ To use MLlib in Python, you will need [NumPy](http://www.numpy.org) version 1.4
 
 ---
 
-## Migration guide
+# Migration Guide
 
-### From 0.9 to 1.0
+## From 0.9 to 1.0
 
 In MLlib v1.0, we support both dense and sparse input in a unified way, which introduces a few
 breaking changes.  If your data is sparse, please store it in a sparse format instead of dense to
@@ -84,9 +84,9 @@ val vector: Vector = Vectors.dense(array) // a dense vector
 <div data-lang="java" markdown="1">
 
 We used to represent a feature vector by `double[]`, which is replaced by
-[`Vector`](api/scala/index.html#org.apache.spark.mllib.linalg.Vector) in v1.0. Algorithms that used
+[`Vector`](api/java/index.html?org/apache/spark/mllib/linalg/Vector.html) in v1.0. Algorithms that used
 to accept `RDD<double[]>` now take
-`RDD<Vector>`. [`LabeledPoint`](api/scala/index.html#org.apache.spark.mllib.regression.LabeledPoint)
+`RDD<Vector>`. [`LabeledPoint`](api/java/index.html?org/apache/spark/mllib/regression/LabeledPoint.html)
 is now a wrapper of `(double, Vector)` instead of `(double, double[])`. Converting `double[]` to
 `Vector` is straightforward:
 

http://git-wip-us.apache.org/repos/asf/spark/blob/c8bf4131/docs/mllib-optimization.md
----------------------------------------------------------------------
diff --git a/docs/mllib-optimization.md b/docs/mllib-optimization.md
index a22980d..97e8f4e 100644
--- a/docs/mllib-optimization.md
+++ b/docs/mllib-optimization.md
@@ -116,7 +116,7 @@ is a stochastic gradient. Here `$S$` is the sampled subset of size `$|S|=$ miniB
 $\cdot n$`.
 
 In each iteration, the sampling over the distributed dataset
-([RDD](scala-programming-guide.html#resilient-distributed-datasets-rdds)), as well as the
+([RDD](programming-guide.html#resilient-distributed-datasets-rdds)), as well as the
 computation of the sum of the partial results from each worker machine is performed by the
 standard spark routines.
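The sampling-and-averaging step this hunk refers to can be sketched in plain Python; the function and parameter names (`minibatch_sgd_step`, `gradient`) are illustrative, not MLlib's actual API, and the toy gradient ignores the data point so the iteration converges deterministically:

```python
# Hedged sketch: one mini-batch SGD step. Sample a subset S with
# |S| = mini_batch_fraction * n, average the per-point gradients over S,
# and take a gradient-descent step with that stochastic gradient.
import random

def minibatch_sgd_step(data, weights, gradient, mini_batch_fraction, step_size, rng):
    n = len(data)
    sample = rng.sample(data, max(1, int(mini_batch_fraction * n)))
    # Stochastic gradient: average of per-point gradients over the sample S
    avg = [0.0] * len(weights)
    for point in sample:
        for i, g in enumerate(gradient(weights, point)):
            avg[i] += g / len(sample)
    return [w - step_size * a for w, a in zip(weights, avg)]

# Toy objective f(w) = (w - 2)^2 for every point; its gradient 2(w - 2)
# does not depend on the sampled point, so w converges to 2 regardless of S.
rng = random.Random(0)
w = [0.0]
for _ in range(50):
    w = minibatch_sgd_step(list(range(10)), w, lambda w, p: [2 * (w[0] - 2)],
                           mini_batch_fraction=0.3, step_size=0.1, rng=rng)
print(round(w[0], 3))  # 2.0
```

In Spark itself the sampling runs over a distributed RDD and the partial sums are combined by the standard reduce machinery, rather than the local loop shown here.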
 

http://git-wip-us.apache.org/repos/asf/spark/blob/c8bf4131/docs/monitoring.md
----------------------------------------------------------------------
diff --git a/docs/monitoring.md b/docs/monitoring.md
index fffc58a..2b9e9e5 100644
--- a/docs/monitoring.md
+++ b/docs/monitoring.md
@@ -3,7 +3,7 @@ layout: global
 title: Monitoring and Instrumentation
 ---
 
-There are several ways to monitor Spark applications.
+There are several ways to monitor Spark applications: web UIs, metrics, and external instrumentation.
 
 # Web Interfaces
 

