spark-commits mailing list archives

From pwend...@apache.org
Subject [09/10] git commit: Merge remote-tracking branch 'apache-github/master' into remove-binaries
Date Sat, 04 Jan 2014 07:50:47 GMT
Merge remote-tracking branch 'apache-github/master' into remove-binaries

Conflicts:
	core/src/test/scala/org/apache/spark/DriverSuite.scala
	docs/python-programming-guide.md


Project: http://git-wip-us.apache.org/repos/asf/incubator-spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-spark/commit/604fad9c
Tree: http://git-wip-us.apache.org/repos/asf/incubator-spark/tree/604fad9c
Diff: http://git-wip-us.apache.org/repos/asf/incubator-spark/diff/604fad9c

Branch: refs/heads/master
Commit: 604fad9c39763012d97b404941f7ba7137ec2eed
Parents: 9e6f3bd c4d6145
Author: Patrick Wendell <pwendell@gmail.com>
Authored: Fri Jan 3 21:29:33 2014 -0800
Committer: Patrick Wendell <pwendell@gmail.com>
Committed: Fri Jan 3 21:29:33 2014 -0800

----------------------------------------------------------------------
 .gitignore                                      |    2 +
 README.md                                       |    8 +-
 assembly/pom.xml                                |   12 +-
 assembly/src/main/assembly/assembly.xml         |   11 +-
 bin/compute-classpath.cmd                       |    2 +-
 bin/compute-classpath.sh                        |    2 +-
 bin/pyspark                                     |   70 ++
 bin/pyspark.cmd                                 |   23 +
 bin/pyspark2.cmd                                |   55 +
 bin/run-example                                 |   91 ++
 bin/run-example.cmd                             |   23 +
 bin/run-example2.cmd                            |   61 ++
 bin/slaves.sh                                   |   91 --
 bin/spark-class                                 |  154 +++
 bin/spark-class.cmd                             |   23 +
 bin/spark-class2.cmd                            |   85 ++
 bin/spark-config.sh                             |   36 -
 bin/spark-daemon.sh                             |  183 ----
 bin/spark-daemons.sh                            |   35 -
 bin/spark-shell                                 |  102 ++
 bin/spark-shell.cmd                             |   23 +
 bin/start-all.sh                                |   34 -
 bin/start-master.sh                             |   52 -
 bin/start-slave.sh                              |   35 -
 bin/start-slaves.sh                             |   48 -
 bin/stop-all.sh                                 |   32 -
 bin/stop-master.sh                              |   27 -
 bin/stop-slaves.sh                              |   35 -
 .../mesos/CoarseMesosSchedulerBackend.scala     |    4 +-
 .../cluster/mesos/MesosSchedulerBackend.scala   |    4 +-
 .../apache/spark/ui/UIWorkloadGenerator.scala   |    4 +-
 .../scala/org/apache/spark/DriverSuite.scala    |    2 +-
 data/kmeans_data.txt                            |    6 +
 data/lr_data.txt                                | 1000 ++++++++++++++++++
 data/pagerank_data.txt                          |    6 +
 docs/bagel-programming-guide.md                 |    4 +-
 docs/building-with-maven.md                     |   14 +-
 docs/index.md                                   |   10 +-
 docs/java-programming-guide.md                  |    4 +-
 docs/mllib-guide.md                             |    2 +-
 docs/python-programming-guide.md                |   28 +-
 docs/quick-start.md                             |    8 +-
 docs/running-on-yarn.md                         |   11 +-
 docs/scala-programming-guide.md                 |   14 +-
 docs/spark-debugger.md                          |    2 +-
 docs/spark-standalone.md                        |   20 +-
 docs/streaming-programming-guide.md             |    4 +-
 ec2/spark_ec2.py                                |    2 +-
 .../streaming/examples/JavaKafkaWordCount.java  |    2 +-
 .../streaming/examples/ActorWordCount.scala     |    4 +-
 .../streaming/examples/HdfsWordCount.scala      |    2 +-
 .../streaming/examples/KafkaWordCount.scala     |    2 +-
 .../streaming/examples/MQTTWordCount.scala      |    4 +-
 .../streaming/examples/NetworkWordCount.scala   |    2 +-
 .../examples/StatefulNetworkWordCount.scala     |    2 +-
 .../streaming/examples/ZeroMQWordCount.scala    |    4 +-
 .../clickstream/PageViewGenerator.scala         |    4 +-
 .../examples/clickstream/PageViewStream.scala   |    4 +-
 kmeans_data.txt                                 |    6 -
 lr_data.txt                                     | 1000 ------------------
 make-distribution.sh                            |   11 +-
 new-yarn/pom.xml                                |  161 ---
 .../spark/deploy/yarn/ApplicationMaster.scala   |  428 --------
 .../yarn/ApplicationMasterArguments.scala       |   94 --
 .../org/apache/spark/deploy/yarn/Client.scala   |  523 ---------
 .../spark/deploy/yarn/ClientArguments.scala     |  150 ---
 .../yarn/ClientDistributedCacheManager.scala    |  228 ----
 .../spark/deploy/yarn/WorkerLauncher.scala      |  225 ----
 .../spark/deploy/yarn/WorkerRunnable.scala      |  209 ----
 .../deploy/yarn/YarnAllocationHandler.scala     |  694 ------------
 .../spark/deploy/yarn/YarnSparkHadoopUtil.scala |   43 -
 .../cluster/YarnClientClusterScheduler.scala    |   48 -
 .../cluster/YarnClientSchedulerBackend.scala    |  110 --
 .../cluster/YarnClusterScheduler.scala          |   56 -
 .../ClientDistributedCacheManagerSuite.scala    |  220 ----
 pagerank_data.txt                               |    6 -
 pom.xml                                         |   59 +-
 project/SparkBuild.scala                        |   32 +-
 pyspark                                         |   70 --
 pyspark.cmd                                     |   23 -
 pyspark2.cmd                                    |   55 -
 python/pyspark/java_gateway.py                  |    2 +-
 python/pyspark/shell.py                         |    2 +-
 python/run-tests                                |    2 +-
 repl-bin/src/deb/bin/run                        |    3 +-
 repl/pom.xml                                    |    1 -
 run-example                                     |   91 --
 run-example.cmd                                 |   23 -
 run-example2.cmd                                |   61 --
 sbin/slaves.sh                                  |   91 ++
 sbin/spark-config.sh                            |   36 +
 sbin/spark-daemon.sh                            |  183 ++++
 sbin/spark-daemons.sh                           |   35 +
 sbin/spark-executor                             |   23 +
 sbin/start-all.sh                               |   34 +
 sbin/start-master.sh                            |   52 +
 sbin/start-slave.sh                             |   35 +
 sbin/start-slaves.sh                            |   48 +
 sbin/stop-all.sh                                |   32 +
 sbin/stop-master.sh                             |   27 +
 sbin/stop-slaves.sh                             |   35 +
 spark-class                                     |  154 ---
 spark-class.cmd                                 |   23 -
 spark-class2.cmd                                |   85 --
 spark-executor                                  |   22 -
 spark-shell                                     |  102 --
 spark-shell.cmd                                 |   22 -
 yarn/README.md                                  |   12 +
 yarn/alpha/pom.xml                              |   32 +
 .../spark/deploy/yarn/ApplicationMaster.scala   |  464 ++++++++
 .../org/apache/spark/deploy/yarn/Client.scala   |  509 +++++++++
 .../spark/deploy/yarn/WorkerLauncher.scala      |  250 +++++
 .../spark/deploy/yarn/WorkerRunnable.scala      |  236 +++++
 .../deploy/yarn/YarnAllocationHandler.scala     |  680 ++++++++++++
 .../yarn/ApplicationMasterArguments.scala       |   94 ++
 .../spark/deploy/yarn/ClientArguments.scala     |  150 +++
 .../yarn/ClientDistributedCacheManager.scala    |  228 ++++
 .../spark/deploy/yarn/YarnSparkHadoopUtil.scala |   43 +
 .../cluster/YarnClientClusterScheduler.scala    |   48 +
 .../cluster/YarnClientSchedulerBackend.scala    |  110 ++
 .../cluster/YarnClusterScheduler.scala          |   56 +
 .../ClientDistributedCacheManagerSuite.scala    |  220 ++++
 yarn/pom.xml                                    |   84 +-
 .../spark/deploy/yarn/ApplicationMaster.scala   |  458 --------
 .../yarn/ApplicationMasterArguments.scala       |   94 --
 .../org/apache/spark/deploy/yarn/Client.scala   |  505 ---------
 .../spark/deploy/yarn/ClientArguments.scala     |  147 ---
 .../yarn/ClientDistributedCacheManager.scala    |  228 ----
 .../spark/deploy/yarn/WorkerLauncher.scala      |  247 -----
 .../spark/deploy/yarn/WorkerRunnable.scala      |  235 ----
 .../deploy/yarn/YarnAllocationHandler.scala     |  680 ------------
 .../spark/deploy/yarn/YarnSparkHadoopUtil.scala |   43 -
 .../cluster/YarnClientClusterScheduler.scala    |   48 -
 .../cluster/YarnClientSchedulerBackend.scala    |  110 --
 .../cluster/YarnClusterScheduler.scala          |   59 --
 .../ClientDistributedCacheManagerSuite.scala    |  220 ----
 yarn/stable/pom.xml                             |   32 +
 .../spark/deploy/yarn/ApplicationMaster.scala   |  432 ++++++++
 .../org/apache/spark/deploy/yarn/Client.scala   |  525 +++++++++
 .../spark/deploy/yarn/WorkerLauncher.scala      |  230 ++++
 .../spark/deploy/yarn/WorkerRunnable.scala      |  210 ++++
 .../deploy/yarn/YarnAllocationHandler.scala     |  695 ++++++++++++
 142 files changed, 7803 insertions(+), 8820 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-spark/blob/604fad9c/README.md
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/incubator-spark/blob/604fad9c/assembly/pom.xml
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/incubator-spark/blob/604fad9c/bin/pyspark
----------------------------------------------------------------------
diff --cc bin/pyspark
index 0000000,d6810f4..f97dfa7
mode 000000,100755..100755
--- a/bin/pyspark
+++ b/bin/pyspark
@@@ -1,0 -1,70 +1,70 @@@
+ #!/usr/bin/env bash
+ 
+ #
+ # Licensed to the Apache Software Foundation (ASF) under one or more
+ # contributor license agreements.  See the NOTICE file distributed with
+ # this work for additional information regarding copyright ownership.
+ # The ASF licenses this file to You under the Apache License, Version 2.0
+ # (the "License"); you may not use this file except in compliance with
+ # the License.  You may obtain a copy of the License at
+ #
+ #    http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ #
+ 
+ # Figure out where the Scala framework is installed
+ FWDIR="$(cd `dirname $0`/..; pwd)"
+ 
+ # Export this as SPARK_HOME
+ export SPARK_HOME="$FWDIR"
+ 
+ SCALA_VERSION=2.10
+ 
+ # Exit if the user hasn't compiled Spark
+ if [ ! -f "$FWDIR/RELEASE" ]; then
+   # Exit if the user hasn't compiled Spark
+   ls "$FWDIR"/assembly/target/scala-$SCALA_VERSION/spark-assembly*hadoop*.jar >&
/dev/null
+   if [[ $? != 0 ]]; then
+     echo "Failed to find Spark assembly in $FWDIR/assembly/target" >&2
 -    echo "You need to build Spark with sbt/sbt assembly before running this program" >&2
++    echo "You need to build Spark with sbt assembly before running this program" >&2
+     exit 1
+   fi
+ fi
+ 
+ # Load environment variables from conf/spark-env.sh, if it exists
+ if [ -e "$FWDIR/conf/spark-env.sh" ] ; then
+   . $FWDIR/conf/spark-env.sh
+ fi
+ 
+ # Figure out which Python executable to use
+ if [ -z "$PYSPARK_PYTHON" ] ; then
+   PYSPARK_PYTHON="python"
+ fi
+ export PYSPARK_PYTHON
+ 
+ # Add the PySpark classes to the Python path:
+ export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
+ 
+ # Load the PySpark shell.py script when ./pyspark is used interactively:
+ export OLD_PYTHONSTARTUP=$PYTHONSTARTUP
+ export PYTHONSTARTUP=$FWDIR/python/pyspark/shell.py
+ 
+ if [ -n "$IPYTHON_OPTS" ]; then
+   IPYTHON=1
+ fi
+ 
+ if [[ "$IPYTHON" = "1" ]] ; then
+   # IPython <1.0.0 doesn't honor PYTHONSTARTUP, while 1.0.0+ does. 
+   # Hence we clear PYTHONSTARTUP and use the -c "%run $IPYTHONSTARTUP" command which works on all versions
+   # We also force interactive mode with "-i"
+   IPYTHONSTARTUP=$PYTHONSTARTUP
+   PYTHONSTARTUP=
+   exec ipython "$IPYTHON_OPTS" -i -c "%run $IPYTHONSTARTUP"
+ else
+   exec "$PYSPARK_PYTHON" "$@"
+ fi

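For reference, a minimal usage sketch of the relocated script (illustrative invocation only; it assumes Spark has already been assembled, as the script checks above):

  $ sbt assembly
  $ ./bin/pyspark                # standard Python shell
  $ IPYTHON=1 ./bin/pyspark      # IPython shell, per the IPYTHON branch in the script
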
http://git-wip-us.apache.org/repos/asf/incubator-spark/blob/604fad9c/bin/run-example
----------------------------------------------------------------------
diff --cc bin/run-example
index 0000000,6c5d4a6..dfb4bf7
mode 000000,100755..100755
--- a/bin/run-example
+++ b/bin/run-example
@@@ -1,0 -1,91 +1,91 @@@
+ #!/usr/bin/env bash
+ 
+ #
+ # Licensed to the Apache Software Foundation (ASF) under one or more
+ # contributor license agreements.  See the NOTICE file distributed with
+ # this work for additional information regarding copyright ownership.
+ # The ASF licenses this file to You under the Apache License, Version 2.0
+ # (the "License"); you may not use this file except in compliance with
+ # the License.  You may obtain a copy of the License at
+ #
+ #    http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ #
+ 
+ cygwin=false
+ case "`uname`" in
+     CYGWIN*) cygwin=true;;
+ esac
+ 
+ SCALA_VERSION=2.10
+ 
+ # Figure out where the Scala framework is installed
+ FWDIR="$(cd `dirname $0`/..; pwd)"
+ 
+ # Export this as SPARK_HOME
+ export SPARK_HOME="$FWDIR"
+ 
+ # Load environment variables from conf/spark-env.sh, if it exists
+ if [ -e "$FWDIR/conf/spark-env.sh" ] ; then
+   . $FWDIR/conf/spark-env.sh
+ fi
+ 
+ if [ -z "$1" ]; then
+   echo "Usage: run-example <example-class> [<args>]" >&2
+   exit 1
+ fi
+ 
+ # Figure out the JAR file that our examples were packaged into. This includes a bit of a hack
+ # to avoid the -sources and -doc packages that are built by publish-local.
+ EXAMPLES_DIR="$FWDIR"/examples
+ SPARK_EXAMPLES_JAR=""
+ if [ -e "$EXAMPLES_DIR"/target/scala-$SCALA_VERSION/*assembly*[0-9Tg].jar ]; then
+   # Use the JAR from the SBT build
+   export SPARK_EXAMPLES_JAR=`ls "$EXAMPLES_DIR"/target/scala-$SCALA_VERSION/*assembly*[0-9Tg].jar`
+ fi
+ if [ -e "$EXAMPLES_DIR"/target/spark-examples*[0-9Tg].jar ]; then
+   # Use the JAR from the Maven build
+   # TODO: this also needs to become an assembly!
+   export SPARK_EXAMPLES_JAR=`ls "$EXAMPLES_DIR"/target/spark-examples*[0-9Tg].jar`
+ fi
+ if [[ -z $SPARK_EXAMPLES_JAR ]]; then
+   echo "Failed to find Spark examples assembly in $FWDIR/examples/target" >&2
 -  echo "You need to build Spark with sbt/sbt assembly before running this program" >&2
++  echo "You need to build Spark with sbt assembly before running this program" >&2
+   exit 1
+ fi
+ 
+ # Since the examples JAR ideally shouldn't include spark-core (that dependency should be
+ # "provided"), also add our standard Spark classpath, built using compute-classpath.sh.
+ CLASSPATH=`$FWDIR/bin/compute-classpath.sh`
+ CLASSPATH="$SPARK_EXAMPLES_JAR:$CLASSPATH"
+ 
+ if $cygwin; then
+     CLASSPATH=`cygpath -wp $CLASSPATH`
+     export SPARK_EXAMPLES_JAR=`cygpath -w $SPARK_EXAMPLES_JAR`
+ fi
+ 
+ # Find java binary
+ if [ -n "${JAVA_HOME}" ]; then
+   RUNNER="${JAVA_HOME}/bin/java"
+ else
+   if [ `command -v java` ]; then
+     RUNNER="java"
+   else
+     echo "JAVA_HOME is not set" >&2
+     exit 1
+   fi
+ fi
+ 
+ if [ "$SPARK_PRINT_LAUNCH_COMMAND" == "1" ]; then
+   echo -n "Spark Command: "
+   echo "$RUNNER" -cp "$CLASSPATH" "$@"
+   echo "========================================"
+   echo
+ fi
+ 
+ exec "$RUNNER" -cp "$CLASSPATH" "$@"

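As a usage sketch for the relocated example launcher (the example class name below is illustrative, not part of this commit):

  $ ./bin/run-example org.apache.spark.examples.SparkPi local
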
http://git-wip-us.apache.org/repos/asf/incubator-spark/blob/604fad9c/bin/spark-class
----------------------------------------------------------------------
diff --cc bin/spark-class
index 0000000,c4225a3..49b0bef
mode 000000,100755..100755
--- a/bin/spark-class
+++ b/bin/spark-class
@@@ -1,0 -1,154 +1,154 @@@
+ #!/usr/bin/env bash
+ 
+ #
+ # Licensed to the Apache Software Foundation (ASF) under one or more
+ # contributor license agreements.  See the NOTICE file distributed with
+ # this work for additional information regarding copyright ownership.
+ # The ASF licenses this file to You under the Apache License, Version 2.0
+ # (the "License"); you may not use this file except in compliance with
+ # the License.  You may obtain a copy of the License at
+ #
+ #    http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ #
+ 
+ cygwin=false
+ case "`uname`" in
+     CYGWIN*) cygwin=true;;
+ esac
+ 
+ SCALA_VERSION=2.10
+ 
+ # Figure out where the Scala framework is installed
+ FWDIR="$(cd `dirname $0`/..; pwd)"
+ 
+ # Export this as SPARK_HOME
+ export SPARK_HOME="$FWDIR"
+ 
+ # Load environment variables from conf/spark-env.sh, if it exists
+ if [ -e "$FWDIR/conf/spark-env.sh" ] ; then
+   . $FWDIR/conf/spark-env.sh
+ fi
+ 
+ if [ -z "$1" ]; then
+   echo "Usage: spark-class <class> [<args>]" >&2
+   exit 1
+ fi
+ 
+ # If this is a standalone cluster daemon, reset SPARK_JAVA_OPTS and SPARK_MEM to reasonable
+ # values for that; it doesn't need a lot
+ if [ "$1" = "org.apache.spark.deploy.master.Master" -o "$1" = "org.apache.spark.deploy.worker.Worker"
]; then
+   SPARK_MEM=${SPARK_DAEMON_MEMORY:-512m}
+   SPARK_DAEMON_JAVA_OPTS="$SPARK_DAEMON_JAVA_OPTS -Dspark.akka.logLifecycleEvents=true"
+   # Do not overwrite SPARK_JAVA_OPTS environment variable in this script
+   OUR_JAVA_OPTS="$SPARK_DAEMON_JAVA_OPTS"   # Empty by default
+ else
+   OUR_JAVA_OPTS="$SPARK_JAVA_OPTS"
+ fi
+ 
+ 
+ # Add java opts for master, worker, executor. The opts maybe null
+ case "$1" in
+   'org.apache.spark.deploy.master.Master')
+     OUR_JAVA_OPTS="$OUR_JAVA_OPTS $SPARK_MASTER_OPTS"
+     ;;
+   'org.apache.spark.deploy.worker.Worker')
+     OUR_JAVA_OPTS="$OUR_JAVA_OPTS $SPARK_WORKER_OPTS"
+     ;;
+   'org.apache.spark.executor.CoarseGrainedExecutorBackend')
+     OUR_JAVA_OPTS="$OUR_JAVA_OPTS $SPARK_EXECUTOR_OPTS"
+     ;;
+   'org.apache.spark.executor.MesosExecutorBackend')
+     OUR_JAVA_OPTS="$OUR_JAVA_OPTS $SPARK_EXECUTOR_OPTS"
+     ;;
+   'org.apache.spark.repl.Main')
+     OUR_JAVA_OPTS="$OUR_JAVA_OPTS $SPARK_REPL_OPTS"
+     ;;
+ esac
+ 
+ # Find the java binary
+ if [ -n "${JAVA_HOME}" ]; then
+   RUNNER="${JAVA_HOME}/bin/java"
+ else
+   if [ `command -v java` ]; then
+     RUNNER="java"
+   else
+     echo "JAVA_HOME is not set" >&2
+     exit 1
+   fi
+ fi
+ 
+ # Set SPARK_MEM if it isn't already set since we also use it for this process
+ SPARK_MEM=${SPARK_MEM:-512m}
+ export SPARK_MEM
+ 
+ # Set JAVA_OPTS to be able to load native libraries and to set heap size
+ JAVA_OPTS="$OUR_JAVA_OPTS"
+ JAVA_OPTS="$JAVA_OPTS -Djava.library.path=$SPARK_LIBRARY_PATH"
+ JAVA_OPTS="$JAVA_OPTS -Xms$SPARK_MEM -Xmx$SPARK_MEM"
+ # Load extra JAVA_OPTS from conf/java-opts, if it exists
+ if [ -e "$FWDIR/conf/java-opts" ] ; then
+   JAVA_OPTS="$JAVA_OPTS `cat $FWDIR/conf/java-opts`"
+ fi
+ export JAVA_OPTS
+ # Attention: when changing the way the JAVA_OPTS are assembled, the change must be reflected in ExecutorRunner.scala!
+ 
+ if [ ! -f "$FWDIR/RELEASE" ]; then
+   # Exit if the user hasn't compiled Spark
+   num_jars=$(ls "$FWDIR"/assembly/target/scala-$SCALA_VERSION/ | grep "spark-assembly.*hadoop.*.jar" | wc -l)
+   jars_list=$(ls "$FWDIR"/assembly/target/scala-$SCALA_VERSION/ | grep "spark-assembly.*hadoop.*.jar")
+   if [ "$num_jars" -eq "0" ]; then
+     echo "Failed to find Spark assembly in $FWDIR/assembly/target/scala-$SCALA_VERSION/"
>&2
 -    echo "You need to build Spark with 'sbt/sbt assembly' before running this program."
>&2
++    echo "You need to build Spark with 'sbt assembly' before running this program." >&2
+     exit 1
+   fi
+   if [ "$num_jars" -gt "1" ]; then
+     echo "Found multiple Spark assembly jars in $FWDIR/assembly/target/scala-$SCALA_VERSION:"
>&2
+     echo "$jars_list"
+     echo "Please remove all but one jar."
+     exit 1
+   fi
+ fi
+ 
+ TOOLS_DIR="$FWDIR"/tools
+ SPARK_TOOLS_JAR=""
+ if [ -e "$TOOLS_DIR"/target/scala-$SCALA_VERSION/*assembly*[0-9Tg].jar ]; then
+   # Use the JAR from the SBT build
+   export SPARK_TOOLS_JAR=`ls "$TOOLS_DIR"/target/scala-$SCALA_VERSION/*assembly*[0-9Tg].jar`
+ fi
+ if [ -e "$TOOLS_DIR"/target/spark-tools*[0-9Tg].jar ]; then
+   # Use the JAR from the Maven build
+   # TODO: this also needs to become an assembly!
+   export SPARK_TOOLS_JAR=`ls "$TOOLS_DIR"/target/spark-tools*[0-9Tg].jar`
+ fi
+ 
+ # Compute classpath using external script
+ CLASSPATH=`$FWDIR/bin/compute-classpath.sh`
+ 
+ if [ "$1" == "org.apache.spark.tools.JavaAPICompletenessChecker" ]; then
+   CLASSPATH="$CLASSPATH:$SPARK_TOOLS_JAR"
+ fi
+ 
+ if $cygwin; then
+   CLASSPATH=`cygpath -wp $CLASSPATH`
+   if [ "$1" == "org.apache.spark.tools.JavaAPICompletenessChecker" ]; then
+     export SPARK_TOOLS_JAR=`cygpath -w $SPARK_TOOLS_JAR`
+   fi
+ fi
+ export CLASSPATH
+ 
+ if [ "$SPARK_PRINT_LAUNCH_COMMAND" == "1" ]; then
+   echo -n "Spark Command: "
+   echo "$RUNNER" -cp "$CLASSPATH" $JAVA_OPTS "$@"
+   echo "========================================"
+   echo
+ fi
+ 
+ exec "$RUNNER" -cp "$CLASSPATH" $JAVA_OPTS "$@"
+ 
+ 

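For context, a minimal sketch of launching one of the daemon classes that the relocated script special-cases above (the memory setting is shown only to illustrate SPARK_DAEMON_MEMORY, which otherwise defaults to 512m):

  $ SPARK_DAEMON_MEMORY=1g ./bin/spark-class org.apache.spark.deploy.master.Master
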
http://git-wip-us.apache.org/repos/asf/incubator-spark/blob/604fad9c/core/src/test/scala/org/apache/spark/DriverSuite.scala
----------------------------------------------------------------------
diff --cc core/src/test/scala/org/apache/spark/DriverSuite.scala
index 7e1e55f,605588f..fb89537
--- a/core/src/test/scala/org/apache/spark/DriverSuite.scala
+++ b/core/src/test/scala/org/apache/spark/DriverSuite.scala
@@@ -35,10 -35,8 +35,10 @@@ class DriverSuite extends FunSuite wit
      val masters = Table(("master"), ("local"), ("local-cluster[2,1,512]"))
      forAll(masters) { (master: String) =>
        failAfter(60 seconds) {
 -        Utils.execute(Seq("./bin/spark-class", "org.apache.spark.DriverWithoutCleanup", master),
 -          new File(System.getenv("SPARK_HOME")))
 +        Utils.executeAndGetOutput(
-           Seq("./spark-class", "org.apache.spark.DriverWithoutCleanup", master),
++          Seq("./bin/spark-class", "org.apache.spark.DriverWithoutCleanup", master),
 +          new File(sparkHome), 
 +          Map("SPARK_TESTING" -> "1", "SPARK_HOME" -> sparkHome))
        }
      }
    }

http://git-wip-us.apache.org/repos/asf/incubator-spark/blob/604fad9c/docs/index.md
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/incubator-spark/blob/604fad9c/docs/python-programming-guide.md
----------------------------------------------------------------------
diff --cc docs/python-programming-guide.md
index 45a6250,dc187b3..5d48cb6
--- a/docs/python-programming-guide.md
+++ b/docs/python-programming-guide.md
@@@ -66,11 -66,11 +66,11 @@@ The script automatically adds the `bin/
  
  # Interactive Use
  
- The `pyspark` script launches a Python interpreter that is configured to run PySpark applications. To use `pyspark` interactively, first build Spark, then launch it directly from the command line without any options:
+ The `bin/pyspark` script launches a Python interpreter that is configured to run PySpark applications. To use `pyspark` interactively, first build Spark, then launch it directly from the command line without any options:
  
  {% highlight bash %}
 -$ sbt/sbt assembly
 +$ sbt assembly
- $ ./pyspark
+ $ ./bin/pyspark
  {% endhighlight %}
  
  The Python shell can be used explore data interactively and is a simple way to learn the API:

http://git-wip-us.apache.org/repos/asf/incubator-spark/blob/604fad9c/docs/quick-start.md
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/incubator-spark/blob/604fad9c/docs/running-on-yarn.md
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/incubator-spark/blob/604fad9c/docs/scala-programming-guide.md
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/incubator-spark/blob/604fad9c/make-distribution.sh
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/incubator-spark/blob/604fad9c/project/SparkBuild.scala
----------------------------------------------------------------------

