From: sryza
To: dev@spark.apache.org
Reply-To: dev@spark.apache.org
Subject: [GitHub] spark pull request: [Spark-1234] clean up text in running-on-yarn....
Date: Wed, 12 Mar 2014 17:53:50 +0000 (UTC)

Github user sryza commented on a diff in the pull request:

    https://github.com/apache/spark/pull/130#discussion_r10530856

    --- Diff: docs/running-on-yarn.md ---
    @@ -99,16 +99,16 @@ With this mode, your application is actually run on the remote machine where the
     
     ## Launch spark application with yarn-client mode.
     
    -With yarn-client mode, the application will be launched locally. Just like running application or spark-shell on Local / Mesos / Standalone mode. The launch method is also the similar with them, just make sure that when you need to specify a master url, use "yarn-client" instead. And you also need to export the env value for SPARK_JAR.
    +With yarn-client mode, the application will be launched locally, as when running the application or spark-shell on Local / Mesos / Standalone mode. The method to launch is similar as with those modes, except you should specify "yarn-client" as the master URL. You also need to export the env value for SPARK_JAR.
     
     Configuration in yarn-client mode:
     
    -In order to tune worker core/number/memory etc. You need to export environment variables or add them to the spark configuration file (./conf/spark_env.sh). The following are the list of options.
    +In order to tune worker core/number/memory etc. you need to export environment variables or add them to the spark configuration file (./conf/spark_env.sh). The following are the list of options.
     
     * `SPARK_WORKER_INSTANCES`, Number of workers to start (Default: 2)
    -* `SPARK_WORKER_CORES`, Number of cores for the workers (Default: 1).
    +* `SPARK_WORKER_CORES`, Number of cores for the workers (Default: 1)
     * `SPARK_WORKER_MEMORY`, Memory per Worker (e.g. 1000M, 2G) (Default: 1G)
    -* `SPARK_MASTER_MEMORY`, Memory for Master (e.g. 1000M, 2G) (Default: 512 Mb)
    +* `SPARK_MASTER_MEMORY`, Memory for Master (e.g. 1000M, 2G) (Default: 512 M)
    --- End diff ---
    
    Should take out the space between 512 and "M" as well.
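
For reference, a minimal shell sketch of the yarn-client launch described in the diff above. It uses only the options listed there; the SPARK_JAR path is a hypothetical placeholder, and the exact invocation can differ across Spark versions.

    # Sketch of a yarn-client launch using the options described in the diff.
    # The assembly jar path is a placeholder; point SPARK_JAR at your own build.
    export SPARK_JAR=./assembly/target/scala-2.10/spark-assembly-hadoop2.2.0.jar
    export SPARK_WORKER_INSTANCES=2   # number of workers to start
    export SPARK_WORKER_CORES=1       # cores per worker
    export SPARK_WORKER_MEMORY=1G     # memory per worker
    export SPARK_MASTER_MEMORY=512M   # memory for the master
    MASTER=yarn-client ./bin/spark-shell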