spark-commits mailing list archives

From sro...@apache.org
Subject [1/2] spark-website git commit: Recover ec2-scripts.html and remove ec2-scripts.md.
Date Mon, 10 Jul 2017 06:36:00 GMT
Repository: spark-website
Updated Branches:
  refs/heads/remove_ec2 [created] 04d5ce051


Recover ec2-scripts.html and remove ec2-scripts.md.


Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/74622a5c
Tree: http://git-wip-us.apache.org/repos/asf/spark-website/tree/74622a5c
Diff: http://git-wip-us.apache.org/repos/asf/spark-website/diff/74622a5c

Branch: refs/heads/remove_ec2
Commit: 74622a5cd3c41c1fa6d8ea336ac003e29502b216
Parents: 878dcfd
Author: Dongjoon Hyun <dongjoon@apache.org>
Authored: Sun Jul 9 01:46:09 2017 -0700
Committer: Dongjoon Hyun <dongjoon@apache.org>
Committed: Sun Jul 9 02:24:55 2017 -0700

----------------------------------------------------------------------
 faq.md                           |   2 +-
 site/docs/2.1.1/ec2-scripts.html | 161 ++++++++++++++++++++++++++++++++++
 site/docs/2.1.1/ec2-scripts.md   |   7 --
 site/faq.html                    |   2 +-
 4 files changed, 163 insertions(+), 9 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark-website/blob/74622a5c/faq.md
----------------------------------------------------------------------
diff --git a/faq.md b/faq.md
index 614664c..c41de64 100644
--- a/faq.md
+++ b/faq.md
@@ -26,7 +26,7 @@ Spark is a fast and general processing engine compatible with Hadoop data. It ca
 <p class="answer">No. Spark's operators spill data to disk if it does not fit in memory, allowing it to run well on any sized data. Likewise, cached datasets that do not fit in memory are either spilled to disk or recomputed on the fly when needed, as determined by the RDD's <a href="{{site.baseurl}}/docs/latest/scala-programming-guide.html#rdd-persistence">storage level</a>.
 
 <p class="question">How can I run Spark on a cluster?</p>
-<p class="answer">You can use either the <a href="{{site.baseurl}}/docs/latest/spark-standalone.html">standalone deploy mode</a>, which only needs Java to be installed on each node, or the <a href="{{site.baseurl}}/docs/latest/running-on-mesos.html">Mesos</a> and <a href="{{site.baseurl}}/docs/latest/running-on-yarn.html">YARN</a> cluster managers. If you'd like to run on Amazon EC2, Spark provides <a href="{{site.baseurl}}/docs/latest/ec2-scripts.html}}">EC2 scripts</a> to automatically launch a cluster.</p>
+<p class="answer">You can use either the <a href="{{site.baseurl}}/docs/latest/spark-standalone.html">standalone deploy mode</a>, which only needs Java to be installed on each node, or the <a href="{{site.baseurl}}/docs/latest/running-on-mesos.html">Mesos</a> and <a href="{{site.baseurl}}/docs/latest/running-on-yarn.html">YARN</a> cluster managers. If you'd like to run on Amazon EC2, Spark provides <a href="{{site.baseurl}}/docs/latest/ec2-scripts.html">EC2 scripts</a> to automatically launch a cluster.</p>
 
 <p>Note that you can also run Spark locally (possibly on multiple cores) without any special setup by just passing <code>local[N]</code> as the master URL, where <code>N</code> is the number of parallel threads you want.</p>
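
The hunk above removes a stray "}}" that a Liquid template expansion left inside an href. As a hedged sketch (a hypothetical checker, not part of this commit), leftover template braces in generated HTML can be caught with a simple scan:

```python
import re

# Hypothetical checker (not part of this commit): flag href values in
# generated HTML that still contain Liquid/Jekyll braces, such as the
# stray "}}" fixed in the hunk above.
BAD_HREF = re.compile(r'href="([^"]*(?:\{\{|\}\})[^"]*)"')

def find_malformed_hrefs(html: str) -> list:
    """Return href values containing leftover template braces."""
    return BAD_HREF.findall(html)

# The broken link from site/faq.html before this commit:
sample = '<a href="/docs/latest/ec2-scripts.html}}">EC2 scripts</a>'
print(find_malformed_hrefs(sample))  # → ['/docs/latest/ec2-scripts.html}}']
```

Run over the generated site/ tree, a check like this would have flagged both occurrences of the broken EC2 link before publication.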
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/74622a5c/site/docs/2.1.1/ec2-scripts.html
----------------------------------------------------------------------
diff --git a/site/docs/2.1.1/ec2-scripts.html b/site/docs/2.1.1/ec2-scripts.html
new file mode 100644
index 0000000..320317f
--- /dev/null
+++ b/site/docs/2.1.1/ec2-scripts.html
@@ -0,0 +1,161 @@
+
+<!DOCTYPE html>
+<!--[if lt IE 7]>      <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]-->
+<!--[if IE 7]>         <html class="no-js lt-ie9 lt-ie8"> <![endif]-->
+<!--[if IE 8]>         <html class="no-js lt-ie9"> <![endif]-->
+<!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]-->
+    <head>
+        <meta charset="utf-8">
+        <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
+        <title>Running Spark on EC2 - Spark 2.1.1 Documentation</title>
+        
+
+        
+          <meta http-equiv="refresh" content="0; url=https://github.com/amplab/spark-ec2#readme">
+          <link rel="canonical" href="https://github.com/amplab/spark-ec2#readme" />
+        
+
+        <link rel="stylesheet" href="css/bootstrap.min.css">
+        <style>
+            body {
+                padding-top: 60px;
+                padding-bottom: 40px;
+            }
+        </style>
+        <meta name="viewport" content="width=device-width">
+        <link rel="stylesheet" href="css/bootstrap-responsive.min.css">
+        <link rel="stylesheet" href="css/main.css">
+
+        <script src="js/vendor/modernizr-2.6.1-respond-1.1.0.min.js"></script>
+
+        <link rel="stylesheet" href="css/pygments-default.css">
+
+        
+
+    </head>
+    <body>
+        <!--[if lt IE 7]>
+            <p class="chromeframe">You are using an outdated browser. <a href="http://browsehappy.com/">Upgrade your browser today</a> or <a href="http://www.google.com/chromeframe/?redirect=true">install Google Chrome Frame</a> to better experience this site.</p>
+        <![endif]-->
+
+        <!-- This code is taken from http://twitter.github.com/bootstrap/examples/hero.html -->
+
+        <div class="navbar navbar-fixed-top" id="topbar">
+            <div class="navbar-inner">
+                <div class="container">
+                    <div class="brand"><a href="index.html">
+                      <img src="img/spark-logo-hd.png" style="height:50px;"/></a><span class="version">2.1.1</span>
+                    </div>
+                    <ul class="nav">
+                        <!--TODO(andyk): Add class="active" attribute to li some how.-->
+                        <li><a href="index.html">Overview</a></li>
+
+                        <li class="dropdown">
+                            <a href="#" class="dropdown-toggle" data-toggle="dropdown">Programming Guides<b class="caret"></b></a>
+                            <ul class="dropdown-menu">
+                                <li><a href="quick-start.html">Quick Start</a></li>
+                                <li><a href="programming-guide.html">Spark Programming Guide</a></li>
+                                <li class="divider"></li>
+                                <li><a href="streaming-programming-guide.html">Spark Streaming</a></li>
+                                <li><a href="sql-programming-guide.html">DataFrames, Datasets and SQL</a></li>
+                                <li><a href="structured-streaming-programming-guide.html">Structured Streaming</a></li>
+                                <li><a href="ml-guide.html">MLlib (Machine Learning)</a></li>
+                                <li><a href="graphx-programming-guide.html">GraphX (Graph Processing)</a></li>
+                                <li><a href="sparkr.html">SparkR (R on Spark)</a></li>
+                            </ul>
+                        </li>
+
+                        <li class="dropdown">
+                            <a href="#" class="dropdown-toggle" data-toggle="dropdown">API Docs<b class="caret"></b></a>
+                            <ul class="dropdown-menu">
+                                <li><a href="api/scala/index.html#org.apache.spark.package">Scala</a></li>
+                                <li><a href="api/java/index.html">Java</a></li>
+                                <li><a href="api/python/index.html">Python</a></li>
+                                <li><a href="api/R/index.html">R</a></li>
+                            </ul>
+                        </li>
+
+                        <li class="dropdown">
+                            <a href="#" class="dropdown-toggle" data-toggle="dropdown">Deploying<b class="caret"></b></a>
+                            <ul class="dropdown-menu">
+                                <li><a href="cluster-overview.html">Overview</a></li>
+                                <li><a href="submitting-applications.html">Submitting Applications</a></li>
+                                <li class="divider"></li>
+                                <li><a href="spark-standalone.html">Spark Standalone</a></li>
+                                <li><a href="running-on-mesos.html">Mesos</a></li>
+                                <li><a href="running-on-yarn.html">YARN</a></li>
+                            </ul>
+                        </li>
+
+                        <li class="dropdown">
+                            <a href="api.html" class="dropdown-toggle" data-toggle="dropdown">More<b class="caret"></b></a>
+                            <ul class="dropdown-menu">
+                                <li><a href="configuration.html">Configuration</a></li>
+                                <li><a href="monitoring.html">Monitoring</a></li>
+                                <li><a href="tuning.html">Tuning Guide</a></li>
+                                <li><a href="job-scheduling.html">Job Scheduling</a></li>
+                                <li><a href="security.html">Security</a></li>
+                                <li><a href="hardware-provisioning.html">Hardware Provisioning</a></li>
+                                <li class="divider"></li>
+                                <li><a href="building-spark.html">Building Spark</a></li>
+                                <li><a href="http://spark.apache.org/contributing.html">Contributing to Spark</a></li>
+                                <li><a href="http://spark.apache.org/third-party-projects.html">Third Party Projects</a></li>
+                            </ul>
+                        </li>
+                    </ul>
+                    <!--<p class="navbar-text pull-right"><span class="version-text">v2.1.1</span></p>-->
+                </div>
+            </div>
+        </div>
+
+        <div class="container-wrapper">
+
+            
+                <div class="content" id="content">
+                    
+                        <h1 class="title">Running Spark on EC2</h1>
+                    
+
+                    <p>This document has been superseded and replaced by documentation at https://github.com/amplab/spark-ec2#readme</p>
+
+
+                </div>
+            
+             <!-- /container -->
+        </div>
+
+        <script src="js/vendor/jquery-1.8.0.min.js"></script>
+        <script src="js/vendor/bootstrap.min.js"></script>
+        <script src="js/vendor/anchor.min.js"></script>
+        <script src="js/main.js"></script>
+
+        <!-- MathJax Section -->
+        <script type="text/x-mathjax-config">
+            MathJax.Hub.Config({
+                TeX: { equationNumbers: { autoNumber: "AMS" } }
+            });
+        </script>
+        <script>
+            // Note that we load MathJax this way to work with local file (file://), HTTP and HTTPS.
+            // We could use "//cdn.mathjax...", but that won't support "file://".
+            (function(d, script) {
+                script = d.createElement('script');
+                script.type = 'text/javascript';
+                script.async = true;
+                script.onload = function(){
+                    MathJax.Hub.Config({
+                        tex2jax: {
+                            inlineMath: [ ["$", "$"], ["\\\\(","\\\\)"] ],
+                            displayMath: [ ["$$","$$"], ["\\[", "\\]"] ],
+                            processEscapes: true,
+                            skipTags: ['script', 'noscript', 'style', 'textarea', 'pre']
+                        }
+                    });
+                };
+                script.src = ('https:' == document.location.protocol ? 'https://' : 'http://') +
+                    'cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML';
+                d.getElementsByTagName('head')[0].appendChild(script);
+            }(document));
+        </script>
+    </body>
+</html>
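
The recovered page above is a redirect stub: a zero-delay meta refresh plus a rel="canonical" link, both pointing at the spark-ec2 README. A hedged sketch (hypothetical helper, not part of this commit) of checking that the two targets agree:

```python
import re

# Hypothetical consistency check (not part of this commit): a redirect
# stub like the recovered ec2-scripts.html should have its meta-refresh
# target match its rel="canonical" URL.
def redirect_targets(html):
    """Return (meta-refresh URL, canonical URL), either may be None."""
    refresh = re.search(r'content="0;\s*url=([^"]+)"', html)
    canonical = re.search(r'rel="canonical" href="([^"]+)"', html)
    return (refresh.group(1) if refresh else None,
            canonical.group(1) if canonical else None)

page = ('<meta http-equiv="refresh" content="0; url=https://github.com/amplab/spark-ec2#readme">\n'
        '<link rel="canonical" href="https://github.com/amplab/spark-ec2#readme" />')
refresh_url, canonical_url = redirect_targets(page)
print(refresh_url == canonical_url)  # → True
```

The canonical link matters here because the instant meta refresh would otherwise leave search engines indexing a page whose only content is the redirect notice.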

http://git-wip-us.apache.org/repos/asf/spark-website/blob/74622a5c/site/docs/2.1.1/ec2-scripts.md
----------------------------------------------------------------------
diff --git a/site/docs/2.1.1/ec2-scripts.md b/site/docs/2.1.1/ec2-scripts.md
deleted file mode 100644
index 6cd39db..0000000
--- a/site/docs/2.1.1/ec2-scripts.md
+++ /dev/null
@@ -1,7 +0,0 @@
----		
-layout: global
-title: Running Spark on EC2
-redirect: https://github.com/amplab/spark-ec2#readme
----
-
-This document has been superseded and replaced by documentation at https://github.com/amplab/spark-ec2#readme

http://git-wip-us.apache.org/repos/asf/spark-website/blob/74622a5c/site/faq.html
----------------------------------------------------------------------
diff --git a/site/faq.html b/site/faq.html
index e0d6c63..584c12b 100644
--- a/site/faq.html
+++ b/site/faq.html
@@ -213,7 +213,7 @@ Spark is a fast and general processing engine compatible with Hadoop data. It ca
 <p class="answer">No. Spark's operators spill data to disk if it does not fit in memory, allowing it to run well on any sized data. Likewise, cached datasets that do not fit in memory are either spilled to disk or recomputed on the fly when needed, as determined by the RDD's <a href="/docs/latest/scala-programming-guide.html#rdd-persistence">storage level</a>.
 
 <p class="question">How can I run Spark on a cluster?</p>
-<p class="answer">You can use either the <a href="/docs/latest/spark-standalone.html">standalone deploy mode</a>, which only needs Java to be installed on each node, or the <a href="/docs/latest/running-on-mesos.html">Mesos</a> and <a href="/docs/latest/running-on-yarn.html">YARN</a> cluster managers. If you'd like to run on Amazon EC2, Spark provides <a href="/docs/latest/ec2-scripts.html}}">EC2 scripts</a> to automatically launch a cluster.</p>
+<p class="answer">You can use either the <a href="/docs/latest/spark-standalone.html">standalone deploy mode</a>, which only needs Java to be installed on each node, or the <a href="/docs/latest/running-on-mesos.html">Mesos</a> and <a href="/docs/latest/running-on-yarn.html">YARN</a> cluster managers. If you'd like to run on Amazon EC2, Spark provides <a href="/docs/latest/ec2-scripts.html">EC2 scripts</a> to automatically launch a cluster.</p>
 
 <p>Note that you can also run Spark locally (possibly on multiple cores) without any special setup by just passing <code>local[N]</code> as the master URL, where <code>N</code> is the number of parallel threads you want.</p>
 


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org

