spark-commits mailing list archives

From ma...@apache.org
Subject svn commit: r1611801 - in /spark: faq.md site/faq.html
Date Fri, 18 Jul 2014 20:55:12 GMT
Author: matei
Date: Fri Jul 18 20:55:11 2014
New Revision: 1611801

URL: http://svn.apache.org/r1611801
Log:
tweak

Modified:
    spark/faq.md
    spark/site/faq.html

Modified: spark/faq.md
URL: http://svn.apache.org/viewvc/spark/faq.md?rev=1611801&r1=1611800&r2=1611801&view=diff
==============================================================================
--- spark/faq.md (original)
+++ spark/faq.md Fri Jul 18 20:55:11 2014
@@ -23,7 +23,7 @@ streaming, interactive queries, and mach
 <p class="answer">Spark supports Scala, Java and Python.</p>
 
 <p class="question">How large a cluster can Spark scale to?</p>
-<p class="answer">We are aware of multiple deployments on over 1000 nodes.</p>
+<p class="answer">We have seen multiple deployments on over 1000 nodes.</p>
 
 <p class="question">What happens when a cached dataset does not fit in memory?</p>
 <p class="answer">Spark can either spill it to disk or recompute the partitions that
don't fit in RAM each time they are requested. By default, it uses recomputation, but you
can set a dataset's <a href="{{site.url}}docs/latest/scala-programming-guide.html#rdd-persistence">storage
level</a> to <code>MEMORY_AND_DISK</code> to avoid this.  </p>
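
For context on the answer touched by this hunk: the storage level is set through the RDD persistence API linked above. A minimal Scala sketch, assuming an existing SparkContext named `sc` and a purely illustrative input path:

    import org.apache.spark.storage.StorageLevel

    // Assumes `sc` is an existing SparkContext; the path is illustrative.
    val data = sc.textFile("hdfs:///path/to/input")

    // The default storage level (MEMORY_ONLY) recomputes partitions that
    // don't fit in RAM from lineage each time they are requested.
    // MEMORY_AND_DISK spills those partitions to disk instead.
    data.persist(StorageLevel.MEMORY_AND_DISK)

Calling persist only records the desired level; partitions are materialized at that level the first time the RDD is actually computed.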

Modified: spark/site/faq.html
URL: http://svn.apache.org/viewvc/spark/site/faq.html?rev=1611801&r1=1611800&r2=1611801&view=diff
==============================================================================
--- spark/site/faq.html (original)
+++ spark/site/faq.html Fri Jul 18 20:55:11 2014
@@ -174,7 +174,7 @@ streaming, interactive queries, and mach
 <p class="answer">Spark supports Scala, Java and Python.</p>
 
 <p class="question">How large a cluster can Spark scale to?</p>
-<p class="answer">We are aware of multiple deployments on over 1000 nodes.</p>
+<p class="answer">We have seen multiple deployments on over 1000 nodes.</p>
 
 <p class="question">What happens when a cached dataset does not fit in memory?</p>
 <p class="answer">Spark can either spill it to disk or recompute the partitions that
don't fit in RAM each time they are requested. By default, it uses recomputation, but you
can set a dataset's <a href="/docs/latest/scala-programming-guide.html#rdd-persistence">storage
level</a> to <code>MEMORY_AND_DISK</code> to avoid this.  </p>


