flink-commits mailing list archives

From u..@apache.org
Subject flink-web git commit: [faq] Link to network buffer config section and adjust for slots
Date Mon, 09 May 2016 09:03:15 GMT
Repository: flink-web
Updated Branches:
  refs/heads/asf-site 9294101c0 -> f5dbbeb3d


[faq] Link to network buffer config section and adjust for slots


Project: http://git-wip-us.apache.org/repos/asf/flink-web/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink-web/commit/f5dbbeb3
Tree: http://git-wip-us.apache.org/repos/asf/flink-web/tree/f5dbbeb3
Diff: http://git-wip-us.apache.org/repos/asf/flink-web/diff/f5dbbeb3

Branch: refs/heads/asf-site
Commit: f5dbbeb3d2dd9d9512fc845e4141c8a057bf9c36
Parents: 9294101
Author: Ufuk Celebi <uce@apache.org>
Authored: Mon May 9 11:03:04 2016 +0200
Committer: Ufuk Celebi <uce@apache.org>
Committed: Mon May 9 11:03:04 2016 +0200

----------------------------------------------------------------------
 content/faq.html |  6 +++---
 faq.md           | 12 ++++++------
 2 files changed, 9 insertions(+), 9 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink-web/blob/f5dbbeb3/content/faq.html
----------------------------------------------------------------------
diff --git a/content/faq.html b/content/faq.html
index 8fea89c..39e5348 100644
--- a/content/faq.html
+++ b/content/faq.html
@@ -244,7 +244,7 @@ Hadoop client libraries by default.</p>
 
 <p>Additionally, we provide a special YARN Enabled download of Flink for
 users with an existing Hadoop YARN cluster. <a href="http://hadoop.apache.org/docs/r2.2.0/hadoop-yarn/hadoop-yarn-site/YARN.html">Apache Hadoop
-YARN</a> 
+YARN</a>
 is Hadoop’s cluster resource manager that allows to use
 different execution engines next to each other on a cluster.</p>
 
@@ -353,8 +353,8 @@ an in-depth discussion of how Flink handles types.</p>
 you need to adapt the number of network buffers via the config parameter
 <code>taskmanager.network.numberOfBuffers</code>.
 As a rule-of-thumb, the number of buffers should be at least
-<code>4 * numberOfNodes * numberOfTasksPerNode^2</code>. See
-<a href="http://ci.apache.org/projects/flink/flink-docs-master/setup/config.html">Configuration Reference</a> for details.</p>
+<code>4 * numberOfTaskManagers * numberOfSlotsPerTaskManager^2</code>. See
+<a href="http://ci.apache.org/projects/flink/flink-docs-master/setup/config.html#configuring-the-network-buffers">Configuration Reference</a> for details.</p>
 
 <h3 id="my-job-fails-early-with-a-javaioeofexception-what-could-be-the-cause">My job fails early with a java.io.EOFException. What could be the cause?</h3>
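The rule of thumb updated in the hunk above is simple arithmetic; as a minimal, hypothetical sketch (the function name is mine, not part of Flink):

```python
# Sketch of the rule of thumb from this commit (names are illustrative):
# taskmanager.network.numberOfBuffers should be at least
# 4 * numberOfTaskManagers * numberOfSlotsPerTaskManager^2.

def min_network_buffers(num_task_managers: int, slots_per_task_manager: int) -> int:
    """Lower bound for the taskmanager.network.numberOfBuffers setting."""
    return 4 * num_task_managers * slots_per_task_manager ** 2

# Example: 10 TaskManagers with 8 slots each.
print(min_network_buffers(10, 8))  # 2560
```

Note that the commit only renames the factors (nodes become TaskManagers, tasks per node become slots per TaskManager) and links to the network buffer config section; the arithmetic itself is unchanged.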
 

http://git-wip-us.apache.org/repos/asf/flink-web/blob/f5dbbeb3/faq.md
----------------------------------------------------------------------
diff --git a/faq.md b/faq.md
index 054bcd8..84824c8 100644
--- a/faq.md
+++ b/faq.md
@@ -50,7 +50,7 @@ Hadoop client libraries by default.
 
 Additionally, we provide a special YARN Enabled download of Flink for
 users with an existing Hadoop YARN cluster. [Apache Hadoop
-YARN](http://hadoop.apache.org/docs/r2.2.0/hadoop-yarn/hadoop-yarn-site/YARN.html) 
+YARN](http://hadoop.apache.org/docs/r2.2.0/hadoop-yarn/hadoop-yarn-site/YARN.html)
 is Hadoop's cluster resource manager that allows to use
 different execution engines next to each other on a cluster.
 
@@ -153,8 +153,8 @@ If you run Flink in a massively parallel setting (100+ parallel threads),
 you need to adapt the number of network buffers via the config parameter
 `taskmanager.network.numberOfBuffers`.
 As a rule-of-thumb, the number of buffers should be at least
-`4 * numberOfNodes * numberOfTasksPerNode^2`. See
-[Configuration Reference]({{ site.docs-snapshot }}/setup/config.html) for details.
+`4 * numberOfTaskManagers * numberOfSlotsPerTaskManager^2`. See
+[Configuration Reference]({{ site.docs-snapshot }}/setup/config.html#configuring-the-network-buffers) for details.
 
 ### My job fails early with a java.io.EOFException. What could be the cause?
 
@@ -198,7 +198,7 @@ Among the exceptions are the following:
         at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:478)
         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipelineInternal(FSNamesystem.java:6039)
         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updatePipeline(FSNamesystem.java:6002)`
-        
+
 If you are experiencing any of these, we recommend using a Flink build with a Hadoop version matching
 your local HDFS version.
 You can also manually build Flink against the exact Hadoop version (for example
@@ -242,10 +242,10 @@ cluster.sh`). You can kill their processes on Linux/Mac as follows:
 
 - Determine the process id (pid) of the JobManager / TaskManager process. You
 can use the `jps` command on Linux(if you have OpenJDK installed) or command
-`ps -ef | grep java` to find all Java processes. 
+`ps -ef | grep java` to find all Java processes.
 - Kill the process with `kill -9 <pid>`, where `pid` is the process id of the
 affected JobManager or TaskManager process.
-    
+
 On Windows, the TaskManager shows a table of all processes and allows you to
 destroy a process by right its entry.
 

