flink-commits mailing list archives

From chiwanp...@apache.org
Subject flink git commit: [hotfix] [docs] Update python.md
Date Tue, 22 Mar 2016 05:08:56 GMT
Repository: flink
Updated Branches:
  refs/heads/master 048cda72b -> 34a42681d

[hotfix] [docs] Update python.md

This closes #1814.

Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/34a42681
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/34a42681
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/34a42681

Branch: refs/heads/master
Commit: 34a42681d331bcb4089b4b3467043b1e848f8931
Parents: 048cda7
Author: zentol <chesnay@apache.org>
Authored: Thu Mar 17 15:05:15 2016 +0100
Committer: Chiwan Park <chiwanpark@apache.org>
Committed: Tue Mar 22 14:07:33 2016 +0900

 docs/apis/batch/python.md | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/docs/apis/batch/python.md b/docs/apis/batch/python.md
index 359d2ed..646683d 100644
--- a/docs/apis/batch/python.md
+++ b/docs/apis/batch/python.md
@@ -150,7 +150,10 @@ Project setup
 Apart from setting up Flink, no additional work is required. The python package can be found
 in the /resource folder of your Flink distribution. The flink package, along with the plan
 and optional packages are automatically distributed among the cluster via HDFS when running
 a job.
-The Python API was tested on Linux systems that have Python 2.7 or 3.4 installed.
+The Python API was tested on Linux/Windows systems that have Python 2.7 or 3.4 installed.
+By default Flink will start python processes by calling "python" or "python3", depending on which start-script
+was used. By setting the "python.binary.python[2/3]" key in the flink-conf.yaml you can modify this behaviour to use a binary of your choice.
 {% top %}
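For readers of this hunk: the "python.binary.python[2/3]" notation presumably expands to two separate keys, one per Python major version. A hedged sketch of what such flink-conf.yaml entries might look like (the paths below are placeholders, not Flink defaults):

```yaml
# Hypothetical flink-conf.yaml entries overriding the binaries the
# Python API launches; adjust the paths to your own installation.
python.binary.python2: /usr/local/bin/python2.7
python.binary.python3: /usr/local/bin/python3.4
```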
@@ -216,7 +219,7 @@ data.flat_map(
         <p>Transforms a parallel partition in a single function call. The function get the partition
         as an `Iterator` and can produce an arbitrary number of result values. The number
-        elements in each partition depends on the degree-of-parallelism and previous operations.</p>
+        elements in each partition depends on the parallelism and previous operations.</p>
 {% highlight python %}
 data.map_partition(lambda x,c: [value * 2 for value in x])
 {% endhighlight %}
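The map_partition snippet in this hunk hands the user function a partition iterator `x` and a collector `c`, and the function may return an iterable of results. As a rough illustration of those semantics, here is a plain-Python simulation; the `map_partition` helper and the list-based partitions below are assumptions for demonstration, not the Flink runtime or its API:

```python
def map_partition(partitions, fn):
    # Hand each partition to fn as an iterator plus a collector list;
    # fn may append to the collector or return an iterable of results.
    out = []
    for part in partitions:
        collector = []
        returned = fn(iter(part), collector)
        out.extend(collector)
        if returned is not None:
            out.extend(returned)
    return out

# Same shape as the documented lambda: the whole partition is seen at once,
# and one result is emitted per input element here.
doubled = map_partition([[1, 2], [3, 4, 5]], lambda x, c: [v * 2 for v in x])
print(doubled)  # [2, 4, 6, 8, 10]
```

Note that the number of elements each invocation sees depends on how the input was partitioned, which is exactly why the documented behaviour varies with parallelism and previous operations.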
@@ -566,13 +569,13 @@ it executes. Execution environment parallelism can be overwritten by
 parallelism of an operator.
 The default parallelism of an execution environment can be specified by calling the
-`set_degree_of_parallelism()` method. To execute all operators, data sources, and data sinks of the
+`set_parallelism()` method. To execute all operators, data sources, and data sinks of the
 [WordCount](#example-program) example program with a parallelism of `3`, set the default parallelism of the
 execution environment as follows:
 {% highlight python %}
 env = get_environment()
 text.flat_map(lambda x,c: x.lower().split()) \
     .group_by(1) \
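For context on the rename in the last hunk: `set_parallelism(3)` asks the environment to run each operator as three parallel instances, so every source, operator, and sink is split three ways. A hedged, Flink-free sketch of what that division of work means (round-robin assignment here is an illustration only; Flink's actual partitioning depends on the operator):

```python
def split_into_subtasks(elements, parallelism):
    # Distribute the input round-robin across `parallelism` parallel
    # instances, mimicking how a source with parallelism 3 is divided.
    subtasks = [[] for _ in range(parallelism)]
    for i, element in enumerate(elements):
        subtasks[i % parallelism].append(element)
    return subtasks

parts = split_into_subtasks(["to", "be", "or", "not", "to", "be"], 3)
print(parts)  # [['to', 'not'], ['be', 'to'], ['or', 'be']]
```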
