flink-commits mailing list archives

From ches...@apache.org
Subject flink git commit: [hotfix][docs] Fix some typos in the documentation.
Date Tue, 21 Nov 2017 13:43:50 GMT
Repository: flink
Updated Branches:
  refs/heads/master 80cd586b1 -> 52599ff33


[hotfix][docs] Fix some typos in the documentation.

This closes #5039.


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/52599ff3
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/52599ff3
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/52599ff3

Branch: refs/heads/master
Commit: 52599ff338afa19d07277874f2d102845c6dbec3
Parents: 80cd586
Author: Gabor Gevay <ggab90@gmail.com>
Authored: Mon Nov 20 16:51:43 2017 +0100
Committer: zentol <chesnay@apache.org>
Committed: Tue Nov 21 14:43:31 2017 +0100

----------------------------------------------------------------------
 docs/dev/connectors/kafka.md                                   | 4 ++--
 docs/dev/stream/operators/windows.md                           | 4 ++--
 docs/ops/production_ready.md                                   | 2 +-
 .../streaming/api/environment/StreamExecutionEnvironment.java  | 6 +++---
 4 files changed, 8 insertions(+), 8 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/52599ff3/docs/dev/connectors/kafka.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/kafka.md b/docs/dev/connectors/kafka.md
index 5d3e66d..ad4cc2f 100644
--- a/docs/dev/connectors/kafka.md
+++ b/docs/dev/connectors/kafka.md
@@ -537,7 +537,7 @@ chosen by passing appropriate `semantic` parameter to the `FlinkKafkaProducer011
 * `Semantic.NONE`: Flink will not guarantee anything. Produced records can be lost or they can
 be duplicated.
 * `Semantic.AT_LEAST_ONCE` (default setting): similar to `setFlushOnCheckpoint(true)` in
- `FlinkKafkaProducer010`. his guarantees that no records will be lost (although they can be duplicated).
+ `FlinkKafkaProducer010`. This guarantees that no records will be lost (although they can be duplicated).
  * `Semantic.EXACTLY_ONCE`: uses Kafka transactions to provide exactly-once semantic.
 
 <div class="alert alert-warning">
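
For context, the `semantic` parameter discussed in the hunk above is passed as a constructor
argument of the producer. A minimal sketch, assuming an existing `DataStream<String> stream`;
the topic name and broker address are illustrative placeholders:

    import java.util.Properties;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011;
    import org.apache.flink.streaming.util.serialization.KeyedSerializationSchemaWrapper;
    import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

    Properties props = new Properties();
    props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker

    // The last constructor argument selects the delivery guarantee described above.
    FlinkKafkaProducer011<String> producer = new FlinkKafkaProducer011<>(
            "my-topic",                                            // placeholder topic
            new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema()),
            props,
            FlinkKafkaProducer011.Semantic.EXACTLY_ONCE);

    stream.addSink(producer);
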
@@ -579,7 +579,7 @@ un-finished transaction. In other words after following sequence of events:
 3. User committed `transaction2`
 
 Even if records from `transaction2` are already committed, they will not be visible to
-the consumers until `transaction1` is committed or aborted. This hastwo implications:
+the consumers until `transaction1` is committed or aborted. This has two implications:
 
 * First of all, during normal working of Flink applications, user can expect a delay in visibility
  of the records produced into Kafka topics, equal to average time between completed checkpoints.
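
The visibility behavior described in this hunk assumes that downstream consumers read with
Kafka's `read_committed` isolation level; with the default `read_uncommitted`, records are
visible as soon as they are written, before the transaction commits. A sketch of the
consumer-side setting (standard Kafka consumer properties; values are placeholders):

    import java.util.Properties;

    Properties consumerProps = new Properties();
    consumerProps.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
    consumerProps.setProperty("group.id", "my-group");                // placeholder
    // Honor transaction markers: records become visible only after commit.
    consumerProps.setProperty("isolation.level", "read_committed");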

http://git-wip-us.apache.org/repos/asf/flink/blob/52599ff3/docs/dev/stream/operators/windows.md
----------------------------------------------------------------------
diff --git a/docs/dev/stream/operators/windows.md b/docs/dev/stream/operators/windows.md
index 3c0cd85..e161854 100644
--- a/docs/dev/stream/operators/windows.md
+++ b/docs/dev/stream/operators/windows.md
@@ -29,7 +29,7 @@ programmer can benefit to the maximum from its offered functionality.
 
 The general structure of a windowed Flink program is presented below. The first snippet refers to *keyed* streams,
 while the second to *non-keyed* ones. As one can see, the only difference is the `keyBy(...)` call for the keyed streams
-and the `window(...)` which becomes `windowAll(...)` for non-keyed streams. These is also going to serve as a roadmap
+and the `window(...)` which becomes `windowAll(...)` for non-keyed streams. This is also going to serve as a roadmap
 for the rest of the page.
 
 **Keyed Windows**
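
The code skeletons this hunk refers to are not part of the diff; roughly, the contrast looks
like the following sketch (the window assigner and the `MyReduceFunction` are illustrative
placeholders):

    import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    // Keyed: windows are evaluated per key, in parallel.
    stream.keyBy(0)                                              // key by first tuple field
          .window(TumblingEventTimeWindows.of(Time.seconds(5)))
          .reduce(new MyReduceFunction());                       // placeholder function

    // Non-keyed: windowAll() gathers all elements into one task (parallelism one).
    stream.windowAll(TumblingEventTimeWindows.of(Time.seconds(5)))
          .reduce(new MyReduceFunction());
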
@@ -1383,7 +1383,7 @@ and then calculating the top-k elements within the same window in the second ope
 
 Windows can be defined over long periods of time (such as days, weeks, or months) and therefore accumulate very large state. There are a couple of rules to keep in mind when estimating the storage requirements of your windowing computation:
 
-1. Flink creates one copy of each element per window to which it belongs. Given this, tumbling windows keep one copy of each element (an element belongs to exactly window unless it is dropped late). In contrast, sliding windows create several of each element, as explained in the [Window Assigners](#window-assigners) section. Hence, a sliding window of size 1 day and slide 1 second might not be a good idea.
+1. Flink creates one copy of each element per window to which it belongs. Given this, tumbling windows keep one copy of each element (an element belongs to exactly one window unless it is dropped late). In contrast, sliding windows create several of each element, as explained in the [Window Assigners](#window-assigners) section. Hence, a sliding window of size 1 day and slide 1 second might not be a good idea.
 
 2. `ReduceFunction`, `AggregateFunction`, and `FoldFunction` can significantly reduce the storage requirements, as they eagerly aggregate elements and store only one value per window. In contrast, just using a `ProcessWindowFunction` requires accumulating all elements.
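
To make the closing example of item 1 concrete: with a window of size 1 day sliding by
1 second, each element is assigned to 86,400 windows (24 * 60 * 60 slide steps), so Flink may
keep up to 86,400 copies of it. A sketch (assigner parameters and function are illustrative):

    import org.apache.flink.streaming.api.windowing.assigners.SlidingEventTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    stream.keyBy(0)
          // size / slide = 86,400 windows per element -- usually a bad idea.
          .window(SlidingEventTimeWindows.of(Time.days(1), Time.seconds(1)))
          // A ReduceFunction stores one aggregate per window instead of every element,
          // which tempers (but does not remove) the per-window blow-up.
          .reduce(new MyReduceFunction());                       // placeholder function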
 

http://git-wip-us.apache.org/repos/asf/flink/blob/52599ff3/docs/ops/production_ready.md
----------------------------------------------------------------------
diff --git a/docs/ops/production_ready.md b/docs/ops/production_ready.md
index 303e7a7..0d11b8a 100644
--- a/docs/ops/production_ready.md
+++ b/docs/ops/production_ready.md
@@ -32,7 +32,7 @@ important and need **careful considerations** if you plan to bring your Flink jo
 Flink provides out-of-the-box defaults to make usage and adoption of Flink easier. For many users and scenarios, those
 defaults are good starting points for development and completely sufficient for "one-shot" jobs. 
 
-However, once you are planning to bring a Flink appplication to production the requirements typically increase. For example,
+However, once you are planning to bring a Flink application to production the requirements typically increase. For example,
 you want your job to be (re-)scalable and to have a good upgrade story for your job and new Flink versions.
 
 In the following, we present a collection of configuration options that you should check before your job goes into production.
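
Among the production settings that page goes on to cover are the maximum parallelism (the
upper bound for rescaling, which cannot be changed later without losing state) and stable
operator UIDs for savepoint compatibility. A minimal sketch with illustrative values and a
placeholder `MyMapFunction`:

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setMaxParallelism(128);       // illustrative; fixes the number of key groups

    stream.map(new MyMapFunction())   // placeholder function
          .uid("my-map");             // stable id so state maps across job versions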

http://git-wip-us.apache.org/repos/asf/flink/blob/52599ff3/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/StreamExecutionEnvironment.java
----------------------------------------------------------------------
diff --git a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/StreamExecutionEnvironment.java b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/StreamExecutionEnvironment.java
index 46c821e..cc45ddc 100644
--- a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/StreamExecutionEnvironment.java
+++ b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/environment/StreamExecutionEnvironment.java
@@ -747,7 +747,7 @@ public abstract class StreamExecutionEnvironment {
 	 * elements, it may be necessary to manually supply the type information via
 	 * {@link #fromCollection(java.util.Collection, org.apache.flink.api.common.typeinfo.TypeInformation)}.
 	 *
-	 * <p>Note that this operation will result in a non-parallel data stream source, i.e. a data stream source with a
+	 * <p>Note that this operation will result in a non-parallel data stream source, i.e. a data stream source with
 	 * parallelism one.
 	 *
 	 * @param data
@@ -784,7 +784,7 @@ public abstract class StreamExecutionEnvironment {
 	 * Creates a data stream from the given non-empty collection.
 	 *
 	 * <p>Note that this operation will result in a non-parallel data stream source,
-	 * i.e., a data stream source with a parallelism one.
+	 * i.e., a data stream source with parallelism one.
 	 *
 	 * @param data
 	 * 		The collection of elements to create the data stream from
@@ -843,7 +843,7 @@ public abstract class StreamExecutionEnvironment {
 	 * {@link #fromCollection(java.util.Iterator, Class)} does not supply all type information.
 	 *
 	 * <p>Note that this operation will result in a non-parallel data stream source, i.e.,
-	 * a data stream source with a parallelism one.
+	 * a data stream source with parallelism one.
 	 *
 	 * @param data
 	 * 		The iterator of elements to create the data stream from
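
The behavior these Javadoc fixes describe can be seen in a short sketch (the job name is a
placeholder): the collection source itself always runs with parallelism one, while downstream
operators may fan out.

    import java.util.Arrays;
    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    // Non-parallel source: its parallelism is fixed at one.
    DataStream<Integer> nums = env.fromCollection(Arrays.asList(1, 2, 3));

    // Downstream operators are free to run at higher parallelism.
    nums.map(new MapFunction<Integer, Integer>() {
            @Override
            public Integer map(Integer n) {
                return n * 2;
            }
        })
        .setParallelism(4)
        .print();

    env.execute("collection-source-example"); // placeholder job name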

