flink-commits mailing list archives

From u..@apache.org
Subject flink git commit: [FLINK-5894] [docs] Fix misleading HA docs
Date Thu, 23 Feb 2017 12:49:33 GMT
Repository: flink
Updated Branches:
  refs/heads/master e7a914d4e -> 234b90528

[FLINK-5894] [docs] Fix misleading HA docs

This closes #3401.

Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/234b9052
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/234b9052
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/234b9052

Branch: refs/heads/master
Commit: 234b90528a08151d4e43d4214563275bea6d877d
Parents: e7a914d
Author: Ufuk Celebi <uce@apache.org>
Authored: Thu Feb 23 13:30:13 2017 +0100
Committer: Ufuk Celebi <uce@apache.org>
Committed: Thu Feb 23 13:49:01 2017 +0100

 docs/setup/jobmanager_high_availability.md | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/docs/setup/jobmanager_high_availability.md b/docs/setup/jobmanager_high_availability.md
index aa18a4b..5949835 100644
--- a/docs/setup/jobmanager_high_availability.md
+++ b/docs/setup/jobmanager_high_availability.md
@@ -84,14 +84,13 @@ In order to start an HA-cluster add the following configuration keys to
   **Important**: if you are running multiple Flink HA clusters, you have to manually configure separate namespaces for each cluster. By default, the Yarn cluster and the Yarn session automatically generate namespaces based on Yarn application id. A manual configuration overrides this behaviour in Yarn. Specifying a namespace with the -z CLI option, in turn, overrides manual configuration.
-- **State backend and storage directory** (required): JobManager meta data is persisted in the *state backend* and only a pointer to this state is stored in ZooKeeper. Currently, only the file system state backend is supported in HA mode.
+- **Storage directory** (required): JobManager metadata is persisted in the file system *storageDir* and only a pointer to this state is stored in ZooKeeper.
-high-availability.zookeeper.storageDir: hdfs:///flink/recovery</pre>
-state.backend: filesystem
-state.backend.fs.checkpointdir: hdfs:///flink/checkpoints
+high-availability.zookeeper.storageDir: hdfs:///flink/recovery
+    </pre>
-    The `storageDir` stores all meta data needed to recover a JobManager failure.
+    The `storageDir` stores all metadata needed to recover a JobManager failure.
 After configuring the masters and the ZooKeeper quorum, you can use the provided cluster startup scripts as usual. They will start an HA-cluster. Keep in mind that the **ZooKeeper quorum has to be running** when you call the scripts and make sure to **configure a separate ZooKeeper root path** for each HA cluster you are starting.
@@ -106,9 +105,6 @@ high-availability.zookeeper.path.root: /flink
 high-availability.zookeeper.path.namespace: /cluster_one # important: customize per cluster
 high-availability.zookeeper.storageDir: hdfs:///flink/recovery</pre>
-state.backend: filesystem
-state.backend.fs.checkpointdir: hdfs:///flink/checkpoints
 2. **Configure masters** in `conf/masters`:
@@ -192,8 +188,6 @@ high-availability.zookeeper.quorum: localhost:2181
 high-availability.zookeeper.storageDir: hdfs:///flink/recovery
 high-availability.zookeeper.path.root: /flink
 high-availability.zookeeper.path.namespace: /cluster_one # important: customize per cluster
-state.backend: filesystem
-state.backend.fs.checkpointdir: hdfs:///flink/checkpoints
 yarn.application-attempts: 10</pre>
 3. **Configure ZooKeeper server** in `conf/zoo.cfg` (currently it's only possible to run a single ZooKeeper server per machine):
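Taken together, the change removes the `state.backend*` keys from the HA examples: after this commit, JobManager recovery only requires the ZooKeeper keys plus `storageDir`. A minimal `conf/flink-conf.yaml` sketch of the resulting configuration (host names and paths are illustrative, not from the commit):

```yaml
# Use ZooKeeper for JobManager leader election and metadata pointers
high-availability: zookeeper
high-availability.zookeeper.quorum: localhost:2181
high-availability.zookeeper.path.root: /flink
high-availability.zookeeper.path.namespace: /cluster_one  # important: customize per cluster
# File system directory that stores all JobManager recovery metadata;
# ZooKeeper keeps only a pointer to this state
high-availability.zookeeper.storageDir: hdfs:///flink/recovery
```

Note that a `-z` option passed on the CLI would override the namespace configured here, as the updated docs above describe.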
