flink-commits mailing list archives

From rmetz...@apache.org
Subject flink git commit: [docs] Fix broken links in documentation
Date Mon, 11 May 2015 12:22:39 GMT
Repository: flink
Updated Branches:
  refs/heads/master 7e5a97062 -> d259e6962


[docs] Fix broken links in documentation


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/d259e696
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/d259e696
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/d259e696

Branch: refs/heads/master
Commit: d259e6962186135f5d8d4e7a9fbb9b053756477a
Parents: 7e5a970
Author: Robert Metzger <rmetzger@apache.org>
Authored: Mon May 11 14:22:02 2015 +0200
Committer: Robert Metzger <rmetzger@apache.org>
Committed: Mon May 11 14:22:02 2015 +0200

----------------------------------------------------------------------
 docs/apis/cli.md                        | 2 +-
 docs/apis/examples.md                   | 4 ++--
 docs/apis/iterations.md                 | 2 +-
 docs/apis/java8.md                      | 2 +-
 docs/apis/programming_guide.md          | 8 ++++----
 docs/apis/web_client.md                 | 2 +-
 docs/index.md                           | 2 +-
 docs/libs/gelly_guide.md                | 2 +-
 docs/libs/ml/cocoa.md                   | 2 +-
 docs/libs/spargel_guide.md              | 2 +-
 docs/quickstart/java_api_quickstart.md  | 2 +-
 docs/quickstart/scala_api_quickstart.md | 2 +-
 docs/setup/config.md                    | 4 ++--
 docs/setup/flink_on_tez.md              | 4 ++--
 docs/setup/gce_setup.md                 | 2 +-
 docs/setup/yarn_setup.md                | 4 ++--
 16 files changed, 23 insertions(+), 23 deletions(-)
----------------------------------------------------------------------
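Nearly every hunk below replaces a page-relative link with one prefixed by `{{ site.baseurl }}`. As a quick sketch of why the relative forms broke (the host name and base path here are hypothetical illustrations, not values from the commit): a relative URL resolves against the referring page's directory, so a link written in `docs/apis/` cannot reach a page under `docs/setup/` without a root-relative prefix.

```python
from urllib.parse import urljoin

# A page-relative link written in docs/apis/cli.md resolves under apis/,
# but the target page actually lives under setup/ -- hence the broken link.
page = "https://example.org/docs/apis/cli.html"   # hypothetical site layout
print(urljoin(page, "yarn_setup.html"))
# -> https://example.org/docs/apis/yarn_setup.html  (wrong directory)

# A root-relative link built from the site base path resolves identically
# from every page, which is what the {{ site.baseurl }} prefix provides.
baseurl = "/docs"                                  # hypothetical baseurl value
print(urljoin(page, baseurl + "/setup/yarn_setup.html"))
# -> https://example.org/docs/setup/yarn_setup.html (correct)
```

The same reasoning applies to the `config.html`, `examples.html`, and `faq.html` fixes in the hunks that follow.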


http://git-wip-us.apache.org/repos/asf/flink/blob/d259e696/docs/apis/cli.md
----------------------------------------------------------------------
diff --git a/docs/apis/cli.md b/docs/apis/cli.md
index 4bfd1ad..1f550f3 100644
--- a/docs/apis/cli.md
+++ b/docs/apis/cli.md
@@ -70,7 +70,7 @@ The command line can be used to
                                ./examples/flink-java-examples-{{ site.version }}-WordCount.jar \
                                file:///home/user/hamlet.txt file:///home/user/wordcount_out
 
--   Run example program using a [per-job YARN cluster](yarn_setup.html#run-a-single-flink-job-on-hadoop-yarn) with 2 TaskManagers:
+-   Run example program using a [per-job YARN cluster]({{site.baseurl}}/setup/yarn_setup.html#run-a-single-flink-job-on-hadoop-yarn) with 2 TaskManagers:
 
         ./bin/flink run -m yarn-cluster -yn 2 \
                                ./examples/flink-java-examples-{{ site.version }}-WordCount.jar \

http://git-wip-us.apache.org/repos/asf/flink/blob/d259e696/docs/apis/examples.md
----------------------------------------------------------------------
diff --git a/docs/apis/examples.md b/docs/apis/examples.md
index 29d1e1c..2b52f3b 100644
--- a/docs/apis/examples.md
+++ b/docs/apis/examples.md
@@ -457,13 +457,13 @@ DataSet<Tuple3<Integer, Integer, Double>> priceSums =
 priceSums.writeAsCsv(outputPath);
 ~~~
 
-The {% gh_link /flink-examples/flink-java-examples/src/main/java/org/apache/flink/examples/java/relational/RelationalQuery.java "Relational Query program" %} implements the above query. It requires the following parameters to run: `<orders input path>, <lineitem input path>, <output path>`.
+The {% gh_link /flink-examples/flink-java-examples/src/main/java/org/apache/flink/examples/java/relational/TPCHQuery10.java "Relational Query program" %} implements the above query. It requires the following parameters to run: `<orders input path>, <lineitem input path>, <output path>`.
 
 </div>
 <div data-lang="scala" markdown="1">
 Coming soon...
 
-The {% gh_link /flink-examples/flink-scala-examples/src/main/scala/org/apache/flink/examples/scala/relational/RelationalQuery.scala "Relational Query program" %} implements the above query. It requires the following parameters to run: `<orders input path>, <lineitem input path>, <output path>`.
+The {% gh_link /flink-examples/flink-scala-examples/src/main/scala/org/apache/flink/examples/scala/relational/TPCHQuery3.scala "Relational Query program" %} implements the above query. It requires the following parameters to run: `<orders input path>, <lineitem input path>, <output path>`.
 
 </div>
 </div>

http://git-wip-us.apache.org/repos/asf/flink/blob/d259e696/docs/apis/iterations.md
----------------------------------------------------------------------
diff --git a/docs/apis/iterations.md b/docs/apis/iterations.md
index 8233230..a83d403 100644
--- a/docs/apis/iterations.md
+++ b/docs/apis/iterations.md
@@ -24,7 +24,7 @@ Iterative algorithms occur in many domains of data analysis, such as *machine le
 
 Flink programs implement iterative algorithms by defining a **step function** and embedding it into a special iteration operator. There are two  variants of this operator: **Iterate** and **Delta Iterate**. Both operators repeatedly invoke the step function on the current iteration state until a certain termination condition is reached.
 
-Here, we provide background on both operator variants and outline their usage. The [programming guide](programming_guide.html) explain how to implement the operators in both Scala and Java. We also provide a **vertex-centric graph processing API** called [Spargel](spargel_guide.html).
+Here, we provide background on both operator variants and outline their usage. The [programming guide](programming_guide.html) explain how to implement the operators in both Scala and Java. We also provide a **vertex-centric graph processing API** called [Spargel]({{site.baseurl}}/libs/spargel_guide.html).
 
 The following table provides an overview of both operators:
 

http://git-wip-us.apache.org/repos/asf/flink/blob/d259e696/docs/apis/java8.md
----------------------------------------------------------------------
diff --git a/docs/apis/java8.md b/docs/apis/java8.md
index ec17db5..6866b95 100644
--- a/docs/apis/java8.md
+++ b/docs/apis/java8.md
@@ -108,7 +108,7 @@ However, it is possible to implement functions such as `map()` or `filter()` wit
 
 If you are using the Eclipse IDE, you can run and debug your Flink code within the IDE without any problems after some configuration steps. The Eclipse IDE by default compiles its Java sources with the Eclipse JDT compiler. The next section describes how to configure the Eclipse IDE.
 
-If you are using a different IDE such as IntelliJ IDEA or you want to package your Jar-File with Maven to run your job on a cluster, you need to modify your project's `pom.xml` file and build your program with Maven. The [quickstart](setup_quickstart.html) contains preconfigured Maven projects which can be used for new projects or as a reference. Uncomment the mentioned lines in your generated quickstart `pom.xml` file if you want to use Java 8 with Lambda Expressions.
+If you are using a different IDE such as IntelliJ IDEA or you want to package your Jar-File with Maven to run your job on a cluster, you need to modify your project's `pom.xml` file and build your program with Maven. The [quickstart]({{site.baseurl}}/quickstart/setup_quickstart.html) contains preconfigured Maven projects which can be used for new projects or as a reference. Uncomment the mentioned lines in your generated quickstart `pom.xml` file if you want to use Java 8 with Lambda Expressions.
 
 Alternatively, you can manually insert the following lines to your Maven `pom.xml` file. Maven will then use the Eclipse JDT compiler for compilation.
 

http://git-wip-us.apache.org/repos/asf/flink/blob/d259e696/docs/apis/programming_guide.md
----------------------------------------------------------------------
diff --git a/docs/apis/programming_guide.md b/docs/apis/programming_guide.md
index 7d21200..46dcc28 100644
--- a/docs/apis/programming_guide.md
+++ b/docs/apis/programming_guide.md
@@ -117,7 +117,7 @@ To write programs with Flink, you need to include the Flink library correspondin
 your programming language in your project.
 
 The simplest way to do this is to use one of the quickstart scripts: either for
-[Java](java_api_quickstart.html) or for [Scala](scala_api_quickstart.html). They
+[Java]({{ site.baseurl }}/quickstart/java_api_quickstart.html) or for [Scala]({{ site.baseurl }}/quickstart/scala_api_quickstart.html). They
 create a blank project from a template (a Maven Archetype), which sets up everything for you. To
 manually create the project, you can use the archetype and create a project by calling:
 
@@ -1221,7 +1221,7 @@ data.map(new MapFunction<String, Integer> () {
 
 #### Java 8 Lambdas
 
-Flink also supports Java 8 Lambdas in the Java API. Please see the full [Java 8 Guide](java8_programming_guide.html).
+Flink also supports Java 8 Lambdas in the Java API. Please see the full [Java 8 Guide](java8.html).
 
 {% highlight java %}
 DataSet<String> data = // [...]
@@ -2836,7 +2836,7 @@ The parallelism of a task can be specified in Flink on different levels.
 
 The parallelism of an individual operator, data source, or data sink can be defined by calling its
 `setParallelism()` method.  For example, the parallelism of the `Sum` operator in the
-[WordCount](#example-program) example program can be set to `5` as follows :
+[WordCount](examples.html#word-count) example program can be set to `5` as follows :
 
 
 <div class="codetabs" markdown="1">
@@ -2970,7 +2970,7 @@ try {
 
 A system-wide default parallelism for all execution environments can be defined by setting the
 `parallelism.default` property in `./conf/flink-conf.yaml`. See the
-[Configuration](config.html) documentation for details.
+[Configuration]({{ site.baseurl }}/setup/config.html) documentation for details.
 
 [Back to top](#top)
 

http://git-wip-us.apache.org/repos/asf/flink/blob/d259e696/docs/apis/web_client.md
----------------------------------------------------------------------
diff --git a/docs/apis/web_client.md b/docs/apis/web_client.md
index bf70016..16767c6 100644
--- a/docs/apis/web_client.md
+++ b/docs/apis/web_client.md
@@ -35,7 +35,7 @@ and stop it by calling:
 
     ./bin/stop-webclient.sh
 
-The web interface runs on port 8080 by default. To specify a custom port set the ```webclient.port``` property in the *./conf/flink.yaml* configuration file. Jobs are submitted to the JobManager specified by ```jobmanager.rpc.address``` and ```jobmanager.rpc.port```. Please consult the [configuration](config.html#webclient) page for details and further configuration options.
+The web interface runs on port 8080 by default. To specify a custom port set the ```webclient.port``` property in the *./conf/flink.yaml* configuration file. Jobs are submitted to the JobManager specified by ```jobmanager.rpc.address``` and ```jobmanager.rpc.port```. Please consult the [configuration]({{ site.baseurl }}/setup/config.html#webclient) page for details and further configuration options.
 
 ## Using the Web Interface
 

http://git-wip-us.apache.org/repos/asf/flink/blob/d259e696/docs/index.md
----------------------------------------------------------------------
diff --git a/docs/index.md b/docs/index.md
index d218cc7..ca2b4d1 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -27,7 +27,7 @@ programs consisting of large DAGs of operations.
 
 If you quickly want to try out the system, please look at one of the available quickstarts. For
 a thorough introduction of the Flink API please refer to the
-[Programming Guide](programming_guide.html).
+[Programming Guide](apis/programming_guide.html).
 
 ## Stack
 

http://git-wip-us.apache.org/repos/asf/flink/blob/d259e696/docs/libs/gelly_guide.md
----------------------------------------------------------------------
diff --git a/docs/libs/gelly_guide.md b/docs/libs/gelly_guide.md
index 8292968..77621e6 100644
--- a/docs/libs/gelly_guide.md
+++ b/docs/libs/gelly_guide.md
@@ -386,7 +386,7 @@ and can be specified using the `setName()` method.
 * <strong>Aggregators</strong>: Iteration aggregators can be registered using the `registerAggregator()` method. An iteration aggregator combines
 all aggregates globally once per superstep and makes them available in the next superstep. Registered aggregators can be accessed inside the user-defined `VertexUpdateFunction` and `MessagingFunction`.
 
-* <strong>Broadcast Variables</strong>: DataSets can be added as [Broadcast Variables](programming_guide.html#broadcast-variables) to the `VertexUpdateFunction` and `MessagingFunction`, using the `addBroadcastSetForUpdateFunction()` and `addBroadcastSetForMessagingFunction()` methods, respectively.
+* <strong>Broadcast Variables</strong>: DataSets can be added as [Broadcast Variables]({{site.baseurl}}/apis/programming_guide.html#broadcast-variables) to the `VertexUpdateFunction` and `MessagingFunction`, using the `addBroadcastSetForUpdateFunction()` and `addBroadcastSetForMessagingFunction()` methods, respectively.
 
 {% highlight java %}
 

http://git-wip-us.apache.org/repos/asf/flink/blob/d259e696/docs/libs/ml/cocoa.md
----------------------------------------------------------------------
diff --git a/docs/libs/ml/cocoa.md b/docs/libs/ml/cocoa.md
index 0bf8d67..0327f8f 100644
--- a/docs/libs/ml/cocoa.md
+++ b/docs/libs/ml/cocoa.md
@@ -54,7 +54,7 @@ The local SDCA iterations are embarrassingly parallel once the individual data p
 distributed across the cluster.
 
 The implementation of this algorithm is based on the work of 
-[Jaggi et al.](http://arxiv.org/abs/1409.1458 here)
+[Jaggi et al.](http://arxiv.org/abs/1409.1458)
 
 ## Parameters
 

http://git-wip-us.apache.org/repos/asf/flink/blob/d259e696/docs/libs/spargel_guide.md
----------------------------------------------------------------------
diff --git a/docs/libs/spargel_guide.md b/docs/libs/spargel_guide.md
index ab69783..127df38 100644
--- a/docs/libs/spargel_guide.md
+++ b/docs/libs/spargel_guide.md
@@ -20,7 +20,7 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Spargel is our [Giraph](http://giraph.apache.org) like **graph processing** Java API. It supports basic graph computations, which are run as a sequence of [supersteps](iterations.html#supersteps). Spargel and Giraph both implement the [Bulk Synchronous Parallel (BSP)](https://en.wikipedia.org/wiki/Bulk_Synchronous_Parallel) programming model, propsed by Google's [Pregel](http://googleresearch.blogspot.de/2009/06/large-scale-graph-computing-at-google.html).
+Spargel is our [Giraph](http://giraph.apache.org) like **graph processing** Java API. It supports basic graph computations, which are run as a sequence of [supersteps]({{site.baseurl}}/apis/iterations.html#supersteps). Spargel and Giraph both implement the [Bulk Synchronous Parallel (BSP)](https://en.wikipedia.org/wiki/Bulk_Synchronous_Parallel) programming model, propsed by Google's [Pregel](http://googleresearch.blogspot.de/2009/06/large-scale-graph-computing-at-google.html).
 
 The API provides a **vertex-centric** view on graph processing with two basic operations
per superstep:
 

http://git-wip-us.apache.org/repos/asf/flink/blob/d259e696/docs/quickstart/java_api_quickstart.md
----------------------------------------------------------------------
diff --git a/docs/quickstart/java_api_quickstart.md b/docs/quickstart/java_api_quickstart.md
index 0485e2a..ac01001 100644
--- a/docs/quickstart/java_api_quickstart.md
+++ b/docs/quickstart/java_api_quickstart.md
@@ -147,5 +147,5 @@ public class LineSplitter implements FlatMapFunction<String, Tuple2<String, Inte
 
 {% gh_link /flink-examples/flink-java-examples/src/main/java/org/apache/flink/examples/java/wordcount/WordCount.java "Check GitHub" %} for the full example code.
 
-For a complete overview over our API, have a look at the [Programming Guide](programming_guide.html) and [further example programs](examples.html). If you have any trouble, ask on our [Mailing List](http://mail-archives.apache.org/mod_mbox/flink-dev/). We are happy to provide help.
+For a complete overview over our API, have a look at the [Programming Guide]({{ site.baseurl }}/apis/programming_guide.html) and [further example programs](examples.html). If you have any trouble, ask on our [Mailing List](http://mail-archives.apache.org/mod_mbox/flink-dev/). We are happy to provide help.
 

http://git-wip-us.apache.org/repos/asf/flink/blob/d259e696/docs/quickstart/scala_api_quickstart.md
----------------------------------------------------------------------
diff --git a/docs/quickstart/scala_api_quickstart.md b/docs/quickstart/scala_api_quickstart.md
index 771acfc..26a4bc5 100644
--- a/docs/quickstart/scala_api_quickstart.md
+++ b/docs/quickstart/scala_api_quickstart.md
@@ -131,6 +131,6 @@ object WordCountJob {
 
 {% gh_link /flink-examples/flink-scala-examples/src/main/scala/org/apache/flink/examples/scala/wordcount/WordCount.scala "Check GitHub" %} for the full example code.
 
-For a complete overview over our API, have a look at the [Programming Guide](programming_guide.html) and [further example programs](examples.html). If you have any trouble, ask on our [Mailing List](http://mail-archives.apache.org/mod_mbox/flink-dev/). We are happy to provide help.
+For a complete overview over our API, have a look at the [Programming Guide]({{ site.baseurl }}/apis/programming_guide.html) and [further example programs]({{ site.baseurl }}/apis/examples.html). If you have any trouble, ask on our [Mailing List](http://mail-archives.apache.org/mod_mbox/flink-dev/). We are happy to provide help.
 
 

http://git-wip-us.apache.org/repos/asf/flink/blob/d259e696/docs/setup/config.md
----------------------------------------------------------------------
diff --git a/docs/setup/config.md b/docs/setup/config.md
index afc8631..4f7378d 100644
--- a/docs/setup/config.md
+++ b/docs/setup/config.md
@@ -89,7 +89,7 @@ job by calling `setParallelism(int parallelism)` on the `ExecutionEnvironment`
 or by passing `-p <parallelism>` to the Flink Command-line frontend. It can be
 overwritten for single transformations by calling `setParallelism(int
 parallelism)` on an operator. See the [programming
-guide](programming_guide.html#parallel-execution) for more information about the
+guide]({{site.baseurl}}/apis/programming_guide.html#parallel-execution) for more information about the
 parallelism.
 
 - `fs.hdfs.hadoopconf`: The absolute path to the Hadoop File System's (HDFS)
@@ -391,7 +391,7 @@ As a general recommendation, the number of available CPU cores is a good default
 
 When starting a Flink application, users can supply the default number of slots to use for that job.
 The command line value therefore is called `-p` (for parallelism). In addition, it is possible
-to [set the number of slots in the programming APIs](programming_guide.html#parallel-execution) for 
+to [set the number of slots in the programming APIs]({{site.baseurl}}/apis/programming_guide.html#parallel-execution) for 
 the whole application and individual operators.
 
 <img src="fig/slots_parallelism.svg" class="img-responsive" />
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/flink/blob/d259e696/docs/setup/flink_on_tez.md
----------------------------------------------------------------------
diff --git a/docs/setup/flink_on_tez.md b/docs/setup/flink_on_tez.md
index f68c2a1..afbd147 100644
--- a/docs/setup/flink_on_tez.md
+++ b/docs/setup/flink_on_tez.md
@@ -31,7 +31,7 @@ located in the *org.apache.flink.tez* package.
 
 ## Why Flink on Tez
 
-[Apache Tez](tez.apache.org) is a scalable data processing
+[Apache Tez](http://tez.apache.org) is a scalable data processing
 platform. Tez provides an API for specifying a directed acyclic
 graph (DAG), and functionality for placing the DAG vertices in YARN
 containers, as well as data shuffling.  In Flink's architecture,
@@ -264,7 +264,7 @@ that the elements are destined to.
 
 Currently, Flink on Tez does not support all features of the Flink API. We are working
 to enable all of the missing features listed below. In the meantime, if your project depends on these features, we suggest
-to use [Flink on YARN]({{site.baseurl}}/yarn_setup.html) or [Flink standalone]({{site.baseurl}}/setup_quickstart.html).
+to use [Flink on YARN]({{site.baseurl}}/setup/yarn_setup.html) or [Flink standalone]({{site.baseurl}}/quickstart/setup_quickstart.html).
 
 The following features are currently missing.
 

http://git-wip-us.apache.org/repos/asf/flink/blob/d259e696/docs/setup/gce_setup.md
----------------------------------------------------------------------
diff --git a/docs/setup/gce_setup.md b/docs/setup/gce_setup.md
index c7316ef..e4b2316 100644
--- a/docs/setup/gce_setup.md
+++ b/docs/setup/gce_setup.md
@@ -73,7 +73,7 @@ bdutil_env.sh.
 bdutil's Flink extension handles the configuration for you. You may additionally
 adjust configuration variables in `extensions/flink/flink_env.sh`. If you want
 to make further configuration, please take a look at
-[configuring Flink](config.md). You will have to restart Flink after changing
+[configuring Flink](config.html). You will have to restart Flink after changing
 its configuration using `bin/stop-cluster` and `bin/start-cluster`.
 
 ## Bring up a cluster with Flink
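The `gce_setup.md` hunk above changes `config.md` to `config.html` because Jekyll serves the rendered page, not the Markdown source. A minimal sketch of that source-to-output mapping (the helper is purely illustrative, not Jekyll's actual code, and real Jekyll permalink handling is configurable):

```python
def rendered_url(source_path: str) -> str:
    """Map a Markdown source file to the URL of the page Jekyll emits for it.

    Illustrative helper only; assumes the default .md -> .html convention.
    """
    for ext in (".md", ".markdown"):
        if source_path.endswith(ext):
            return source_path[: -len(ext)] + ".html"
    return source_path  # non-Markdown assets are copied through unchanged

print(rendered_url("setup/config.md"))            # -> setup/config.html
print(rendered_url("fig/slots_parallelism.svg"))  # -> fig/slots_parallelism.svg
```

Linking to `config.md` therefore points at a file that does not exist on the generated site, which is exactly what the hunk corrects.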

http://git-wip-us.apache.org/repos/asf/flink/blob/d259e696/docs/setup/yarn_setup.md
----------------------------------------------------------------------
diff --git a/docs/setup/yarn_setup.md b/docs/setup/yarn_setup.md
index 230b8f9..cf9f6f8 100644
--- a/docs/setup/yarn_setup.md
+++ b/docs/setup/yarn_setup.md
@@ -56,7 +56,7 @@ Apache [Hadoop YARN](http://hadoop.apache.org/) is a cluster resource management
 - Apache Hadoop 2.2
 - HDFS (Hadoop Distributed File System) (or another distributed file system supported by Hadoop)
 
-If you have troubles using the Flink YARN client, have a look in the [FAQ section](faq.html).
+If you have troubles using the Flink YARN client, have a look in the [FAQ section]({{ site.baseurl }}/faq.html).
 
 ### Start Flink Session
 
@@ -141,7 +141,7 @@ Use the following command to submit a Flink program to the YARN cluster:
 ./bin/flink
 ~~~
 
-Please refer to the documentation of the [commandline client](cli.html).
+Please refer to the documentation of the [commandline client]({{ site.baseurl }}/apis/cli.html).
 
 The command will show you a help menu like this:
 

