flink-commits mailing list archives

From u..@apache.org
Subject [5/5] flink-web git commit: Rebuild website
Date Mon, 08 Aug 2016 16:32:04 GMT
Rebuild website


Project: http://git-wip-us.apache.org/repos/asf/flink-web/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink-web/commit/609ba062
Tree: http://git-wip-us.apache.org/repos/asf/flink-web/tree/609ba062
Diff: http://git-wip-us.apache.org/repos/asf/flink-web/diff/609ba062

Branch: refs/heads/asf-site
Commit: 609ba06253fe2ec886935f9d84478966cc0521e5
Parents: 9d88fef
Author: Ufuk Celebi <uce@apache.org>
Authored: Mon Aug 8 18:31:51 2016 +0200
Committer: Ufuk Celebi <uce@apache.org>
Committed: Mon Aug 8 18:31:51 2016 +0200

----------------------------------------------------------------------
 content/blog/feed.xml                      | 214 ++++++++
 content/blog/index.html                    |  34 +-
 content/blog/page2/index.html              |  33 +-
 content/blog/page3/index.html              |  35 +-
 content/blog/page4/index.html              |  23 +
 content/blog/release_1.1.0-changelog.html  | 699 ++++++++++++++++++++++++
 content/img/blog/session-windows.svg       |  22 +
 content/index.html                         |   8 +-
 content/news/2016/08/08/release-1.1.0.html | 414 ++++++++++++++
 9 files changed, 1442 insertions(+), 40 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink-web/blob/609ba062/content/blog/feed.xml
----------------------------------------------------------------------
diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index bcdbdac..da3a9a0 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -7,6 +7,220 @@
 <atom:link href="http://flink.apache.org/blog/feed.xml" rel="self" type="application/rss+xml" />
 
 <item>
+<title>Announcing Apache Flink 1.1.0</title>
+<description><p>The Apache Flink community is pleased to announce the availability of Flink 1.1.0.</p>
+
+<p>This release is the first major release in the 1.X.X series and maintains API compatibility with 1.0.0. This means that applications written against the stable APIs of Flink 1.0.0 will compile and run with Flink 1.1.0. 95 contributors provided bug fixes, improvements, and new features, resolving more than 450 JIRA issues in total. See the <a href="/blog/release_1.1.0-changelog.html">complete changelog</a> for more details.</p>
+
+<p><strong>We encourage everyone to <a href="http://flink.apache.org/downloads.html">download the release</a> and <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.1/">check out the documentation</a>. Feedback through the Flink <a href="http://flink.apache.org/community.html#mailing-lists">mailing lists</a> is, as always, very welcome!</strong></p>
+
+<p>Some highlights of the release are listed in the following sections.</p>
+
+<h2 id="connectors">Connectors</h2>
+
+<p>The <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/streaming/connectors/index.html">streaming connectors</a> are a major part of Flink’s DataStream API. This release adds support for new external systems and further improves on the available connectors.</p>
+
+<h3 id="continuous-file-system-sources">Continuous File System Sources</h3>
+
+<p>A frequently requested feature for Flink 1.0 was to be able to monitor directories and process files continuously. Flink 1.1 now adds support for this via <code>FileProcessingMode</code>s:</p>
+
+<div class="highlight"><pre><code class="language-java">DataStream<String> stream = env.readFile(
+  textInputFormat,
+  "hdfs:///file-path",
+  FileProcessingMode.PROCESS_CONTINUOUSLY,
+  5000, // monitoring interval (millis)
+  FilePathFilter.createDefaultFilter()); // file path filter</code></pre></div>
+
+<p>This will monitor <code>hdfs:///file-path</code> every <code>5000</code> milliseconds. Check out the <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/streaming/index.html#data-sources">DataSource documentation for more details</a>.</p>
+
+<h3 id="kinesis-source-and-sink">Kinesis Source and Sink</h3>
+
+<p>Flink 1.1 adds a Kinesis connector for both consuming from (<code>FlinkKinesisConsumer</code>) and producing to (<code>FlinkKinesisProducer</code>) <a href="https://aws.amazon.com/kinesis/">Amazon Kinesis Streams</a>, a managed service purpose-built to make it easy to work with streaming data on AWS.</p>
+
+<div class="highlight"><pre><code class="language-java">DataStream<String> kinesis = env.addSource(
+  new FlinkKinesisConsumer<>("stream-name", schema, config));</code></pre></div>
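+
+<p>The producer side works symmetrically. A minimal sketch, assuming the same <code>config</code> properties object, an existing <code>DataStream&lt;String&gt; stream</code>, and an illustrative stream name and partition:</p>
+
+<div class="highlight"><pre><code class="language-java">// Sketch only: the stream name and default partition are placeholders.
+FlinkKinesisProducer<String> kinesisProducer =
+  new FlinkKinesisProducer<>(new SimpleStringSchema(), config);
+kinesisProducer.setDefaultStream("stream-name");
+kinesisProducer.setDefaultPartition("0");
+stream.addSink(kinesisProducer);</code></pre></div>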
+
+<p>Check out the <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/streaming/connectors/kinesis.html">Kinesis connector documentation for more details</a>.</p>
+
+<h3 id="cassandra-sink">Cassandra Sink</h3>
+
+<p>The <a href="http://wiki.apache.org/cassandra/GettingStarted">Apache Cassandra</a> sink allows you to write from Flink to Cassandra. Flink can provide exactly-once guarantees if the query is idempotent, meaning it can be applied multiple times without changing the result.</p>
+
+<div class="highlight"><pre><code class="language-java">CassandraSink.addSink(input)</code></pre></div>
+
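+<p>In practice the sink also needs an idempotent query and cluster connection information. A minimal sketch, assuming an illustrative keyspace, table, and contact point, and an <code>input</code> stream of two-field tuples matching the query:</p>
+
+<div class="highlight"><pre><code class="language-java">// Sketch only: "example.values", the column names, and the contact
+// point are placeholders for your own Cassandra setup.
+CassandraSink.addSink(input)
+  .setQuery("INSERT INTO example.values (id, count) VALUES (?, ?);") // idempotent upsert
+  .setClusterBuilder(new ClusterBuilder() {
+    @Override
+    protected Cluster buildCluster(Cluster.Builder builder) {
+      return builder.addContactPoint("127.0.0.1").build();
+    }
+  })
+  .build();</code></pre></div>
+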
+<p>Check out the <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/streaming/connectors/cassandra.html">Cassandra Sink documentation for more details</a>.</p>
+
+<h2 id="table-api-and-sql">Table API and SQL</h2>
+
+<p>The Table API is a SQL-like expression language for relational stream and batch processing that can be easily embedded in Flink’s DataSet and DataStream APIs (for both Java and Scala).</p>
+
+<div class="highlight"><pre><code class="language-java">Table custT = tableEnv
+  .toTable(custDs, "name, zipcode")
+  .where("zipcode = '12345'")
+  .select("name")</code></pre></div>
+
+<p>An initial version of this API was already available in Flink 1.0. For Flink 1.1, the community put a lot of work into reworking the architecture of the Table API and integrating it with <a href="https://calcite.apache.org">Apache Calcite</a>.</p>
+
+<p>In this first version, SQL (and Table API) queries on streams are limited to selection, filter, and union operators. Compared to Flink 1.0, the revised Table API supports many more scalar functions and is able to read tables from external sources and write them back to external sinks.</p>
+
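+<p>One way to make an external source queryable, sketched here under the assumption of an illustrative CSV file and schema, is to register it with the table environment:</p>
+
+<div class="highlight"><pre><code class="language-java">// Sketch: the file path and field layout are placeholders.
+tableEnv.registerTableSource("Orders", new CsvTableSource(
+  "/path/to/orders.csv",
+  new String[] { "product", "amount" },
+  new TypeInformation<?>[] { BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.INT_TYPE_INFO }));</code></pre></div>
+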
+<div class="highlight"><pre><code class="language-java">Table result = tableEnv.sql(
+  "SELECT STREAM product, amount FROM Orders WHERE product LIKE '%Rubber%'");</code></pre></div>
+<p>A more detailed introduction can be found in the <a href="http://flink.apache.org/news/2016/05/24/stream-sql.html">Flink blog</a> and the <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/table.html">Table API documentation</a>.</p>
+
+<h2 id="datastream-api">DataStream API</h2>
+
+<p>The DataStream API now exposes <strong>session windows</strong> and <strong>allowed lateness</strong> as first-class citizens.</p>
+
+<h3 id="session-windows">Session Windows</h3>
+
+<p>Session windows are ideal for cases where the window boundaries need to adjust to the incoming data. This enables you to have windows that start at individual points in time for each key and that end once there has been a <em>certain period of inactivity</em>. The configuration parameter is the session gap that specifies how long to wait for new data before considering a session as closed.</p>
+
+<center>
+<img src="/img/blog/session-windows.svg" style="height:400px" />
+</center>
+
+<div class="highlight"><pre><code class="language-java">input.keyBy(<key selector>)
+    .window(EventTimeSessionWindows.withGap(Time.minutes(10)))
+    .<windowed transformation>(<window function>);</code></pre></div>
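+
+<p>As a concrete instance of the pattern above (the input type and key are illustrative), a stream of <code>Tuple2<String, Long></code> records could be summed per key and session:</p>
+
+<div class="highlight"><pre><code class="language-java">// Sketch: assumes input is a DataStream<Tuple2<String, Long>>.
+DataStream<Tuple2<String, Long>> sessionSums = input
+  .keyBy(0) // key by the first tuple field, e.g. a user ID
+  .window(EventTimeSessionWindows.withGap(Time.minutes(10)))
+  .sum(1);  // aggregate the second field per session</code></pre></div>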
+
+<h3 id="support-for-late-elements">Support for Late Elements</h3>
+
+<p>You can now specify how a windowed transformation should deal with late elements and how much lateness is allowed. The parameter, called <em>allowed lateness</em>, specifies by how much time elements may be late.</p>
+
+<div class="highlight"><pre><code class="language-java">input.keyBy(<key selector>).window(<window assigner>)
+    .allowedLateness(<time>)
+    .<windowed transformation>(<window function>);</code></pre></div>
+
+<p>Elements that arrive within the allowed lateness are still put into windows and are considered when computing window results. Elements that arrive after the allowed lateness are dropped. Flink will also make sure that any state held by the windowing operation is garbage collected once the watermark passes the end of a window plus the allowed lateness.</p>
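+
+<p>For example, a sketch with tumbling one-minute windows that tolerate thirty seconds of lateness (window size and lateness are illustrative):</p>
+
+<div class="highlight"><pre><code class="language-java">input
+  .keyBy(0)
+  .window(TumblingEventTimeWindows.of(Time.minutes(1)))
+  .allowedLateness(Time.seconds(30)) // late elements up to 30s still update their window
+  .sum(1);</code></pre></div>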
+
+<p>Check out the <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/streaming/windows.html">Windows documentation for more details</a>.</p>
+
+<h2 id="scala-api-for-complex-event-processing-cep">Scala API for Complex Event Processing (CEP)</h2>
+
+<p>Flink 1.0 added the initial version of the CEP library. The core of the library is a Pattern API, which allows you to easily specify patterns to match against in your event stream. While in Flink 1.0 this API was only available for Java, Flink 1.1 now exposes the same API for Scala, allowing you to specify your event patterns in a more concise manner.</p>
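+
+<p>For reference, a minimal sketch of the Pattern API in its Java form (the new Scala API mirrors it; the <code>Event</code> type and its accessor are illustrative):</p>
+
+<div class="highlight"><pre><code class="language-java">// Sketch: matches a temperature spike followed by another event within 10 seconds.
+Pattern<Event, ?> warningPattern = Pattern.<Event>begin("first")
+  .where(new FilterFunction<Event>() {
+    @Override
+    public boolean filter(Event event) {
+      return event.getTemperature() > 100;
+    }
+  })
+  .next("second")
+  .within(Time.seconds(10));</code></pre></div>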
+
+<p>A more detailed introduction can be found in the <a href="http://flink.apache.org/news/2016/04/06/cep-monitoring.html">Flink blog</a> and the <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/streaming/libs/cep.html">CEP documentation</a>.</p>
+
+<h2 id="metrics">Metrics</h2>
+
+<p>Flink’s new metrics system allows you to easily gather and expose metrics from your user application to external systems. You can add counters, gauges, and histograms to your application via the runtime context:</p>
+
+<div class="highlight"><pre><code class="language-java">Counter counter = getRuntimeContext()
+  .getMetricGroup()
+  .counter("my-counter");</code></pre></div>
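+
+<p>Gauges are registered the same way; a minimal sketch (the exposed field is a placeholder for whatever value you want to report):</p>
+
+<div class="highlight"><pre><code class="language-java">getRuntimeContext()
+  .getMetricGroup()
+  .gauge("my-gauge", new Gauge<Long>() {
+    @Override
+    public Long getValue() {
+      return valueToExpose; // placeholder: a field of the surrounding function
+    }
+  });</code></pre></div>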
+
+<p>All registered metrics will be exposed via reporters. Out of the box, Flink comes with support for JMX, Ganglia, Graphite, and StatsD. In addition to your custom metrics, Flink exposes many internal metrics like checkpoint sizes and JVM stats.</p>
+
+<p>Check out the <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/metrics.html">Metrics documentation for more details</a>.</p>
+
+<h2 id="list-of-contributors">List of Contributors</h2>
+
+<p>The following 95 people contributed to this release:</p>
+
+<ul>
+  <li>Abdullah Ozturk</li>
+  <li>Ajay Bhat</li>
+  <li>Alexey Savartsov</li>
+  <li>Aljoscha Krettek</li>
+  <li>Andrea Sella</li>
+  <li>Andrew Palumbo</li>
+  <li>Chenguang He</li>
+  <li>Chiwan Park</li>
+  <li>David Moravek</li>
+  <li>Dominik Bruhn</li>
+  <li>Dyana Rose</li>
+  <li>Fabian Hueske</li>
+  <li>Flavio Pompermaier</li>
+  <li>Gabor Gevay</li>
+  <li>Gabor Horvath</li>
+  <li>Geoffrey Mon</li>
+  <li>Gordon Tai</li>
+  <li>Greg Hogan</li>
+  <li>Gyula Fora</li>
+  <li>Henry Saputra</li>
+  <li>Ignacio N. Lucero Ascencio</li>
+  <li>Igor Berman</li>
+  <li>Ismaël Mejía</li>
+  <li>Ivan Mushketyk</li>
+  <li>Jark Wu</li>
+  <li>Jiri Simsa</li>
+  <li>Jonas Traub</li>
+  <li>Josh</li>
+  <li>Joshi</li>
+  <li>Joshua Herman</li>
+  <li>Ken Krugler</li>
+  <li>Konstantin Knauf</li>
+  <li>Lasse Dalegaard</li>
+  <li>Li Fanxi</li>
+  <li>MaBiao</li>
+  <li>Mao Wei</li>
+  <li>Mark Reddy</li>
+  <li>Martin Junghanns</li>
+  <li>Martin Liesenberg</li>
+  <li>Maximilian Michels</li>
+  <li>Michal Fijolek</li>
+  <li>Márton Balassi</li>
+  <li>Nathan Howell</li>
+  <li>Niels Basjes</li>
+  <li>Niels Zeilemaker</li>
+  <li>Phetsarath, Sourigna</li>
+  <li>Robert Metzger</li>
+  <li>Scott Kidder</li>
+  <li>Sebastian Klemke</li>
+  <li>Shahin</li>
+  <li>Shannon Carey</li>
+  <li>Shannon Quinn</li>
+  <li>Stefan Richter</li>
+  <li>Stefano Baghino</li>
+  <li>Stefano Bortoli</li>
+  <li>Stephan Ewen</li>
+  <li>Steve Cosenza</li>
+  <li>Sumit Chawla</li>
+  <li>Tatu Saloranta</li>
+  <li>Tianji Li</li>
+  <li>Till Rohrmann</li>
+  <li>Todd Lisonbee</li>
+  <li>Tony Baines</li>
+  <li>Trevor Grant</li>
+  <li>Ufuk Celebi</li>
+  <li>Vasudevan</li>
+  <li>Yijie Shen</li>
+  <li>Zack Pierce</li>
+  <li>Zhai Jia</li>
+  <li>chengxiang li</li>
+  <li>chobeat</li>
+  <li>danielblazevski</li>
+  <li>dawid</li>
+  <li>dawidwys</li>
+  <li>eastcirclek</li>
+  <li>erli ding</li>
+  <li>gallenvara</li>
+  <li>kl0u</li>
+  <li>mans2singh</li>
+  <li>markreddy</li>
+  <li>mjsax</li>
+  <li>nikste</li>
+  <li>omaralvarez</li>
+  <li>philippgrulich</li>
+  <li>ramkrishna</li>
+  <li>sahitya-pavurala</li>
+  <li>samaitra</li>
+  <li>smarthi</li>
+  <li>spkavuly</li>
+  <li>subhankar</li>
+  <li>twalthr</li>
+  <li>vasia</li>
+  <li>xueyan.li</li>
+  <li>zentol</li>
+  <li>卫乐</li>
+</ul>
+</description>
+<pubDate>Mon, 08 Aug 2016 15:00:00 +0200</pubDate>
+<link>http://flink.apache.org/news/2016/08/08/release-1.1.0.html</link>
+<guid isPermaLink="true">/news/2016/08/08/release-1.1.0.html</guid>
+</item>
+
+<item>
 <title>Stream Processing for Everyone with SQL and Apache Flink</title>
<description><p>The capabilities of open source systems for distributed stream processing have evolved significantly over the last years. Initially, the first systems in the field (notably <a href="https://storm.apache.org">Apache Storm</a>) provided low latency processing, but were limited to at-least-once guarantees, processing-time semantics, and rather low-level APIs. Since then, several new systems emerged and pushed the state of the art of open source stream processing in several dimensions. Today, users of Apache Flink or <a href="https://beam.incubator.apache.org">Apache Beam</a> can use fluent Scala and Java APIs to implement stream processing jobs that operate in event-time with exactly-once semantics at high throughput and low latency.</p>
 

http://git-wip-us.apache.org/repos/asf/flink-web/blob/609ba062/content/blog/index.html
----------------------------------------------------------------------
diff --git a/content/blog/index.html b/content/blog/index.html
index 4345a77..7a6a02b 100644
--- a/content/blog/index.html
+++ b/content/blog/index.html
@@ -161,6 +161,19 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2016/08/08/release-1.1.0.html">Announcing
Apache Flink 1.1.0</a></h2>
+      <p>08 Aug 2016</p>
+
+      <p><p>The Apache Flink community is pleased to announce the availability
of Flink 1.1.0.</p>
+
+</p>
+
+      <p><a href="/news/2016/08/08/release-1.1.0.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2016/05/24/stream-sql.html">Stream
Processing for Everyone with SQL and Apache Flink</a></h2>
       <p>24 May 2016 by Fabian Hueske (<a href="https://twitter.com/fhueske">@fhueske</a>)</p>
 
@@ -272,17 +285,6 @@
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/news/2015/12/11/storm-compatibility.html">Storm
Compatibility in Apache Flink: How to run existing Storm topologies on Flink</a></h2>
-      <p>11 Dec 2015 by Matthias J. Sax (<a href="https://twitter.com/MatthiasJSax">@MatthiasJSax</a>)</p>
-
-      <p>In this blog post, we describe Flink's compatibility package for <a href="https://storm.apache.org">Apache
Storm</a> that allows to embed Spouts (sources) and Bolts (operators) in a regular Flink
streaming job. Furthermore, the compatibility package provides a Storm compatible API in order
to execute whole Storm topologies with (almost) no code adaption.</p>
-
-      <p><a href="/news/2015/12/11/storm-compatibility.html">Continue reading
&raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -315,6 +317,16 @@
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2016/08/08/release-1.1.0.html">Announcing Apache Flink
1.1.0</a></li>
+      
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2016/05/24/stream-sql.html">Stream Processing for Everyone
with SQL and Apache Flink</a></li>
       
       

http://git-wip-us.apache.org/repos/asf/flink-web/blob/609ba062/content/blog/page2/index.html
----------------------------------------------------------------------
diff --git a/content/blog/page2/index.html b/content/blog/page2/index.html
index 7c3201c..280788c 100644
--- a/content/blog/page2/index.html
+++ b/content/blog/page2/index.html
@@ -161,6 +161,17 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2015/12/11/storm-compatibility.html">Storm
Compatibility in Apache Flink: How to run existing Storm topologies on Flink</a></h2>
+      <p>11 Dec 2015 by Matthias J. Sax (<a href="https://twitter.com/MatthiasJSax">@MatthiasJSax</a>)</p>
+
+      <p>In this blog post, we describe Flink's compatibility package for <a href="https://storm.apache.org">Apache
Storm</a> that allows to embed Spouts (sources) and Bolts (operators) in a regular Flink
streaming job. Furthermore, the compatibility package provides a Storm compatible API in order
to execute whole Storm topologies with (almost) no code adaption.</p>
+
+      <p><a href="/news/2015/12/11/storm-compatibility.html">Continue reading
&raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2015/12/04/Introducing-windows.html">Introducing
Stream Windows in Apache Flink</a></h2>
       <p>04 Dec 2015 by Fabian Hueske (<a href="https://twitter.com/fhueske">@fhueske</a>)</p>
 
@@ -280,18 +291,6 @@ vertex-centric or gather-sum-apply to Flink dataflows.</p>
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/news/2015/05/11/Juggling-with-Bits-and-Bytes.html">Juggling
with Bits and Bytes</a></h2>
-      <p>11 May 2015 by Fabian Hüske (<a href="https://twitter.com/fhueske">@fhueske</a>)</p>
-
-      <p><p>Nowadays, a lot of open-source systems for analyzing large data sets
are implemented in Java or other JVM-based programming languages. The most well-known example
is Apache Hadoop, but also newer frameworks such as Apache Spark, Apache Drill, and also Apache
Flink run on JVMs. A common challenge that JVM-based data analysis engines face is to store
large amounts of data in memory - both for caching and for efficient processing such as sorting
and joining of data. Managing the JVM memory well makes the difference between a system that
is hard to configure and has unpredictable reliability and performance and a system that behaves
robustly with few configuration knobs.</p>
-<p>In this blog post we discuss how Apache Flink manages memory, talk about its custom
data de/serialization stack, and show how it operates on binary data.</p></p>
-
-      <p><a href="/news/2015/05/11/Juggling-with-Bits-and-Bytes.html">Continue
reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -324,6 +323,16 @@ vertex-centric or gather-sum-apply to Flink dataflows.</p>
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2016/08/08/release-1.1.0.html">Announcing Apache Flink
1.1.0</a></li>
+      
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2016/05/24/stream-sql.html">Stream Processing for Everyone
with SQL and Apache Flink</a></li>
       
       

http://git-wip-us.apache.org/repos/asf/flink-web/blob/609ba062/content/blog/page3/index.html
----------------------------------------------------------------------
diff --git a/content/blog/page3/index.html b/content/blog/page3/index.html
index 7ac1c75..b930268 100644
--- a/content/blog/page3/index.html
+++ b/content/blog/page3/index.html
@@ -161,6 +161,18 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2015/05/11/Juggling-with-Bits-and-Bytes.html">Juggling
with Bits and Bytes</a></h2>
+      <p>11 May 2015 by Fabian Hüske (<a href="https://twitter.com/fhueske">@fhueske</a>)</p>
+
+      <p><p>Nowadays, a lot of open-source systems for analyzing large data sets
are implemented in Java or other JVM-based programming languages. The most well-known example
is Apache Hadoop, but also newer frameworks such as Apache Spark, Apache Drill, and also Apache
Flink run on JVMs. A common challenge that JVM-based data analysis engines face is to store
large amounts of data in memory - both for caching and for efficient processing such as sorting
and joining of data. Managing the JVM memory well makes the difference between a system that
is hard to configure and has unpredictable reliability and performance and a system that behaves
robustly with few configuration knobs.</p>
+<p>In this blog post we discuss how Apache Flink manages memory, talk about its custom
data de/serialization stack, and show how it operates on binary data.</p></p>
+
+      <p><a href="/news/2015/05/11/Juggling-with-Bits-and-Bytes.html">Continue
reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2015/04/13/release-0.9.0-milestone1.html">Announcing
Flink 0.9.0-milestone1 preview release</a></h2>
       <p>13 Apr 2015</p>
 
@@ -287,19 +299,6 @@ and offers a new API including definition of flexible windows.</p>
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/news/2014/11/04/release-0.7.0.html">Apache
Flink 0.7.0 available</a></h2>
-      <p>04 Nov 2014</p>
-
-      <p><p>We are pleased to announce the availability of Flink 0.7.0. This
release includes new user-facing features as well as performance and bug fixes, brings the
Scala and Java APIs in sync, and introduces Flink Streaming. A total of 34 people have contributed
to this release, a big thanks to all of them!</p>
-
-</p>
-
-      <p><a href="/news/2014/11/04/release-0.7.0.html">Continue reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -332,6 +331,16 @@ and offers a new API including definition of flexible windows.</p>
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2016/08/08/release-1.1.0.html">Announcing Apache Flink
1.1.0</a></li>
+      
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2016/05/24/stream-sql.html">Stream Processing for Everyone
with SQL and Apache Flink</a></li>
       
       

http://git-wip-us.apache.org/repos/asf/flink-web/blob/609ba062/content/blog/page4/index.html
----------------------------------------------------------------------
diff --git a/content/blog/page4/index.html b/content/blog/page4/index.html
index d73320a..d387243 100644
--- a/content/blog/page4/index.html
+++ b/content/blog/page4/index.html
@@ -161,6 +161,19 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2014/11/04/release-0.7.0.html">Apache
Flink 0.7.0 available</a></h2>
+      <p>04 Nov 2014</p>
+
+      <p><p>We are pleased to announce the availability of Flink 0.7.0. This
release includes new user-facing features as well as performance and bug fixes, brings the
Scala and Java APIs in sync, and introduces Flink Streaming. A total of 34 people have contributed
to this release, a big thanks to all of them!</p>
+
+</p>
+
+      <p><a href="/news/2014/11/04/release-0.7.0.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2014/10/03/upcoming_events.html">Upcoming
Events</a></h2>
       <p>03 Oct 2014</p>
 
@@ -234,6 +247,16 @@ academic and open source project that Flink originates from.</p>
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2016/08/08/release-1.1.0.html">Announcing Apache Flink
1.1.0</a></li>
+      
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2016/05/24/stream-sql.html">Stream Processing for Everyone
with SQL and Apache Flink</a></li>
       
       

