flink-commits mailing list archives

From fhue...@apache.org
Subject [2/2] flink-web git commit: rebuild website
Date Mon, 16 Nov 2015 13:36:50 GMT
rebuild website


Project: http://git-wip-us.apache.org/repos/asf/flink-web/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink-web/commit/4e2b2396
Tree: http://git-wip-us.apache.org/repos/asf/flink-web/tree/4e2b2396
Diff: http://git-wip-us.apache.org/repos/asf/flink-web/diff/4e2b2396

Branch: refs/heads/asf-site
Commit: 4e2b23965480cc69573037eb2d106a0217a2ac2c
Parents: f82224b
Author: Fabian Hueske <fhueske@gmail.com>
Authored: Mon Nov 16 14:34:22 2015 +0100
Committer: Fabian Hueske <fhueske@gmail.com>
Committed: Mon Nov 16 14:34:22 2015 +0100

----------------------------------------------------------------------
 content/blog/feed.xml                         | 175 ++++++++++
 content/blog/index.html                       |  34 +-
 content/blog/page2/index.html                 |  37 +-
 content/blog/page3/index.html                 |  45 +--
 content/blog/page4/index.html                 |  29 ++
 content/img/blog/new-dashboard-screenshot.png | Bin 0 -> 570241 bytes
 content/index.html                            |   8 +-
 content/news/2015/11/16/release-0.10.0.html   | 371 +++++++++++++++++++++
 8 files changed, 649 insertions(+), 50 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink-web/blob/4e2b2396/content/blog/feed.xml
----------------------------------------------------------------------
diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index b3b8d04..135033f 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -7,6 +7,181 @@
 <atom:link href="http://flink.apache.org/blog/feed.xml" rel="self" type="application/rss+xml" />
 
 <item>
+<title>Announcing Apache Flink 0.10.0</title>
+<description>&lt;p&gt;The Apache Flink community is pleased to announce the availability of the 0.10.0 release. The community put significant effort into improving and extending Apache Flink since the last release, focusing on data stream processing and operational features. About 80 contributors provided bug fixes, improvements, and new features such that in total more than 400 JIRA issues could be resolved.&lt;/p&gt;
+
+&lt;p&gt;For Flink 0.10.0, the focus of the community was to graduate the DataStream API from beta and to evolve Apache Flink into a production-ready stream data processor with a competitive feature set. These efforts resulted in support for event-time and out-of-order streams, exactly-once guarantees in the case of failures, a very flexible windowing mechanism, sophisticated operator state management, and a highly-available cluster operation mode. Flink 0.10.0 also brings a new monitoring dashboard with real-time system and job monitoring capabilities. Both batch and streaming modes of Flink benefit from the new high availability and improved monitoring features. Needless to say, Flink 0.10.0 includes many more features, improvements, and bug fixes.&lt;/p&gt;
+
+&lt;p&gt;We encourage everyone to &lt;a href=&quot;/downloads.html&quot;&gt;download the release&lt;/a&gt; and &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-0.10/&quot;&gt;check out the documentation&lt;/a&gt;. Feedback through the Flink &lt;a href=&quot;/community.html#mailing-lists&quot;&gt;mailing lists&lt;/a&gt; is, as always, very welcome!&lt;/p&gt;
+
+&lt;h2 id=&quot;new-features&quot;&gt;New Features&lt;/h2&gt;
+
+&lt;h3 id=&quot;event-time-stream-processing&quot;&gt;Event-time Stream Processing&lt;/h3&gt;
+
+&lt;p&gt;Many stream processing applications consume data from sources that produce events with associated timestamps, such as sensor or user-interaction events. Since events often have to be collected from several sources, it is usually not guaranteed that they arrive at the stream processor in the exact order of their timestamps. Consequently, stream processors must take out-of-order elements into account to produce results that are correct and consistent with respect to the event timestamps. With release 0.10.0, Apache Flink supports event-time processing as well as ingestion-time and processing-time processing. See &lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2674&quot;&gt;FLINK-2674&lt;/a&gt; for details.&lt;/p&gt;
+
+&lt;h3 id=&quot;stateful-stream-processing&quot;&gt;Stateful Stream Processing&lt;/h3&gt;
+
+&lt;p&gt;Operators that maintain and update state are a common pattern in many stream processing applications. Since streaming applications tend to run for a very long time, operator state can become very valuable and impossible to recompute. To enable fault tolerance, operator state must be backed up to persistent storage at regular intervals. Flink 0.10.0 offers flexible interfaces to define, update, and query operator state, as well as hooks to connect various state backends.&lt;/p&gt;
+
+&lt;h3 id=&quot;highly-available-cluster-operations&quot;&gt;Highly-available Cluster Operations&lt;/h3&gt;
+
+&lt;p&gt;Stream processing applications may be live for months. Therefore, a production-ready stream processor must be highly available and continue to process data even in the face of failures. With release 0.10.0, Flink supports high availability modes for standalone clusters and &lt;a href=&quot;https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html&quot;&gt;YARN&lt;/a&gt; setups, eliminating any single point of failure. In this mode, Flink relies on &lt;a href=&quot;https://zookeeper.apache.org&quot;&gt;Apache ZooKeeper&lt;/a&gt; for leader election and for persisting small amounts of metadata about running jobs. You can &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-0.10/setup/jobmanager_high_availability.html&quot;&gt;check out the documentation&lt;/a&gt; to see how to enable high availability. See &lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2287&quot;&gt;FLINK-2287&lt;/a&gt; for details.&lt;/p&gt;
+
+&lt;h3 id=&quot;graduated-datastream-api&quot;&gt;Graduated DataStream API&lt;/h3&gt;
+
+&lt;p&gt;The DataStream API was revised based on user feedback and with upcoming features in mind, and has graduated from beta status to fully supported. The most obvious changes are related to the methods for stream partitioning and window operations. The new windowing system is based on the concepts of window assigners, triggers, and evictors, inspired by the &lt;a href=&quot;http://www.vldb.org/pvldb/vol8/p1792-Akidau.pdf&quot;&gt;Dataflow Model&lt;/a&gt;. The new API is fully described in the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-0.10/apis/streaming_guide.html&quot;&gt;DataStream API documentation&lt;/a&gt;. This &lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/Migration+Guide%3A+0.9.x+to+0.10.x&quot;&gt;migration guide&lt;/a&gt; will help you port your Flink 0.9 DataStream programs to the revised API of Flink 0.10.0. See &lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2674&quot;&gt;FLINK-2674&lt;/a&gt; and &lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2877&quot;&gt;FLINK-2877&lt;/a&gt; for details.&lt;/p&gt;
+
+&lt;h3 id=&quot;new-connectors-for-data-streams&quot;&gt;New Connectors for Data Streams&lt;/h3&gt;
+
+&lt;p&gt;Apache Flink 0.10.0 features DataStream sources and sinks for many common data producers and stores. This includes an exactly-once rolling file sink which supports any file system, including HDFS, local FS, and S3. We also updated the &lt;a href=&quot;https://kafka.apache.org&quot;&gt;Apache Kafka&lt;/a&gt; producer to use the new producer API, and added connectors for &lt;a href=&quot;https://github.com/elastic/elasticsearch&quot;&gt;Elasticsearch&lt;/a&gt; and &lt;a href=&quot;https://nifi.apache.org&quot;&gt;Apache NiFi&lt;/a&gt;. More connectors for DataStream programs will be added by the community in the future. See the following JIRA issues for details: &lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2583&quot;&gt;FLINK-2583&lt;/a&gt;, &lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2386&quot;&gt;FLINK-2386&lt;/a&gt;, &lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2372&quot;&gt;FLINK-2372&lt;/a&gt;, &lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2740&quot;&gt;FLINK-2740&lt;/a&gt;, and &lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2558&quot;&gt;FLINK-2558&lt;/a&gt;.&lt;/p&gt;
+
+&lt;h3 id=&quot;new-web-dashboard--real-time-monitoring&quot;&gt;New Web Dashboard &amp;amp; Real-time Monitoring&lt;/h3&gt;
+
+&lt;p&gt;The 0.10.0 release features a newly designed and significantly improved monitoring dashboard for Apache Flink. The new dashboard visualizes the progress of running jobs and shows real-time statistics of processed data volumes and record counts. Moreover, it gives access to resource usage and JVM statistics of TaskManagers including JVM heap usage and garbage collection details. The following screenshot shows the job view of the new dashboard.&lt;/p&gt;
+
+&lt;center&gt;
+&lt;img src=&quot;/img/blog/new-dashboard-screenshot.png&quot; style=&quot;width:90%;margin:15px&quot; /&gt;
+&lt;/center&gt;
+
+&lt;p&gt;The web server that provides all monitoring statistics has been designed with a REST interface allowing other systems to also access the internal system metrics. See &lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2357&quot;&gt;FLINK-2357&lt;/a&gt; for details.&lt;/p&gt;
+
+&lt;h3 id=&quot;off-heap-managed-memory&quot;&gt;Off-heap Managed Memory&lt;/h3&gt;
+
+&lt;p&gt;Flink’s internal operators (such as its sort algorithm and hash tables) write data to and read data from managed memory to achieve memory-safe operations and reduce garbage collection overhead. Until version 0.10.0, managed memory was allocated only from JVM heap memory. With this release, managed memory can also be allocated from off-heap memory. This will facilitate shorter TaskManager start-up times as well as reduce garbage collection pressure. See &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-0.10/setup/config.html#managed-memory&quot;&gt;the documentation&lt;/a&gt; to learn how to configure managed memory to use off-heap memory. JIRA issue &lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1320&quot;&gt;FLINK-1320&lt;/a&gt; contains further details.&lt;/p&gt;
+
+&lt;h3 id=&quot;outer-joins&quot;&gt;Outer Joins&lt;/h3&gt;
+
+&lt;p&gt;Outer joins have been one of the most frequently requested features for Flink’s &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-0.10/apis/programming_guide.html&quot;&gt;DataSet API&lt;/a&gt;. Although there was a workaround to implement outer joins as a CoGroup function, it had significant drawbacks, including added code complexity and not being fully memory-safe. With release 0.10.0, Flink adds native support for &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-0.10/apis/dataset_transformations.html#outerjoin&quot;&gt;left, right, and full outer joins&lt;/a&gt; to the DataSet API. All outer joins are backed by a memory-safe operator implementation that leverages Flink’s managed memory. See &lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-687&quot;&gt;FLINK-687&lt;/a&gt; and &lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2107&quot;&gt;FLINK-2107&lt;/a&gt; for details.&lt;/p&gt;
+
+&lt;h3 id=&quot;gelly-major-improvements-and-scala-api&quot;&gt;Gelly: Major Improvements and Scala API&lt;/h3&gt;
+
+&lt;p&gt;&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-0.10/libs/gelly_guide.html&quot;&gt;Gelly&lt;/a&gt; is Flink’s API and library for processing and analyzing large-scale graphs. Gelly was introduced with release 0.9.0 and has been very well received by users and contributors. Based on user feedback, Gelly has been improved since then. In addition, Flink 0.10.0 introduces a Scala API for Gelly. See &lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2857&quot;&gt;FLINK-2857&lt;/a&gt; and &lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1962&quot;&gt;FLINK-1962&lt;/a&gt; for details.&lt;/p&gt;
+
+&lt;h2 id=&quot;more-improvements-and-fixes&quot;&gt;More Improvements and Fixes&lt;/h2&gt;
+
+&lt;p&gt;The Flink community resolved more than 400 issues. The following list is a selection of new features and fixed bugs.&lt;/p&gt;
+
+&lt;ul&gt;
+  &lt;li&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-1851&quot;&gt;FLINK-1851&lt;/a&gt; Java Table API does not support Casting&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2152&quot;&gt;FLINK-2152&lt;/a&gt; Provide zipWithIndex utility in flink-contrib&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2158&quot;&gt;FLINK-2158&lt;/a&gt; NullPointerException in DateSerializer.&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2240&quot;&gt;FLINK-2240&lt;/a&gt; Use BloomFilter to minimize probe side records which are spilled to disk in Hybrid-Hash-Join&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2533&quot;&gt;FLINK-2533&lt;/a&gt; Gap based random sample optimization&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2555&quot;&gt;FLINK-2555&lt;/a&gt; Hadoop Input/Output Formats are unable to access secured HDFS clusters&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2565&quot;&gt;FLINK-2565&lt;/a&gt; Support primitive arrays as keys&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2582&quot;&gt;FLINK-2582&lt;/a&gt; Document how to build Flink with other Scala versions&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2584&quot;&gt;FLINK-2584&lt;/a&gt; ASM dependency is not shaded away&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2689&quot;&gt;FLINK-2689&lt;/a&gt; Reusing null object for joins with SolutionSet&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2703&quot;&gt;FLINK-2703&lt;/a&gt; Remove log4j classes from fat jar / document how to use Flink with logback&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2763&quot;&gt;FLINK-2763&lt;/a&gt; Bug in Hybrid Hash Join: Request to spill a partition with less than two buffers.&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2767&quot;&gt;FLINK-2767&lt;/a&gt; Add support Scala 2.11 to Scala shell&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2774&quot;&gt;FLINK-2774&lt;/a&gt; Import Java API classes automatically in Flink’s Scala shell&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2782&quot;&gt;FLINK-2782&lt;/a&gt; Remove deprecated features for 0.10&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2800&quot;&gt;FLINK-2800&lt;/a&gt; kryo serialization problem&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2834&quot;&gt;FLINK-2834&lt;/a&gt; Global round-robin for temporary directories&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2842&quot;&gt;FLINK-2842&lt;/a&gt; S3FileSystem is broken&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2874&quot;&gt;FLINK-2874&lt;/a&gt; Certain Avro generated getters/setters not recognized&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2895&quot;&gt;FLINK-2895&lt;/a&gt; Duplicate immutable object creation&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2964&quot;&gt;FLINK-2964&lt;/a&gt; MutableHashTable fails when spilling partitions without overflow segments&lt;/li&gt;
+&lt;/ul&gt;
+
+&lt;h2 id=&quot;notice&quot;&gt;Notice&lt;/h2&gt;
+
+&lt;p&gt;As previously announced, Flink 0.10.0 no longer supports Java 6. If you are still using Java 6, please consider upgrading to Java 8 (Java 7 ended its free support in April 2015).
+Also note that some methods in the DataStream API had to be renamed as part of the API rework. For example, the &lt;code&gt;groupBy&lt;/code&gt; method has been renamed to &lt;code&gt;keyBy&lt;/code&gt;, and the windowing API has changed. This &lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/Migration+Guide%3A+0.9.x+to+0.10.x&quot;&gt;migration guide&lt;/a&gt; will help you port your Flink 0.9 DataStream programs to the revised API of Flink 0.10.0.&lt;/p&gt;
+
+&lt;h2 id=&quot;contributors&quot;&gt;Contributors&lt;/h2&gt;
+
+&lt;ul&gt;
+  &lt;li&gt;Alexander Alexandrov&lt;/li&gt;
+  &lt;li&gt;Marton Balassi&lt;/li&gt;
+  &lt;li&gt;Enrique Bautista&lt;/li&gt;
+  &lt;li&gt;Faye Beligianni&lt;/li&gt;
+  &lt;li&gt;Bryan Bende&lt;/li&gt;
+  &lt;li&gt;Ajay Bhat&lt;/li&gt;
+  &lt;li&gt;Chris Brinkman&lt;/li&gt;
+  &lt;li&gt;Dmitry Buzdin&lt;/li&gt;
+  &lt;li&gt;Kun Cao&lt;/li&gt;
+  &lt;li&gt;Paris Carbone&lt;/li&gt;
+  &lt;li&gt;Ufuk Celebi&lt;/li&gt;
+  &lt;li&gt;Shivani Chandna&lt;/li&gt;
+  &lt;li&gt;Liang Chen&lt;/li&gt;
+  &lt;li&gt;Felix Cheung&lt;/li&gt;
+  &lt;li&gt;Hubert Czerpak&lt;/li&gt;
+  &lt;li&gt;Vimal Das&lt;/li&gt;
+  &lt;li&gt;Behrouz Derakhshan&lt;/li&gt;
+  &lt;li&gt;Suminda Dharmasena&lt;/li&gt;
+  &lt;li&gt;Stephan Ewen&lt;/li&gt;
+  &lt;li&gt;Fengbin Fang&lt;/li&gt;
+  &lt;li&gt;Gyula Fora&lt;/li&gt;
+  &lt;li&gt;Lun Gao&lt;/li&gt;
+  &lt;li&gt;Gabor Gevay&lt;/li&gt;
+  &lt;li&gt;Piotr Godek&lt;/li&gt;
+  &lt;li&gt;Sachin Goel&lt;/li&gt;
+  &lt;li&gt;Anton Haglund&lt;/li&gt;
+  &lt;li&gt;Gábor Hermann&lt;/li&gt;
+  &lt;li&gt;Greg Hogan&lt;/li&gt;
+  &lt;li&gt;Fabian Hueske&lt;/li&gt;
+  &lt;li&gt;Martin Junghanns&lt;/li&gt;
+  &lt;li&gt;Vasia Kalavri&lt;/li&gt;
+  &lt;li&gt;Ulf Karlsson&lt;/li&gt;
+  &lt;li&gt;Frederick F. Kautz&lt;/li&gt;
+  &lt;li&gt;Samia Khalid&lt;/li&gt;
+  &lt;li&gt;Johannes Kirschnick&lt;/li&gt;
+  &lt;li&gt;Kostas Kloudas&lt;/li&gt;
+  &lt;li&gt;Alexander Kolb&lt;/li&gt;
+  &lt;li&gt;Johann Kovacs&lt;/li&gt;
+  &lt;li&gt;Aljoscha Krettek&lt;/li&gt;
+  &lt;li&gt;Sebastian Kruse&lt;/li&gt;
+  &lt;li&gt;Andreas Kunft&lt;/li&gt;
+  &lt;li&gt;Chengxiang Li&lt;/li&gt;
+  &lt;li&gt;Chen Liang&lt;/li&gt;
+  &lt;li&gt;Andra Lungu&lt;/li&gt;
+  &lt;li&gt;Suneel Marthi&lt;/li&gt;
+  &lt;li&gt;Tamara Mendt&lt;/li&gt;
+  &lt;li&gt;Robert Metzger&lt;/li&gt;
+  &lt;li&gt;Maximilian Michels&lt;/li&gt;
+  &lt;li&gt;Chiwan Park&lt;/li&gt;
+  &lt;li&gt;Sahitya Pavurala&lt;/li&gt;
+  &lt;li&gt;Pietro Pinoli&lt;/li&gt;
+  &lt;li&gt;Ricky Pogalz&lt;/li&gt;
+  &lt;li&gt;Niraj Rai&lt;/li&gt;
+  &lt;li&gt;Lokesh Rajaram&lt;/li&gt;
+  &lt;li&gt;Johannes Reifferscheid&lt;/li&gt;
+  &lt;li&gt;Till Rohrmann&lt;/li&gt;
+  &lt;li&gt;Henry Saputra&lt;/li&gt;
+  &lt;li&gt;Matthias Sax&lt;/li&gt;
+  &lt;li&gt;Shiti Saxena&lt;/li&gt;
+  &lt;li&gt;Chesnay Schepler&lt;/li&gt;
+  &lt;li&gt;Peter Schrott&lt;/li&gt;
+  &lt;li&gt;Saumitra Shahapure&lt;/li&gt;
+  &lt;li&gt;Nikolaas Steenbergen&lt;/li&gt;
+  &lt;li&gt;Thomas Sun&lt;/li&gt;
+  &lt;li&gt;Peter Szabo&lt;/li&gt;
+  &lt;li&gt;Viktor Taranenko&lt;/li&gt;
+  &lt;li&gt;Kostas Tzoumas&lt;/li&gt;
+  &lt;li&gt;Pieter-Jan Van Aeken&lt;/li&gt;
+  &lt;li&gt;Theodore Vasiloudis&lt;/li&gt;
+  &lt;li&gt;Timo Walther&lt;/li&gt;
+  &lt;li&gt;Chengxuan Wang&lt;/li&gt;
+  &lt;li&gt;Huang Wei&lt;/li&gt;
+  &lt;li&gt;Dawid Wysakowicz&lt;/li&gt;
+  &lt;li&gt;Rerngvit Yanggratoke&lt;/li&gt;
+  &lt;li&gt;Nezih Yigitbasi&lt;/li&gt;
+  &lt;li&gt;Ted Yu&lt;/li&gt;
+  &lt;li&gt;Rucong Zhang&lt;/li&gt;
+  &lt;li&gt;Vyacheslav Zholudev&lt;/li&gt;
+  &lt;li&gt;Zoltán Zvara&lt;/li&gt;
+&lt;/ul&gt;
+
+</description>
+<pubDate>Mon, 16 Nov 2015 09:00:00 +0100</pubDate>
+<link>http://flink.apache.org/news/2015/11/16/release-0.10.0.html</link>
+<guid isPermaLink="true">/news/2015/11/16/release-0.10.0.html</guid>
+</item>
+
+<item>
 <title>Off-heap Memory in Apache Flink and the curious JIT compiler</title>
 <description>&lt;p&gt;Running data-intensive code in the JVM and making it well-behaved is tricky. Systems that put billions of data objects naively onto the JVM heap face unpredictable OutOfMemoryErrors and Garbage Collection stalls. Of course, you still want to keep your data in memory as much as possible, for speed and responsiveness of the processing applications. In that context, “off-heap” has become almost something like a magic word to solve these problems.&lt;/p&gt;
 

http://git-wip-us.apache.org/repos/asf/flink-web/blob/4e2b2396/content/blog/index.html
----------------------------------------------------------------------
diff --git a/content/blog/index.html b/content/blog/index.html
index 4b63460..d98401a 100644
--- a/content/blog/index.html
+++ b/content/blog/index.html
@@ -155,6 +155,19 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2015/11/16/release-0.10.0.html">Announcing Apache Flink 0.10.0</a></h2>
+      <p>16 Nov 2015</p>
+
+      <p><p>The Apache Flink community is pleased to announce the availability of the 0.10.0 release. The community has put significant effort into improving and extending Apache Flink since the last release, focusing on data stream processing and operational features. About 80 contributors provided bug fixes, improvements, and new features, resolving more than 400 JIRA issues in total.</p>
+
+</p>
+
+      <p><a href="/news/2015/11/16/release-0.10.0.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2015/09/16/off-heap-memory.html">Off-heap Memory in Apache Flink and the curious JIT compiler</a></h2>
       <p>16 Sep 2015 by Stephan Ewen (<a href="https://twitter.com/stephanewen">@stephanewen</a>)</p>
 
@@ -279,17 +292,6 @@ release is a preview release that contains known issues.</p>
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html">Peeking into Apache Flink's Engine Room</a></h2>
-      <p>13 Mar 2015 by Fabian Hüske (<a href="https://twitter.com/fhueske">@fhueske</a>)</p>
-
-      <p>Joins are prevalent operations in many data processing applications. Most data processing systems feature APIs that make joining data sets very easy. However, the internal algorithms for join processing are much more involved – especially if large data sets need to be efficiently handled. In this blog post, we cut through Apache Flink’s layered architecture and take a look at its internals with a focus on how it handles joins.</p>
-
-      <p><a href="/news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html">Continue reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -322,6 +324,16 @@ release is a preview release that contains known issues.</p>
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2015/11/16/release-0.10.0.html">Announcing Apache Flink 0.10.0</a></li>
+      
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2015/09/16/off-heap-memory.html">Off-heap Memory in Apache Flink and the curious JIT compiler</a></li>
       
       

http://git-wip-us.apache.org/repos/asf/flink-web/blob/4e2b2396/content/blog/page2/index.html
----------------------------------------------------------------------
diff --git a/content/blog/page2/index.html b/content/blog/page2/index.html
index 22c109c..e92811b 100644
--- a/content/blog/page2/index.html
+++ b/content/blog/page2/index.html
@@ -155,6 +155,17 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html">Peeking into Apache Flink's Engine Room</a></h2>
+      <p>13 Mar 2015 by Fabian Hüske (<a href="https://twitter.com/fhueske">@fhueske</a>)</p>
+
+      <p>Joins are prevalent operations in many data processing applications. Most data processing systems feature APIs that make joining data sets very easy. However, the internal algorithms for join processing are much more involved – especially if large data sets need to be efficiently handled. In this blog post, we cut through Apache Flink’s layered architecture and take a look at its internals with a focus on how it handles joins.</p>
+
+      <p><a href="/news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2015/03/02/february-2015-in-flink.html">February 2015 in the Flink community</a></h2>
       <p>02 Mar 2015</p>
 
@@ -278,22 +289,6 @@ and offers a new API including definition of flexible windows.</p>
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/news/2014/08/26/release-0.6.html">Apache Flink 0.6 available</a></h2>
-      <p>26 Aug 2014</p>
-
-      <p><p>We are happy to announce the availability of Flink 0.6. This is the
-first release of the system inside the Apache Incubator and under the
-name Flink. Releases up to 0.5 were under the name Stratosphere, the
-academic and open source project that Flink originates from.</p>
-
-</p>
-
-      <p><a href="/news/2014/08/26/release-0.6.html">Continue reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -326,6 +321,16 @@ academic and open source project that Flink originates from.</p>
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2015/11/16/release-0.10.0.html">Announcing Apache Flink 0.10.0</a></li>
+      
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2015/09/16/off-heap-memory.html">Off-heap Memory in Apache Flink and the curious JIT compiler</a></li>
       
       

http://git-wip-us.apache.org/repos/asf/flink-web/blob/4e2b2396/content/blog/page3/index.html
----------------------------------------------------------------------
diff --git a/content/blog/page3/index.html b/content/blog/page3/index.html
index cf79f0e..1584296 100644
--- a/content/blog/page3/index.html
+++ b/content/blog/page3/index.html
@@ -155,6 +155,22 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2014/08/26/release-0.6.html">Apache Flink 0.6 available</a></h2>
+      <p>26 Aug 2014</p>
+
+      <p><p>We are happy to announce the availability of Flink 0.6. This is the
+first release of the system inside the Apache Incubator and under the
+name Flink. Releases up to 0.5 were under the name Stratosphere, the
+academic and open source project that Flink originates from.</p>
+
+</p>
+
+      <p><a href="/news/2014/08/26/release-0.6.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2014/05/31/release-0.5.html">Stratosphere version 0.5 available</a></h2>
       <p>31 May 2014</p>
 
@@ -271,25 +287,6 @@
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/news/2013/12/13/humboldt-innovation-award.html">Stratosphere wins award at Humboldt Innovation Competition "Big Data: Research meets Startups"</a></h2>
-      <p>13 Dec 2013</p>
-
-      <p>    <p> Stratosphere won the second place in
-    the <a href="http://www.humboldt-innovation.de/de/newsdetail/News/View/Forum%2BJunge%2BSpitzenforscher%2BBIG%2BData%2B%2BResearch%2Bmeets%2BStartups-123.html">competition</a>
-    organized by Humboldt Innovation on "Big Data: Research meets
-    Startups," where several research projects were evaluated by a
-    panel of experts from the Berlin startup ecosystem. The award
-    includes a monetary prize of 10,000 euros.
-    </p>
-
-</p>
-
-      <p><a href="/news/2013/12/13/humboldt-innovation-award.html">Continue reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -322,6 +319,16 @@
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2015/11/16/release-0.10.0.html">Announcing Apache Flink 0.10.0</a></li>
+      
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2015/09/16/off-heap-memory.html">Off-heap Memory in Apache Flink and the curious JIT compiler</a></li>
       
       

http://git-wip-us.apache.org/repos/asf/flink-web/blob/4e2b2396/content/blog/page4/index.html
----------------------------------------------------------------------
diff --git a/content/blog/page4/index.html b/content/blog/page4/index.html
index 0d298d6..940f2ac 100644
--- a/content/blog/page4/index.html
+++ b/content/blog/page4/index.html
@@ -155,6 +155,25 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2013/12/13/humboldt-innovation-award.html">Stratosphere wins award at Humboldt Innovation Competition "Big Data: Research meets Startups"</a></h2>
+      <p>13 Dec 2013</p>
+
+      <p>    <p> Stratosphere won the second place in
+    the <a href="http://www.humboldt-innovation.de/de/newsdetail/News/View/Forum%2BJunge%2BSpitzenforscher%2BBIG%2BData%2B%2BResearch%2Bmeets%2BStartups-123.html">competition</a>
+    organized by Humboldt Innovation on "Big Data: Research meets
+    Startups," where several research projects were evaluated by a
+    panel of experts from the Berlin startup ecosystem. The award
+    includes a monetary prize of 10,000 euros.
+    </p>
+
+</p>
+
+      <p><a href="/news/2013/12/13/humboldt-innovation-award.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2013/10/21/cikm2013-paper.html">Paper "All Roads Lead to Rome: Optimistic Recovery for Distributed Iterative Data Processing" accepted at CIKM 2013</a></h2>
       <p>21 Oct 2013</p>
 
@@ -301,6 +320,16 @@ We demonstrate our optimizer and a job submission client that allows users to pe
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2015/11/16/release-0.10.0.html">Announcing Apache Flink 0.10.0</a></li>
+      
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2015/09/16/off-heap-memory.html">Off-heap Memory in Apache Flink and the curious JIT compiler</a></li>
       
       

http://git-wip-us.apache.org/repos/asf/flink-web/blob/4e2b2396/content/img/blog/new-dashboard-screenshot.png
----------------------------------------------------------------------
diff --git a/content/img/blog/new-dashboard-screenshot.png b/content/img/blog/new-dashboard-screenshot.png
new file mode 100644
index 0000000..2184f47
Binary files /dev/null and b/content/img/blog/new-dashboard-screenshot.png differ

http://git-wip-us.apache.org/repos/asf/flink-web/blob/4e2b2396/content/index.html
----------------------------------------------------------------------
diff --git a/content/index.html b/content/index.html
index 7592172..69049e8 100644
--- a/content/index.html
+++ b/content/index.html
@@ -226,6 +226,10 @@
 
     <ul class="list-group">
   
+      <li class="list-group-item"><span>16 Nov 2015</span> &raquo;
+        <a href="/news/2015/11/16/release-0.10.0.html">Announcing Apache Flink 0.10.0</a>
+      </li>
+  
       <li class="list-group-item"><span>16 Sep 2015</span> &raquo;
         <a href="/news/2015/09/16/off-heap-memory.html">Off-heap Memory in Apache Flink and the curious JIT compiler</a>
       </li>
@@ -241,10 +245,6 @@
       <li class="list-group-item"><span>24 Aug 2015</span> &raquo;
         <a href="/news/2015/08/24/introducing-flink-gelly.html">Introducing Gelly: Graph Processing with Apache Flink</a>
       </li>
-  
-      <li class="list-group-item"><span>24 Jun 2015</span> &raquo;
-        <a href="/news/2015/06/24/announcing-apache-flink-0.9.0-release.html">Announcing Apache Flink 0.9.0</a>
-      </li>
 
 </ul>
 

http://git-wip-us.apache.org/repos/asf/flink-web/blob/4e2b2396/content/news/2015/11/16/release-0.10.0.html
----------------------------------------------------------------------
diff --git a/content/news/2015/11/16/release-0.10.0.html b/content/news/2015/11/16/release-0.10.0.html
new file mode 100644
index 0000000..aca9f15
--- /dev/null
+++ b/content/news/2015/11/16/release-0.10.0.html
@@ -0,0 +1,371 @@
+<!DOCTYPE html>
+<html lang="en">
+  <head>
+    <meta charset="utf-8">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1">
+    <!-- The above 3 meta tags *must* come first in the head; any other head content must come *after* these tags -->
+    <title>Apache Flink: Announcing Apache Flink 0.10.0</title>
+    <link rel="shortcut icon" href="/favicon.ico" type="image/x-icon">
+    <link rel="icon" href="/favicon.ico" type="image/x-icon">
+
+    <!-- Bootstrap -->
+    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/css/bootstrap.min.css">
+    <link rel="stylesheet" href="/css/flink.css">
+    <link rel="stylesheet" href="/css/syntax.css">
+
+    <!-- Blog RSS feed -->
+    <link href="/blog/feed.xml" rel="alternate" type="application/rss+xml" title="Apache Flink Blog: RSS feed" />
+
+    <!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media queries -->
+    <!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
+    <!--[if lt IE 9]>
+      <script src="https://oss.maxcdn.com/html5shiv/3.7.2/html5shiv.min.js"></script>
+      <script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
+    <![endif]-->
+  </head>
+  <body>  
+    
+
+  <!-- Top navbar. -->
+    <nav class="navbar navbar-default navbar-fixed-top">
+      <div class="container">
+        <!-- The logo. -->
+        <div class="navbar-header">
+          <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
+            <span class="icon-bar"></span>
+            <span class="icon-bar"></span>
+            <span class="icon-bar"></span>
+          </button>
+          <div class="navbar-logo">
+            <a href="/">
+              <img alt="Apache Flink" src="/img/navbar-brand-logo.jpg" width="78px" height="40px">
+            </a>
+          </div>
+        </div><!-- /.navbar-header -->
+
+        <!-- The navigation links. -->
+        <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
+          <ul class="nav navbar-nav">
+
+            <!-- Overview -->
+            <li><a href="/index.html">Overview</a></li>
+
+            <!-- Features -->
+            <li><a href="/features.html">Features</a></li>
+
+            <!-- Downloads -->
+            <li><a href="/downloads.html">Downloads</a></li>
+
+            <!-- FAQ -->
+            <li><a href="/faq.html">FAQ</a></li>
+
+
+            <!-- Quickstart -->
+            <li class="dropdown">
+              <a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-expanded="false"><small><span class="glyphicon glyphicon-new-window"></span></small> Quickstart <span class="caret"></span></a>
+              <ul class="dropdown-menu" role="menu">
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-release-0.10/quickstart/setup_quickstart.html">Setup</a></li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-release-0.10/quickstart/java_api_quickstart.html">Java API</a></li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-release-0.10/quickstart/scala_api_quickstart.html">Scala API</a></li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-release-0.10/quickstart/run_example_quickstart.html">Run Step-by-Step Example</a></li>
+              </ul>
+            </li>
+
+
+            <!-- Documentation -->
+            <li class="dropdown">
+              <a href="" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-expanded="false"><small><span class="glyphicon glyphicon-new-window"></span></small> Documentation <span class="caret"></span></a>
+              <ul class="dropdown-menu" role="menu">
+                <!-- Latest stable release -->
+                <li role="presentation" class="dropdown-header"><strong>Latest Release</strong> (Stable)</li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-release-0.10">0.10.0 Documentation</a></li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-release-0.10/api/java" class="active">0.10.0 Javadocs</a></li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-release-0.10/api/scala/index.html" class="active">0.10.0 ScalaDocs</a></li>
+
+                <!-- Snapshot docs -->
+                <li class="divider"></li>
+                <li role="presentation" class="dropdown-header"><strong>Snapshot</strong> (Development)</li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-master">1.0 Documentation</a></li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-master/api/java" class="active">1.0 Javadocs</a></li>
+                <li><a href="http://ci.apache.org/projects/flink/flink-docs-master/api/scala/index.html" class="active">1.0 ScalaDocs</a></li>
+
+                <!-- Wiki -->
+                <li class="divider"></li>
+                <li><a href="https://cwiki.apache.org/confluence/display/FLINK/Apache+Flink+Home">Wiki</a></li>
+              </ul>
+            </li>
+
+          </ul>
+
+          <ul class="nav navbar-nav navbar-right">
+            <!-- Blog -->
+            <li class=" active hidden-md hidden-sm"><a href="/blog/">Blog</a></li>
+
+            <li class="dropdown hidden-md hidden-sm">
+              <a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-expanded="false">Community <span class="caret"></span></a>
+              <ul class="dropdown-menu" role="menu">
+                <!-- Community -->
+                <li role="presentation" class="dropdown-header"><strong>Community</strong></li>
+                <li><a href="/community.html#mailing-lists">Mailing Lists</a></li>
+                <li><a href="/community.html#irc">IRC</a></li>
+                <li><a href="/community.html#stack-overflow">Stack Overflow</a></li>
+                <li><a href="/community.html#issue-tracker">Issue Tracker</a></li>
+                <li><a href="/community.html#third-party-packages">Third Party Packages</a></li>
+                <li><a href="/community.html#source-code">Source Code</a></li>
+                <li><a href="/community.html#people">People</a></li>
+
+                <!-- Contribute -->
+                <li class="divider"></li>
+                <li role="presentation" class="dropdown-header"><strong>Contribute</strong></li>
+                <li><a href="/how-to-contribute.html">How to Contribute</a></li>
+                <li><a href="/contribute-code.html">Contribute Code</a></li>
+                <li><a href="/contribute-documentation.html">Contribute Documentation</a></li>
+                <li><a href="/improve-website.html">Improve the Website</a></li>
+              </ul>
+            </li>
+
+            <li class="dropdown hidden-md hidden-sm">
+              <a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-expanded="false">Project <span class="caret"></span></a>
+              <ul class="dropdown-menu" role="menu">
+                <!-- Project -->
+                <li role="presentation" class="dropdown-header"><strong>Project</strong></li>
+                <li><a href="/material.html">Material</a></li>
+                <li><a href="https://twitter.com/apacheflink"><small><span class="glyphicon glyphicon-new-window"></span></small> Twitter</a></li>
+                <li><a href="https://github.com/apache/flink"><small><span class="glyphicon glyphicon-new-window"></span></small> GitHub</a></li>
+                <li><a href="https://cwiki.apache.org/confluence/display/FLINK/Apache+Flink+Home"><small><span class="glyphicon glyphicon-new-window"></span></small> Wiki</a></li>
+              </ul>
+            </li>
+          </ul>
+        </div><!-- /.navbar-collapse -->
+      </div><!-- /.container -->
+    </nav>
+
+
+    <!-- Main content. -->
+    <div class="container">
+      
+
+<div class="row">
+  <div class="col-sm-8 col-sm-offset-2">
+    <div class="row">
+      <h1>Announcing Apache Flink 0.10.0</h1>
+
+      <article>
+        <p>16 Nov 2015</p>
+
+<p>The Apache Flink community is pleased to announce the availability of the 0.10.0 release. The community has put significant effort into improving and extending Apache Flink since the last release, focusing on data stream processing and operational features. About 80 contributors provided bug fixes, improvements, and new features, resolving more than 400 JIRA issues in total.</p>
+
+<p>For Flink 0.10.0, the focus of the community was to graduate the DataStream API from beta and to evolve Apache Flink into a production-ready stream data processor with a competitive feature set. These efforts resulted in support for event-time and out-of-order streams, exactly-once guarantees in the case of failures, a very flexible windowing mechanism, sophisticated operator state management, and a highly-available cluster operation mode. Flink 0.10.0 also brings a new monitoring dashboard with real-time system and job monitoring capabilities. Both batch and streaming modes of Flink benefit from the new high availability and improved monitoring features. Needless to say, Flink 0.10.0 includes many more features, improvements, and bug fixes.</p>
+
+<p>We encourage everyone to <a href="/downloads.html">download the release</a> and <a href="https://ci.apache.org/projects/flink/flink-docs-release-0.10/">check out the documentation</a>. Feedback through the Flink <a href="/community.html#mailing-lists">mailing lists</a> is, as always, very welcome!</p>
+
+<h2 id="new-features">New Features</h2>
+
+<h3 id="event-time-stream-processing">Event-time Stream Processing</h3>
+
+<p>Many stream processing applications consume data from sources that produce events with associated timestamps, such as sensor or user-interaction events. Since events often have to be collected from several sources, it is usually not guaranteed that they arrive at the stream processor in the exact order of their timestamps. Consequently, stream processors must take out-of-order elements into account to produce results that are correct and consistent with respect to the event timestamps. With release 0.10.0, Apache Flink supports event-time processing as well as ingestion-time and processing-time processing. See <a href="https://issues.apache.org/jira/browse/FLINK-2674">FLINK-2674</a> for details.</p>
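For illustration, a minimal sketch of how a program opts into event-time semantics (sources or explicit extractors are still expected to assign timestamps and watermarks; exact API details are in the documentation linked above):

    import org.apache.flink.streaming.api.TimeCharacteristic;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class EventTimeSetup {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // Interpret time based on the timestamps carried by the events themselves,
            // not the wall clock of the machines processing them.
            env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
            // ... add sources that assign timestamps/watermarks, transformations, and sinks ...
            env.execute("event-time example");
        }
    }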
+
+<h3 id="stateful-stream-processing">Stateful Stream Processing</h3>
+
+<p>Operators that maintain and update state are a common pattern in many stream processing applications. Since streaming applications tend to run for a very long time, operator state can become very valuable and impossible to recompute. To enable fault tolerance, operator state must be backed up to persistent storage at regular intervals. Flink 0.10.0 offers flexible interfaces to define, update, and query operator state, as well as hooks to connect various state backends.</p>
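For illustration, a rough sketch of a user function that snapshots and restores its own state with each checkpoint, assuming the Checkpointed interface of the streaming API (exact interface names and signatures may differ slightly; the DataStream documentation is authoritative):

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.streaming.api.checkpoint.Checkpointed;

    // Counts the elements it has seen; the counter is snapshotted with every checkpoint
    // and restored from the last successful checkpoint after a failure.
    public class CountingMapper implements MapFunction<String, Long>, Checkpointed<Long> {
        private long count;

        @Override
        public Long map(String value) {
            return ++count;
        }

        @Override
        public Long snapshotState(long checkpointId, long checkpointTimestamp) {
            return count;
        }

        @Override
        public void restoreState(Long state) {
            count = state;
        }
    }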
+
+<h3 id="highly-available-cluster-operations">Highly-available Cluster Operations</h3>
+
+<p>Stream processing applications may be live for months. Therefore, a production-ready stream processor must be highly available and continue to process data even in the face of failures. With release 0.10.0, Flink supports high availability modes for standalone clusters and <a href="https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html">YARN</a> setups, eliminating any single point of failure. In this mode, Flink relies on <a href="https://zookeeper.apache.org">Apache ZooKeeper</a> for leader election and for persisting small amounts of metadata about running jobs. You can <a href="https://ci.apache.org/projects/flink/flink-docs-release-0.10/setup/jobmanager_high_availability.html">check out the documentation</a> to see how to enable high availability. See <a href="https://issues.apache.org/jira/browse/FLINK-2287">FLINK-2287</a> for details.</p>
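Enabling the ZooKeeper-based mode comes down to a few flink-conf.yaml entries, roughly along these lines (the key names are indicative and the quorum addresses are placeholders; the linked documentation is authoritative):

    recovery.mode: zookeeper
    recovery.zookeeper.quorum: zkhost1:2181,zkhost2:2181,zkhost3:2181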
+
+<h3 id="graduated-datastream-api">Graduated DataStream API</h3>
+
+<p>The DataStream API was revised based on user feedback and with upcoming features in mind, and has graduated from beta status to fully supported. The most obvious changes are related to the methods for stream partitioning and window operations. The new windowing system is based on the concepts of window assigners, triggers, and evictors, inspired by the <a href="http://www.vldb.org/pvldb/vol8/p1792-Akidau.pdf">Dataflow Model</a>. The new API is fully described in the <a href="https://ci.apache.org/projects/flink/flink-docs-release-0.10/apis/streaming_guide.html">DataStream API documentation</a>. This <a href="https://cwiki.apache.org/confluence/display/FLINK/Migration+Guide%3A+0.9.x+to+0.10.x">migration guide</a> will help you port your Flink 0.9 DataStream programs to the revised API of Flink 0.10.0. See <a href="https://issues.apache.org/jira/browse/FLINK-2674">FLINK-2674</a> and <a href="https://issues.apache.org/jira/browse/FLINK-2877">FLINK-2877</a> for details.</p>
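For illustration, a small sketch of the revised API using keyBy and a tumbling time window (method names as described in the linked DataStream documentation; details may differ slightly):

    import java.util.concurrent.TimeUnit;

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class WindowedCount {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            DataStream<Tuple2<String, Integer>> counts = env
                .fromElements(Tuple2.of("a", 1), Tuple2.of("b", 2), Tuple2.of("a", 3))
                .keyBy(0)                                    // partition the stream by the first tuple field
                .timeWindow(Time.of(10, TimeUnit.SECONDS))   // tumbling 10 second windows (a window assigner under the hood)
                .sum(1);                                     // aggregate the second field per key and window

            counts.print();
            env.execute("windowed count example");
        }
    }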
+
+<h3 id="new-connectors-for-data-streams">New Connectors for Data Streams</h3>
+
+<p>Apache Flink 0.10.0 features DataStream sources and sinks for many common data producers and stores. This includes an exactly-once rolling file sink which supports any file system, including HDFS, local FS, and S3. We also updated the <a href="https://kafka.apache.org">Apache Kafka</a> producer to use the new producer API, and added connectors for <a href="https://github.com/elastic/elasticsearch">Elasticsearch</a> and <a href="https://nifi.apache.org">Apache NiFi</a>. More connectors for DataStream programs will be added by the community in the future. See the following JIRA issues for details: <a href="https://issues.apache.org/jira/browse/FLINK-2583">FLINK-2583</a>, <a href="https://issues.apache.org/jira/browse/FLINK-2386">FLINK-2386</a>, <a href="https://issues.apache.org/jira/browse/FLINK-2372">FLINK-2372</a>, <a href="https://issues.apache.org/jira/browse/FLINK-2740">FLINK-2740</a>, and <a href="https://issues.apache.org/jira/browse/FLINK-2558">FLINK-2558</a>.</p>
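For illustration, a minimal sketch of the new rolling file sink (assuming the RollingSink class from the flink-connector-filesystem module; the socket source and the output path are placeholders):

    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.fs.RollingSink;

    public class RollingFileSinkExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            DataStream<String> lines = env.socketTextStream("localhost", 9999);

            // Write the stream to time-bucketed files; together with checkpointing this
            // gives exactly-once results in HDFS, local file systems, or S3.
            lines.addSink(new RollingSink<String>("hdfs:///flink/example-output"));

            env.execute("rolling file sink example");
        }
    }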
+
+<h3 id="new-web-dashboard--real-time-monitoring">New Web Dashboard &amp; Real-time Monitoring</h3>
+
+<p>The 0.10.0 release features a newly designed and significantly improved monitoring dashboard for Apache Flink. The new dashboard visualizes the progress of running jobs and shows real-time statistics of processed data volumes and record counts. Moreover, it gives access to resource usage and JVM statistics of TaskManagers including JVM heap usage and garbage collection details. The following screenshot shows the job view of the new dashboard.</p>
+
+<center>
+<img src="/img/blog/new-dashboard-screenshot.png" style="width:90%;margin:15px" />
+</center>
+
+<p>The web server that provides all monitoring statistics has been designed with a REST interface allowing other systems to also access the internal system metrics. See <a href="https://issues.apache.org/jira/browse/FLINK-2357">FLINK-2357</a> for details.</p>
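For example, another tool could poll the web server with a plain HTTP request (the host, port, and /joboverview path below are placeholders; see the monitoring documentation for the actual endpoints):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    public class PollJobOverview {
        public static void main(String[] args) throws Exception {
            // Ask the JobManager's web server for an overview of running and finished jobs.
            URL url = new URL("http://jobmanager-host:8081/joboverview");
            try (BufferedReader reader = new BufferedReader(new InputStreamReader(url.openStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line); // the response is a JSON document
                }
            }
        }
    }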
+
+<h3 id="off-heap-managed-memory">Off-heap Managed Memory</h3>
+
+<p>Flink’s internal operators (such as its sort algorithm and hash tables) write data to and read data from managed memory to achieve memory-safe operations and reduce garbage collection overhead. Until version 0.10.0, managed memory was allocated only from JVM heap memory. With this release, managed memory can also be allocated from off-heap memory. This will facilitate shorter TaskManager start-up times as well as reduce garbage collection pressure. See <a href="https://ci.apache.org/projects/flink/flink-docs-release-0.10/setup/config.html#managed-memory">the documentation</a> to learn how to configure managed memory to use off-heap memory. JIRA issue <a href="https://issues.apache.org/jira/browse/FLINK-1320">FLINK-1320</a> contains further details.</p>
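Switching managed memory to off-heap allocation is a single flink-conf.yaml setting, roughly as follows (the key name is indicative; the linked configuration page is authoritative):

    taskmanager.memory.off-heap: true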
+
+<h3 id="outer-joins">Outer Joins</h3>
+
+<p>Outer joins have been one of the most frequently requested features for Flink’s <a href="https://ci.apache.org/projects/flink/flink-docs-release-0.10/apis/programming_guide.html">DataSet API</a>. Although there was a workaround to implement outer joins as a CoGroup function, it had significant drawbacks, including added code complexity and not being fully memory-safe. With release 0.10.0, Flink adds native support for <a href="https://ci.apache.org/projects/flink/flink-docs-release-0.10/apis/dataset_transformations.html#outerjoin">left, right, and full outer joins</a> to the DataSet API. All outer joins are backed by a memory-safe operator implementation that leverages Flink’s managed memory. See <a href="https://issues.apache.org/jira/browse/FLINK-687">FLINK-687</a> and <a href="https://issues.apache.org/jira/browse/FLINK-2107">FLINK-2107</a> for details.</p>
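For illustration, a compact sketch of a left outer join on the DataSet API (hypothetical example data; for unmatched records the right-hand side is null):

    import org.apache.flink.api.common.functions.JoinFunction;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.tuple.Tuple2;

    public class OuterJoinExample {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

            DataSet<Tuple2<Integer, String>> persons = env.fromElements(Tuple2.of(1, "Alice"), Tuple2.of(2, "Bob"));
            DataSet<Tuple2<Integer, String>> cities = env.fromElements(Tuple2.of(1, "Berlin"));

            DataSet<String> joined = persons
                .leftOuterJoin(cities)
                .where(0)     // join key on the left input
                .equalTo(0)   // join key on the right input
                .with(new JoinFunction<Tuple2<Integer, String>, Tuple2<Integer, String>, String>() {
                    @Override
                    public String join(Tuple2<Integer, String> person, Tuple2<Integer, String> city) {
                        // city is null for persons without a matching record
                        return person.f1 + " -> " + (city == null ? "n/a" : city.f1);
                    }
                });

            joined.print();
        }
    }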
+
+<h3 id="gelly-major-improvements-and-scala-api">Gelly: Major Improvements and Scala API</h3>
+
+<p><a href="https://ci.apache.org/projects/flink/flink-docs-release-0.10/libs/gelly_guide.html">Gelly</a> is Flink’s API and library for processing and analyzing large-scale graphs. Gelly was introduced with release 0.9.0 and has been very well received by users and contributors. Based on user feedback, Gelly has been improved since then. In addition, Flink 0.10.0 introduces a Scala API for Gelly. See <a href="https://issues.apache.org/jira/browse/FLINK-2857">FLINK-2857</a> and <a href="https://issues.apache.org/jira/browse/FLINK-1962">FLINK-1962</a> for details.</p>
+
+<h2 id="more-improvements-and-fixes">More Improvements and Fixes</h2>
+
+<p>The Flink community resolved more than 400 issues. The following list is a selection of new features and fixed bugs.</p>
+
+<ul>
+  <li><a href="https://issues.apache.org/jira/browse/FLINK-1851">FLINK-1851</a> Java Table API does not support Casting</li>
+  <li><a href="https://issues.apache.org/jira/browse/FLINK-2152">FLINK-2152</a> Provide zipWithIndex utility in flink-contrib</li>
+  <li><a href="https://issues.apache.org/jira/browse/FLINK-2158">FLINK-2158</a> NullPointerException in DateSerializer.</li>
+  <li><a href="https://issues.apache.org/jira/browse/FLINK-2240">FLINK-2240</a> Use BloomFilter to minimize probe side records which are spilled to disk in Hybrid-Hash-Join</li>
+  <li><a href="https://issues.apache.org/jira/browse/FLINK-2533">FLINK-2533</a> Gap based random sample optimization</li>
+  <li><a href="https://issues.apache.org/jira/browse/FLINK-2555">FLINK-2555</a> Hadoop Input/Output Formats are unable to access secured HDFS clusters</li>
+  <li><a href="https://issues.apache.org/jira/browse/FLINK-2565">FLINK-2565</a> Support primitive arrays as keys</li>
+  <li><a href="https://issues.apache.org/jira/browse/FLINK-2582">FLINK-2582</a> Document how to build Flink with other Scala versions</li>
+  <li><a href="https://issues.apache.org/jira/browse/FLINK-2584">FLINK-2584</a> ASM dependency is not shaded away</li>
+  <li><a href="https://issues.apache.org/jira/browse/FLINK-2689">FLINK-2689</a> Reusing null object for joins with SolutionSet</li>
+  <li><a href="https://issues.apache.org/jira/browse/FLINK-2703">FLINK-2703</a> Remove log4j classes from fat jar / document how to use Flink with logback</li>
+  <li><a href="https://issues.apache.org/jira/browse/FLINK-2763">FLINK-2763</a> Bug in Hybrid Hash Join: Request to spill a partition with less than two buffers.</li>
+  <li><a href="https://issues.apache.org/jira/browse/FLINK-2767">FLINK-2767</a> Add support Scala 2.11 to Scala shell</li>
+  <li><a href="https://issues.apache.org/jira/browse/FLINK-2774">FLINK-2774</a> Import Java API classes automatically in Flink’s Scala shell</li>
+  <li><a href="https://issues.apache.org/jira/browse/FLINK-2782">FLINK-2782</a> Remove deprecated features for 0.10</li>
+  <li><a href="https://issues.apache.org/jira/browse/FLINK-2800">FLINK-2800</a> kryo serialization problem</li>
+  <li><a href="https://issues.apache.org/jira/browse/FLINK-2834">FLINK-2834</a> Global round-robin for temporary directories</li>
+  <li><a href="https://issues.apache.org/jira/browse/FLINK-2842">FLINK-2842</a> S3FileSystem is broken</li>
+  <li><a href="https://issues.apache.org/jira/browse/FLINK-2874">FLINK-2874</a> Certain Avro generated getters/setters not recognized</li>
+  <li><a href="https://issues.apache.org/jira/browse/FLINK-2895">FLINK-2895</a> Duplicate immutable object creation</li>
+  <li><a href="https://issues.apache.org/jira/browse/FLINK-2964">FLINK-2964</a> MutableHashTable fails when spilling partitions without overflow segments</li>
+</ul>
+
+<h2 id="notice">Notice</h2>
+
+<p>As previously announced, Flink 0.10.0 no longer supports Java 6. If you are still using Java 6, please consider upgrading to Java 8 (Java 7 ended its free support in April 2015).
+Also note that some methods in the DataStream API had to be renamed as part of the API rework. For example, the <code>groupBy</code> method has been renamed to <code>keyBy</code>, and the windowing API has changed. This <a href="https://cwiki.apache.org/confluence/display/FLINK/Migration+Guide%3A+0.9.x+to+0.10.x">migration guide</a> will help you port your Flink 0.9 DataStream programs to the revised API of Flink 0.10.0.</p>
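For instance, a rolling aggregation over a keyed stream translates roughly as follows (stream stands for any DataStream of tuples):

    // Flink 0.9.x
    stream.groupBy(0).sum(1);

    // Flink 0.10.0: keying is now expressed with keyBy
    stream.keyBy(0).sum(1);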
+
+<h2 id="contributors">Contributors</h2>
+
+<ul>
+  <li>Alexander Alexandrov</li>
+  <li>Marton Balassi</li>
+  <li>Enrique Bautista</li>
+  <li>Faye Beligianni</li>
+  <li>Bryan Bende</li>
+  <li>Ajay Bhat</li>
+  <li>Chris Brinkman</li>
+  <li>Dmitry Buzdin</li>
+  <li>Kun Cao</li>
+  <li>Paris Carbone</li>
+  <li>Ufuk Celebi</li>
+  <li>Shivani Chandna</li>
+  <li>Liang Chen</li>
+  <li>Felix Cheung</li>
+  <li>Hubert Czerpak</li>
+  <li>Vimal Das</li>
+  <li>Behrouz Derakhshan</li>
+  <li>Suminda Dharmasena</li>
+  <li>Stephan Ewen</li>
+  <li>Fengbin Fang</li>
+  <li>Gyula Fora</li>
+  <li>Lun Gao</li>
+  <li>Gabor Gevay</li>
+  <li>Piotr Godek</li>
+  <li>Sachin Goel</li>
+  <li>Anton Haglund</li>
+  <li>Gábor Hermann</li>
+  <li>Greg Hogan</li>
+  <li>Fabian Hueske</li>
+  <li>Martin Junghanns</li>
+  <li>Vasia Kalavri</li>
+  <li>Ulf Karlsson</li>
+  <li>Frederick F. Kautz</li>
+  <li>Samia Khalid</li>
+  <li>Johannes Kirschnick</li>
+  <li>Kostas Kloudas</li>
+  <li>Alexander Kolb</li>
+  <li>Johann Kovacs</li>
+  <li>Aljoscha Krettek</li>
+  <li>Sebastian Kruse</li>
+  <li>Andreas Kunft</li>
+  <li>Chengxiang Li</li>
+  <li>Chen Liang</li>
+  <li>Andra Lungu</li>
+  <li>Suneel Marthi</li>
+  <li>Tamara Mendt</li>
+  <li>Robert Metzger</li>
+  <li>Maximilian Michels</li>
+  <li>Chiwan Park</li>
+  <li>Sahitya Pavurala</li>
+  <li>Pietro Pinoli</li>
+  <li>Ricky Pogalz</li>
+  <li>Niraj Rai</li>
+  <li>Lokesh Rajaram</li>
+  <li>Johannes Reifferscheid</li>
+  <li>Till Rohrmann</li>
+  <li>Henry Saputra</li>
+  <li>Matthias Sax</li>
+  <li>Shiti Saxena</li>
+  <li>Chesnay Schepler</li>
+  <li>Peter Schrott</li>
+  <li>Saumitra Shahapure</li>
+  <li>Nikolaas Steenbergen</li>
+  <li>Thomas Sun</li>
+  <li>Peter Szabo</li>
+  <li>Viktor Taranenko</li>
+  <li>Kostas Tzoumas</li>
+  <li>Pieter-Jan Van Aeken</li>
+  <li>Theodore Vasiloudis</li>
+  <li>Timo Walther</li>
+  <li>Chengxuan Wang</li>
+  <li>Huang Wei</li>
+  <li>Dawid Wysakowicz</li>
+  <li>Rerngvit Yanggratoke</li>
+  <li>Nezih Yigitbasi</li>
+  <li>Ted Yu</li>
+  <li>Rucong Zhang</li>
+  <li>Vyacheslav Zholudev</li>
+  <li>Zoltán Zvara</li>
+</ul>
+
+
+      </article>
+    </div>
+
+    <div class="row">
+      <div id="disqus_thread"></div>
+      <script type="text/javascript">
+        /* * * CONFIGURATION VARIABLES: EDIT BEFORE PASTING INTO YOUR WEBPAGE * * */
+        var disqus_shortname = 'stratosphere-eu'; // required: replace example with your forum shortname
+
+        /* * * DON'T EDIT BELOW THIS LINE * * */
+        (function() {
+            var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true;
+            dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
+             (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
+        })();
+      </script>
+    </div>
+  </div>
+</div>
+
+      <hr />
+      <div class="footer text-center">
+        <p>Copyright © 2014-2015 <a href="http://apache.org">The Apache Software Foundation</a>. All Rights Reserved.</p>
+        <p>Apache Flink, Apache, and the Apache feather logo are trademarks of The Apache Software Foundation.</p>
+        <p><a href="/privacy-policy.html">Privacy Policy</a> &middot; <a href="/blog/feed.xml">RSS feed</a></p>
+      </div>
+
+    </div><!-- /.container -->
+
+    <!-- jQuery (necessary for Bootstrap's JavaScript plugins) -->
+    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.2/jquery.min.js"></script>
+    <!-- Include all compiled plugins (below), or include individual files as needed -->
+    <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/js/bootstrap.min.js"></script>
+    <script src="/js/codetabs.js"></script>
+
+    <!-- Google Analytics -->
+    <script>
+      (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
+      (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
+      m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
+      })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
+
+      ga('create', 'UA-52545728-1', 'auto');
+      ga('send', 'pageview');
+    </script>
+  </body>
+</html>

