From: uce@apache.org
To: commits@flink.apache.org
Date: Wed, 18 Jan 2017 14:01:19 -0000
Message-Id: <81859cc8f76f42f19634cfe8bf673116@git.apache.org>
Subject: [79/84] [abbrv] flink-web git commit: Rebuild website

http://git-wip-us.apache.org/repos/asf/flink-web/blob/61adc137/content/community.html
----------------------------------------------------------------------
diff --git a/content/community.html b/content/community.html
new file mode 100644
index 0000000..04869c7
--- /dev/null
+++ b/content/community.html
Community & Project Info
There are many ways to get help from the Apache Flink community. The mailing lists are the primary place where all Flink committers are present. If you want to talk with the Flink committers and users in a chat, there is an IRC channel. Some committers also monitor Stack Overflow; please remember to tag your questions with the flink tag. Bugs and feature requests can be discussed on the dev mailing list or in JIRA. Those interested in contributing to Flink should check out the contribution guide.
Mailing Lists
• news@flink.apache.org: News and announcements from the Flink community. Read-only list (Subscribe, Digest, Unsubscribe, Archives).
• community@flink.apache.org: Broader community discussions related to meetups, conferences, blog posts, and job offers. Open for posting (Subscribe, Digest, Unsubscribe, Archives).
• user@flink.apache.org: User support and questions. Open for posting (Subscribe, Digest, Unsubscribe, Archives, Nabble Archive).
• dev@flink.apache.org: Development-related discussions. Open for posting (Subscribe, Digest, Unsubscribe, Archives, Nabble Archive).
• issues@flink.apache.org: Mirror of all JIRA activity. Read-only list (Subscribe, Digest, Unsubscribe, Archives).
• commits@flink.apache.org: All commits to our repositories. Read-only list (Subscribe, Digest, Unsubscribe, Archives).
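The lists are run by ezmlm, so the usual Apache mailing list conventions should apply: to subscribe, you would typically send an empty mail to <list>-subscribe@flink.apache.org (for example, dev-subscribe@flink.apache.org) and confirm the reply; the <list>-unsubscribe@flink.apache.org addresses work the same way.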
IRC

There is an IRC channel called #flink dedicated to Apache Flink at irc.freenode.org. There is also a web-based IRC client available.

The IRC channel can be used for online discussions about Apache Flink as a community, but developers should take care to move or duplicate all official or useful discussions to the issue tracker or the dev mailing list.
Stack Overflow

Committers are watching Stack Overflow for the flink tag.

Make sure to tag your questions there accordingly to get answers from the Flink community.
Issue Tracker

We use JIRA to track all code-related issues: https://issues.apache.org/jira/browse/FLINK.

All issue activity is also mirrored to the issues mailing list.
Source Code

Main source repositories

• ASF writable: https://git-wip-us.apache.org/repos/asf/flink.git
• ASF read-only: git://git.apache.org/repos/asf/flink.git
• ASF read-only: https://github.com/apache/flink.git

Note: Flink does not build with Oracle JDK 6, but it does run with Oracle JDK 6.
Website repositories

• ASF writable: https://git-wip-us.apache.org/repos/asf/flink-web.git
• ASF read-only: https://github.com/apache/flink-web.git

Training

dataArtisans currently maintains free Apache Flink training. Their training website has slides and exercises with solutions. The slides are also available on SlideShare.
Project Wiki

The Apache Flink project wiki contains a range of relevant resources for Flink users. However, some content on the wiki might be out-of-date. When in doubt, please refer to the Flink documentation.
Flink Forward

Flink Forward 2015 (October 12-13, 2015) was the first conference to bring together the Apache Flink developer and user community. You can find slides and videos of all talks on the Flink Forward 2015 page.

The second edition of Flink Forward took place on September 12-14, 2016. All slides and videos are available on the Flink Forward 2016 page.
People

Name                | Role                | Apache ID
--------------------|---------------------|-------------
Márton Balassi      | PMC, Committer      | mbalassi
Paris Carbone       | Committer           | senorcarbone
Ufuk Celebi         | PMC, Committer      | uce
Stephan Ewen        | PMC, Committer, VP  | sewen
Gyula Fóra          | PMC, Committer      | gyfora
Alan Gates          | PMC, Committer      | gates
Fabian Hueske       | PMC, Committer      | fhueske
Vasia Kalavri       | PMC, Committer      | vasia
Aljoscha Krettek    | PMC, Committer      | aljoscha
Andra Lungu         | Committer           | andra
Robert Metzger      | PMC, Committer      | rmetzger
Maximilian Michels  | PMC, Committer      | mxm
Chiwan Park         | Committer           | chiwanpark
Till Rohrmann       | PMC, Committer      | trohrmann
Henry Saputra       | PMC, Committer      | hsaputra
Matthias J. Sax     | Committer           | mjsax
Sebastian Schelter  | PMC, Committer      | ssc
Kostas Tzoumas      | PMC, Committer      | ktzoumas
Timo Walther        | PMC, Committer      | twalthr
Daniel Warneke      | PMC, Committer      | warneke
ChengXiang Li       | Committer           | chengxiang
Greg Hogan          | Committer           | greg
Tzu-Li (Gordon) Tai | Committer           | tzulitai
You can reach committers directly at <apache-id>@apache.org. A list of all contributors can be found here.

Former mentors

The following people were kind enough to mentor the project while in incubation.
Name             | Role                | Apache ID
-----------------|---------------------|----------
Ashutosh Chauhan | Former PPMC, Mentor | hashutosh
Ted Dunning      | Former PPMC, Mentor | tdunning
Alan Gates       | Former PPMC, Mentor | gates
Owen O'Malley    | Former PPMC, Mentor | omalley
Sean Owen        | Former PPMC, Mentor | srowen
Henry Saputra    | Former PPMC, Mentor | hsaputra
Slides

Note: Keep in mind that code examples on slides may be incomplete or outdated. Always refer to the latest documentation for an up-to-date reference.

2016
• Stefan Richter: A look at Apache Flink 1.2 and beyond. Apache Flink Meetup Berlin, November 2016: SlideShare
• Robert Metzger: Apache Flink Community Updates November 2016. Apache Flink Meetup Berlin, November 2016: SlideShare
• Aljoscha Krettek: Apache Flink for IoT: How Event-Time Processing Enables Easy and Accurate Analytics. Big Data Spain, Madrid, November 2016: SlideShare
• Stephan Ewen: Keynote - The maturing data streaming ecosystem and Apache Flink's accelerated growth. Apache Big Data Europe 2016, Seville, November 2016: SlideShare
• Kostas Tzoumas: Stream Processing with Apache Flink®. Apache Flink London Meetup, November 2016: SlideShare
• Kostas Tzoumas: Apache Flink®: State of the Union and What's Next. Strata + Hadoop World New York, September 2016: SlideShare
• Kostas Tzoumas & Stephan Ewen: Keynote - The maturing data streaming ecosystem and Apache Flink's accelerated growth. Flink Forward, Berlin, September 2016: SlideShare
• Robert Metzger: Connecting Apache Flink to the World - Reviewing the streaming connectors. Flink Forward, Berlin, September 2016: SlideShare
• Till Rohrmann & Fabian Hueske: Declarative stream processing with StreamSQL and CEP. Flink Forward, Berlin, September 2016: SlideShare
• Jamie Grier: Robust Stream Processing with Apache Flink. Flink Forward, Berlin, September 2016: SlideShare
• Jamie Grier: The Stream Processor as a Database - Building Online Applications directly on Streams. Flink Forward, Berlin, September 2016: SlideShare
• Till Rohrmann: Dynamic Scaling - How Apache Flink adapts to changing workloads. Flink Forward, Berlin, September 2016: SlideShare
• Stephan Ewen: Running Flink Everywhere. Flink Forward, Berlin, September 2016: SlideShare
• Stephan Ewen: Scaling Apache Flink to very large State. Flink Forward, Berlin, September 2016: SlideShare
• Aljoscha Krettek: The Future of Apache Flink. Flink Forward, Berlin, September 2016: SlideShare
• Fabian Hueske: Taking a look under the hood of Apache Flink's relational APIs. Flink Forward, Berlin, September 2016: SlideShare
• Kostas Tzoumas: Streaming in the Wild with Apache Flink. Hadoop Summit San Jose, June 2016: SlideShare
• Stephan Ewen: The Stream Processor as the Database - Apache Flink. Berlin Buzzwords, June 2016: SlideShare
• Till Rohrmann & Fabian Hueske: Streaming Analytics & CEP - Two sides of the same coin? Berlin Buzzwords, June 2016: SlideShare
• Robert Metzger: A Data Streaming Architecture with Apache Flink. Berlin Buzzwords, June 2016: SlideShare
• Stephan Ewen: Continuous Processing with Apache Flink. Strata + Hadoop World London, May 2016: SlideShare
• Stephan Ewen: Streaming Analytics with Apache Flink 1.0. Flink NYC Meetup, May 2016: SlideShare
• Ufuk Celebi: Unified Stream & Batch Processing with Apache Flink. Hadoop Summit Dublin, April 2016: SlideShare
• Kostas Tzoumas: Counting Elements in Streams. Strata San Jose, March 2016: SlideShare
• Jamie Grier: Extending the Yahoo! Streaming Benchmark. Flink Washington DC Meetup, March 2016: SlideShare
• Jamie Grier: Stateful Stream Processing at In-Memory Speed. Flink NYC Meetup, March 2016: SlideShare
• Robert Metzger: Stream Processing with Apache Flink. QCon London, March 2016: SlideShare
• Vasia Kalavri: Batch and Stream Graph Processing with Apache Flink. Flink and Neo4j Meetup Berlin, March 2016: SlideShare
• Maximilian Michels: Stream Processing with Apache Flink. Big Data Technology Summit, February 2016: SlideShare
• Vasia Kalavri: Single-Pass Graph Streaming Analytics with Apache Flink. FOSDEM, January 2016: SlideShare
• Till Rohrmann: Streaming Done Right. FOSDEM, January 2016: SlideShare
2015

• Till Rohrmann: Streaming Data Flow with Apache Flink (October 29th, 2015): SlideShare
• Stephan Ewen: Flink-0.10 (October 28th, 2015): SlideShare
• Robert Metzger: Architecture of Flink's Streaming Runtime (ApacheCon, September 29th, 2015): SlideShare
• Robert Metzger: Click-Through Example for Flink's KafkaConsumer Checkpointing (September 2015): SlideShare
• Paris Carbone: Apache Flink Streaming. Resiliency and Consistency (Google Tech Talk, August 2015): SlideShare
• Andra Lungu: Graph Processing with Apache Flink (August 26th, 2015): SlideShare
• Till Rohrmann: Interactive data analysis with Apache Flink (June 23rd, 2015): SlideShare
• Gyula Fóra: Real-time data processing with Apache Flink (Budapest Data Forum, June 4th, 2015): SlideShare
• Till Rohrmann: Machine Learning with Apache Flink (March 23rd, 2015): SlideShare
• Marton Balassi: Flink Streaming (February 26th, 2015): SlideShare
• Vasia Kalavri: Large-Scale Graph Processing with Apache Flink (FOSDEM, January 31st, 2015): SlideShare
• Fabian Hueske: Hadoop Compatibility (January 28th, 2015): SlideShare
• Kostas Tzoumas: Apache Flink Overview (January 14th, 2015): SlideShare
2014

• Kostas Tzoumas: Flink Internals (November 18th, 2014): SlideShare
• Marton Balassi & Gyula Fóra: The Flink Big Data Analytics Platform (ApacheCon, November 11th, 2014): SlideShare
• Till Rohrmann: Introduction to Apache Flink (October 15th, 2014): SlideShare
Materials

We provide the Apache Flink logo in different sizes and formats. You can download all variants (7.7 MB) or just pick the one you need from this page.

Portable Network Graphics (PNG)
Colored logo

Sizes (px): 50x50, 100x100, 200x200, 500x500, 1000x1000

White filled logo

Sizes (px): 50x50, 100x100, 200x200, 500x500, 1000x1000

Black outline logo

Sizes (px): 50x50, 100x100, 200x200, 500x500, 1000x1000

You can find more variants of the logo in this directory or download all variants (7.7 MB).
Scalable Vector Graphics (SVG)

Colored logo

Colored logo with black text (color_black.svg)

White filled logo

White filled logo (white_filled.svg)

Black outline logo

Black outline logo (black_outline.svg)

You can find more variants of the logo in this directory or download all variants (7.7 MB).
Photoshop (PSD)

You can download the logo in PSD format as well.

You can find more variants of the logo in this directory or download all variants (7.7 MB).
Color Scheme

You can use the provided color scheme, which incorporates some colors of the Flink logo.

http://git-wip-us.apache.org/repos/asf/flink-web/blob/61adc137/content/contribute-code.html
----------------------------------------------------------------------
diff --git a/content/contribute-code.html b/content/contribute-code.html
new file mode 100644
index 0000000..4cfd367
--- /dev/null
+++ b/content/contribute-code.html
Contributing Code
Apache Flink is maintained, improved, and extended by code contributions of volunteers. The Apache Flink community encourages anybody to contribute source code. In order to ensure a pleasant contribution experience for contributors and reviewers and to preserve the high quality of the code base, we follow a contribution process that is explained in this document.
This document contains everything you need to know about contributing code to Apache Flink. It describes the process of preparing, testing, and submitting a contribution, explains the coding guidelines and code style of Flink's code base, and gives instructions for setting up a development environment.
IMPORTANT: Please read this document carefully before starting to work on a code contribution. It is important to follow the process and guidelines explained below. Otherwise, your pull request might not be accepted or might require substantial rework. In particular, before opening a pull request that implements a new feature, you need to open a JIRA ticket and reach consensus with the community on whether this feature is needed.
Code Contribution Process
Before you start coding…
… please make sure there is a JIRA issue that corresponds to your contribution. This is a general rule that the Flink community follows for all code contributions, including bug fixes, improvements, or new features, with an exception for trivial hot fixes. If you would like to fix a bug that you found or if you would like to add a new feature or improvement to Flink, please follow the File a bug report or Propose an improvement or a new feature guidelines to open an issue in Flink’s JIRA before starting with the implementation.
If the description of a JIRA issue indicates that its resolution will touch sensitive parts of the code base, be sufficiently complex, or add significant amounts of new code, the Flink community might request a design document (most contributions should not require one). The purpose of this document is to ensure that the overall approach to address the issue is sensible and agreed upon by the community. JIRA issues that require a design document are tagged with the requires-design-doc label. The label can be attached by any community member who feels that a design document is necessary. A good description helps to decide whether a JIRA issue requires a design document or not. The design document must be attached to, or linked from, the JIRA issue and cover the following aspects:
• Overview of the general approach
• List of API changes (changed interfaces, new and deprecated configuration parameters, changed behavior, …)
• Main components and classes to be touched
• Known limitations of the proposed approach
A design document can be added by anybody, including the reporter of the issue or the person working on it.

Contributions for JIRA issues that require a design document will not be added to Flink's code base before a design document has been accepted by the community with lazy consensus. Please check whether a design document is required before starting to code.

While coding…

… please respect the following rules:
• Take any discussion or requirement that is recorded in the JIRA issue into account.
• Follow the design document (if one is required) as closely as possible. If your implementation deviates too much from the proposed solution, please update the design document and seek consensus. Minor variations are OK but should be pointed out when the contribution is submitted.
• Closely follow the coding guidelines and the code style.
• Do not mix unrelated issues into one contribution.
Please feel free to ask questions at any time. Either send a mail to the dev mailing list or comment on the JIRA issue.

The following instructions will help you to set up a development environment.

Verifying the compliance of your code

It is very important to verify the compliance of changes before submitting your contribution. This includes:
• Making sure the code builds.
• Verifying that all existing and new tests pass.
• Checking that the code style is not violated.
• Making sure no unrelated or unnecessary reformatting changes are included.
You can build the code, run the tests, and check (parts of) the code style by calling

mvn clean verify
Please note that some tests in Flink's code base are flaky and can fail by chance. The Flink community is working hard on improving these tests, but sometimes this is not possible, e.g., when tests include external dependencies. We maintain all tests that are known to be flaky in JIRA and attach the test-stability label. Please check (and extend) this list of known flaky tests if you encounter a test failure that seems to be unrelated to your changes.

Please note that we run additional build profiles for different combinations of Java, Scala, and Hadoop versions to validate your contribution. We encourage every contributor to use a continuous integration service that automatically tests the code in your repository whenever you push a change. The Best practices guide shows how to integrate Travis with your GitHub repository.

In addition to the automated tests, please check the diff of your changes and remove all unrelated changes such as unnecessary reformatting.
Preparing and submitting your contribution

To make the changes easily mergeable, please rebase them onto the latest version of the main repository's master branch. Please also respect the commit message guidelines, clean up your commit history, and squash your commits into an appropriate set. Please verify your contribution one more time after rebasing and squashing, as described above.
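A typical sequence looks as follows (a sketch: it assumes you have added a remote named upstream that points to the main Flink repository, which is not set up automatically):

git fetch upstream                # fetch the latest master
git rebase upstream/master        # replay your commits on top of it
git rebase -i upstream/master     # optionally squash commits into a clean set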
The Flink project accepts code contributions through the GitHub mirror, in the form of pull requests. Pull requests are a simple way to offer a patch, by providing a pointer to a code branch that contains the change.

To open a pull request, push your contribution back into your fork of the Flink repository.

git push origin myBranch
Go to the website of your repository fork (https://github.com/<your-user-name>/flink) and use the "Create Pull Request" button to start creating a pull request. Make sure that the base fork is apache/flink master and the head fork selects the branch with your changes. Give the pull request a meaningful description and submit it.

It is also possible to attach a patch to a JIRA issue.
Coding guidelines

Pull requests and commit message
• Single change per PR. Please do not combine various unrelated changes in a single pull request. Rather, open multiple individual pull requests, each referring to a JIRA issue. This ensures that pull requests are topic-related, can be merged more easily, and typically result in topic-specific merge conflicts only.

• No WIP pull requests. We consider pull requests as requests to merge the referenced code as is into the current stable master branch. Therefore, a pull request should not be "work in progress". Open a pull request only if you are confident that it can be merged into the current master branch without problems. If you would rather have comments on your code, post a link to your working branch.

• Commit message. A pull request must relate to a JIRA issue; create an issue if none exists for the change you want to make. The latest commit message should reference that issue, as in the example after this list. An example commit message would be [FLINK-633] Fix NullPointerException for empty UDF parameters. That way, the pull request automatically describes what it does, for example which bug it fixes in what way.

• Append review commits. When you get comments on the pull request asking for changes, append commits for these changes. Do not rebase and squash them. This allows people to review the cleanup work independently; otherwise, reviewers have to go through the entire set of diffs again.

• No merge commits. Please do not open pull requests containing merge commits. Use git pull --rebase origin master if you want to update your changes to the latest master prior to opening a pull request.
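For instance, a commit for the FLINK-633 example above could be created like this:

git commit -m "[FLINK-633] Fix NullPointerException for empty UDF parameters"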
Exceptions and error messages
• Exception swallowing. Do not swallow exceptions and print the stacktrace. Instead, check how exceptions are handled by similar classes.

• Meaningful error messages. Give meaningful exception messages. Try to imagine why an exception could be thrown (what a user did wrong) and give a message that will help a user to resolve the problem. A short sketch follows this list.
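A minimal sketch of both rules (the class and method are made up for illustration):

public final class ConfigParser {

    /** Parses an integer option, failing with an actionable message. */
    public static int parseIntOption(String key, String value) {
        try {
            return Integer.parseInt(value);
        } catch (NumberFormatException e) {
            // Do not swallow the exception or just print the stack trace:
            // keep the cause and tell the user what was wrong and how to fix it.
            throw new IllegalArgumentException(
                "Invalid value '" + value + "' for option '" + key
                    + "': expected an integer.", e);
        }
    }
}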

Tests

+ +
• Tests need to pass. Any pull request where the tests do not pass or which does not compile will not undergo any further review. We recommend connecting your private GitHub account with Travis CI (like the Flink GitHub repository). Travis will run tests for all tested environments whenever you push something into your GitHub repository. Please note the previous comment about flaky tests.

• Tests for new features are required. All new features need to be backed by tests, strictly. It is very easy for a later merge to accidentally remove or break a feature, and this will not be caught if the feature is not guarded by tests. Anything not covered by a test is considered cosmetic.

• Use appropriate test mechanisms. Please use unit tests to test isolated functionality, such as methods. Unit tests should execute in sub-second time and should be preferred whenever possible. The names of unit test classes have to end in *Test (see the sketch after this list). Use integration tests to implement long-running tests. Flink offers test utilities for end-to-end tests that start a Flink instance and run a job. These tests are pretty heavy and can significantly increase build time, so they should be added with care. The names of integration test classes have to end in *ITCase.
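A naming sketch (JUnit 4; the class under test is hypothetical, and a heavyweight end-to-end test of the same code would live in a class named like CsvSplitterITCase instead):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

/** Fast, isolated unit test; the *Test suffix marks it as a unit test. */
public class CsvSplitterTest {

    @Test
    public void testSplitOnComma() {
        // Trivial check, just to keep the sketch self-contained.
        assertEquals(2, "a,b".split(",").length);
    }
}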

Documentation

+ +
• Documentation updates. Many changes in the system will also affect the documentation (both JavaDocs and the user documentation in the docs/ directory). Pull requests and patches are required to update the documentation accordingly; otherwise the change cannot be accepted to the source code. See the Contribute documentation guide for how to update the documentation.

• Javadocs for public methods. All public methods and classes need to have JavaDocs. Please write meaningful docs: good docs are concise and informative. Please also update JavaDocs if you change the signature or behavior of a documented method. A short sketch follows this list.
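A sketch of the expected JavaDoc style (the utility class and method are hypothetical):

/** Utility for counting elements; only here to illustrate the JavaDoc style. */
public final class IterableUtils {

    /**
     * Counts the elements of the given iterable.
     *
     * @param values the iterable whose elements are counted; must not be null
     * @return the number of elements in the iterable
     */
    public static <T> long count(Iterable<T> values) {
        long count = 0;
        for (T ignored : values) {
            count++;
        }
        return count;
    }
}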
Code formatting
• No reformattings. Please keep reformatting of source files to a minimum. Diffs become unreadable if you (or your IDE automatically) remove or replace whitespace, reformat code, or reformat comments. Also, other patches that affect the same files become unmergeable. Please configure your IDE such that code is not automatically reformatted. Pull requests with excessive or unnecessary code reformatting might be rejected.
Code style
• Apache license headers. Make sure you have Apache License headers in your files. The RAT plugin checks for that when you build the code.

• Tabs vs. spaces. We are using tabs for indentation, not spaces. We are not religious there; it just happened that we started with tabs, and it is important not to mix them (merge/diff conflicts).

• Blocks. All statements after if, for, while, do, … must always be encapsulated in a block with curly braces (even if the block contains one statement):

  for (...) {
      ...
  }

  If you are wondering why, recall the famous goto bug in Apple's SSL library.

• No wildcard imports. Do not use wildcard imports in the core files. They can cause problems when adding to the code and in some cases even during refactoring. Exceptions are the Tuple classes, Tuple-related utilities, and Flink user programs, when importing operators/functions. Tests are a special case of the user programs.

• No unused imports. Remove all unused imports.

• Use Guava checks. To increase homogeneity, consistently use the Guava methods checkNotNull and checkArgument rather than Apache Commons Validate (see the sketch after this list).

• No raw generic types. Do not use raw generic types, unless strictly necessary (sometimes necessary for signature matches, arrays).

• Suppress warnings. Add annotations to suppress warnings, if they cannot be avoided (such as "unchecked", or "serial").

• Comments. Add comments to your code. What is it doing? Add JavaDocs or inherit them by not adding any comments to the methods. Do not automatically generate comments, and avoid unnecessary comments like:

  i++; // increment by one
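A sketch of the preferred precondition style (the Buffer class is made up for illustration; the Guava imports are real):

import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.base.Preconditions.checkNotNull;

public class Buffer {

    private final byte[] data;
    private final int size;

    public Buffer(byte[] data, int size) {
        // Guava preconditions instead of Apache Commons Validate.
        this.data = checkNotNull(data, "data must not be null");
        checkArgument(size >= 0 && size <= data.length,
            "size must be between 0 and %s, but was %s", data.length, size);
        this.size = size;
    }
}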
Best practices
• Travis: Flink is pre-configured for Travis CI, which can be easily enabled for your private repository fork (it uses GitHub for authentication, so you do not need an additional account). Simply add the Travis CI hook to your repository (Settings -> Webhooks & services -> Add service) and enable tests for the flink repository on Travis.
Setup a development environment

Developing Flink requires:

• A Unix-like environment (we use Linux, Mac OS X, Cygwin)
• git
• Maven (at least version 3.0.4)
• Java 7 or 8
Clone the repository

Apache Flink's source code is stored in a git repository which is mirrored to GitHub. The common way to exchange code on GitHub is to fork the repository into your personal GitHub account. For that, you need to have a GitHub account or create one for free. Forking a repository means that GitHub creates a copy of the forked repository for you. This is done by clicking the fork button on the upper right of the repository website. Once you have a fork of Flink's repository in your personal account, you can clone that repository to your local machine.

git clone https://github.com/<your-user-name>/flink.git

The code is downloaded into a directory called flink.
Proxy Settings

If you are behind a firewall, you may need to provide proxy settings to Maven and your IDE.

For example, the WikipediaEditsSourceTest communicates over IRC and needs a SOCKS proxy server to pass.
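For Maven, proxies go into ~/.m2/settings.xml; a sketch with placeholder host and port values:

<settings>
  <proxies>
    <proxy>
      <id>corp-proxy</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>proxy.example.com</host>
      <port>8080</port>
      <nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>
    </proxy>
  </proxies>
</settings>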
Setup an IDE and import the source code

The Flink committers use IntelliJ IDEA and the Eclipse IDE to develop the Flink code base.

Minimal requirements for an IDE are:

• Support for Java and Scala (also mixed projects)
• Support for Maven with Java and Scala
IntelliJ IDEA

The IntelliJ IDE supports Maven out of the box and offers a plugin for Scala development.

Check out our Setting up IntelliJ guide for details.
Eclipse Scala IDE

For Eclipse users, we recommend using Scala IDE 3.0.3, based on Eclipse Kepler. While this is a slightly older version, we found it to be the version that works most robustly for a complex project like Flink.

Further details, and a guide to newer Scala IDE versions, can be found in the How to setup Eclipse docs.

Note: Before following this setup, make sure to run the build from the command line once (mvn clean install -DskipTests, see above).
1. Download the Scala IDE (preferred) or install the plugin to Eclipse Kepler. See How to setup Eclipse for download links and instructions.
2. Add the "macroparadise" compiler plugin to the Scala compiler. Open "Window" -> "Preferences" -> "Scala" -> "Compiler" -> "Advanced" and put into the "Xplugin" field the path to the macroparadise jar file (typically "/home/-your-user-/.m2/repository/org/scalamacros/paradise_2.10.4/2.0.1/paradise_2.10.4-2.0.1.jar"). Note: If you do not have the jar file, you probably did not run the command line build.
3. Import the Flink Maven projects ("File" -> "Import" -> "Maven" -> "Existing Maven Projects").
4. During the import, Eclipse will ask to automatically install additional Maven build helper plugins.
5. Close the "flink-java8" project. Since Eclipse Kepler does not support Java 8, you cannot develop this project.
Import the source code

Apache Flink uses Apache Maven as its build tool. Most IDEs are capable of importing Maven projects.
Build the code

To build Flink from source code, open a terminal, navigate to the root directory of the Flink source code, and call

mvn clean package

This will build Flink and run all tests. Flink is now installed in build-target.

To build Flink without executing the tests you can call

mvn -DskipTests clean package
How to use Git as a committer

Only the infrastructure team of the ASF has administrative access to the GitHub mirror. Therefore, committers have to push changes to the git repository at the ASF.
Main source repositories

• ASF writable: https://git-wip-us.apache.org/repos/asf/flink.git
• ASF read-only: git://git.apache.org/repos/asf/flink.git
• ASF read-only: https://github.com/apache/flink.git

Note: Flink does not build with Oracle JDK 6, but it does run with Oracle JDK 6.

If you want to build for Hadoop 1, activate the build profile via mvn clean package -DskipTests -Dhadoop.profile=1.
Snapshots (Nightly Builds)

Apache Flink 1.2-SNAPSHOT is our latest development version.

You can download a packaged version of our nightly builds, which include the most recent development code. You can use them if you need a feature before its release. Only builds that pass all tests are published here.

Add the Apache Snapshot repository to your Maven pom.xml:
<repositories>
  <repository>
    <id>apache.snapshots</id>
    <name>Apache Development Snapshot Repository</name>
    <url>https://repository.apache.org/content/repositories/snapshots/</url>
    <releases><enabled>false</enabled></releases>
    <snapshots><enabled>true</enabled></snapshots>
  </repository>
</repositories>
You can now include Apache Flink as a Maven dependency (see above) with version 1.2-SNAPSHOT (or 1.2-SNAPSHOT-hadoop1 for compatibility with old Hadoop 1.x versions).
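For example, a dependency on the flink-java module would look like this (any other Flink module follows the same pattern):

<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-java</artifactId>
  <version>1.2-SNAPSHOT</version>
</dependency>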