From: jamesmalone@apache.org
To: commits@beam.incubator.apache.org
Date: Mon, 20 Jun 2016 22:14:12 -0000
Message-Id: <4403cb5e286649218b91ca3a7d3f4ad7@git.apache.org>
Subject: [3/5] incubator-beam-site git commit: Fixed HTML errors; added link and HTML tests via rake

http://git-wip-us.apache.org/repos/asf/incubator-beam-site/blob/2a61d388/content/capability-matrix/index.html
----------------------------------------------------------------------
diff --git a/content/capability-matrix/index.html b/content/capability-matrix/index.html
index 06992fd..eee24a0 100644
--- a/content/capability-matrix/index.html
+++ b/content/capability-matrix/index.html
@@ -99,7 +99,7 @@

Apache Beam Capability Matrix

-Last updated: 2016-06-14 18:36 PDT
+Last updated: 2016-06-20 14:17 PDT
Apache Beam (incubating) provides a portable API layer for building sophisticated data-parallel processing pipelines that may be executed across a diversity of execution engines, or runners. The core concepts of this layer are based upon the Beam Model (formerly referred to as the Dataflow Model), and are implemented to varying degrees in each Beam runner. To help clarify the capabilities of individual runners, we’ve created the capability matrix below.

[Summary capability-matrix table markup: the diff restructures the table cells (the "+"/"-" lines collapsed here), which mark each capability as fully ("✓"), partially ("~"), or not ("✕") supported for the Beam Model, Google Cloud Dataflow, Apache Flink, and Apache Spark columns, with JIRA links BEAM-25, BEAM-27, BEAM-91, BEAM-101, and BEAM-102 for open gaps.]
What is being computed?

ParDo
  Beam Model - Yes: element-wise processing. Element-wise transformation parameterized by a chunk of user code. Elements are processed in bundles, with initialization and termination hooks. Bundle size is chosen by the runner and cannot be controlled by user code. ParDo processes a main input PCollection one element at a time, but provides side input access to additional PCollections.
  Google Cloud Dataflow - Yes: fully supported. Batch mode uses large bundle sizes. Streaming uses smaller bundle sizes.
  Apache Flink - Yes: fully supported. ParDo itself, as a per-element transformation with UDFs, is fully supported by Flink for both batch and streaming.
  Apache Spark - Yes: fully supported. ParDo applies per-element transformations as a Spark FlatMapFunction.

GroupByKey
  Beam Model - Yes: key grouping. Grouping of key-value pairs per key, window, and pane. (See also other tabs.)
  Google Cloud Dataflow - Yes: fully supported.
  Apache Flink - Yes: fully supported. Uses Flink's keyBy for key grouping. When grouping by window in streaming (creating the panes), the Flink runner uses the Beam code. This guarantees support for all windowing and triggering mechanisms.
  Apache Spark - Partially: group by window in batch only. Uses Spark's groupByKey for grouping. Grouping by window is currently only supported in batch.

Flatten
  Beam Model - Yes: collection concatenation. Concatenates multiple homogeneously typed collections together.
  Google Cloud Dataflow - Yes: fully supported.
  Apache Flink - Yes: fully supported.
  Apache Spark - Yes: fully supported.

Combine
  Beam Model - Yes: associative & commutative aggregation. Application of an associative, commutative operation over all values ("globally") or over all values associated with each key ("per key"). Can be implemented using ParDo, but more efficient implementations often exist.
  Google Cloud Dataflow - Yes: efficient execution.
  Apache Flink - Yes: fully supported. Uses a combiner for pre-aggregation in both batch and streaming.
  Apache Spark - Yes: fully supported. Supports GroupedValues, Globally, and PerKey.

Composite Transforms
  Beam Model - Yes: user-defined transformation subgraphs. Allows easy extensibility for library writers. In the near future, we expect more information to be provided at this level -- customized metadata hooks for monitoring, additional runtime/environment hooks, etc.
  Google Cloud Dataflow - Partially: supported via inlining. Currently composite transformations are inlined during execution. The structure is later recreated from the names, but other transform-level information (if added to the model) will be lost.
  Apache Flink - Partially: supported via inlining.
  Apache Spark - Partially: supported via inlining.

Side Inputs
  Beam Model - Yes: additional elements available during DoFn execution. Side inputs are additional PCollections whose contents are computed during pipeline execution and then made accessible to DoFn code. The exact shape of the side input depends both on the PCollectionView used to describe the access pattern (iterable, map, singleton) and on the window of the element from the main input that is currently being processed.
  Google Cloud Dataflow - Yes: some size restrictions in streaming. Batch mode supports a distributed implementation, but streaming mode may force some size restrictions. Neither mode is able to push lookups directly up into key-based sources.
  Apache Flink - Partially: not supported in streaming (BEAM-102). Supported in batch. Side inputs for streaming are currently a work in progress.
  Apache Spark - Partially: not supported in streaming. A side input is actually a broadcast variable in Spark, so it can't be updated during the life of a job. The Spark runner's implementation of side input is more of an immutable, static side input.

Source API
  Beam Model - Yes: user-defined sources. Allows users to provide additional input sources. Supports both bounded and unbounded data. Includes the hooks necessary to provide efficient parallelization (size estimation, progress information, dynamic splitting, etc.).
  Google Cloud Dataflow - Yes: fully supported.
  Apache Flink - Yes: fully supported.
  Apache Spark - Yes: fully supported.

Aggregators
  Beam Model - Partially: user-provided metrics. Allow transforms to aggregate simple metrics across bundles in a DoFn. Semantically equivalent to using a side output, but support partial results as the transform executes. We will likely want to augment Aggregators to be more useful for processing unbounded data by making them windowed.
  Google Cloud Dataflow - Partially: may miscount in streaming mode. The current model is fully supported in batch mode. In streaming mode, Aggregators may under- or overcount when bundles are retried.
  Apache Flink - Partially: may undercount in streaming. The current model is fully supported in batch. In streaming mode, Aggregators may undercount.
  Apache Spark - Partially: streaming requires more testing. Uses Spark's AccumulatorParam mechanism.

Keyed State
  Beam Model - No: storage per key, per window (BEAM-25). Allows fine-grained access to per-key, per-window persistent state. Necessary for certain use cases (e.g. high-volume windows which store large amounts of data but typically access only small portions of it; complex state machines; etc.) that are not easily or efficiently addressed via Combine or GroupByKey+ParDo.
  Google Cloud Dataflow - No: pending model support. Dataflow already supports keyed state internally, so adding support for this should be easy once the Beam model exposes it.
  Apache Flink - No: pending model support. Flink already supports keyed state, so adding support for this should be easy once the Beam model exposes it.
  Apache Spark - No: pending model support. Spark supports keyed state with mapWithState(), so support should be straightforward.
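The ParDo, GroupByKey, and Combine semantics described above can be illustrated with a plain-Python sketch. This is not the Beam SDK API; all function names here are hypothetical stand-ins for the model's concepts, and the window/pane dimensions of GroupByKey are omitted for brevity:

```python
from collections import defaultdict

def par_do(elements, fn):
    # ParDo: element-wise transformation; fn may emit zero or more outputs per element.
    for e in elements:
        yield from fn(e)

def group_by_key(pairs):
    # GroupByKey: collect all values per key (per-window/per-pane grouping omitted).
    grouped = defaultdict(list)
    for k, v in pairs:
        grouped[k].append(v)
    return dict(grouped)

def combine_per_key(pairs, op, identity):
    # Combine: an associative & commutative operation applied per key. Because op
    # is associative/commutative, a runner may legally pre-aggregate partial
    # results (combiner lifting) before the final merge.
    acc = defaultdict(lambda: identity)
    for k, v in pairs:
        acc[k] = op(acc[k], v)
    return dict(acc)

words = ["a", "b", "a", "c", "a"]
pairs = list(par_do(words, lambda w: [(w, 1)]))
print(group_by_key(pairs))                            # {'a': [1, 1, 1], 'b': [1], 'c': [1]}
print(combine_per_key(pairs, lambda x, y: x + y, 0))  # {'a': 3, 'b': 1, 'c': 1}
```

The word-count pipeline above shows why Combine can be cheaper than GroupByKey followed by ParDo: only the running accumulator per key is kept, never the full value list.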
Where in event time?

Global windows
  Beam Model - Yes: all time. The default window, which covers all of time. (Basically how traditional batch cases fit in the model.)
  Google Cloud Dataflow - Yes: default.
  Apache Flink - Yes: supported.
  Apache Spark - Yes: supported.

Fixed windows
  Beam Model - Yes: periodic, non-overlapping. Fixed-size, timestamp-based windows. (Hourly, daily, etc.)
  Google Cloud Dataflow - Yes: built-in.
  Apache Flink - Yes: supported.
  Apache Spark - Partially: currently only supported in batch.

Sliding windows
  Beam Model - Yes: periodic, overlapping. Possibly overlapping fixed-size, timestamp-based windows. (Every minute, use the last ten minutes of data.)
  Google Cloud Dataflow - Yes: built-in.
  Apache Flink - Yes: supported.
  Apache Spark - Partially: currently only supported in batch.

Session windows
  Beam Model - Yes: activity-based. Based on bursts of activity separated by a gap size. Different per key.
  Google Cloud Dataflow - Yes: built-in.
  Apache Flink - Yes: supported. The runner uses Beam's windowing and triggering logic and code.
  Apache Spark - No: pending Spark engine support.

Custom windows
  Beam Model - Yes: user-defined windows. All windows must implement BoundedWindow, which specifies a max timestamp. Each WindowFn assigns elements to an associated window.
  Google Cloud Dataflow - Yes: supported.
  Apache Flink - Yes: supported. The runner uses Beam's windowing and triggering logic and code.
  Apache Spark - No: pending Spark engine support.

Custom merging windows
  Beam Model - Yes: user-defined merging windows. A custom WindowFn additionally specifies whether and how to merge windows.
  Google Cloud Dataflow - Yes: supported.
  Apache Flink - Yes: supported. The runner uses Beam's windowing and triggering logic and code.
  Apache Spark - No: pending Spark engine support.

Timestamp control
  Beam Model - Yes: output timestamp for window panes. For a grouping transform, such as GBK or Combine, an OutputTimeFn specifies (1) how to combine input timestamps within a window and (2) how to merge aggregated timestamps when windows merge.
  Google Cloud Dataflow - Yes: supported.
  Apache Flink - Yes: supported. The runner uses Beam's windowing and triggering logic and code.
  Apache Spark - No: pending Spark engine support.
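The window-assignment semantics above (fixed, sliding, and session windows) can be sketched in plain Python. This is illustrative only, not the Beam WindowFn API; windows are modeled as half-open (start, end) intervals and all names are hypothetical:

```python
def fixed_windows(ts, size):
    # Fixed windows: each timestamp falls in exactly one [start, start + size) window.
    start = ts - ts % size
    return [(start, start + size)]

def sliding_windows(ts, size, period):
    # Sliding windows: overlapping windows of length `size`, one starting every
    # `period`. A timestamp belongs to every window whose interval contains it.
    last_start = ts - ts % period
    return [(s, s + size) for s in range(last_start, ts - size, -period)]

def session_windows(timestamps, gap):
    # Session windows: bursts of activity separated by at least `gap`; each event
    # extends the current session, and sessions that touch are merged.
    windows = []
    for ts in sorted(timestamps):
        if windows and ts < windows[-1][1]:
            lo, hi = windows[-1]
            windows[-1] = (lo, max(hi, ts + gap))
        else:
            windows.append((ts, ts + gap))
    return windows

print(fixed_windows(7, 5))         # [(5, 10)]
print(sliding_windows(7, 10, 5))   # [(5, 15), (0, 10)]
print(session_windows([1, 2, 10], 3))  # [(1, 5), (10, 13)]
```

Note that fixed and sliding windows are assigned per element from the timestamp alone, while sessions depend on neighboring elements of the same key, which is why session windows require the merging support described under "Custom merging windows".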
When in processing time?

Configurable triggering
  Beam Model - Yes: user customizable. Triggering may be specified by the user (instead of simply driven by hardcoded defaults).
  Google Cloud Dataflow - Yes: fully supported. Fully supported in streaming mode. In batch mode, intermediate trigger firings are effectively meaningless.
  Apache Flink - Yes: fully supported. The runner uses Beam's windowing and triggering logic and code.
  Apache Spark - No.

Event-time triggers
  Beam Model - Yes: relative to event time. Triggers that fire in response to event-time completeness signals, such as watermarks progressing.
  Google Cloud Dataflow - Yes: yes in streaming, fixed granularity in batch. Fully supported in streaming mode. In batch mode, the watermark currently jumps from the beginning of time to the end of time once the input has been fully consumed, thus no additional triggering granularity is available.
  Apache Flink - Yes: fully supported. The runner uses Beam's windowing and triggering logic and code.
  Apache Spark - No.

Processing-time triggers
  Beam Model - Yes: relative to processing time. Triggers that fire in response to processing time advancing.
  Google Cloud Dataflow - Yes: yes in streaming, fixed granularity in batch. Fully supported in streaming mode. In batch mode, from the perspective of triggers, processing time currently jumps from the beginning of time to the end of time once the input has been fully consumed, thus no additional triggering granularity is available.
  Apache Flink - Yes: fully supported. The runner uses Beam's windowing and triggering logic and code.
  Apache Spark - Yes: this is Spark Streaming's native model. Spark processes streams in micro-batches. The micro-batch size is actually a pre-set, fixed time interval. Currently the runner takes the first window size in the pipeline and sets its size as the batch interval. Any following window operations will be considered processing-time windows and will affect triggering.

Count triggers
  Beam Model - Yes: every N elements. Triggers that fire after seeing at least N elements.
  Google Cloud Dataflow - Yes: fully supported. Fully supported in streaming mode. In batch mode, elements are processed in the largest bundles possible, so count-based triggers are effectively meaningless.
  Apache Flink - Yes: fully supported. The runner uses Beam's windowing and triggering logic and code.
  Apache Spark - No.

[Meta]data driven triggers
  Beam Model - No: in response to data (BEAM-101). Triggers that fire in response to attributes of the data being processed.
  Google Cloud Dataflow - No: pending model support.
  Apache Flink - No: pending model support.
  Apache Spark - No: pending model support.

Composite triggers
  Beam Model - Yes: compositions of one or more sub-triggers. Triggers which compose other triggers in more complex structures, such as logical AND, logical OR, early/on-time/late, etc.
  Google Cloud Dataflow - Yes: fully supported.
  Apache Flink - Yes: fully supported. The runner uses Beam's windowing and triggering logic and code.
  Apache Spark - No.

Allowed lateness
  Beam Model - Yes: event-time bound on window lifetimes. A way to bound the useful lifetime of a window (in event time), after which any unemitted results may be materialized, the window contents may be garbage collected, and any additional late data that arrive for the window may be discarded.
  Google Cloud Dataflow - Yes: fully supported. Fully supported in streaming mode. In batch mode no data is ever late.
  Apache Flink - Yes: fully supported. The runner uses Beam's windowing and triggering logic and code.
  Apache Spark - No.

Timers
  Beam Model - No: delayed processing callbacks (BEAM-27). A fine-grained mechanism for performing work at some point in the future, in either the event-time or processing-time domain. Useful for orchestrating delayed events, timeouts, etc. in complex per-key, per-window state machines.
  Google Cloud Dataflow - No: pending model support. Dataflow already supports timers internally, so adding support for this should be easy once the Beam model exposes it.
  Apache Flink - No: pending model support. Flink already supports timers internally, so adding support for this should be easy once the Beam model exposes it.
  Apache Spark - No: pending model support.
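The interaction between the watermark and allowed lateness described above can be sketched in plain Python. This is a simplified illustration of the model's semantics, not runner code; the function name and the three labels are hypothetical:

```python
def classify_pane(window_end, watermark, allowed_lateness):
    # Allowed lateness bounds a window's useful lifetime in event time:
    # once the watermark passes window_end + allowed_lateness, the window's
    # state may be garbage collected and further data for it dropped.
    if watermark < window_end:
        return "on-time"   # window not yet complete; data is not late
    if watermark < window_end + allowed_lateness:
        return "late"      # still accepted; may fire a late pane
    return "dropped"       # past allowed lateness; window may be GC'd

print(classify_pane(10, 5, 3))   # on-time
print(classify_pane(10, 12, 3))  # late
print(classify_pane(10, 20, 3))  # dropped
```

This also makes the batch-mode notes above concrete: if the watermark jumps from the beginning of time to the end of time in one step, every element is classified "on-time" exactly once, so intermediate event-time trigger firings never occur.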
How do refinements relate?

Discarding
  Beam Model - Yes: panes discard elements when fired. Elements are discarded from accumulated state as their pane is fired.
  Google Cloud Dataflow - Yes: fully supported.
  Apache Flink - Yes: fully supported. The runner uses Beam's windowing and triggering logic and code.
  Apache Spark - Yes: fully supported. Spark Streaming natively discards elements after firing.

Accumulating
  Beam Model - Yes: panes accumulate elements across firings. Elements are accumulated in state across multiple pane firings for the same window.
  Google Cloud Dataflow - Yes: fully supported. Requires that the accumulated pane fit in memory after being passed through the combiner (if relevant).
  Apache Flink - Yes: fully supported. The runner uses Beam's windowing and triggering logic and code.
  Apache Spark - No.

Accumulating & Retracting
  Beam Model - No: accumulation plus retraction of old panes (BEAM-91). Elements are accumulated across multiple pane firings and old emitted values are retracted. Also known as "backsies" ;-D
  Google Cloud Dataflow - No: pending model support.
  Apache Flink - No: pending model support.
  Apache Spark - No: pending model support.
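The difference between discarding and accumulating panes can be shown with a small plain-Python sketch (illustrative only, not the Beam API; names are hypothetical):

```python
def fire_panes(values_per_firing, mode):
    # Discarding: each fired pane contains only the elements that arrived
    # since the previous firing for this window.
    # Accumulating: each fired pane re-emits everything seen so far in the
    # window, so downstream consumers can simply overwrite earlier results.
    panes, seen = [], []
    for batch in values_per_firing:
        seen.extend(batch)
        panes.append(list(batch) if mode == "discarding" else list(seen))
    return panes

print(fire_panes([[1, 2], [3]], "discarding"))    # [[1, 2], [3]]
print(fire_panes([[1, 2], [3]], "accumulating"))  # [[1, 2], [1, 2, 3]]
```

Retractions (the third mode) would additionally emit a withdrawal of the previous pane's value before each new pane, which is why it depends on model support that does not yet exist (BEAM-91).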
http://git-wip-us.apache.org/repos/asf/incubator-beam-site/blob/2a61d388/content/coming-soon.html
----------------------------------------------------------------------
diff --git a/content/coming-soon.html b/content/coming-soon.html
index 8174ac8..29cfd8f 100644
--- a/content/coming-soon.html
+++ b/content/coming-soon.html
@@ -98,13 +98,11 @@
Documentation Coming Soon

@@ -112,7 +110,6 @@

Go Back to the main Beam site.

http://git-wip-us.apache.org/repos/asf/incubator-beam-site/blob/2a61d388/content/contribution-guide/index.html
----------------------------------------------------------------------
diff --git a/content/contribution-guide/index.html b/content/contribution-guide/index.html
index c4e22a7..28a08c8 100644
--- a/content/contribution-guide/index.html
+++ b/content/contribution-guide/index.html
@@ -174,9 +174,9 @@

Engage

Mailing list(s)

We discuss design and implementation issues on the dev@beam.incubator.apache.org mailing list, which is archived here. Join by emailing dev-subscribe@beam.incubator.apache.org.
If you are interested, you can also join user@beam.incubator.apache.org and commits@beam.incubator.apache.org.
Apache JIRA

We use Apache JIRA as an issue tracking and project management tool, as well as a way to communicate among a very diverse and distributed set of contributors. To be able to gather feedback, avoid frustration, and avoid duplicated effort, all Beam-related work should be tracked there.

@@ -223,9 +223,10 @@

Clone Beam’s read-only GitHub mirror.

$ git clone https://github.com/apache/incubator-beam.git
$ cd incubator-beam

Add your forked repository as an additional Git remote, where you’ll push your changes.

@@ -234,7 +235,7 @@ $ cd incubator-beam

You are now ready to start developing!

Create a branch in your fork

You’ll work on your contribution in a branch in your own (forked) repository. Create a local branch, initialized with the state of the branch you expect your changes to be merged into. Keep in mind that we use several branches, including master, feature-specific, and release-specific branches. If you are unsure, initialize with the state of the master branch.
$ git fetch --all
 $ git checkout -b <my-branch> origin/master
@@ -244,10 +245,11 @@ $ git checkout -b <my-branch> origin/master

Syncing and pushing your branch

Periodically while you work, and certainly before submitting a pull request, you should update your branch with the most recent changes to the target branch.

$ git pull --rebase
Remember to always use the --rebase parameter to avoid extraneous merge commits.
To push your local, committed changes to your (forked) repository on GitHub, run:

@@ -258,8 +260,9 @@ $ git checkout -b <my-branch> origin/master

For contributions to the Java code, run unit tests locally via Maven. Alternatively, you can use Travis-CI.

$ mvn clean verify

Review

Once the initial code is complete and the tests pass, it’s time to start the code review process. We review and discuss all code, no matter who authors it. It’s a great way to build community, since you can learn from other developers, and they become familiar with your contribution. It also builds a strong project by encouraging a high quality bar and keeping code consistent throughout the project.

@@ -267,8 +270,9 @@ $ git checkout -b <my-branch> origin/master

Create a pull request

Organize your commits to make your reviewer’s job easier. Use the following command to re-order, squash, edit, or change the description of individual commits.

$ git rebase -i origin/master

Navigate to the Beam GitHub mirror to create a pull request. The title of the pull request should be strictly in the following format:

@@ -315,19 +319,22 @@ $ git push <GitHub_user> --delete <my-branch>
One-time Setup

Add the Apache Git remote to your local clone by running:

$ git remote add apache https://git-wip-us.apache.org/repos/asf/incubator-beam.git
We recommend renaming the origin remote to github, to avoid confusion when dealing with this many remotes.
$ git remote rename origin github
For the github remote, add an additional fetch reference, which will cause every pull request to be made available as a remote branch in your workspace.
$ git config --local --add remote.github.fetch \
    '+refs/pull/*/head:refs/remotes/github/pr/*'

You can confirm your configuration by running the following command.

@@ -364,10 +371,11 @@ $ git checkout -b finish-pr-<pull-request-#> github/pr/<pull-
  • Reorganize commits that are part of the pull request, such as squashing them into fewer commits that make sense from a historical perspective.
  • You will often need the following command, assuming you’ll be merging changes into the master branch:
    $ git rebase -i apache/master

    Please make sure to retain authorship of original commits to give proper credit to the contributor. You are welcome to change their commits slightly (e.g., fix a typo) and squash them, but more substantive changes should be a separate commit and review.

    @@ -380,12 +388,13 @@
    $ git merge --no-ff \
        -m $'[BEAM-<JIRA-issue-#>] <Title>\n\nThis closes #<pull-request-#>' \
        finish-pr-<pull-request-#>
    Always use the --no-ff option and the specific commit message “This closes #<pull-request-#>” – it ensures proper marking in the tooling. It would be nice to include additional information in the merge commit message, such as the title and summary of the pull request.
    At this point, you want to ensure everything is right. Test it with mvn verify. Run gitk or git log --graph, etc. When you are happy with how it looks, push it. This is the point of no return – proceed with caution.
    $ git push apache HEAD:master

    Done. You can delete the local finish-pr-<pull-request-#> branch if you like.

http://git-wip-us.apache.org/repos/asf/incubator-beam-site/blob/2a61d388/content/docs/index.html
----------------------------------------------------------------------
diff --git a/content/docs/index.html b/content/docs/index.html
index ba64568..7327bed 100644
--- a/content/docs/index.html
+++ b/content/docs/index.html
@@ -98,13 +98,11 @@
    Apache Beam Documentation