flink-issues mailing list archives

From tzulitai <...@git.apache.org>
Subject [GitHub] flink pull request #3964: [FLINK-6660][docs] expand the connectors overview ...
Date Tue, 23 May 2017 07:13:41 GMT
Github user tzulitai commented on a diff in the pull request:

    --- Diff: docs/dev/connectors/index.md ---
    @@ -25,22 +25,54 @@ specific language governing permissions and limitations
     under the License.
    -Connectors provide code for interfacing with various third-party systems.
    +* toc
    -Currently these systems are supported: (Please select the respective documentation page from the navigation on the left.)
    +## Predefined Sources and Sinks
    - * [Apache Kafka](https://kafka.apache.org/) (sink/source)
    - * [Elasticsearch](https://elastic.co/) (sink)
    - * [Hadoop FileSystem](http://hadoop.apache.org) (sink)
    - * [RabbitMQ](http://www.rabbitmq.com/) (sink/source)
    - * [Amazon Kinesis Streams](http://aws.amazon.com/kinesis/streams/) (sink/source)
    - * [Twitter Streaming API](https://dev.twitter.com/docs/streaming-apis) (source)
    - * [Apache NiFi](https://nifi.apache.org) (sink/source)
    - * [Apache Cassandra](https://cassandra.apache.org/) (sink)
    +A few basic data sources and sinks are built into Flink and are always available.
    +The [predefined data sources]({{ site.baseurl }}/dev/datastream_api.html#data-sources) include reading from files, directories, and sockets, and
    +ingesting data from collections and iterators.
    +The [predefined data sinks]({{ site.baseurl }}/dev/datastream_api.html#data-sinks) support writing to files, to stdout and stderr, and to sockets.
    +## Bundled Connectors
    +Connectors provide code for interfacing with various third-party systems. Currently these systems are supported:
    -To run an application using one of these connectors, additional third party
    -components are usually required to be installed and launched, e.g. the servers
    -for the message queues. Further instructions for these can be found in the
    -corresponding subsections.
    + * [Apache Kafka](kafka.html) (sink/source)
    + * [Apache Cassandra](cassandra.html) (sink)
    + * [Amazon Kinesis Streams](kinesis.html) (sink/source)
    + * [Elasticsearch](elasticsearch.html) (sink)
    + * [Hadoop FileSystem](filesystem_sink.html) (sink)
    + * [RabbitMQ](rabbitmq.html) (sink/source)
    + * [Apache NiFi](nifi.html) (sink/source)
    + * [Twitter Streaming API](twitter.html) (source)
    +Keep in mind that to use one of these connectors in an application, additional third party
    +components are usually required, e.g. servers for the data stores or message queues.
    +Note also that while the streaming connectors listed in this section are part of the
    +Flink project and are included in source releases, they are not included in the binary distribution.
    +Further instructions can be found in the corresponding subsections.
    --- End diff ---
    The "further instructions" in all the connectors, I think, just link to https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/linking.html. Perhaps we can do that here, instead of repeating the same instruction in the connector pages.
    Either that, or we actually put some effort into adding more per-connector-specific detail (e.g. exactly which dependencies to bundle with the uber jar) in each respective page.
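
    For instance, a connector page could show the concrete Maven dependency to add instead of linking away; a hypothetical sketch for the Kafka connector page (the artifact id and version here are illustrative only, not a confirmed coordinate):

    ```xml
    <!-- Illustrative only: check Maven Central for the actual
         artifact id and version matching your Flink/Scala versions. -->
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-connector-kafka-0.10_2.10</artifactId>
      <version>1.2.1</version>
    </dependency>
    ```

    Since connector jars are not part of the binary distribution, a dependency like this has to be bundled into the application's uber jar (e.g. via the maven-shade-plugin) or placed on the cluster's classpath.
    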

