From: guozhang@apache.org
To: commits@kafka.apache.org
Date: Thu, 08 Feb 2018 17:32:01 +0000
Subject: [kafka] branch trunk updated: HOTFIX: Fix broken javadoc links on web docs(#4543)

This is an automated email from the ASF dual-hosted git repository.

guozhang pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git

The following commit(s) were added to refs/heads/trunk by this push:
     new 077fd9c  HOTFIX: Fix broken javadoc links on web docs(#4543)
077fd9c is described below

commit 077fd9ced3f5c7101c0d4e91c0ae3aeb05f9318f
Author: Guozhang Wang
AuthorDate: Thu Feb 8 09:31:58 2018 -0800

    HOTFIX: Fix broken javadoc links on web docs(#4543)

    Reviewers: Matthias J. Sax
---
 docs/streams/developer-guide/app-reset-tool.html  |   2 +-
 docs/streams/developer-guide/config-streams.html  |  32 +++---
 docs/streams/developer-guide/dsl-api.html         | 110 ++++++++++-----------
 .../developer-guide/interactive-queries.html      |   6 +-
 docs/streams/developer-guide/processor-api.html   |   4 +-
 docs/upgrade.html                                 |   2 +-
 6 files changed, 78 insertions(+), 78 deletions(-)

diff --git a/docs/streams/developer-guide/app-reset-tool.html b/docs/streams/developer-guide/app-reset-tool.html
index 769dc39..84b6930 100644
--- a/docs/streams/developer-guide/app-reset-tool.html
+++ b/docs/streams/developer-guide/app-reset-tool.html
@@ -141,7 +141,7 @@ use either of these methods:

  • The API method KafkaStreams#cleanUp() in your application code.
  • -
  • Manually delete the corresponding local state directory (default location: /var/lib/kafka-streams/<application.id>). For more information, see state.dir StreamsConfig class.
  • +
  • Manually delete the corresponding local state directory (default location: /var/lib/kafka-streams/<application.id>). For more information, see Streams javadocs.
diff --git a/docs/streams/developer-guide/config-streams.html b/docs/streams/developer-guide/config-streams.html index 8fbb6d5..1f18a95 100644 --- a/docs/streams/developer-guide/config-streams.html +++ b/docs/streams/developer-guide/config-streams.html @@ -59,7 +59,7 @@

Configuration parameter reference

-

This section contains the most common Streams configuration parameters. For a full reference, see the Streams and Client Javadocs.

+

This section contains the most common Streams configuration parameters. For a full reference, see the Streams Javadocs.

Optional configuration parameters

-

Here are the optional Streams configuration parameters, sorted by level of importance:

+

Here are the optional Streams javadocs, sorted by level of importance:

  • High: These parameters can have a significant impact on performance. Take care when deciding the values of these parameters.
  • @@ -306,20 +306,20 @@ can be caused by corrupt data, incorrect serialization logic, or unhandled record types. These exception handlers are available:

-

default.production.exception.handler

+

default.production.exception.handler

The default production exception handler allows you to manage exceptions triggered when trying to interact with a broker - such as attempting to produce a record that is too large. By default, Kafka provides and uses the DefaultProductionExceptionHandler + such as attempting to produce a record that is too large. By default, Kafka provides and uses the DefaultProductionExceptionHandler that always fails when these exceptions occur.

Each exception handler can return a FAIL or CONTINUE depending on the record and the exception thrown. Returning FAIL will signal that Streams should shut down and CONTINUE will signal that Streams @@ -399,7 +399,7 @@
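To make that FAIL/CONTINUE contract concrete, here is a minimal sketch of a custom handler that skips oversized records but fails on everything else; the class name and decision logic are illustrative and not part of this commit:

import java.util.Map;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.RecordTooLargeException;
import org.apache.kafka.streams.errors.ProductionExceptionHandler;

public class IgnoreRecordTooLargeHandler implements ProductionExceptionHandler {

    @Override
    public void configure(final Map<String, ?> configs) {
        // No extra configuration needed for this sketch.
    }

    @Override
    public ProductionExceptionHandlerResponse handle(final ProducerRecord<byte[], byte[]> record,
                                                     final Exception exception) {
        if (exception instanceof RecordTooLargeException) {
            return ProductionExceptionHandlerResponse.CONTINUE;  // drop the record and keep processing
        }
        return ProductionExceptionHandlerResponse.FAIL;          // shut Streams down for anything else
    }
}

Such a handler would then be registered via default.production.exception.handler, e.g. streamsSettings.put(StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG, IgnoreRecordTooLargeHandler.class).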

partition.grouper

A partition grouper creates a list of stream tasks from the partitions of source topics, where each created task is assigned with a group of source topic partitions. - The default implementation provided by Kafka Streams is DefaultPartitionGrouper. + The default implementation provided by Kafka Streams is DefaultPartitionGrouper. It assigns each task with one partition for each of the source topic partitions. The generated number of tasks equals the largest number of partitions among the input topics. Usually an application does not need to customize the partition grouper.
@@ -426,10 +426,10 @@

timestamp.extractor

-

A timestamp extractor pulls a timestamp from an instance of ConsumerRecord. +

A timestamp extractor pulls a timestamp from an instance of ConsumerRecord. Timestamps are used to control the progress of streams.

The default extractor is - FailOnInvalidTimestamp. + FailOnInvalidTimestamp. This extractor retrieves built-in timestamps that are automatically embedded into Kafka messages by the Kafka producer client since Kafka version 0.10. @@ -451,19 +451,19 @@

If you have data with invalid timestamps and want to process it, then there are two alternative extractors available. Both work on built-in timestamps, but handle invalid timestamps differently.

    -
  • LogAndSkipOnInvalidTimestamp: +
  • LogAndSkipOnInvalidTimestamp: This extractor logs a warn message and returns the invalid timestamp to Kafka Streams, which will not process but silently drop the record. This log-and-skip strategy allows Kafka Streams to make progress instead of failing if there are records with an invalid built-in timestamp in your input data.
  • -
  • UsePreviousTimeOnInvalidTimestamp. +
  • UsePreviousTimeOnInvalidTimestamp. This extractor returns the record’s built-in timestamp if it is valid (i.e. not negative). If the record does not have a valid built-in timestamp, the extractor returns the previously extracted valid timestamp from a record of the same topic partition as the current record as a timestamp estimation. If no timestamp can be estimated, it throws an exception.

Another built-in extractor is - WallclockTimestampExtractor. + WallclockTimestampExtractor. This extractor does not actually “extract” a timestamp from the consumed record but rather returns the current time in milliseconds from the system clock (think: System.currentTimeMillis()), which effectively means Streams will operate on the basis of the so-called processing-time of events.
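If none of the built-in extractors fit, a custom TimestampExtractor can be plugged in instead. A minimal sketch, assuming the record value itself carries an epoch-millisecond event time (the payload layout and the fallback rule are assumptions for illustration):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

public class PayloadTimestampExtractor implements TimestampExtractor {

    @Override
    public long extract(final ConsumerRecord<Object, Object> record, final long previousTimestamp) {
        final Object value = record.value();
        if (value instanceof Long) {
            return (Long) value;      // assume the value is the event time in epoch milliseconds
        }
        return previousTimestamp;     // fall back to the previously extracted valid timestamp
    }
}

The class is then registered through the timestamp extractor configuration (StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG).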

@@ -515,9 +515,9 @@

Kafka consumers and producer configuration parameters

-

You can specify parameters for the Kafka consumers and producers that are used internally. The consumer and producer settings +

You can specify parameters for the Kafka consumers and producers that are used internally. The consumer and producer settings are defined by specifying parameters in a StreamsConfig instance.

-

In this example, the Kafka consumer session timeout is configured to be 60000 milliseconds in the Streams settings:

+

In this example, the Kafka consumer session timeout is configured to be 60000 milliseconds in the Streams settings:

Properties streamsSettings = new Properties();
 // Example of a "normal" setting for Kafka Streams
 streamsSettings.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker-01:9092");
@@ -603,7 +603,7 @@
           

rocksdb.config.setter

The RocksDB configuration. Kafka Streams uses RocksDB as the default storage engine for persistent stores. To change the default - configuration for RocksDB, implement RocksDBConfigSetter and provide your custom class via rocksdb.config.setter.

+ configuration for RocksDB, implement RocksDBConfigSetter and provide your custom class via rocksdb.config.setter.

Here is an example that adjusts the memory size consumed by RocksDB.

    public static class CustomRocksDBConfig implements RocksDBConfigSetter {
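        // NOTE: the diff context cuts the example off at this point. A sketch of how such a
        // config setter typically continues; the cache and buffer values are illustrative only.
        // Options and BlockBasedTableConfig come from the org.rocksdb package.
        @Override
        public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
            // Use a smaller block cache and larger blocks than the RocksDB defaults.
            BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
            tableConfig.setBlockCacheSize(16 * 1024 * 1024L);
            tableConfig.setBlockSize(16 * 1024L);
            tableConfig.setCacheIndexAndFilterBlocks(true);
            options.setTableFormatConfig(tableConfig);
            options.setMaxWriteBufferNumber(2);
        }
    }

    // Register it via rocksdb.config.setter (assumes a Properties object used to build StreamsConfig):
    // streamsSettings.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, CustomRocksDBConfig.class);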
 
@@ -704,7 +704,7 @@
             

Note

A future version of Kafka Streams will allow developers to set their own app-specific configuration settings through StreamsConfig as well, which can then be accessed through - ProcessorContext.

+ ProcessorContext.

diff --git a/docs/streams/developer-guide/dsl-api.html b/docs/streams/developer-guide/dsl-api.html index ba03e2e..f8fa8c9 100644 --- a/docs/streams/developer-guide/dsl-api.html +++ b/docs/streams/developer-guide/dsl-api.html @@ -95,7 +95,7 @@

After the application is run, the defined processor topologies are continuously executed (i.e., the processing plan is put into action). A step-by-step guide for writing a stream processing application using the DSL is provided below.

-

For a complete list of available API functionality, see also the Kafka Streams Javadocs.

+

For a complete list of available API functionality, see also the Streams API docs.

Creating source streams from Kafka

@@ -119,7 +119,7 @@

Creates a KStream from the specified Kafka input topics and interprets the data as a record stream. A KStream represents a partitioned record stream. - (details)

+ (details)

In the case of a KStream, the local KStream instance of every application instance will be populated with data from only a subset of the partitions of the input topic. Collectively, across all application instances, all input topic partitions are read and processed.

@@ -153,7 +153,7 @@

Reads the specified Kafka input topic into a KTable. The topic is interpreted as a changelog stream, where records with the same key are interpreted as UPSERT aka INSERT/UPDATE (when the record value is not null) or as DELETE (when the value is null) for that key. - (details)

+ (details)

In the case of a KStream, the local KStream instance of every application instance will be populated with data from only a subset of the partitions of the input topic. Collectively, across all application instances, all input topic partitions are read and processed.

@@ -178,7 +178,7 @@

Reads the specified Kafka input topic into a GlobalKTable. The topic is interpreted as a changelog stream, where records with the same key are interpreted as UPSERT aka INSERT/UPDATE (when the record value is not null) or as DELETE (when the value is null) for that key. - (details)

+ (details)

In the case of a GlobalKTable, the local GlobalKTable instance of every application instance will be populated with data from only a subset of the partitions of the input topic. Collectively, across all application instances, all input topic partitions are read and processed.

@@ -250,7 +250,7 @@

Branch (or split) a KStream based on the supplied predicates into one or more KStream instances. - (details)

+ (details)

Predicates are evaluated in order. A record is placed to one and only one output stream on the first match: if the n-th predicate evaluates to true, the record is placed to the n-th stream. If no predicate matches, the record is dropped.
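A sketch of how branch is typically used; the stream contents and predicates are illustrative:

KStream<String, Long> stream = ...;

// Split the stream into three branches; the array index matches the order of the predicates.
KStream<String, Long>[] branches = stream.branch(
    (key, value) -> value < 0,   // first branch: negative values (first match wins)
    (key, value) -> value == 0,  // second branch: zeros
    (key, value) -> value > 0    // third branch: positive values
);
KStream<String, Long> negatives = branches[0];
KStream<String, Long> zeros     = branches[1];
KStream<String, Long> positives = branches[2];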

@@ -278,8 +278,8 @@

Evaluates a boolean function for each element and retains those for which the function returns true. - (KStream details, - KTable details)

+ (KStream details, + KTable details)

KStream<String, Long> stream = ...;
 
 // A filter that selects (keeps) only positive numbers
@@ -305,8 +305,8 @@
                         
                     
                         

Evaluates a boolean function for each element and drops those for which the function returns true. - (KStream details, - KTable details)

+ (KStream details, + KTable details)

KStream<String, Long> stream = ...;
 
 // An inverse filter that discards any negative numbers or zero
@@ -332,7 +332,7 @@
                     
                         

Takes one record and produces zero, one, or more records. You can modify the record keys and values, including their types. - (details)

+ (details)

Marks the stream for data re-partitioning: Applying a grouping or a join after flatMap will result in re-partitioning of the records. If possible use flatMapValues instead, which will not cause data re-partitioning.
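A sketch with illustrative input and output types (KeyValue and Arrays come from org.apache.kafka.streams and java.util, respectively):

KStream<Long, String> stream = ...;

// Each input record produces two output records; both the key type and the value type change.
KStream<String, Integer> transformed = stream.flatMap(
    (key, value) -> Arrays.asList(
        KeyValue.pair(value.toUpperCase(), 1000),
        KeyValue.pair(value.toLowerCase(), 9000)
    )
);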

@@ -361,7 +361,7 @@

Takes one record and produces zero, one, or more records, while retaining the key of the original record. You can modify the record values and the value type. - (details)

+ (details)

flatMapValues is preferable to flatMap because it will not cause data re-partitioning. However, you cannot modify the key or key type like flatMap does.

// Split a sentence into words.
@@ -381,7 +381,7 @@
                         
                     
                         

Terminal operation. Performs a stateless action on each record. - (details)

+ (details)

You would use foreach to cause side effects based on the input data (similar to peek) and then stop further processing of the input data (unlike peek, which is not a terminal operation).

Note on processing guarantees: Any side effects of an action (such as writing to external systems) are not @@ -410,7 +410,7 @@

Groups the records by the existing key. - (details)

+ (details)

Grouping is a prerequisite for aggregating a stream or a table and ensures that data is properly partitioned (“keyed”) for subsequent operations.

When to set explicit SerDes: @@ -455,8 +455,8 @@

Groups the records by a new key, which may be of a different key type. When grouping a table, you may also specify a new value and value type. groupBy is a shorthand for selectKey(...).groupByKey(). - (KStream details, - KTable details)

+ (KStream details, + KTable details)

Grouping is a prerequisite for aggregating a stream or a table and ensures that data is properly partitioned (“keyed”) for subsequent operations.

When to set explicit SerDes: @@ -532,7 +532,7 @@

Takes one record and produces one record. You can modify the record key and value, including their types. - (details)

+ (details)

Marks the stream for data re-partitioning: Applying a grouping or a join after map will result in re-partitioning of the records. If possible use mapValues instead, which will not cause data re-partitioning.
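A brief map sketch under illustrative types (KeyValue is org.apache.kafka.streams.KeyValue):

KStream<byte[], String> stream = ...;

// Derive a new key and a new value from each record; both the key type and the value type change.
KStream<String, Integer> transformed = stream.map(
    (key, value) -> KeyValue.pair(value.toLowerCase(), value.length())
);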

@@ -564,8 +564,8 @@

Takes one record and produces one record, while retaining the key of the original record. You can modify the record value and the value type. - (KStream details, - KTable details)

+ (KStream details, + KTable details)

mapValues is preferable to map because it will not cause data re-partitioning. However, it does not allow you to modify the key or key type like map does.

KStream<byte[], String> stream = ...;
@@ -591,7 +591,7 @@
                         
                     
                         

Performs a stateless action on each record, and returns an unchanged stream. - (details)

+ (details)

You would use peek to cause side effects based on the input data (similar to foreach) and continue processing the input data (unlike foreach, which is a terminal operation). peek returns the input stream as-is; if you need to modify the input stream, use map or mapValues instead.

@@ -623,7 +623,7 @@

Terminal operation. Prints the records to System.out. See Javadocs for serde and toString() caveats. - (details)

+ (details)

Calling print() is the same as calling foreach((key, value) -> System.out.println(key + ", " + value))

KStream<byte[], String> stream = ...;
 // print to sysout
@@ -641,7 +641,7 @@
                         
                     
                         

Assigns a new key – possibly of a new key type – to each record. - (details)

+ (details)

Calling selectKey(mapper) is the same as calling map((key, value) -> mapper(key, value), value).

Marks the stream for data re-partitioning: Applying a grouping or a join after selectKey will result in re-partitioning of the records.
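A short sketch; the key-derivation logic is illustrative:

KStream<byte[], String> stream = ...;

// Derive a new String key from the record value; the value itself is unchanged.
KStream<String, String> rekeyed = stream.selectKey(
    (key, value) -> value.split(" ")[0]
);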

@@ -669,7 +669,7 @@

Get the changelog stream of this table. - (details)

+ (details)

KTable<byte[], String> table = ...;
 
 // Also, a variant of `toStream` exists that allows you
@@ -773,8 +773,8 @@
                             

Rolling aggregation. Aggregates the values of (non-windowed) records by the grouped key. Aggregating is a generalization of reduce and allows, for example, the aggregate value to have a different type than the input values. - (KGroupedStream details, - KGroupedTable details)

+ (KGroupedStream details, + KGroupedTable details)

When aggregating a grouped stream, you must provide an initializer (e.g., aggValue = 0) and an “adder” aggregator (e.g., aggValue + curValue). When aggregating a grouped table, you must provide a “subtractor” aggregator (think: aggValue - oldValue).
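A sketch of both variants, using illustrative types and store names (Bytes is org.apache.kafka.common.utils.Bytes):

KGroupedStream<byte[], String> groupedStream = ...;
KGroupedTable<byte[], String> groupedTable = ...;

// Stream aggregation: sum up the lengths of the String values per key; the aggregate type (Long)
// differs from the input value type (String).
KTable<byte[], Long> aggregatedStream = groupedStream.aggregate(
    () -> 0L,                                                       // initializer
    (aggKey, newValue, aggValue) -> aggValue + newValue.length(),   // adder
    Materialized.<byte[], Long, KeyValueStore<Bytes, byte[]>>as("aggregated-stream-store")
        .withValueSerde(Serdes.Long())
);

// Table aggregation additionally needs a subtractor for values that are removed or replaced.
KTable<byte[], Long> aggregatedTable = groupedTable.aggregate(
    () -> 0L,                                                       // initializer
    (aggKey, newValue, aggValue) -> aggValue + newValue.length(),   // adder
    (aggKey, oldValue, aggValue) -> aggValue - oldValue.length(),   // subtractor
    Materialized.<byte[], Long, KeyValueStore<Bytes, byte[]>>as("aggregated-table-store")
        .withValueSerde(Serdes.Long())
);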

@@ -876,8 +876,8 @@ Aggregates the values of records, per window, by the grouped key. Aggregating is a generalization of reduce and allows, for example, the aggregate value to have a different type than the input values. - (TimeWindowedKStream details, - SessionWindowedKStream details)

+ (TimeWindowedKStream details, + SessionWindowedKStream details)

You must provide an initializer (e.g., aggValue = 0), “adder” aggregator (e.g., aggValue + curValue), and a window. When windowing based on sessions, you must additionally provide a “session merger” aggregator (e.g., mergedAggValue = leftAggValue + rightAggValue).

@@ -971,8 +971,8 @@

Rolling aggregation. Counts the number of records by the grouped key. - (KGroupedStream details, - KGroupedTable details)

+ (KGroupedStream details, + KGroupedTable details)

Several variants of count exist, see Javadocs for details.

KGroupedStream<String, Long> groupedStream = ...;
 KGroupedTable<String, Long> groupedTable = ...;
@@ -1002,8 +1002,8 @@
                         
                             

Windowed aggregation. Counts the number of records, per window, by the grouped key. - (TimeWindowedKStream details, - SessionWindowedKStream details)

+ (TimeWindowedKStream details, + SessionWindowedKStream details)

The windowed count turns a TimeWindowedKStream<K, V> or SessionWindowedKStream<K, V> into a windowed KTable<Windowed<K>, V>.

Several variants of count exist, see Javadocs for details.

@@ -1036,8 +1036,8 @@

Rolling aggregation. Combines the values of (non-windowed) records by the grouped key. The current record value is combined with the last reduced value, and a new reduced value is returned. The result value type cannot be changed, unlike aggregate. - (KGroupedStream details, - KGroupedTable details)

+ (KGroupedStream details, + KGroupedTable details)

When reducing a grouped stream, you must provide an “adder” reducer (e.g., aggValue + curValue). When reducing a grouped table, you must additionally provide a “subtractor” reducer (e.g., aggValue - oldValue).
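A sketch of both reduce variants with illustrative types:

KGroupedStream<String, Long> groupedStream = ...;
KGroupedTable<String, Long> groupedTable = ...;

// Stream reduction: sum the Long values per key; the result type must stay Long.
KTable<String, Long> reducedStream = groupedStream.reduce(
    (aggValue, newValue) -> aggValue + newValue      // adder
);

// Table reduction additionally needs a subtractor.
KTable<String, Long> reducedTable = groupedTable.reduce(
    (aggValue, newValue) -> aggValue + newValue,     // adder
    (aggValue, oldValue) -> aggValue - oldValue      // subtractor
);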

@@ -1120,8 +1120,8 @@ The current record value is combined with the last reduced value, and a new reduced value is returned. Records with null key or value are ignored. The result value type cannot be changed, unlike aggregate. - (TimeWindowedKStream details, - SessionWindowedKStream details)

+ (TimeWindowedKStream details, + SessionWindowedKStream details)

The windowed reduce turns a TimeWindowedKStream<K, V> or a SessionWindowedKStream<K, V> into a windowed KTable<Windowed<K>, V>.

Several variants of reduce exist, see Javadocs for details.

@@ -1646,7 +1646,7 @@

Performs an INNER JOIN of this stream with another stream. Even though this operation is windowed, the joined stream will be of type KStream<K, ...> rather than KStream<Windowed<K>, ...>. - (details)

+ (details)

Data must be co-partitioned: The input data for both sides must be co-partitioned.

Causes data re-partitioning of a stream if and only if the stream was marked for re-partitioning (if both are marked, both are re-partitioned).

Several variants of join exists, see the Javadocs for details.

@@ -1705,7 +1705,7 @@

Performs a LEFT JOIN of this stream with another stream. Even though this operation is windowed, the joined stream will be of type KStream<K, ...> rather than KStream<Windowed<K>, ...>. - (details)

+ (details)

Data must be co-partitioned: The input data for both sides must be co-partitioned.

Causes data re-partitioning of a stream if and only if the stream was marked for re-partitioning (if both are marked, both are re-partitioned).

Several variants of leftJoin exists, see the Javadocs for details.

@@ -1767,7 +1767,7 @@

Performs an OUTER JOIN of this stream with another stream. Even though this operation is windowed, the joined stream will be of type KStream<K, ...> rather than KStream<Windowed<K>, ...>. - (details)

+ (details)

Data must be co-partitioned: The input data for both sides must be co-partitioned.

Causes data re-partitioning of a stream if and only if the stream was marked for re-partitioning (if both are marked, both are re-partitioned).

Several variants of outerJoin exists, see the Javadocs for details.

@@ -1828,7 +1828,7 @@ The semantics of the various stream-stream join variants are explained below. To improve the readability of the table, assume that (1) all records have the same key (and thus the key in the table is omitted), (2) all records belong to a single join window, and (3) all records are processed in timestamp order. The columns INNER JOIN, LEFT JOIN, and OUTER JOIN denote what is passed as arguments to the user-supplied - ValueJoiner for the join, leftJoin, and + ValueJoiner for the join, leftJoin, and outerJoin methods, respectively, whenever a new input record is received on either side of the join. An empty table cell denotes that the ValueJoiner is not called at all.

@@ -1994,7 +1994,7 @@

Performs an INNER JOIN of this table with another table. The result is an ever-updating KTable that represents the “current” result of the join. - (details)

+ (details)

Data must be co-partitioned: The input data for both sides must be co-partitioned.

KTable<String, Long> left = ...;
 KTable<String, Double> right = ...;
@@ -2040,7 +2040,7 @@
                                 
                             

Performs a LEFT JOIN of this table with another table. - (details)

+ (details)

Data must be co-partitioned: The input data for both sides must be co-partitioned.

KTable<String, Long> left = ...;
 KTable<String, Double> right = ...;
@@ -2089,7 +2089,7 @@
                                 
                             

Performs an OUTER JOIN of this table with another table. - (details)

+ (details)

Data must be co-partitioned: The input data for both sides must be co-partitioned.

KTable<String, Long> left = ...;
 KTable<String, Double> right = ...;
@@ -2138,7 +2138,7 @@
                             The semantics of the various table-table join variants are explained below.
                             To improve the readability of the table, you can assume that (1) all records have the same key (and thus the key in the table is omitted) and that (2) all records are processed in timestamp order.
                             The columns INNER JOIN, LEFT JOIN, and OUTER JOIN denote what is passed as arguments to the user-supplied
-                            ValueJoiner for the join, leftJoin, and
+                            ValueJoiner for the join, leftJoin, and
                             outerJoin methods, respectively, whenever a new input record is received on either side of the join.  An empty table
                             cell denotes that the ValueJoiner is not called at all.

@@ -2302,7 +2302,7 @@

Performs an INNER JOIN of this stream with the table, effectively doing a table lookup. - (details)

+ (details)

Data must be co-partitioned: The input data for both sides must be co-partitioned.

Causes data re-partitioning of the stream if and only if the stream was marked for re-partitioning.

Several variants of join exists, see the Javadocs for details.

@@ -2355,7 +2355,7 @@

Performs a LEFT JOIN of this stream with the table, effectively doing a table lookup. - (details)

+ (details)

Data must be co-partitioned: The input data for both sides must be co-partitioned.

Causes data re-partitioning of the stream if and only if the stream was marked for re-partitioning.

Several variants of leftJoin exists, see the Javadocs for details.

@@ -2411,7 +2411,7 @@ To improve the readability of the table we assume that (1) all records have the same key (and thus we omit the key in the table) and that (2) all records are processed in timestamp order. The columns INNER JOIN and LEFT JOIN denote what is passed as arguments to the user-supplied - ValueJoiner for the join and leftJoin + ValueJoiner for the join and leftJoin methods, respectively, whenever a new input record is received on either side of the join. An empty table cell denotes that the ValueJoiner is not called at all.

@@ -2573,7 +2573,7 @@
@@ -2938,14 +2938,14 @@ t=5 (blue), which lead to a merge of sessions and an extension of a session, res
@@ -3051,7 +3051,7 @@ t=5 (blue), which lead to a merge of sessions and an extension of a session, res

Performs an INNER JOIN of this stream with the global table, effectively doing a table lookup. - (details)

+ (details)

The GlobalKTable is fully bootstrapped upon (re)start of a KafkaStreams instance, which means the table is fully populated with all the data in the underlying topic that is available at the time of the startup. The actual data processing begins only once the bootstrapping has completed.

Causes data re-partitioning of the stream if and only if the stream was marked for re-partitioning.
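A sketch of a stream-global-table join; the types and the key-selection logic are illustrative:

KStream<String, Long> stream = ...;
GlobalKTable<Integer, String> table = ...;

// The KeyValueMapper derives the GlobalKTable lookup key from each stream record,
// and the ValueJoiner combines the stream value with the matching table value.
KStream<String, String> joined = stream.join(
    table,
    (streamKey, streamValue) -> streamKey.length(),   // key selector into the global table
    (streamValue, tableValue) -> "stream=" + streamValue + ", table=" + tableValue
);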

@@ -2627,7 +2627,7 @@

Performs a LEFT JOIN of this stream with the global table, effectively doing a table lookup. - (details)

+ (details)

The GlobalKTable is fully bootstrapped upon (re)start of a KafkaStreams instance, which means the table is fully populated with all the data in the underlying topic that is available at the time of the startup. The actual data processing begins only once the bootstrapping has completed.

Causes data re-partitioning of the stream if and only if the stream was marked for re-partitioning.

@@ -2903,11 +2903,11 @@ t=5 (blue), which lead to a merge of sessions and an extension of a session, res

Terminal operation. Applies a Processor to each record. process() allows you to leverage the Processor API from the DSL. - (details)

+ (details)

This is essentially equivalent to adding the Processor via Topology#addProcessor() to your processor topology.

An example is available in the - javadocs.

+ javadocs.

Transform

@@ -2917,7 +2917,7 @@ t=5 (blue), which lead to a merge of sessions and an extension of a session, res

Applies a Transformer to each record. transform() allows you to leverage the Processor API from the DSL. - (details)

+ (details)

Each input record is transformed into zero, one, or more output records (similar to the stateless flatMap). The Transformer must return null for zero output. You can modify the record’s key and value, including their types.

@@ -2927,7 +2927,7 @@ t=5 (blue), which lead to a merge of sessions and an extension of a session, res

transform is essentially equivalent to adding the Transformer via Topology#addProcessor() to your processor topology.

An example is available in the - javadocs. + javadocs.

Applies a ValueTransformer to each record, while retaining the key of the original record. transformValues() allows you to leverage the Processor API from the DSL. - (details)

+ (details)

Each input record is transformed into exactly one output record (zero output records or multiple output records are not possible). The ValueTransformer may return null as the new value for a record.

transformValues is preferable to transform because it will not cause data re-partitioning.

transformValues is essentially equivalent to adding the ValueTransformer via Topology#addProcessor() to your processor topology.

An example is available in the - javadocs.

+ javadocs.

Terminal operation. Write the records to a Kafka topic. - (KStream details)

+ (KStream details)

When to provide serdes explicitly:

  • If you do not specify SerDes explicitly, the default SerDes from the @@ -3098,7 +3098,7 @@ t=5 (blue), which lead to a merge of sessions and an extension of a session, res

Write the records to a Kafka topic and create a new stream/table from that topic. Essentially a shorthand for KStream#to() followed by StreamsBuilder#stream(), same for tables. - (KStream details)

+ (KStream details)

When to provide SerDes explicitly:

  • If you do not specify SerDes explicitly, the default SerDes from the diff --git a/docs/streams/developer-guide/interactive-queries.html b/docs/streams/developer-guide/interactive-queries.html index 0a37d56..4358774 100644 --- a/docs/streams/developer-guide/interactive-queries.html +++ b/docs/streams/developer-guide/interactive-queries.html @@ -371,7 +371,7 @@ interactive queries

    To enable remote state store discovery in a distributed Kafka Streams application, you must set the configuration property in StreamsConfig. The application.server property defines a unique host:port pair that points to the RPC endpoint of the respective instance of a Kafka Streams application. The value of this configuration property will vary across the instances of your application. - When this property is set, Kafka Streams will keep track of the RPC endpoint information for every instance of an application, its state stores, and assigned stream partitions through instances of StreamsMetadata.

    + When this property is set, Kafka Streams will keep track of the RPC endpoint information for every instance of an application, its state stores, and assigned stream partitions through instances of StreamsMetadata.

    Tip

    Consider leveraging the exposed RPC endpoints of your application for further functionality, such as @@ -417,7 +417,7 @@ interactive queries

    Discovering and accessing application instances and their local state stores

    -

    The following methods return StreamsMetadata objects, which provide meta-information about application instances such as their RPC endpoint and locally available state stores.

    +

    The following methods return StreamsMetadata objects, which provide meta-information about application instances such as their RPC endpoint and locally available state stores.

    • KafkaStreams#allMetadata(): find all instances of this application
    • KafkaStreams#allMetadataForStore(String storeName): find those applications instances that manage local instances of the state store “storeName”
    • @@ -426,7 +426,7 @@ interactive queries

    Attention

    -

    If application.server is not configured for an application instance, then the above methods will not find any StreamsMetadata for it.

    +

    If application.server is not configured for an application instance, then the above methods will not find any StreamsMetadata for it.

    For example, we can now find the StreamsMetadata for the state store named “word-count” that we defined in the code example shown in the previous section:
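The example itself is cut off by the surrounding diff context; a sketch of what such a lookup looks like with the KafkaStreams metadata API (the printing is illustrative):

KafkaStreams streams = ...;

// Find every application instance that hosts a local shard of the "word-count" store.
Collection<StreamsMetadata> wordCountHosts = streams.allMetadataForStore("word-count");

for (StreamsMetadata metadata : wordCountHosts) {
    // host() and port() reflect the application.server setting of that instance.
    System.out.println("Instance " + metadata.host() + ":" + metadata.port()
        + " hosts state stores " + metadata.stateStoreNames());
}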

    diff --git a/docs/streams/developer-guide/processor-api.html b/docs/streams/developer-guide/processor-api.html index ecc5388..6719ad1 100644 --- a/docs/streams/developer-guide/processor-api.html +++ b/docs/streams/developer-guide/processor-api.html @@ -62,7 +62,7 @@ You can combine the convenience of the DSL with the power and flexibility of the Processor API as described in the section Applying processors and transformers (Processor API integration).

    -

    For a complete list of available API functionality, see the Kafka Streams API docs.

    +

    For a complete list of available API functionality, see the Streams API docs.

    Defining a Stream Processor

    @@ -204,7 +204,7 @@ space.
  • RocksDB settings can be fine-tuned, see RocksDB configuration.
  • -
  • Available store variants: +
  • Available store variants: time window key-value store, session window key-value store.
// Creating a persistent key-value store:
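// NOTE: the snippet is cut off here by the diff context. A sketch using the Stores factory;
// the store name, serdes, and caching choice are illustrative.
StoreBuilder<KeyValueStore<String, Long>> countStoreBuilder =
    Stores.keyValueStoreBuilder(
        Stores.persistentKeyValueStore("Counts"),   // store name
        Serdes.String(),                            // key serde
        Serdes.Long())                              // value serde
    .withCachingEnabled();

// The builder is then registered with the topology, e.g. via Topology#addStateStore(...),
// and connected to the processors that need to access it.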
diff --git a/docs/upgrade.html b/docs/upgrade.html
index 775b62a..b976173 100644
--- a/docs/upgrade.html
+++ b/docs/upgrade.html
@@ -71,7 +71,7 @@
     
  • KIP-225 changed the metric "records.lag" to use tags for topic and partition. The original version with the name format "{topic}-{partition}.records-lag" is deprecated and will be removed in 2.0.0.
  • Kafka Streams is more robust against broker communication errors. Instead of stopping the Kafka Streams client with a fatal exception, Kafka Streams tries to self-heal and reconnect to the cluster. Using the new AdminClient you have better control of how often - Kafka Streams retries and can configure + Kafka Streams retries and can configure fine-grained timeouts (instead of hard-coded retries as in older versions).
  • Kafka Streams rebalance time was reduced further making Kafka Streams more responsive.
  • Kafka Connect now supports message headers in both sink and source connectors, and allows manipulating them via simple message transforms. Connectors must be changed to explicitly use them. A new HeaderConverter is introduced to control how headers are (de)serialized, and the new "SimpleHeaderConverter" is used by default to use string representations of values.
--
To stop receiving notification emails like this one, please contact guozhang@apache.org.