kafka-commits mailing list archives

From guozh...@apache.org
Subject [kafka] branch 1.0 updated: HOTFIX: Fix broken javadoc links on web docs(#4543)
Date Thu, 08 Feb 2018 17:38:26 GMT
This is an automated email from the ASF dual-hosted git repository.

guozhang pushed a commit to branch 1.0
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/1.0 by this push:
     new 4c6e2a6  HOTFIX: Fix broken javadoc links on web docs(#4543)
4c6e2a6 is described below

commit 4c6e2a64f727982f80b351da65edb8b3584a616f
Author: Guozhang Wang <wangguoz@gmail.com>
AuthorDate: Thu Feb 8 09:31:58 2018 -0800

    HOTFIX: Fix broken javadoc links on web docs(#4543)
    
    Reviewers: Matthias J. Sax <matthias@confluent.io>
---
 docs/streams/developer-guide/app-reset-tool.html   |   2 +-
 docs/streams/developer-guide/config-streams.html   | 761 +++++++++++++++++++++
 docs/streams/developer-guide/dsl-api.html          | 110 +--
 .../developer-guide/interactive-queries.html       |   6 +-
 docs/streams/developer-guide/processor-api.html    |   4 +-
 docs/upgrade.html                                  |  71 ++
 6 files changed, 893 insertions(+), 61 deletions(-)

diff --git a/docs/streams/developer-guide/app-reset-tool.html b/docs/streams/developer-guide/app-reset-tool.html
index dfee64f..5773f91 100644
--- a/docs/streams/developer-guide/app-reset-tool.html
+++ b/docs/streams/developer-guide/app-reset-tool.html
@@ -118,7 +118,7 @@
                 use either of these methods:</p>
             <ul class="simple">
                 <li>The API method <code class="docutils literal"><span class="pre">KafkaStreams#cleanUp()</span></code> in your application code.</li>
-                <li>Manually delete the corresponding local state directory (default location: <code class="docutils literal"><span class="pre">/var/lib/kafka-streams/&lt;application.id&gt;</span></code>). For more information, see <a class="reference internal" href="../javadocs.html#streams-javadocs"><span class="std std-ref">state.dir</span></a> StreamsConfig class.</li>
+                <li>Manually delete the corresponding local state directory (default location: <code class="docutils literal"><span class="pre">/var/lib/kafka-streams/&lt;application.id&gt;</span></code>). For more information, see <a href="../../../javadoc/org/apache/kafka/streams/StreamsConfig.html#STATE_DIR_CONFIG">Streams</a> javadocs.</li>
             </ul>
 </div>
 </div>
diff --git a/docs/streams/developer-guide/config-streams.html b/docs/streams/developer-guide/config-streams.html
new file mode 100644
index 0000000..1f18a95
--- /dev/null
+++ b/docs/streams/developer-guide/config-streams.html
@@ -0,0 +1,761 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script><!--#include virtual="../../js/templateData.js" --></script>
+
+<script id="content-template" type="text/x-handlebars-template">
+  <!-- h1>Developer Guide for Kafka Streams</h1 -->
+  <div class="sub-nav-sticky">
+    <div class="sticky-top">
+      <!-- div style="height:35px">
+        <a href="/{{version}}/documentation/streams/">Introduction</a>
+        <a class="active-menu-item" href="/{{version}}/documentation/streams/developer-guide">Developer Guide</a>
+        <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
+        <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
+        <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
+      </div -->
+    </div>
+  </div>
+
+
+  <div class="section" id="configuring-a-streams-application">
+    <span id="streams-developer-guide-configuration"></span><h1>Configuring a Streams Application<a class="headerlink" href="#configuring-a-streams-application" title="Permalink to this headline"></a></h1>
+    <p>Kafka and Kafka Streams configuration options must be set before you run a Streams application. You configure Kafka Streams by specifying parameters in a <code class="docutils literal"><span class="pre">StreamsConfig</span></code> instance.</p>
+    <ol class="arabic">
+      <li><p class="first">Create a <code class="docutils literal"><span class="pre">java.util.Properties</span></code> instance.</p>
+      </li>
+      <li><p class="first">Set the <a class="reference internal" href="#streams-developer-guide-required-configs"><span class="std std-ref">parameters</span></a>.</p>
+      </li>
+      <li><p class="first">Construct a <code class="docutils literal"><span class="pre">StreamsConfig</span></code> instance from the <code class="docutils literal"><span class="pre">Properties</span></code> instance. For example:</p>
+        <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">java.util.Properties</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.StreamsConfig</span><span class="o">;</span>
+
+<span class="n">Properties</span> <span class="n">settings</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
+<span class="c1">// Set a few key parameters</span>
+<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">APPLICATION_ID_CONFIG</span><span class="o">,</span> <span class="s">&quot;my-first-streams-application&quot;</span><span class="o">);</span>
+<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">BOOTSTRAP_SERVERS_CONFIG</span><span class="o">,</span> <span class="s">&quot;kafka-broker1:9092&quot;</span><span class="o">);</span>
+<span class="c1">// Any further settings</span>
+<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(...</span> <span class="o">,</span> <span class="o">...);</span>
+
+<span class="c1">// Create an instance of StreamsConfig from the Properties instance</span>
+<span class="n">StreamsConfig</span> <span class="n">config</span> <span class="o">=</span> <span class="k">new</span> <span class="n">StreamsConfig</span><span class="o">(</span><span class="n">settings</span><span class="o">);</span>
+</pre></div>
+        </div>
+      </li>
+    </ol>
+    <div class="section" id="configuration-parameter-reference">
+      <span id="streams-developer-guide-required-configs"></span><h2>Configuration parameter reference<a class="headerlink" href="#configuration-parameter-reference" title="Permalink to this headline"></a></h2>
+      <p>This section contains the most common Streams configuration parameters. For a full reference, see the <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/StreamsConfig.html">Streams</a> Javadocs.</p>
+      <div class="contents local topic" id="contents">
+        <ul class="simple">
+          <li><a class="reference internal" href="#required-configuration-parameters" id="id3">Required configuration parameters</a><ul>
+            <li><a class="reference internal" href="#application-id" id="id4">application.id</a></li>
+            <li><a class="reference internal" href="#bootstrap-servers" id="id5">bootstrap.servers</a></li>
+          </ul>
+          </li>
+          <li><a class="reference internal" href="#optional-configuration-parameters" id="id6">Optional configuration parameters</a><ul>
+            <li><a class="reference internal" href="#default-deserialization-exception-handler" id="id7">default.deserialization.exception.handler</a></li>
+            <li><a class="reference internal" href="#default-production-exception-handler" id="id24">default.production.exception.handler</a></li>
+            <li><a class="reference internal" href="#default-key-serde" id="id8">default.key.serde</a></li>
+            <li><a class="reference internal" href="#default-value-serde" id="id9">default.value.serde</a></li>
+            <li><a class="reference internal" href="#num-standby-replicas" id="id10">num.standby.replicas</a></li>
+            <li><a class="reference internal" href="#num-stream-threads" id="id11">num.stream.threads</a></li>
+            <li><a class="reference internal" href="#partition-grouper" id="id12">partition.grouper</a></li>
+            <li><a class="reference internal" href="#replication-factor" id="id13">replication.factor</a></li>
+            <li><a class="reference internal" href="#state-dir" id="id14">state.dir</a></li>
+            <li><a class="reference internal" href="#timestamp-extractor" id="id15">timestamp.extractor</a></li>
+          </ul>
+          </li>
+          <li><a class="reference internal" href="#kafka-consumers-and-producer-configuration-parameters" id="id16">Kafka consumers and producer configuration parameters</a><ul>
+            <li><a class="reference internal" href="#naming" id="id17">Naming</a></li>
+            <li><a class="reference internal" href="#default-values" id="id18">Default Values</a></li>
+            <li><a class="reference internal" href="#enable-auto-commit" id="id19">enable.auto.commit</a></li>
+            <li><a class="reference internal" href="#rocksdb-config-setter" id="id20">rocksdb.config.setter</a></li>
+          </ul>
+          </li>
+          <li><a class="reference internal" href="#recommended-configuration-parameters-for-resiliency" id="id21">Recommended configuration parameters for resiliency</a><ul>
+            <li><a class="reference internal" href="#acks" id="id22">acks</a></li>
+            <li><a class="reference internal" href="#id2" id="id23">replication.factor</a></li>
+          </ul>
+          </li>
+        </ul>
+      </div>
+      <div class="section" id="required-configuration-parameters">
+        <h3><a class="toc-backref" href="#id3">Required configuration parameters</a><a class="headerlink" href="#required-configuration-parameters" title="Permalink to this headline"></a></h3>
+        <p>Here are the required Streams configuration parameters.</p>
+        <table border="1" class="non-scrolling-table docutils">
+          <colgroup>
+            <col width="20%" />
+            <col width="5%" />
+            <col width="7%" />
+            <col width="38%" />
+            <col width="31%" />
+          </colgroup>
+          <thead valign="bottom">
+          <tr class="row-odd"><th class="head">Parameter Name</th>
+            <th class="head">Importance</th>
+            <th class="head" colspan="2">Description</th>
+            <th class="head">Default Value</th>
+          </tr>
+          </thead>
+          <tbody valign="top">
+          <tr class="row-even"><td>application.id</td>
+            <td>Required</td>
+            <td colspan="2">An identifier for the stream processing application.  Must be unique within the Kafka cluster.</td>
+            <td>None</td>
+          </tr>
+          <tr class="row-odd"><td>bootstrap.servers</td>
+            <td>Required</td>
+            <td colspan="2">A list of host/port pairs to use for establishing the initial connection to the Kafka cluster.</td>
+            <td>None</td>
+          </tr>
+          </tbody>
+        </table>
+        <div class="section" id="application-id">
+          <h4><a class="toc-backref" href="#id4">application.id</a><a class="headerlink" href="#application-id" title="Permalink to this headline"></a></h4>
+          <blockquote>
+            <div><p>(Required) The application ID. Each stream processing application must have a unique ID. The same ID must be given to
+              all instances of the application.  It is recommended to use only alphanumeric characters, <code class="docutils literal"><span class="pre">.</span></code> (dot), <code class="docutils literal"><span class="pre">-</span></code> (hyphen), and <code class="docutils literal"><span class="pre">_</span></code> (underscore). Examples: <code class="docutils literal"><span class="pre">&quot;hello_world&quot;</span></code>, <code class="docutils literal"><span class="pre">&quot;hello [...]
+              <p>This ID is used in the following places to isolate resources used by the application from others:</p>
+              <ul class="simple">
+                <li>As the default Kafka consumer and producer <code class="docutils literal"><span class="pre">client.id</span></code> prefix</li>
+                <li>As the Kafka consumer <code class="docutils literal"><span class="pre">group.id</span></code> for coordination</li>
+                <li>As the name of the subdirectory in the state directory (cf. <code class="docutils literal"><span class="pre">state.dir</span></code>)</li>
+                <li>As the prefix of internal Kafka topic names</li>
+              </ul>
+              <dl class="docutils">
+                <dt>Tip:</dt>
+                <dd>When an application is updated, the <code class="docutils literal"><span class="pre">application.id</span></code> should be changed unless you want to reuse the existing data in internal topics and state stores.
+                  For example, you could embed the version information within <code class="docutils literal"><span class="pre">application.id</span></code>, as <code class="docutils literal"><span class="pre">my-app-v1.0.0</span></code> and <code class="docutils literal"><span class="pre">my-app-v1.0.2</span></code>.</dd>
+              </dl>
+            </div></blockquote>
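+          <p>As a minimal sketch of the tip above, reusing its example IDs, you would bump the version embedded in the ID on an incompatible upgrade:</p>
+          <pre class="brush: java;">
+            Properties settings = new Properties();
+            // A new ID means fresh internal topics and a fresh local state
+            // subdirectory, instead of reusing those of "my-app-v1.0.0".
+            settings.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app-v1.0.2");</pre>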
+        </div>
+        <div class="section" id="bootstrap-servers">
+          <h4><a class="toc-backref" href="#id5">bootstrap.servers</a><a class="headerlink" href="#bootstrap-servers" title="Permalink to this headline"></a></h4>
+          <blockquote>
+            <div><p>(Required) The Kafka bootstrap servers. This is the same <a class="reference external" href="http://kafka.apache.org/documentation.html#producerconfigs">setting</a> that is used by the underlying producer and consumer clients to connect to the Kafka cluster.
+              Example: <code class="docutils literal"><span class="pre">&quot;kafka-broker1:9092,kafka-broker2:9092&quot;</span></code>.</p>
+              <dl class="docutils">
+                <dt>Tip:</dt>
+                <dd>Kafka Streams applications can only communicate with a single Kafka cluster specified by this config value.
+                  Future versions of Kafka Streams will support connecting to different Kafka clusters for reading input
+                  streams and writing output streams.</dd>
+              </dl>
+            </div></blockquote>
+        </div>
+      </div>
+      <div class="section" id="optional-configuration-parameters">
+        <span id="streams-developer-guide-optional-configs"></span><h3><a class="toc-backref" href="#id6">Optional configuration parameters</a><a class="headerlink" href="#optional-configuration-parameters" title="Permalink to this headline"></a></h3>
+        <p>Here are the optional Streams configuration parameters (see also the <a href="../../../javadoc/org/apache/kafka/streams/StreamsConfig.html">Streams</a> Javadocs), sorted by level of importance:</p>
+        <blockquote>
+          <div><ul class="simple">
+            <li>High: These parameters can have a significant impact on performance. Take care when deciding the values of these parameters.</li>
+            <li>Medium: These parameters can have some impact on performance. Your specific environment will determine how much tuning effort should be focused on these parameters.</li>
+            <li>Low: These parameters have a less general or less significant impact on performance.</li>
+          </ul>
+          </div></blockquote>
+        <table border="1" class="non-scrolling-table docutils">
+          <colgroup>
+            <col width="20%" />
+            <col width="5%" />
+            <col width="7%" />
+            <col width="38%" />
+            <col width="31%" />
+          </colgroup>
+          <thead valign="bottom">
+          <tr class="row-odd"><th class="head">Parameter Name</th>
+            <th class="head">Importance</th>
+            <th class="head" colspan="2">Description</th>
+            <th class="head">Default Value</th>
+          </tr>
+          </thead>
+          <tbody valign="top">
+          <tr class="row-even"><td>application.server</td>
+            <td>Low</td>
+            <td colspan="2">A host:port pair pointing to an embedded user defined endpoint that can be used for discovering the locations of
+              state stores within a single Kafka Streams application. The value of this must be different for each instance
+              of the application.</td>
+            <td>the empty string</td>
+          </tr>
+          <tr class="row-odd"><td>buffered.records.per.partition</td>
+            <td>Low</td>
+            <td colspan="2">The maximum number of records to buffer per partition.</td>
+            <td>1000</td>
+          </tr>
+          <tr class="row-even"><td>cache.max.bytes.buffering</td>
+            <td>Medium</td>
+            <td colspan="2">Maximum number of memory bytes to be used for record caches across all threads.</td>
+            <td>10485760 bytes</td>
+          </tr>
+          <tr class="row-odd"><td>client.id</td>
+            <td>Medium</td>
+            <td colspan="2">An ID string to pass to the server when making requests.
+              (This setting is passed to the consumer/producer clients used internally by Kafka Streams.)</td>
+            <td>the empty string</td>
+          </tr>
+          <tr class="row-even"><td>commit.interval.ms</td>
+            <td>Low</td>
+            <td colspan="2">The frequency with which to save the position (offsets in source topics) of tasks.</td>
+            <td>30000 milliseconds</td>
+          </tr>
+          <tr class="row-odd"><td>default.deserialization.exception.handler</td>
+            <td>Medium</td>
+            <td colspan="2">Exception handling class that implements the <code class="docutils literal"><span class="pre">DeserializationExceptionHandler</span></code> interface.</td>
+            <td><code class="docutils literal"><span class="pre">LogAndFailExceptionHandler</span></code></td>
+          </tr>
+          <tr class="row-even"><td>default.production.exception.handler</td>
+            <td>Medium</td>
+            <td colspan="2">Exception handling class that implements the <code class="docutils literal"><span class="pre">ProductionExceptionHandler</span></code> interface.</td>
+            <td><code class="docutils literal"><span class="pre">DefaultProductionExceptionHandler</span></code></td>
+          </tr>
+          <tr class="row-odd"><td>key.serde</td>
+            <td>Medium</td>
+            <td colspan="2">Default serializer/deserializer class for record keys, implements the <code class="docutils literal"><span class="pre">Serde</span></code> interface (see also value.serde).</td>
+            <td><code class="docutils literal"><span class="pre">Serdes.ByteArray().getClass().getName()</span></code></td>
+          </tr>
+          <tr class="row-even"><td>metric.reporters</td>
+            <td>Low</td>
+            <td colspan="2">A list of classes to use as metrics reporters.</td>
+            <td>the empty list</td>
+          </tr>
+          <tr class="row-odd"><td>metrics.num.samples</td>
+            <td>Low</td>
+            <td colspan="2">The number of samples maintained to compute metrics.</td>
+            <td>2</td>
+          </tr>
+          <tr class="row-even"><td>metrics.recording.level</td>
+            <td>Low</td>
+            <td colspan="2">The highest recording level for metrics.</td>
+            <td><code class="docutils literal"><span class="pre">INFO</span></code></td>
+          </tr>
+          <tr class="row-odd"><td>metrics.sample.window.ms</td>
+            <td>Low</td>
+            <td colspan="2">The window of time a metrics sample is computed over.</td>
+            <td>30000 milliseconds</td>
+          </tr>
+          <tr class="row-even"><td>num.standby.replicas</td>
+            <td>Medium</td>
+            <td colspan="2">The number of standby replicas for each task.</td>
+            <td>0</td>
+          </tr>
+          <tr class="row-odd"><td>num.stream.threads</td>
+            <td>Medium</td>
+            <td colspan="2">The number of threads to execute stream processing.</td>
+            <td>1</td>
+          </tr>
+          <tr class="row-even"><td>partition.grouper</td>
+            <td>Low</td>
+            <td colspan="2">Partition grouper class that implements the <code class="docutils literal"><span class="pre">PartitionGrouper</span></code> interface.</td>
+            <td>See <a class="reference internal" href="#streams-developer-guide-partition-grouper"><span class="std std-ref">Partition Grouper</span></a></td>
+          </tr>
+          <tr class="row-odd"><td>poll.ms</td>
+            <td>Low</td>
+            <td colspan="2">The amount of time in milliseconds to block waiting for input.</td>
+            <td>100 milliseconds</td>
+          </tr>
+          <tr class="row-even"><td>replication.factor</td>
+            <td>High</td>
+            <td colspan="2">The replication factor for changelog topics and repartition topics created by the application.</td>
+            <td>1</td>
+          </tr>
+          <tr class="row-odd"><td>state.cleanup.delay.ms</td>
+            <td>Low</td>
+            <td colspan="2">The amount of time in milliseconds to wait before deleting state when a partition has migrated.</td>
+            <td>600000 milliseconds (10 minutes)</td>
+          </tr>
+          <tr class="row-even"><td>state.dir</td>
+            <td>High</td>
+            <td colspan="2">Directory location for state stores.</td>
+            <td><code class="docutils literal"><span class="pre">/var/lib/kafka-streams</span></code></td>
+          </tr>
+          <tr class="row-odd"><td>timestamp.extractor</td>
+            <td>Medium</td>
+            <td colspan="2">Timestamp extractor class that implements the <code class="docutils literal"><span class="pre">TimestampExtractor</span></code> interface.</td>
+            <td>See <a class="reference internal" href="#streams-developer-guide-timestamp-extractor"><span class="std std-ref">Timestamp Extractor</span></a></td>
+          </tr>
+          <tr class="row-even"><td>value.serde</td>
+            <td>Medium</td>
+            <td colspan="2">Default serializer/deserializer class for record values, implements the <code class="docutils literal"><span class="pre">Serde</span></code> interface (see also key.serde).</td>
+            <td><code class="docutils literal"><span class="pre">Serdes.ByteArray().getClass().getName()</span></code></td>
+          </tr>
+          <tr class="row-odd"><td>windowstore.changelog.additional.retention.ms</td>
+            <td>Low</td>
+            <td colspan="2">Added to a windows maintainMs to ensure data is not deleted from the log prematurely. Allows for clock drift.</td>
+            <td>86400000 milliseconds = 1 day</td>
+          </tr>
+          </tbody>
+        </table>
+        <div class="section" id="default-deserialization-exception-handler">
+          <span id="streams-developer-guide-deh"></span><h4><a class="toc-backref" href="#id7">default.deserialization.exception.handler</a><a class="headerlink" href="#default-deserialization-exception-handler" title="Permalink to this headline"></a></h4>
+          <blockquote>
+            <div><p>The default deserialization exception handler allows you to manage records that fail to deserialize. Deserialization
+              failures can be caused by corrupt data, incorrect serialization logic, or unhandled record types. These exception handlers
+              are available:</p>
+              <ul class="simple">
+                <li><a class="reference external" href="../../../javadoc/org/apache/kafka/streams/errors/LogAndContinueExceptionHandler.html">LogAndContinueExceptionHandler</a>:
+                  This handler logs the deserialization exception and then signals the processing pipeline to continue processing more records.
+                  This log-and-skip strategy allows Kafka Streams to make progress instead of failing if there are records that fail
+                  to deserialize.</li>
+                <li><a class="reference external" href="../../../javadoc/org/apache/kafka/streams/errors/LogAndFailExceptionHandler.html">LogAndFailExceptionHandler</a>.
+                  This handler logs the deserialization exception and then signals the processing pipeline to stop processing more records.</li>
+              </ul>
+            </div></blockquote>
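+          <p>For example, a minimal sketch of opting into the log-and-skip strategy (assuming the rest of your Streams configuration is already in place):</p>
+          <pre class="brush: java;">
+            import java.util.Properties;
+            import org.apache.kafka.streams.StreamsConfig;
+            import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;
+
+            Properties settings = new Properties();
+            // Log and skip records that cannot be deserialized instead of failing the application.
+            settings.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
+                         LogAndContinueExceptionHandler.class);</pre>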
+        </div>
+        <div class="section" id="default-production-exception-handler">
+          <span id="streams-developer-guide-peh"></span><h4><a class="toc-backref" href="#id24">default.production.exception.handler</a><a class="headerlink" href="#default-production-exception-handler" title="Permalink to this headline"></a></h4>
+          <blockquote>
+            <div><p>The default production exception handler allows you to manage exceptions triggered when trying to interact with a broker
+              such as attempting to produce a record that is too large. By default, Kafka provides and uses the <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/errors/DefaultProductionExceptionHandler.html">DefaultProductionExceptionHandler</a>
+              that always fails when these exceptions occur.</p>
+
+            <p>Each exception handler can return <code>FAIL</code> or <code>CONTINUE</code>, depending on the record and the exception thrown. Returning <code>FAIL</code> signals that Streams should shut down, while <code>CONTINUE</code> signals that Streams
+            should ignore the issue and continue processing. If you want to provide an exception handler that always ignores records that are too large, you could implement something like the following:</p>
+
+            <pre class="brush: java;">
+            import java.util.Map;
+            import java.util.Properties;
+            import org.apache.kafka.clients.producer.ProducerRecord;
+            import org.apache.kafka.common.errors.RecordTooLargeException;
+            import org.apache.kafka.streams.StreamsConfig;
+            import org.apache.kafka.streams.errors.ProductionExceptionHandler;
+            import org.apache.kafka.streams.errors.ProductionExceptionHandler.ProductionExceptionHandlerResponse;
+
+            class IgnoreRecordTooLargeHandler implements ProductionExceptionHandler {
+                public void configure(Map&lt;String, Object&gt; config) {}
+
+                public ProductionExceptionHandlerResponse handle(final ProducerRecord&lt;byte[], byte[]&gt; record,
+                                                                 final Exception exception) {
+                    if (exception instanceof RecordTooLargeException) {
+                        return ProductionExceptionHandlerResponse.CONTINUE;
+                    } else {
+                        return ProductionExceptionHandlerResponse.FAIL;
+                    }
+                }
+            }
+
+            Properties settings = new Properties();
+
+            // other various kafka streams settings, e.g. bootstrap servers, application id, etc
+
+            settings.put(StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG,
+                         IgnoreRecordTooLargeHandler.class);</pre></div>
+          </blockquote>
+        </div>
+        <div class="section" id="default-key-serde">
+          <h4><a class="toc-backref" href="#id8">default.key.serde</a><a class="headerlink" href="#default-key-serde" title="Permalink to this headline"></a></h4>
+          <blockquote>
+            <div><p>The default Serializer/Deserializer class for record keys. Serialization and deserialization in Kafka Streams happens
+              whenever data needs to be materialized, for example:</p>
+              <blockquote>
+                <div><ul class="simple">
+                  <li>Whenever data is read from or written to a <em>Kafka topic</em> (e.g., via the <code class="docutils literal"><span class="pre">StreamsBuilder#stream()</span></code> and <code class="docutils literal"><span class="pre">KStream#to()</span></code> methods).</li>
+                  <li>Whenever data is read from or written to a <em>state store</em>.</li>
+                </ul>
+                  <p>This is discussed in more detail in <a class="reference internal" href="datatypes.html#streams-developer-guide-serdes"><span class="std std-ref">Data types and serialization</span></a>.</p>
+                </div></blockquote>
+            </div></blockquote>
+        </div>
+        <div class="section" id="default-value-serde">
+          <h4><a class="toc-backref" href="#id9">default.value.serde</a><a class="headerlink" href="#default-value-serde" title="Permalink to this headline"></a></h4>
+          <blockquote>
+            <div><p>The default Serializer/Deserializer class for record values. Serialization and deserialization in Kafka Streams
+              happens whenever data needs to be materialized, for example:</p>
+              <ul class="simple">
+                <li>Whenever data is read from or written to a <em>Kafka topic</em> (e.g., via the <code class="docutils literal"><span class="pre">StreamsBuilder#stream()</span></code> and <code class="docutils literal"><span class="pre">KStream#to()</span></code> methods).</li>
+                <li>Whenever data is read from or written to a <em>state store</em>.</li>
+              </ul>
+              <p>This is discussed in more detail in <a class="reference internal" href="datatypes.html#streams-developer-guide-serdes"><span class="std std-ref">Data types and serialization</span></a>.</p>
+            </div></blockquote>
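+          <p>For example, a minimal sketch that overrides both default serdes to strings (operations that handle other data types still need explicit per-operation serdes, as described in <a class="reference internal" href="datatypes.html#streams-developer-guide-serdes"><span class="std std-ref">Data types and serialization</span></a>):</p>
+          <pre class="brush: java;">
+            import java.util.Properties;
+            import org.apache.kafka.common.serialization.Serdes;
+            import org.apache.kafka.streams.StreamsConfig;
+
+            Properties settings = new Properties();
+            // Treat record keys and values as strings unless a serde is given explicitly.
+            settings.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
+            settings.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());</pre>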
+        </div>
+        <div class="section" id="num-standby-replicas">
+          <span id="streams-developer-guide-standby-replicas"></span><h4><a class="toc-backref" href="#id10">num.standby.replicas</a><a class="headerlink" href="#num-standby-replicas" title="Permalink to this headline"></a></h4>
+          <blockquote>
+            <div>The number of standby replicas. Standby replicas are shadow copies of local state stores. Kafka Streams attempts to create the
+              specified number of replicas and keep them up to date as long as there are enough instances running.
+              Standby replicas are used to minimize the latency of task failover.  A task that was previously running on a failed instance is
+              preferably restarted on an instance that hosts its standby replicas, which minimizes the work of restoring the local state store from the
+              changelog.  Details about how Kafka Streams makes use of the standby replicas to minimize the cost of
+              resuming tasks on failover can be found in the <a class="reference internal" href="../architecture.html#streams-architecture-state"><span class="std std-ref">State</span></a> section.</div></blockquote>
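+          <p>For example, a minimal sketch that keeps one warm copy of each task&#8217;s state on another instance:</p>
+          <pre class="brush: java;">
+            Properties settings = new Properties();
+            // One standby replica per task shortens failover at the cost of
+            // additional storage and changelog consumption on the standby instances.
+            settings.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);</pre>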
+        </div>
+        <div class="section" id="num-stream-threads">
+          <h4><a class="toc-backref" href="#id11">num.stream.threads</a><a class="headerlink" href="#num-stream-threads" title="Permalink to this headline"></a></h4>
+          <blockquote>
+            <div>This specifies the number of stream threads in an instance of the Kafka Streams application. The stream processing code runs in these threads.
+              For more information about the Kafka Streams threading model, see <a class="reference internal" href="../architecture.html#streams-architecture-threads"><span class="std std-ref">Threading Model</span></a>.</div></blockquote>
+        </div>
+        <div class="section" id="partition-grouper">
+          <span id="streams-developer-guide-partition-grouper"></span><h4><a class="toc-backref" href="#id12">partition.grouper</a><a class="headerlink" href="#partition-grouper" title="Permalink to this headline"></a></h4>
+          <blockquote>
+            <div>A partition grouper creates a list of stream tasks from the partitions of source topics, where each created task is assigned a group of source topic partitions.
+              The default implementation provided by Kafka Streams is <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/processor/DefaultPartitionGrouper.html">DefaultPartitionGrouper</a>.
+              It assigns each task one partition from each of the source topics. The number of generated tasks equals the largest
+              number of partitions among the input topics. Usually an application does not need to customize the partition grouper.</div></blockquote>
+        </div>
+        <div class="section" id="replication-factor">
+          <span id="replication-factor-parm"></span><h4><a class="toc-backref" href="#id13">replication.factor</a><a class="headerlink" href="#replication-factor" title="Permalink to this headline"></a></h4>
+          <blockquote>
+            <div><p>This specifies the replication factor of internal topics that Kafka Streams creates when local states are used or a stream is
+              repartitioned for aggregation. Replication is important for fault tolerance. Without replication, even a single broker failure
+              may prevent progress of the stream processing application. It is recommended to use a replication factor similar to that of the source topics.</p>
+              <dl class="docutils">
+                <dt>Recommendation:</dt>
+                <dd>Increase the replication factor to 3 to ensure that the internal Kafka Streams topic can tolerate up to 2 broker failures.
+                  Note that you will require more storage space as well (3 times more with the replication factor of 3).</dd>
+              </dl>
+            </div></blockquote>
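+          <p>Following the recommendation above, a minimal sketch (this assumes your cluster has at least 3 brokers):</p>
+          <pre class="brush: java;">
+            Properties settings = new Properties();
+            // Internal changelog and repartition topics can now tolerate up to 2 broker failures.
+            settings.put(StreamsConfig.REPLICATION_FACTOR_CONFIG, 3);</pre>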
+        </div>
+        <div class="section" id="state-dir">
+          <h4><a class="toc-backref" href="#id14">state.dir</a><a class="headerlink" href="#state-dir" title="Permalink to this headline"></a></h4>
+          <blockquote>
+            <div>The state directory. Kafka Streams persists local states under the state directory. Each application has a subdirectory on its hosting
+              machine that is located under the state directory. The name of the subdirectory is the application ID. The state stores associated
+              with the application are created under this subdirectory.</div></blockquote>
+        </div>
+        <div class="section" id="timestamp-extractor">
+          <span id="streams-developer-guide-timestamp-extractor"></span><h4><a class="toc-backref" href="#id15">timestamp.extractor</a><a class="headerlink" href="#timestamp-extractor" title="Permalink to this headline"></a></h4>
+          <blockquote>
+            <div><p>A timestamp extractor pulls a timestamp from an instance of <a class="reference external" href="../../../javadoc/org/apache/kafka/clients/consumer/ConsumerRecord.html">ConsumerRecord</a>.
+              Timestamps are used to control the progress of streams.</p>
+              <p>The default extractor is
+                <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/processor/FailOnInvalidTimestamp.html">FailOnInvalidTimestamp</a>.
+                This extractor retrieves built-in timestamps that are automatically embedded into Kafka messages by the Kafka producer
+                client since
+                <a class="reference external" href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-32+-+Add+timestamps+to+Kafka+message">Kafka version 0.10</a>.
+                Depending on the setting of Kafka&#8217;s broker-side <code class="docutils literal"><span class="pre">log.message.timestamp.type</span></code> parameter and the per-topic <code class="docutils literal"><span class="pre">message.timestamp.type</span></code> parameter,
+                this extractor provides you with:</p>
+              <ul class="simple">
+                <li><strong>event-time</strong> processing semantics if <code class="docutils literal"><span class="pre">log.message.timestamp.type</span></code> is set to <code class="docutils literal"><span class="pre">CreateTime</span></code> aka &#8220;producer time&#8221;
+                  (which is the default).  This represents the time when a Kafka producer sent the original message.  If you use Kafka&#8217;s
+                  official producer client, the timestamp represents milliseconds since the epoch.</li>
+                <li><strong>ingestion-time</strong> processing semantics if <code class="docutils literal"><span class="pre">log.message.timestamp.type</span></code> is set to <code class="docutils literal"><span class="pre">LogAppendTime</span></code> aka &#8220;broker
+                  time&#8221;.  This represents the time when the Kafka broker received the original message, in milliseconds since the epoch.</li>
+              </ul>
+              <p>The <code class="docutils literal"><span class="pre">FailOnInvalidTimestamp</span></code> extractor throws an exception if a record contains an invalid (i.e. negative) built-in
+                timestamp, because Kafka Streams would not process this record but silently drop it.  Invalid built-in timestamps can
+                occur for various reasons:  if for example, you consume a topic that is written to by pre-0.10 Kafka producer clients
+                or by third-party producer clients that don&#8217;t support the new Kafka 0.10 message format yet;  another situation where
+                this may happen is after upgrading your Kafka cluster from <code class="docutils literal"><span class="pre">0.9</span></code> to <code class="docutils literal"><span class="pre">0.10</span></code>, where all the data that was generated
+                with <code class="docutils literal"><span class="pre">0.9</span></code> does not include the <code class="docutils literal"><span class="pre">0.10</span></code> message timestamps.</p>
+              <p>If you have data with invalid timestamps and want to process it, then there are two alternative extractors available.
+                Both work on built-in timestamps, but handle invalid timestamps differently.</p>
+              <ul class="simple">
+                <li><a class="reference external" href="../../../javadoc/org/apache/kafka/streams/processor/LogAndSkipOnInvalidTimestamp.html">LogAndSkipOnInvalidTimestamp</a>:
+                  This extractor logs a warn message and returns the invalid timestamp to Kafka Streams, which will not process but
+                  silently drop the record.
+                  This log-and-skip strategy allows Kafka Streams to make progress instead of failing if there are records with an
+                  invalid built-in timestamp in your input data.</li>
+                <li><a class="reference external" href="../../../javadoc/org/apache/kafka/streams/processor/UsePreviousTimeOnInvalidTimestamp.html">UsePreviousTimeOnInvalidTimestamp</a>.
+                  This extractor returns the record&#8217;s built-in timestamp if it is valid (i.e. not negative).  If the record does not
+                  have a valid built-in timestamps, the extractor returns the previously extracted valid timestamp from a record of the
+                  same topic partition as the current record as a timestamp estimation.  In case that no timestamp can be estimated, it
+                  throws an exception.</li>
+              </ul>
+              <p>Another built-in extractor is
+                <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/processor/WallclockTimestampExtractor.html">WallclockTimestampExtractor</a>.
+                This extractor does not actually &#8220;extract&#8221; a timestamp from the consumed record but rather returns the current time in
+                milliseconds from the system clock (think: <code class="docutils literal"><span class="pre">System.currentTimeMillis()</span></code>), which effectively means Streams will operate
+                on the basis of the so-called <strong>processing-time</strong> of events.</p>
+              <p>You can also provide your own timestamp extractors, for instance to retrieve timestamps embedded in the payload of
+                messages.  If you cannot extract a valid timestamp, you can either throw an exception, return a negative timestamp, or
+                estimate a timestamp.  Returning a negative timestamp will result in data loss &#8211; the corresponding record will not be
+                processed but silently dropped.  If you want to estimate a new timestamp, you can use the value provided via
+                <code class="docutils literal"><span class="pre">previousTimestamp</span></code> (i.e., a Kafka Streams timestamp estimation).  Here is an example of a custom
+                <code class="docutils literal"><span class="pre">TimestampExtractor</span></code> implementation:</p>
+              <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">org.apache.kafka.clients.consumer.ConsumerRecord</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.processor.TimestampExtractor</span><span class="o">;</span>
+
+<span class="c1">// Extracts the embedded timestamp of a record (giving you &quot;event-time&quot; semantics).</span>
+<span class="kd">public</span> <span class="kd">class</span> <span class="nc">MyEventTimeExtractor</span> <span class="kd">implements</span> <span class="n">TimestampExtractor</span> <span class="o">{</span>
+
+  <span class="nd">@Override</span>
+  <span class="kd">public</span> <span class="kt">long</span> <span class="nf">extract</span><span class="o">(</span><span class="kd">final</span> <span class="n">ConsumerRecord</span><span class="o">&lt;</span><span class="n">Object</span><span class="o">,</span> <span class="n">Object</span><span class="o">&gt;</span> <span class="n">record</span><span class="o">,</span> <span class="kd">final</span> <span class="kt">long</span> <span class="n">previousTimestamp</span><span class="o">) [...]
+    <span class="c1">// `Foo` is your own custom class, which we assume has a method that returns</span>
+    <span class="c1">// the embedded timestamp (milliseconds since midnight, January 1, 1970 UTC).</span>
+    <span class="kt">long</span> <span class="n">timestamp</span> <span class="o">=</span> <span class="o">-</span><span class="mi">1</span><span class="o">;</span>
+    <span class="kd">final</span> <span class="n">Foo</span> <span class="n">myPojo</span> <span class="o">=</span> <span class="o">(</span><span class="n">Foo</span><span class="o">)</span> <span class="n">record</span><span class="o">.</span><span class="na">value</span><span class="o">();</span>
+    <span class="k">if</span> <span class="o">(</span><span class="n">myPojo</span> <span class="o">!=</span> <span class="kc">null</span><span class="o">)</span> <span class="o">{</span>
+      <span class="n">timestamp</span> <span class="o">=</span> <span class="n">myPojo</span><span class="o">.</span><span class="na">getTimestampInMillis</span><span class="o">();</span>
+    <span class="o">}</span>
+    <span class="k">if</span> <span class="o">(</span><span class="n">timestamp</span> <span class="o">&lt;</span> <span class="mi">0</span><span class="o">)</span> <span class="o">{</span>
+      <span class="c1">// Invalid timestamp!  Attempt to estimate a new timestamp,</span>
+      <span class="c1">// otherwise fall back to wall-clock time (processing-time).</span>
+      <span class="k">if</span> <span class="o">(</span><span class="n">previousTimestamp</span> <span class="o">&gt;=</span> <span class="mi">0</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="n">previousTimestamp</span><span class="o">;</span>
+      <span class="o">}</span> <span class="k">else</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="n">System</span><span class="o">.</span><span class="na">currentTimeMillis</span><span class="o">();</span>
+      <span class="o">}</span>
+    <span class="o">}</span>
+  <span class="o">}</span>
+
+<span class="o">}</span>
+</pre></div>
+              </div>
+              <p>You would then define the custom timestamp extractor in your Streams configuration as follows:</p>
+              <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">java.util.Properties</span><span class="o">;</span>
+<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.StreamsConfig</span><span class="o">;</span>
+
+<span class="n">Properties</span> <span class="n">streamsConfiguration</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
+<span class="n">streamsConfiguration</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">TIMESTAMP_EXTRACTOR_CLASS_CONFIG</span><span class="o">,</span> <span class="n">MyEventTimeExtractor</span><span class="o">.</span><span class="na">class</span><span class="o">);</span>
+</pre></div>
+              </div>
+            </div></blockquote>
+        </div>
+      </div>
+      <div class="section" id="kafka-consumers-and-producer-configuration-parameters">
+        <h3><a class="toc-backref" href="#id16">Kafka consumers and producer configuration parameters</a><a class="headerlink" href="#kafka-consumers-and-producer-configuration-parameters" title="Permalink to this headline"></a></h3>
+        <p>You can specify parameters for the Kafka <a class="reference external" href="../../../javadoc/org/apache/kafka/clients/consumer/package-summary.html">consumers</a> and <a class="reference external" href="../../../javadoc/org/apache/kafka/clients/producer/package-summary.html">producers</a> that are used internally.  The consumer and producer settings
+          are defined by specifying parameters in a <code class="docutils literal"><span class="pre">StreamsConfig</span></code> instance.</p>
+        <p>In this example, the Kafka <a class="reference external" href="../../../javadoc/org/apache/kafka/clients/consumer/ConsumerConfig.html#SESSION_TIMEOUT_MS_CONFIG">consumer session timeout</a> is configured to be 60000 milliseconds in the Streams settings:</p>
+        <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">Properties</span> <span class="n">streamsSettings</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
+<span class="c1">// Example of a &quot;normal&quot; setting for Kafka Streams</span>
+<span class="n">streamsSettings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">BOOTSTRAP_SERVERS_CONFIG</span><span class="o">,</span> <span class="s">&quot;kafka-broker-01:9092&quot;</span><span class="o">);</span>
+<span class="c1">// Customize the Kafka consumer settings of your Streams application</span>
+<span class="n">streamsSettings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">ConsumerConfig</span><span class="o">.</span><span class="na">SESSION_TIMEOUT_MS_CONFIG</span><span class="o">,</span> <span class="mi">60000</span><span class="o">);</span>
+<span class="n">StreamsConfig</span> <span class="n">config</span> <span class="o">=</span> <span class="k">new</span> <span class="n">StreamsConfig</span><span class="o">(</span><span class="n">streamsSettings</span><span class="o">);</span>
+</pre></div>
+        </div>
+        <div class="section" id="naming">
+          <h4><a class="toc-backref" href="#id17">Naming</a><a class="headerlink" href="#naming" title="Permalink to this headline"></a></h4>
+          <p>Some consumer and producer configuration parameters use the same parameter name. For example, <code class="docutils literal"><span class="pre">send.buffer.bytes</span></code> and
+            <code class="docutils literal"><span class="pre">receive.buffer.bytes</span></code> are used to configure TCP buffers; <code class="docutils literal"><span class="pre">request.timeout.ms</span></code> and <code class="docutils literal"><span class="pre">retry.backoff.ms</span></code> control retries
+            for client requests. You can avoid duplicate names by prefixing parameter names with <code class="docutils literal"><span class="pre">consumer.</span></code> or <code class="docutils literal"><span class="pre">producer.</span></code> (e.g., <code class="docutils literal"><span class="pre">consumer.send.buffer.bytes</span></code> and <code class="docutils literal"><span class="pre">producer.send.buffer.bytes</span></code>).</p>
+          <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">Properties</span> <span class="n">streamsSettings</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
+<span class="c1">// same value for consumer and producer</span>
+<span class="n">streamsSettings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;PARAMETER_NAME&quot;</span><span class="o">,</span> <span class="s">&quot;value&quot;</span><span class="o">);</span>
+<span class="c1">// different values for consumer and producer</span>
+<span class="n">streamsSettings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;consumer.PARAMETER_NAME&quot;</span><span class="o">,</span> <span class="s">&quot;consumer-value&quot;</span><span class="o">);</span>
+<span class="n">streamsSettings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;producer.PARAMETER_NAME&quot;</span><span class="o">,</span> <span class="s">&quot;producer-value&quot;</span><span class="o">);</span>
+<span class="c1">// alternatively, you can use</span>
+<span class="n">streamsSettings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">consumerPrefix</span><span class="o">(</span><span class="s">&quot;PARAMETER_NAME&quot;</span><span class="o">),</span> <span class="s">&quot;consumer-value&quot;</span><span class="o">);</span>
+<span class="n">streamsSettings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">producerPrefix</span><span class="o">(</span><span class="s">&quot;PARAMETER_NAME&quot;</span><span class="o">),</span> <span class="s">&quot;producer-value&quot;</span><span class="o">);</span>
+</pre></div>
+          </div>
+        </div>
+        <div class="section" id="default-values">
+          <h4><a class="toc-backref" href="#id18">Default Values</a><a class="headerlink" href="#default-values" title="Permalink to this headline"></a></h4>
+          <p>Kafka Streams uses different default values for some of the underlying client configs, which are summarized below. For detailed descriptions
+            of these configs, see <a class="reference external" href="http://kafka.apache.org/0100/documentation.html#producerconfigs">Producer Configs</a>
+            and <a class="reference external" href="http://kafka.apache.org/0100/documentation.html#newconsumerconfigs">Consumer Configs</a>.</p>
+          <table border="1" class="non-scrolling-table docutils">
+            <colgroup>
+              <col width="50%" />
+              <col width="19%" />
+              <col width="31%" />
+            </colgroup>
+            <thead valign="bottom">
+            <tr class="row-odd"><th class="head">Parameter Name</th>
+              <th class="head">Corresponding Client</th>
+              <th class="head">Streams Default</th>
+            </tr>
+            </thead>
+            <tbody valign="top">
+            <tr class="row-even"><td>auto.offset.reset</td>
+              <td>Consumer</td>
+              <td>earliest</td>
+            </tr>
+            <tr class="row-odd"><td>enable.auto.commit</td>
+              <td>Consumer</td>
+              <td>false</td>
+            </tr>
+            <tr class="row-even"><td>linger.ms</td>
+              <td>Producer</td>
+              <td>100</td>
+            </tr>
+            <tr class="row-odd"><td>max.poll.interval.ms</td>
+              <td>Consumer</td>
+              <td>Integer.MAX_VALUE</td>
+            </tr>
+            <tr class="row-even"><td>max.poll.records</td>
+              <td>Consumer</td>
+              <td>1000</td>
+            </tr>
+            <tr class="row-odd"><td>retries</td>
+              <td>Producer</td>
+              <td>10</td>
+            </tr>
+            <tr class="row-even"><td>rocksdb.config.setter</td>
+              <td>&nbsp;</td>
+              <td>&nbsp;</td>
+            </tr>
+            </tbody>
+          </table>
+        </div>
+        <div class="section" id="enable-auto-commit">
+          <span id="streams-developer-guide-consumer-auto-commit"></span><h4><a class="toc-backref" href="#id19">enable.auto.commit</a><a class="headerlink" href="#enable-auto-commit" title="Permalink to this headline"></a></h4>
+          <blockquote>
+            <div>The consumer auto commit. To guarantee at-least-once processing semantics and turn off auto commits, Kafka Streams overrides this consumer config
+              value to <code class="docutils literal"><span class="pre">false</span></code>.  Consumers will only commit explicitly via <em>commitSync</em> calls when the Kafka Streams library or a user decides
+              to commit the current processing state.</div></blockquote>
+        </div>
+        <div class="section" id="rocksdb-config-setter">
+          <span id="streams-developer-guide-rocksdb-config"></span><h4><a class="toc-backref" href="#id20">rocksdb.config.setter</a><a class="headerlink" href="#rocksdb-config-setter" title="Permalink to this headline"></a></h4>
+          <blockquote>
+            <div><p>The RocksDB configuration. Kafka Streams uses RocksDB as the default storage engine for persistent stores. To change the default
+              configuration for RocksDB, implement <code class="docutils literal"><span class="pre">RocksDBConfigSetter</span></code> and provide your custom class via <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/state/RocksDBConfigSetter.html">rocksdb.config.setter</a>.</p>
+              <p>Here is an example that adjusts the memory size consumed by RocksDB.</p>
+              <div class="highlight-java"><div class="highlight"><pre><span></span>    <span class="kd">public</span> <span class="kd">static</span> <span class="kd">class</span> <span class="nc">CustomRocksDBConfig</span> <span class="kd">implements</span> <span class="n">RocksDBConfigSetter</span> <span class="o">{</span>
+
+       <span class="nd">@Override</span>
+       <span class="kd">public</span> <span class="kt">void</span> <span class="nf">setConfig</span><span class="o">(</span><span class="kd">final</span> <span class="n">String</span> <span class="n">storeName</span><span class="o">,</span> <span class="kd">final</span> <span class="n">Options</span> <span class="n">options</span><span class="o">,</span> <span class="kd">final</span> <span class="n">Map</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span [...]
+         <span class="c1">// See #1 below.</span>
+         <span class="n">BlockBasedTableConfig</span> <span class="n">tableConfig</span> <span class="o">=</span> <span class="k">new</span> <span class="n">org</span><span class="o">.</span><span class="na">rocksdb</span><span class="o">.</span><span class="na">BlockBasedTableConfig</span><span class="o">();</span>
+         <span class="n">tableConfig</span><span class="o">.</span><span class="na">setBlockCacheSize</span><span class="o">(</span><span class="mi">16</span> <span class="o">*</span> <span class="mi">1024</span> <span class="o">*</span> <span class="mi">1024L</span><span class="o">);</span>
+         <span class="c1">// See #2 below.</span>
+         <span class="n">tableConfig</span><span class="o">.</span><span class="na">setBlockSize</span><span class="o">(</span><span class="mi">16</span> <span class="o">*</span> <span class="mi">1024L</span><span class="o">);</span>
+         <span class="c1">// See #3 below.</span>
+         <span class="n">tableConfig</span><span class="o">.</span><span class="na">setCacheIndexAndFilterBlocks</span><span class="o">(</span><span class="kc">true</span><span class="o">);</span>
+         <span class="n">options</span><span class="o">.</span><span class="na">setTableFormatConfig</span><span class="o">(</span><span class="n">tableConfig</span><span class="o">);</span>
+         <span class="c1">// See #4 below.</span>
+         <span class="n">options</span><span class="o">.</span><span class="na">setMaxWriteBufferNumber</span><span class="o">(</span><span class="mi">2</span><span class="o">);</span>
+       <span class="o">}</span>
+    <span class="o">}</span>
+
+<span class="n">Properties</span> <span class="n">streamsSettings</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
+<span class="n">streamsConfig</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">ROCKSDB_CONFIG_SETTER_CLASS_CONFIG</span><span class="o">,</span> <span class="n">CustomRocksDBConfig</span><span class="o">.</span><span class="na">class</span><span class="o">);</span>
+</pre></div>
+              </div>
+              <dl class="docutils">
+                <dt>Notes for example:</dt>
+                <dd><ol class="first last arabic simple">
+                  <li><code class="docutils literal"><span class="pre">BlockBasedTableConfig</span> <span class="pre">tableConfig</span> <span class="pre">=</span> <span class="pre">new</span> <span class="pre">org.rocksdb.BlockBasedTableConfig();</span></code> Reduce block cache size from the default, shown <a class="reference external" href="https://github.com/apache/kafka/blob/1.0/streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBStore.java#L81">here</a>,  as the t [...]
+                  <li><code class="docutils literal"><span class="pre">tableConfig.setBlockSize(16</span> <span class="pre">*</span> <span class="pre">1024L);</span></code> Modify the default <a class="reference external" href="https://github.com/apache/kafka/blob/1.0/streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBStore.java#L82">block size</a> per these instructions from the <a class="reference external" href="https://github.com/facebook/rocksdb/wiki/Memory-usage- [...]
+                  <li><code class="docutils literal"><span class="pre">tableConfig.setCacheIndexAndFilterBlocks(true);</span></code> Do not let the index and filter blocks grow unbounded. For more information, see the <a class="reference external" href="https://github.com/facebook/rocksdb/wiki/Block-Cache#caching-index-and-filter-blocks">RocksDB GitHub</a>.</li>
+                  <li><code class="docutils literal"><span class="pre">options.setMaxWriteBufferNumber(2);</span></code> See the advanced options in the <a class="reference external" href="https://github.com/facebook/rocksdb/blob/8dee8cad9ee6b70fd6e1a5989a8156650a70c04f/include/rocksdb/advanced_options.h#L103">RocksDB GitHub</a>.</li>
+                </ol>
+                </dd>
+              </dl>
+            </div></blockquote>
+        </div>
+      </div>
+      <div class="section" id="recommended-configuration-parameters-for-resiliency">
+        <h3><a class="toc-backref" href="#id21">Recommended configuration parameters for resiliency</a><a class="headerlink" href="#recommended-configuration-parameters-for-resiliency" title="Permalink to this headline"></a></h3>
+        <p>There are several Kafka and Kafka Streams configuration options that need to be configured explicitly for resiliency in the face of broker failures:</p>
+        <table border="1" class="non-scrolling-table docutils">
+          <colgroup>
+            <col width="22%" />
+            <col width="19%" />
+            <col width="10%" />
+            <col width="49%" />
+          </colgroup>
+          <thead valign="bottom">
+          <tr class="row-odd"><th class="head">Parameter Name</th>
+            <th class="head">Corresponding Client</th>
+            <th class="head">Default value</th>
+            <th class="head">Consider setting to</th>
+          </tr>
+          </thead>
+          <tbody valign="top">
+          <tr class="row-even"><td>acks</td>
+            <td>Producer</td>
+            <td><code class="docutils literal"><span class="pre">acks=1</span></code></td>
+            <td><code class="docutils literal"><span class="pre">acks=all</span></code></td>
+          </tr>
+          <tr class="row-odd"><td>replication.factor</td>
+            <td>Streams</td>
+            <td><code class="docutils literal"><span class="pre">1</span></code></td>
+            <td><code class="docutils literal"><span class="pre">3</span></code></td>
+          </tr>
+          <tr class="row-even"><td>min.insync.replicas</td>
+            <td>Broker</td>
+            <td><code class="docutils literal"><span class="pre">1</span></code></td>
+            <td><code class="docutils literal"><span class="pre">2</span></code></td>
+          </tr>
+          </tbody>
+        </table>
+        <p>Increasing the replication factor to 3 ensures that the internal Kafka Streams topics can tolerate up to 2 broker failures. Changing the acks setting to &#8220;all&#8221;
+          guarantees that a record will not be lost as long as at least one replica is alive. The tradeoff of moving from the default values to the recommended ones is that
+          some performance is sacrificed and more storage space is used (3x with a replication factor of 3) in exchange for greater resiliency.</p>
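+        <p>The first two settings can be set in the application itself, as shown in the snippet further below. For the internal topics that Kafka Streams creates, a topic-level <code class="docutils literal"><span class="pre">min.insync.replicas</span></code> override can also be passed through <code class="docutils literal"><span class="pre">StreamsConfig#topicPrefix</span></code>; the following sketch assumes a cluster with at least 3 brokers so that 2 in-sync replicas are attainable:</p>
+        <div class="highlight-java"><div class="highlight"><pre>
+Properties streamsSettings = new Properties();
+// Applied as a topic config when Kafka Streams creates its internal topics;
+// user-created input/output topics and the broker-wide default must still be
+// configured on the broker side.
+streamsSettings.put(StreamsConfig.topicPrefix(TopicConfig.MIN_IN_SYNC_REPLICAS_CONFIG), 2);
+</pre></div>
+        </div>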
+        <div class="section" id="acks">
+          <h4><a class="toc-backref" href="#id22">acks</a><a class="headerlink" href="#acks" title="Permalink to this headline"></a></h4>
+          <blockquote>
+            <div><p>The number of acknowledgments that the producer requires the leader to have received before considering a request complete. This controls
+              the durability of records that are sent. The possible values are:</p>
+              <ul class="simple">
+                <li><code class="docutils literal"><span class="pre">acks=0</span></code> The producer does not wait for acknowledgment from the server and the record is immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the <code class="docutils literal"><span class="pre">retries</span></code> configuration will not take effect (as the client won&#8217;t generally know of any failures). The offse [...]
+                <li><code class="docutils literal"><span class="pre">acks=1</span></code> The leader writes the record to its local log and responds without waiting for full acknowledgement from all followers. If the leader immediately fails after acknowledging the record, but before the followers have replicated it, then the record will be lost.</li>
+                <li><code class="docutils literal"><span class="pre">acks=all</span></code> The leader waits for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost if there is at least one in-sync replica alive. This is the strongest available guarantee.</li>
+              </ul>
+              <p>For more information, see the <a class="reference external" href="https://kafka.apache.org/documentation/#producerconfigs">Kafka Producer documentation</a>.</p>
+            </div></blockquote>
+        </div>
+        <div class="section" id="id2">
+          <h4><a class="toc-backref" href="#id23">replication.factor</a><a class="headerlink" href="#id2" title="Permalink to this headline"></a></h4>
+          <blockquote>
+            <div>See the <a class="reference internal" href="#replication-factor-parm"><span class="std std-ref">description here</span></a>.</div></blockquote>
+          <p>You define these settings via <code class="docutils literal"><span class="pre">StreamsConfig</span></code>:</p>
+          <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">Properties</span> <span class="n">streamsSettings</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
+<span class="n">streamsSettings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">REPLICATION_FACTOR_CONFIG</span><span class="o">,</span> <span class="mi">3</span><span class="o">);</span>
+<span class="n">streamsSettings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">producerPrefix</span><span class="o">(</span><span class="n">ProducerConfig</span><span class="o">.</span><span class="na">ACKS_CONFIG</span><span class="o">),</span> <span class="s">&quot;all&quot;</span><span class="o">);</span>
+</pre></div>
+          </div>
+          <div class="admonition note">
+            <p class="first admonition-title">Note</p>
+            <p class="last">A future version of Kafka Streams will allow developers to set their own app-specific configuration settings through
+              <code class="docutils literal"><span class="pre">StreamsConfig</span></code> as well, which can then be accessed through
+              <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/processor/ProcessorContext.html">ProcessorContext</a>.</p>
+</div>
+</div>
+</div>
+</div>
+</div>
+
+
+               </div>
+              </div>
+              <div class="pagination">
+                <a href="/{{version}}/documentation/streams/developer-guide/write-streams" class="pagination__btn pagination__btn__prev">Previous</a>
+                <a href="/{{version}}/documentation/streams/developer-guide/dsl-api" class="pagination__btn pagination__btn__next">Next</a>
+              </div>
+                </script>
+
+                <!--#include virtual="../../../includes/_header.htm" -->
+                <!--#include virtual="../../../includes/_top.htm" -->
+                    <div class="content documentation documentation--current">
+                    <!--#include virtual="../../../includes/_nav.htm" -->
+                    <div class="right">
+                    <!--#include virtual="../../../includes/_docs_banner.htm" -->
+                    <ul class="breadcrumbs">
+                    <li><a href="/documentation">Documentation</a></li>
+                    <li><a href="/documentation/streams">Kafka Streams</a></li>
+                    <li><a href="/documentation/streams/developer-guide/">Developer Guide</a></li>
+                </ul>
+                <div class="p-content"></div>
+                    </div>
+                    </div>
+                    <!--#include virtual="../../../includes/_footer.htm" -->
+                    <script>
+                    $(function() {
+                        // Show selected style on nav item
+                        $('.b-nav__streams').addClass('selected');
+
+                        //sticky secondary nav
+                        var $navbar = $(".sub-nav-sticky"),
+                            y_pos = $navbar.offset().top,
+                            height = $navbar.height();
+
+                        $(window).scroll(function() {
+                            var scrollTop = $(window).scrollTop();
+
+                            if (scrollTop > y_pos - height) {
+                                $navbar.addClass("navbar-fixed")
+                            } else if (scrollTop <= y_pos) {
+                                $navbar.removeClass("navbar-fixed")
+                            }
+                        });
+
+                        // Display docs subnav items
+                        $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
+                    });
+              </script>
diff --git a/docs/streams/developer-guide/dsl-api.html b/docs/streams/developer-guide/dsl-api.html
index 441e9df..5cf307a 100644
--- a/docs/streams/developer-guide/dsl-api.html
+++ b/docs/streams/developer-guide/dsl-api.html
@@ -95,7 +95,7 @@
             </ol>
             <p>After the application is run, the defined processor topologies are continuously executed (i.e., the processing plan is put into
                 action). A step-by-step guide for writing a stream processing application using the DSL is provided below.</p>
-            <p>For a complete list of available API functionality, see also the <a class="reference internal" href="../javadocs.html#streams-javadocs"><span class="std std-ref">Kafka Streams Javadocs</span></a>.</p>
+            <p>For a complete list of available API functionality, see also the <a href="../../../javadoc/org/apache/kafka/streams/package-summary.html">Streams</a> API docs.</p>
         </div>
         <div class="section" id="creating-source-streams-from-kafka">
             <span id="streams-developer-guide-dsl-sources"></span><h2><a class="toc-backref" href="#id8">Creating source streams from Kafka</a><a class="headerlink" href="#creating-source-streams-from-kafka" title="Permalink to this headline"></a></h2>
@@ -119,7 +119,7 @@
                     <td><p class="first">Creates a <a class="reference internal" href="../concepts.html#streams-concepts-kstream"><span class="std std-ref">KStream</span></a> from the specified Kafka input topics and interprets the data
                         as a <a class="reference internal" href="../concepts.html#streams-concepts-kstream"><span class="std std-ref">record stream</span></a>.
                         A <code class="docutils literal"><span class="pre">KStream</span></code> represents a <em>partitioned</em> record stream.
-                        <a class="reference external" href="../javadocs/org/apache/kafka/streams/StreamsBuilder.html#stream(java.lang.String)">(details)</a></p>
+                        <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/StreamsBuilder.html#stream(java.lang.String)">(details)</a></p>
                         <p>In the case of a KStream, the local KStream instance of every application instance will
                             be populated with data from only <strong>a subset</strong> of the partitions of the input topic.  Collectively, across
                             all application instances, all input topic partitions are read and processed.</p>
@@ -153,7 +153,7 @@
                     <td><p class="first">Reads the specified Kafka input topic into a <a class="reference internal" href="../concepts.html#streams-concepts-ktable"><span class="std std-ref">KTable</span></a>.  The topic is
                         interpreted as a changelog stream, where records with the same key are interpreted as UPSERT aka INSERT/UPDATE
                         (when the record value is not <code class="docutils literal"><span class="pre">null</span></code>) or as DELETE (when the value is <code class="docutils literal"><span class="pre">null</span></code>) for that key.
-                        <a class="reference external" href="../javadocs/org/apache/kafka/streams/StreamsBuilder.html#table-java.lang.String(java.lang.String)">(details)</a></p>
+                        <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/StreamsBuilder.html#table-java.lang.String(java.lang.String)">(details)</a></p>
                        <p>In the case of a KTable, the local KTable instance of every application instance will
                             be populated with data from only <strong>a subset</strong> of the partitions of the input topic.  Collectively, across
                             all application instances, all input topic partitions are read and processed.</p>
@@ -178,7 +178,7 @@
                     <td><p class="first">Reads the specified Kafka input topic into a <a class="reference internal" href="../concepts.html#streams-concepts-globalktable"><span class="std std-ref">GlobalKTable</span></a>.  The topic is
                         interpreted as a changelog stream, where records with the same key are interpreted as UPSERT aka INSERT/UPDATE
                         (when the record value is not <code class="docutils literal"><span class="pre">null</span></code>) or as DELETE (when the value is <code class="docutils literal"><span class="pre">null</span></code>) for that key.
-                        <a class="reference external" href="../javadocs/org/apache/kafka/streams/StreamsBuilder.html#globalTable-java.lang.String(java.lang.String)">(details)</a></p>
+                        <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/StreamsBuilder.html#globalTable-java.lang.String(java.lang.String)">(details)</a></p>
                         <p>In the case of a GlobalKTable, the local GlobalKTable instance of every application instance will
                            be populated with data from <strong>all</strong> partitions of the input topic, giving every application
                            instance a full copy of the topic&#8217;s data.</p>
@@ -250,7 +250,7 @@
                         </ul>
                     </td>
                         <td><p class="first">Branch (or split) a <code class="docutils literal"><span class="pre">KStream</span></code> based on the supplied predicates into one or more <code class="docutils literal"><span class="pre">KStream</span></code> instances.
-                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#branch-org.apache.kafka.streams.kstream.Predicate...-">details</a>)</p>
+                            (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#branch-org.apache.kafka.streams.kstream.Predicate...-">details</a>)</p>
                            <p>Predicates are evaluated in order.  A record is placed into one and only one output stream on the first match:
                                if the n-th predicate evaluates to true, the record is placed into the n-th stream.  If no predicate matches,
                                the record is dropped.</p>
@@ -278,8 +278,8 @@
                         </ul>
                     </td>
                         <td><p class="first">Evaluates a boolean function for each element and retains those for which the function returns true.
-                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#filter-org.apache.kafka.streams.kstream.Predicate-">KStream details</a>,
-                            <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KTable.html#filter-org.apache.kafka.streams.kstream.Predicate-">KTable details</a>)</p>
+                            (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#filter-org.apache.kafka.streams.kstream.Predicate-">KStream details</a>,
+                            <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KTable.html#filter-org.apache.kafka.streams.kstream.Predicate-">KTable details</a>)</p>
                             <div class="last highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">stream</span> <span class="o">=</span> <span class="o">...;</span>
 
 <span class="c1">// A filter that selects (keeps) only positive numbers</span>
@@ -305,8 +305,8 @@
                         </ul>
                     </td>
                         <td><p class="first">Evaluates a boolean function for each element and drops those for which the function returns true.
-                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#filterNot-org.apache.kafka.streams.kstream.Predicate-">KStream details</a>,
-                            <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KTable.html#filterNot-org.apache.kafka.streams.kstream.Predicate-">KTable details</a>)</p>
+                            (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#filterNot-org.apache.kafka.streams.kstream.Predicate-">KStream details</a>,
+                            <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KTable.html#filterNot-org.apache.kafka.streams.kstream.Predicate-">KTable details</a>)</p>
                             <div class="last highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">stream</span> <span class="o">=</span> <span class="o">...;</span>
 
 <span class="c1">// An inverse filter that discards any negative numbers or zero</span>
@@ -332,7 +332,7 @@
                     </td>
                         <td><p class="first">Takes one record and produces zero, one, or more records.  You can modify the record keys and values, including
                             their types.
-                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#flatMap-org.apache.kafka.streams.kstream.KeyValueMapper-">details</a>)</p>
+                            (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#flatMap-org.apache.kafka.streams.kstream.KeyValueMapper-">details</a>)</p>
                             <p><strong>Marks the stream for data re-partitioning:</strong>
                                 Applying a grouping or a join after <code class="docutils literal"><span class="pre">flatMap</span></code> will result in re-partitioning of the records.
                                 If possible use <code class="docutils literal"><span class="pre">flatMapValues</span></code> instead, which will not cause data re-partitioning.</p>
@@ -361,7 +361,7 @@
                     </td>
                         <td><p class="first">Takes one record and produces zero, one, or more records, while retaining the key of the original record.
                             You can modify the record values and the value type.
-                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#flatMapValues-org.apache.kafka.streams.kstream.ValueMapper-">details</a>)</p>
+                            (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#flatMapValues-org.apache.kafka.streams.kstream.ValueMapper-">details</a>)</p>
                             <p><code class="docutils literal"><span class="pre">flatMapValues</span></code> is preferable to <code class="docutils literal"><span class="pre">flatMap</span></code> because it will not cause data re-partitioning.  However, you
                                 cannot modify the key or key type like <code class="docutils literal"><span class="pre">flatMap</span></code> does.</p>
                             <div class="last highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Split a sentence into words.</span>
@@ -381,7 +381,7 @@
                         </ul>
                     </td>
                         <td><p class="first"><strong>Terminal operation.</strong>  Performs a stateless action on each record.
-                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#foreach-org.apache.kafka.streams.kstream.ForeachAction-">details</a>)</p>
+                            (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#foreach-org.apache.kafka.streams.kstream.ForeachAction-">details</a>)</p>
                             <p>You would use <code class="docutils literal"><span class="pre">foreach</span></code> to cause <em>side effects</em> based on the input data (similar to <code class="docutils literal"><span class="pre">peek</span></code>) and then <em>stop</em>
                                 <em>further processing</em> of the input data (unlike <code class="docutils literal"><span class="pre">peek</span></code>, which is not a terminal operation).</p>
                             <p><strong>Note on processing guarantees:</strong> Any side effects of an action (such as writing to external systems) are not
@@ -410,7 +410,7 @@
                         </ul>
                     </td>
                         <td><p class="first">Groups the records by the existing key.
-                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#groupByKey--">details</a>)</p>
+                            (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#groupByKey--">details</a>)</p>
                             <p>Grouping is a prerequisite for <a class="reference internal" href="#streams-developer-guide-dsl-aggregating"><span class="std std-ref">aggregating a stream or a table</span></a>
                                 and ensures that data is properly partitioned (&#8220;keyed&#8221;) for subsequent operations.</p>
                             <p><strong>When to set explicit SerDes:</strong>
@@ -455,8 +455,8 @@
                         <td><p class="first">Groups the records by a <em>new</em> key, which may be of a different key type.
                             When grouping a table, you may also specify a new value and value type.
                             <code class="docutils literal"><span class="pre">groupBy</span></code> is a shorthand for <code class="docutils literal"><span class="pre">selectKey(...).groupByKey()</span></code>.
-                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#groupBy-org.apache.kafka.streams.kstream.KeyValueMapper-">KStream details</a>,
-                            <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KTable.html#groupBy-org.apache.kafka.streams.kstream.KeyValueMapper-">KTable details</a>)</p>
+                            (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#groupBy-org.apache.kafka.streams.kstream.KeyValueMapper-">KStream details</a>,
+                            <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KTable.html#groupBy-org.apache.kafka.streams.kstream.KeyValueMapper-">KTable details</a>)</p>
                             <p>Grouping is a prerequisite for <a class="reference internal" href="#streams-developer-guide-dsl-aggregating"><span class="std std-ref">aggregating a stream or a table</span></a>
                                 and ensures that data is properly partitioned (&#8220;keyed&#8221;) for subsequent operations.</p>
                             <p><strong>When to set explicit SerDes:</strong>
@@ -532,7 +532,7 @@
                         </ul>
                     </td>
                         <td><p class="first">Takes one record and produces one record.  You can modify the record key and value, including their types.
-                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#map-org.apache.kafka.streams.kstream.KeyValueMapper-">details</a>)</p>
+                            (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#map-org.apache.kafka.streams.kstream.KeyValueMapper-">details</a>)</p>
                             <p><strong>Marks the stream for data re-partitioning:</strong>
                                 Applying a grouping or a join after <code class="docutils literal"><span class="pre">map</span></code> will result in re-partitioning of the records.
                                 If possible use <code class="docutils literal"><span class="pre">mapValues</span></code> instead, which will not cause data re-partitioning.</p>
@@ -564,8 +564,8 @@
                     </td>
                         <td><p class="first">Takes one record and produces one record, while retaining the key of the original record.
                             You can modify the record value and the value type.
-                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#mapValues-org.apache.kafka.streams.kstream.ValueMapper-">KStream details</a>,
-                            <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KTable.html#mapValues-org.apache.kafka.streams.kstream.ValueMapper-">KTable details</a>)</p>
+                            (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#mapValues-org.apache.kafka.streams.kstream.ValueMapper-">KStream details</a>,
+                            <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KTable.html#mapValues-org.apache.kafka.streams.kstream.ValueMapper-">KTable details</a>)</p>
                             <p><code class="docutils literal"><span class="pre">mapValues</span></code> is preferable to <code class="docutils literal"><span class="pre">map</span></code> because it will not cause data re-partitioning.  However, it does not
                                 allow you to modify the key or key type like <code class="docutils literal"><span class="pre">map</span></code> does.</p>
                             <div class="last highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">stream</span> <span class="o">=</span> <span class="o">...;</span>
@@ -591,7 +591,7 @@
                         </ul>
                     </td>
                         <td><p class="first">Performs a stateless action on each record, and returns an unchanged stream.
-                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#peek-org.apache.kafka.streams.kstream.ForeachAction-">details</a>)</p>
+                            (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#peek-org.apache.kafka.streams.kstream.ForeachAction-">details</a>)</p>
                             <p>You would use <code class="docutils literal"><span class="pre">peek</span></code> to cause <em>side effects</em> based on the input data (similar to <code class="docutils literal"><span class="pre">foreach</span></code>) and <em>continue</em>
                                 <em>processing</em> the input data (unlike <code class="docutils literal"><span class="pre">foreach</span></code>, which is a terminal operation).  <code class="docutils literal"><span class="pre">peek</span></code> returns the input
                                 stream as-is;  if you need to modify the input stream, use <code class="docutils literal"><span class="pre">map</span></code> or <code class="docutils literal"><span class="pre">mapValues</span></code> instead.</p>
@@ -623,7 +623,7 @@
                     </td>
                         <td><p class="first"><strong>Terminal operation.</strong>  Prints the records to <code class="docutils literal"><span class="pre">System.out</span></code>.  See Javadocs for serde and <code class="docutils literal"><span class="pre">toString()</span></code>
                             caveats.
-                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#print--">details</a>)</p>
+                            (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#print--">details</a>)</p>
                             <p>Calling <code class="docutils literal"><span class="pre">print()</span></code> is the same as calling <code class="docutils literal"><span class="pre">foreach((key,</span> <span class="pre">value)</span> <span class="pre">-&gt;</span> <span class="pre">System.out.println(key</span> <span class="pre">+</span> <span class="pre">&quot;,</span> <span class="pre">&quot;</span> <span class="pre">+</span> <span class="pre">value))</span></code></p>
                             <div class="last highlight-java"><div class="highlight"><pre><span></span><span class="n">KStream</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">stream</span> <span class="o">=</span> <span class="o">...;</span>
 <span class="c1">// print to sysout</span>
@@ -641,7 +641,7 @@
                         </ul>
                     </td>
                         <td><p class="first">Assigns a new key &#8211; possibly of a new key type &#8211; to each record.
-                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#selectKey-org.apache.kafka.streams.kstream.KeyValueMapper-">details</a>)</p>
+                            (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#selectKey-org.apache.kafka.streams.kstream.KeyValueMapper-">details</a>)</p>
                             <p>Calling <code class="docutils literal"><span class="pre">selectKey(mapper)</span></code> is the same as calling <code class="docutils literal"><span class="pre">map((key,</span> <span class="pre">value)</span> <span class="pre">-&gt;</span> <span class="pre">mapper(key,</span> <span class="pre">value),</span> <span class="pre">value)</span></code>.</p>
                             <p><strong>Marks the stream for data re-partitioning:</strong>
                                 Applying a grouping or a join after <code class="docutils literal"><span class="pre">selectKey</span></code> will result in re-partitioning of the records.</p>
@@ -669,7 +669,7 @@
                         </ul>
                     </td>
                         <td><p class="first">Get the changelog stream of this table.
-                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KTable.html#toStream--">details</a>)</p>
+                            (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KTable.html#toStream--">details</a>)</p>
                             <div class="last highlight-java"><div class="highlight"><pre><span></span><span class="n">KTable</span><span class="o">&lt;</span><span class="kt">byte</span><span class="o">[],</span> <span class="n">String</span><span class="o">&gt;</span> <span class="n">table</span> <span class="o">=</span> <span class="o">...;</span>
 
 <span class="c1">// Also, a variant of `toStream` exists that allows you</span>
@@ -773,8 +773,8 @@
                             <td><p class="first"><strong>Rolling aggregation.</strong> Aggregates the values of (non-windowed) records by the grouped key.
                                 Aggregating is a generalization of <code class="docutils literal"><span class="pre">reduce</span></code> and allows, for example, the aggregate value to have a different
                                 type than the input values.
-                                (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KGroupedStream.html">KGroupedStream details</a>,
-                                <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KGroupedTable.html">KGroupedTable details</a>)</p>
+                                (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KGroupedStream.html">KGroupedStream details</a>,
+                                <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KGroupedTable.html">KGroupedTable details</a>)</p>
                                 <p>When aggregating a <em>grouped stream</em>, you must provide an initializer (e.g., <code class="docutils literal"><span class="pre">aggValue</span> <span class="pre">=</span> <span class="pre">0</span></code>) and an &#8220;adder&#8221;
                                     aggregator (e.g., <code class="docutils literal"><span class="pre">aggValue</span> <span class="pre">+</span> <span class="pre">curValue</span></code>).  When aggregating a <em>grouped table</em>, you must provide a
                                     &#8220;subtractor&#8221; aggregator (think: <code class="docutils literal"><span class="pre">aggValue</span> <span class="pre">-</span> <span class="pre">oldValue</span></code>).</p>
@@ -876,8 +876,8 @@
                                 Aggregates the values of records, <a class="reference internal" href="#streams-developer-guide-dsl-windowing"><span class="std std-ref">per window</span></a>, by the grouped key.
                                 Aggregating is a generalization of <code class="docutils literal"><span class="pre">reduce</span></code> and allows, for example, the aggregate value to have a different
                                 type than the input values.
-                                (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/TimeWindowedKStream.html">TimeWindowedKStream details</a>,
-                                <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/SessionWindowedKStream.html">SessionWindowedKStream details</a>)</p>
+                                (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/TimeWindowedKStream.html">TimeWindowedKStream details</a>,
+                                <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/SessionWindowedKStream.html">SessionWindowedKStream details</a>)</p>
                                 <p>You must provide an initializer (e.g.,  <code class="docutils literal"><span class="pre">aggValue</span> <span class="pre">=</span> <span class="pre">0</span></code>), &#8220;adder&#8221; aggregator (e.g.,  <code class="docutils literal"><span class="pre">aggValue</span> <span class="pre">+</span> <span class="pre">curValue</span></code>),
                                     and a window.  When windowing based on sessions, you must additionally provide a &#8220;session merger&#8221; aggregator
                                     (e.g.,  <code class="docutils literal"><span class="pre">mergedAggValue</span> <span class="pre">=</span> <span class="pre">leftAggValue</span> <span class="pre">+</span> <span class="pre">rightAggValue</span></code>).</p>
@@ -971,8 +971,8 @@
                             </ul>
                         </td>
                             <td><p class="first"><strong>Rolling aggregation.</strong> Counts the number of records by the grouped key.
-                                (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KGroupedStream.html">KGroupedStream details</a>,
-                                <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KGroupedTable.html">KGroupedTable details</a>)</p>
+                                (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KGroupedStream.html">KGroupedStream details</a>,
+                                <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KGroupedTable.html">KGroupedTable details</a>)</p>
                                 <p>Several variants of <code class="docutils literal"><span class="pre">count</span></code> exist, see Javadocs for details.</p>
                                 <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">KGroupedStream</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">groupedStream</span> <span class="o">=</span> <span class="o">...;</span>
 <span class="n">KGroupedTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">groupedTable</span> <span class="o">=</span> <span class="o">...;</span>
@@ -1002,8 +1002,8 @@
                         </td>
                             <td><p class="first"><strong>Windowed aggregation.</strong>
                                 Counts the number of records, <a class="reference internal" href="#streams-developer-guide-dsl-windowing"><span class="std std-ref">per window</span></a>, by the grouped key.
-                                (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/TimeWindowedKStream.html">TimeWindowedKStream details</a>,
-                                <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/SessionWindowedKStream.html">SessionWindowedKStream details</a>)</p>
+                                (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/TimeWindowedKStream.html">TimeWindowedKStream details</a>,
+                                <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/SessionWindowedKStream.html">SessionWindowedKStream details</a>)</p>
                                 <p>The windowed <code class="docutils literal"><span class="pre">count</span></code> turns a <code class="docutils literal"><span class="pre">TimeWindowedKStream&lt;K,</span> <span class="pre">V&gt;</span></code> or <code class="docutils literal"><span class="pre">SessionWindowedKStream&lt;K,</span> <span class="pre">V&gt;</span></code>
                                     into a windowed <code class="docutils literal"><span class="pre">KTable&lt;Windowed&lt;K&gt;,</span> <span class="pre">V&gt;</span></code>.</p>
                                 <p>Several variants of <code class="docutils literal"><span class="pre">count</span></code> exist, see Javadocs for details.</p>
@@ -1036,8 +1036,8 @@
                             <td><p class="first"><strong>Rolling aggregation.</strong> Combines the values of (non-windowed) records by the grouped key.
                                 The current record value is combined with the last reduced value, and a new reduced value is returned.
                                 The result value type cannot be changed, unlike <code class="docutils literal"><span class="pre">aggregate</span></code>.
-                                (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KGroupedStream.html">KGroupedStream details</a>,
-                                <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KGroupedTable.html">KGroupedTable details</a>)</p>
+                                (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KGroupedStream.html">KGroupedStream details</a>,
+                                <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KGroupedTable.html">KGroupedTable details</a>)</p>
                                 <p>When reducing a <em>grouped stream</em>, you must provide an &#8220;adder&#8221; reducer (e.g.,  <code class="docutils literal"><span class="pre">aggValue</span> <span class="pre">+</span> <span class="pre">curValue</span></code>).
                                     When reducing a <em>grouped table</em>, you must additionally provide a &#8220;subtractor&#8221; reducer (e.g.,
                                     <code class="docutils literal"><span class="pre">aggValue</span> <span class="pre">-</span> <span class="pre">oldValue</span></code>).</p>
@@ -1120,8 +1120,8 @@
                                 The current record value is combined with the last reduced value, and a new reduced value is returned.
                                 Records with <code class="docutils literal"><span class="pre">null</span></code> key or value are ignored.
                                 The result value type cannot be changed, unlike <code class="docutils literal"><span class="pre">aggregate</span></code>.
-                                (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/TimeWindowedKStream.html">TimeWindowedKStream details</a>,
-                                <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/SessionWindowedKStream.html">SessionWindowedKStream details</a>)</p>
+                                (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/TimeWindowedKStream.html">TimeWindowedKStream details</a>,
+                                <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/SessionWindowedKStream.html">SessionWindowedKStream details</a>)</p>
                                 <p>The windowed <code class="docutils literal"><span class="pre">reduce</span></code> turns a turns a <code class="docutils literal"><span class="pre">TimeWindowedKStream&lt;K,</span> <span class="pre">V&gt;</span></code> or a <code class="docutils literal"><span class="pre">SessionWindowedKStream&lt;K,</span> <span class="pre">V&gt;</span></code>
                                     into a windowed <code class="docutils literal"><span class="pre">KTable&lt;Windowed&lt;K&gt;,</span> <span class="pre">V&gt;</span></code>.</p>
                                 <p>Several variants of <code class="docutils literal"><span class="pre">reduce</span></code> exist, see Javadocs for details.</p>
@@ -1646,7 +1646,7 @@
                             </td>
                                 <td><p class="first">Performs an INNER JOIN of this stream with another stream.
                                     Even though this operation is windowed, the joined stream will be of type <code class="docutils literal"><span class="pre">KStream&lt;K,</span> <span class="pre">...&gt;</span></code> rather than <code class="docutils literal"><span class="pre">KStream&lt;Windowed&lt;K&gt;,</span> <span class="pre">...&gt;</span></code>.
-                                    <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#join-org.apache.kafka.streams.kstream.KStream-org.apache.kafka.streams.kstream.ValueJoiner-org.apache.kafka.streams.kstream.JoinWindows-">(details)</a></p>
+                                    <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#join-org.apache.kafka.streams.kstream.KStream-org.apache.kafka.streams.kstream.ValueJoiner-org.apache.kafka.streams.kstream.JoinWindows-">(details)</a></p>
                                     <p><strong>Data must be co-partitioned</strong>: The input data for both sides must be <a class="reference internal" href="#streams-developer-guide-dsl-joins-co-partitioning"><span class="std std-ref">co-partitioned</span></a>.</p>
                                     <p><strong>Causes data re-partitioning of a stream if and only if the stream was marked for re-partitioning (if both are marked, both are re-partitioned).</strong></p>
                                     <p>Several variants of <code class="docutils literal"><span class="pre">join</span></code> exists, see the Javadocs for details.</p>
@@ -1705,7 +1705,7 @@
                             </td>
                                 <td><p class="first">Performs a LEFT JOIN of this stream with another stream.
                                     Even though this operation is windowed, the joined stream will be of type <code class="docutils literal"><span class="pre">KStream&lt;K,</span> <span class="pre">...&gt;</span></code> rather than <code class="docutils literal"><span class="pre">KStream&lt;Windowed&lt;K&gt;,</span> <span class="pre">...&gt;</span></code>.
-                                    <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#leftJoin-org.apache.kafka.streams.kstream.KStream-org.apache.kafka.streams.kstream.ValueJoiner-org.apache.kafka.streams.kstream.JoinWindows-">(details)</a></p>
+                                    <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#leftJoin-org.apache.kafka.streams.kstream.KStream-org.apache.kafka.streams.kstream.ValueJoiner-org.apache.kafka.streams.kstream.JoinWindows-">(details)</a></p>
                                     <p><strong>Data must be co-partitioned</strong>: The input data for both sides must be <a class="reference internal" href="#streams-developer-guide-dsl-joins-co-partitioning"><span class="std std-ref">co-partitioned</span></a>.</p>
                                     <p><strong>Causes data re-partitioning of a stream if and only if the stream was marked for re-partitioning (if both are marked, both are re-partitioned).</strong></p>
                                     <p>Several variants of <code class="docutils literal"><span class="pre">leftJoin</span></code> exists, see the Javadocs for details.</p>
@@ -1767,7 +1767,7 @@
                             </td>
                                 <td><p class="first">Performs an OUTER JOIN of this stream with another stream.
                                     Even though this operation is windowed, the joined stream will be of type <code class="docutils literal"><span class="pre">KStream&lt;K,</span> <span class="pre">...&gt;</span></code> rather than <code class="docutils literal"><span class="pre">KStream&lt;Windowed&lt;K&gt;,</span> <span class="pre">...&gt;</span></code>.
-                                    <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#outerJoin-org.apache.kafka.streams.kstream.KStream-org.apache.kafka.streams.kstream.ValueJoiner-org.apache.kafka.streams.kstream.JoinWindows-">(details)</a></p>
+                                    <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#outerJoin-org.apache.kafka.streams.kstream.KStream-org.apache.kafka.streams.kstream.ValueJoiner-org.apache.kafka.streams.kstream.JoinWindows-">(details)</a></p>
                                     <p><strong>Data must be co-partitioned</strong>: The input data for both sides must be <a class="reference internal" href="#streams-developer-guide-dsl-joins-co-partitioning"><span class="std std-ref">co-partitioned</span></a>.</p>
                                     <p><strong>Causes data re-partitioning of a stream if and only if the stream was marked for re-partitioning (if both are marked, both are re-partitioned).</strong></p>
                                     <p>Several variants of <code class="docutils literal"><span class="pre">outerJoin</span></code> exists, see the Javadocs for details.</p>
@@ -1828,7 +1828,7 @@
                             The semantics of the various stream-stream join variants are explained below.
                             To improve the readability of the table, assume that (1) all records have the same key (and thus the key in the table is omitted), (2) all records belong to a single join window, and (3) all records are processed in timestamp order.
                             The columns INNER JOIN, LEFT JOIN, and OUTER JOIN denote what is passed as arguments to the user-supplied
-                            <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/ValueJoiner.html">ValueJoiner</a> for the <code class="docutils literal"><span class="pre">join</span></code>, <code class="docutils literal"><span class="pre">leftJoin</span></code>, and
+                            <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/ValueJoiner.html">ValueJoiner</a> for the <code class="docutils literal"><span class="pre">join</span></code>, <code class="docutils literal"><span class="pre">leftJoin</span></code>, and
                             <code class="docutils literal"><span class="pre">outerJoin</span></code> methods, respectively, whenever a new input record is received on either side of the join.  An empty table
                             cell denotes that the <code class="docutils literal"><span class="pre">ValueJoiner</span></code> is not called at all.</p>
                         <table border="1" class="docutils">
@@ -1994,7 +1994,7 @@
                             </td>
                                 <td><p class="first">Performs an INNER JOIN of this table with another table.
                                     The result is an ever-updating KTable that represents the &#8220;current&#8221; result of the join.
-                                    <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KTable.html#join-org.apache.kafka.streams.kstream.KTable-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
+                                    <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KTable.html#join-org.apache.kafka.streams.kstream.KTable-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
                                     <p><strong>Data must be co-partitioned</strong>: The input data for both sides must be <a class="reference internal" href="#streams-developer-guide-dsl-joins-co-partitioning"><span class="std std-ref">co-partitioned</span></a>.</p>
                                     <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">left</span> <span class="o">=</span> <span class="o">...;</span>
 <span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Double</span><span class="o">&gt;</span> <span class="n">right</span> <span class="o">=</span> <span class="o">...;</span>
@@ -2040,7 +2040,7 @@
                                 </ul>
                             </td>
                                 <td><p class="first">Performs a LEFT JOIN of this table with another table.
-                                    <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KTable.html#leftJoin-org.apache.kafka.streams.kstream.KTable-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
+                                    <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KTable.html#leftJoin-org.apache.kafka.streams.kstream.KTable-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
                                     <p><strong>Data must be co-partitioned</strong>: The input data for both sides must be <a class="reference internal" href="#streams-developer-guide-dsl-joins-co-partitioning"><span class="std std-ref">co-partitioned</span></a>.</p>
                                     <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">left</span> <span class="o">=</span> <span class="o">...;</span>
 <span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Double</span><span class="o">&gt;</span> <span class="n">right</span> <span class="o">=</span> <span class="o">...;</span>
@@ -2089,7 +2089,7 @@
                                 </ul>
                             </td>
                                 <td><p class="first">Performs an OUTER JOIN of this table with another table.
-                                    <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KTable.html#outerJoin-org.apache.kafka.streams.kstream.KTable-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
+                                    <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KTable.html#outerJoin-org.apache.kafka.streams.kstream.KTable-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
                                     <p><strong>Data must be co-partitioned</strong>: The input data for both sides must be <a class="reference internal" href="#streams-developer-guide-dsl-joins-co-partitioning"><span class="std std-ref">co-partitioned</span></a>.</p>
                                     <div class="highlight-java"><div class="highlight"><pre><span></span><span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Long</span><span class="o">&gt;</span> <span class="n">left</span> <span class="o">=</span> <span class="o">...;</span>
 <span class="n">KTable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Double</span><span class="o">&gt;</span> <span class="n">right</span> <span class="o">=</span> <span class="o">...;</span>
@@ -2138,7 +2138,7 @@
                             The semantics of the various table-table join variants are explained below.
                             To improve the readability of the table, you can assume that (1) all records have the same key (and thus the key in the table is omitted) and that (2) all records are processed in timestamp order.
                             The columns INNER JOIN, LEFT JOIN, and OUTER JOIN denote what is passed as arguments to the user-supplied
-                            <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/ValueJoiner.html">ValueJoiner</a> for the <code class="docutils literal"><span class="pre">join</span></code>, <code class="docutils literal"><span class="pre">leftJoin</span></code>, and
+                            <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/ValueJoiner.html">ValueJoiner</a> for the <code class="docutils literal"><span class="pre">join</span></code>, <code class="docutils literal"><span class="pre">leftJoin</span></code>, and
                             <code class="docutils literal"><span class="pre">outerJoin</span></code> methods, respectively, whenever a new input record is received on either side of the join.  An empty table
                             cell denotes that the <code class="docutils literal"><span class="pre">ValueJoiner</span></code> is not called at all.</p>
                         <table border="1" class="docutils">
@@ -2302,7 +2302,7 @@
                                 </ul>
                             </td>
                                 <td><p class="first">Performs an INNER JOIN of this stream with the table, effectively doing a table lookup.
-                                    <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#join-org.apache.kafka.streams.kstream.KTable-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
+                                    <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#join-org.apache.kafka.streams.kstream.KTable-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
                                     <p><strong>Data must be co-partitioned</strong>: The input data for both sides must be <a class="reference internal" href="#streams-developer-guide-dsl-joins-co-partitioning"><span class="std std-ref">co-partitioned</span></a>.</p>
                                     <p><strong>Causes data re-partitioning of the stream if and only if the stream was marked for re-partitioning.</strong></p>
                                     <p>Several variants of <code class="docutils literal"><span class="pre">join</span></code> exist; see the Javadocs for details.</p>
@@ -2355,7 +2355,7 @@
                                 </ul>
                             </td>
                                 <td><p class="first">Performs a LEFT JOIN of this stream with the table, effectively doing a table lookup.
-                                    <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#leftJoin-org.apache.kafka.streams.kstream.KTable-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
+                                    <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#leftJoin-org.apache.kafka.streams.kstream.KTable-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
                                     <p><strong>Data must be co-partitioned</strong>: The input data for both sides must be <a class="reference internal" href="#streams-developer-guide-dsl-joins-co-partitioning"><span class="std std-ref">co-partitioned</span></a>.</p>
                                     <p><strong>Causes data re-partitioning of the stream if and only if the stream was marked for re-partitioning.</strong></p>
                                     <p>Several variants of <code class="docutils literal"><span class="pre">leftJoin</span></code> exist; see the Javadocs for details.</p>
@@ -2411,7 +2411,7 @@
                             To improve the readability of the table we assume that (1) all records have the same key (and thus we omit the key in
                             the table) and that (2) all records are processed in timestamp order.
                             The columns INNER JOIN and LEFT JOIN denote what is passed as arguments to the user-supplied
-                            <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/ValueJoiner.html">ValueJoiner</a> for the <code class="docutils literal"><span class="pre">join</span></code> and <code class="docutils literal"><span class="pre">leftJoin</span></code>
+                            <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/ValueJoiner.html">ValueJoiner</a> for the <code class="docutils literal"><span class="pre">join</span></code> and <code class="docutils literal"><span class="pre">leftJoin</span></code>
                             methods, respectively, whenever a new input record is received on either side of the join.  An empty table
                             cell denotes that the <code class="docutils literal"><span class="pre">ValueJoiner</span></code> is not called at all.</p>
                         <table border="1" class="docutils">
@@ -2573,7 +2573,7 @@
                                 </ul>
                             </td>
                                 <td><p class="first">Performs an INNER JOIN of this stream with the global table, effectively doing a table lookup.
-                                    <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#join-org.apache.kafka.streams.kstream.GlobalKTable-org.apache.kafka.streams.kstream.KeyValueMapper-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
+                                    <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#join-org.apache.kafka.streams.kstream.GlobalKTable-org.apache.kafka.streams.kstream.KeyValueMapper-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
                                     <p>The <code class="docutils literal"><span class="pre">GlobalKTable</span></code> is fully bootstrapped upon (re)start of a <code class="docutils literal"><span class="pre">KafkaStreams</span></code> instance, which means the table is fully populated with all the data in the underlying topic that is
                                         available at the time of startup.  The actual data processing begins only once bootstrapping has completed.</p>
                                     <p><strong>Causes data re-partitioning of the stream if and only if the stream was marked for re-partitioning.</strong></p>
@@ -2627,7 +2627,7 @@
                                 </ul>
                             </td>
                                 <td><p class="first">Performs a LEFT JOIN of this stream with the global table, effectively doing a table lookup.
-                                    <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#leftJoin-org.apache.kafka.streams.kstream.GlobalKTable-org.apache.kafka.streams.kstream.KeyValueMapper-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
+                                    <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#leftJoin-org.apache.kafka.streams.kstream.GlobalKTable-org.apache.kafka.streams.kstream.KeyValueMapper-org.apache.kafka.streams.kstream.ValueJoiner-">(details)</a></p>
                                     <p>The <code class="docutils literal"><span class="pre">GlobalKTable</span></code> is fully bootstrapped upon (re)start of a <code class="docutils literal"><span class="pre">KafkaStreams</span></code> instance, which means the table is fully populated with all the data in the underlying topic that is
                                         available at the time of startup.  The actual data processing begins only once bootstrapping has completed.</p>
                                     <p><strong>Causes data re-partitioning of the stream if and only if the stream was marked for re-partitioning.</strong></p>
@@ -2903,11 +2903,11 @@ t=5 (blue), which lead to a merge of sessions and an extension of a session, res
                     </td>
                         <td><p class="first"><strong>Terminal operation.</strong>  Applies a <code class="docutils literal"><span class="pre">Processor</span></code> to each record.
                             <code class="docutils literal"><span class="pre">process()</span></code> allows you to leverage the <a class="reference internal" href="processor-api.html#streams-developer-guide-processor-api"><span class="std std-ref">Processor API</span></a> from the DSL.
-                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#process-org.apache.kafka.streams.processor.ProcessorSupplier-java.lang.String...-">details</a>)</p>
+                            (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#process-org.apache.kafka.streams.processor.ProcessorSupplier-java.lang.String...-">details</a>)</p>
                             <p>This is essentially equivalent to adding the <code class="docutils literal"><span class="pre">Processor</span></code> via <code class="docutils literal"><span class="pre">Topology#addProcessor()</span></code> to your
                                 <a class="reference internal" href="../concepts.html#streams-concepts-processor-topology"><span class="std std-ref">processor topology</span></a>.</p>
                             <p class="last">An example is available in the
-                                <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#process-org.apache.kafka.streams.processor.ProcessorSupplier-java.lang.String...-">javadocs</a>.</p>
+                                <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#process-org.apache.kafka.streams.processor.ProcessorSupplier-java.lang.String...-">javadocs</a>.</p>
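                             <p>As an illustrative sketch (the stream, names, and types here are hypothetical, not taken from the javadocs), a side-effecting processor that counts the records it sees:</p>
                             <div class="highlight-java"><div class="highlight"><pre>
 KStream&lt;String, String&gt; stream = ...;
 
 stream.process(() -&gt; new Processor&lt;String, String&gt;() {
     private long seen = 0;
 
     @Override
     public void init(final ProcessorContext context) {}
 
     @Override
     public void process(final String key, final String value) {
         seen += 1;  // side effects only: process() is terminal and forwards nothing downstream
     }
 
     @Override
     public void punctuate(final long timestamp) {}  // deprecated in 1.0; unused here
 
     @Override
     public void close() {}
 });
 </pre></div></div>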
                         </td>
                     </tr>
                     <tr class="row-odd"><td><p class="first"><strong>Transform</strong></p>
@@ -2917,7 +2917,7 @@ t=5 (blue), which lead to a merge of sessions and an extension of a session, res
                     </td>
                         <td><p class="first">Applies a <code class="docutils literal"><span class="pre">Transformer</span></code> to each record.
                             <code class="docutils literal"><span class="pre">transform()</span></code> allows you to leverage the <a class="reference internal" href="processor-api.html#streams-developer-guide-processor-api"><span class="std std-ref">Processor API</span></a> from the DSL.
-                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#transform-org.apache.kafka.streams.kstream.TransformerSupplier-java.lang.String...-">details</a>)</p>
+                            (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#transform-org.apache.kafka.streams.kstream.TransformerSupplier-java.lang.String...-">details</a>)</p>
                             <p>Each input record is transformed into zero, one, or more output records (similar to the stateless <code class="docutils literal"><span class="pre">flatMap</span></code>).
                                 The <code class="docutils literal"><span class="pre">Transformer</span></code> must return <code class="docutils literal"><span class="pre">null</span></code> for zero output.
                                 You can modify the record&#8217;s key and value, including their types.</p>
@@ -2927,7 +2927,7 @@ t=5 (blue), which lead to a merge of sessions and an extension of a session, res
                             <p><code class="docutils literal"><span class="pre">transform</span></code> is essentially equivalent to adding the <code class="docutils literal"><span class="pre">Transformer</span></code> via <code class="docutils literal"><span class="pre">Topology#addProcessor()</span></code> to your
                                 <a class="reference internal" href="../concepts.html#streams-concepts-processor-topology"><span class="std std-ref">processor topology</span></a>.</p>
                             <p class="last">An example is available in the
-                                <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#transform-org.apache.kafka.streams.kstream.TransformerSupplier-java.lang.String...-">javadocs</a>.
+                                <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#transform-org.apache.kafka.streams.kstream.TransformerSupplier-java.lang.String...-">javadocs</a>.
                                </p>
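                             <p>A minimal sketch (the stream and types here are hypothetical, not taken from the javadocs) that changes both the key and the value type:</p>
                             <div class="highlight-java"><div class="highlight"><pre>
 KStream&lt;byte[], String&gt; input = ...;
 
 KStream&lt;String, Integer&gt; output = input.transform(() -&gt;
     new Transformer&lt;byte[], String, KeyValue&lt;String, Integer&gt;&gt;() {
         @Override
         public void init(final ProcessorContext context) {}
 
         @Override
         public KeyValue&lt;String, Integer&gt; transform(final byte[] key, final String value) {
             // one output record with a new key and value type; returning null would emit nothing
             return KeyValue.pair(value, value.length());
         }
 
         @Override
         public KeyValue&lt;String, Integer&gt; punctuate(final long timestamp) {
             return null;  // deprecated in 1.0; unused here
         }
 
         @Override
         public void close() {}
     });
 </pre></div></div>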
                         </td>
                     </tr>
@@ -2938,14 +2938,14 @@ t=5 (blue), which lead to a merge of sessions and an extension of a session, res
                     </td>
                         <td><p class="first">Applies a <code class="docutils literal"><span class="pre">ValueTransformer</span></code> to each record, while retaining the key of the original record.
                             <code class="docutils literal"><span class="pre">transformValues()</span></code> allows you to leverage the <a class="reference internal" href="processor-api.html#streams-developer-guide-processor-api"><span class="std std-ref">Processor API</span></a> from the DSL.
-                            (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#transformValues-org.apache.kafka.streams.kstream.ValueTransformerSupplier-java.lang.String...-">details</a>)</p>
+                            (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#transformValues-org.apache.kafka.streams.kstream.ValueTransformerSupplier-java.lang.String...-">details</a>)</p>
                             <p>Each input record is transformed into exactly one output record (zero output records or multiple output records are not possible).
                                 The <code class="docutils literal"><span class="pre">ValueTransformer</span></code> may return <code class="docutils literal"><span class="pre">null</span></code> as the new value for a record.</p>
                             <p><code class="docutils literal"><span class="pre">transformValues</span></code> is preferable to <code class="docutils literal"><span class="pre">transform</span></code> because it will not cause data re-partitioning.</p>
                             <p><code class="docutils literal"><span class="pre">transformValues</span></code> is essentially equivalent to adding the <code class="docutils literal"><span class="pre">ValueTransformer</span></code> via <code class="docutils literal"><span class="pre">Topology#addProcessor()</span></code> to your
                                 <a class="reference internal" href="../concepts.html#streams-concepts-processor-topology"><span class="std std-ref">processor topology</span></a>.</p>
                             <p class="last">An example is available in the
-                                <a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#transformValues-org.apache.kafka.streams.kstream.ValueTransformerSupplier-java.lang.String...-">javadocs</a>.</p>
+                                <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#transformValues-org.apache.kafka.streams.kstream.ValueTransformerSupplier-java.lang.String...-">javadocs</a>.</p>
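                             <p>A minimal sketch (the stream and types here are hypothetical, not taken from the javadocs); only the value is changed, so no re-partitioning is triggered:</p>
                             <div class="highlight-java"><div class="highlight"><pre>
 KStream&lt;String, String&gt; input = ...;
 
 KStream&lt;String, Long&gt; lengths = input.transformValues(() -&gt;
     new ValueTransformer&lt;String, Long&gt;() {
         @Override
         public void init(final ProcessorContext context) {}
 
         @Override
         public Long transform(final String value) {
             return (long) value.length();  // exactly one output value per input record
         }
 
         @Override
         public Long punctuate(final long timestamp) {
             return null;  // deprecated in 1.0; unused here
         }
 
         @Override
         public void close() {}
     });
 </pre></div></div>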
                         </td>
                     </tr>
                     </tbody>
@@ -3051,7 +3051,7 @@ t=5 (blue), which lead to a merge of sessions and an extension of a session, res
                     </ul>
                 </td>
                     <td><p class="first"><strong>Terminal operation.</strong>  Write the records to a Kafka topic.
-                        (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#to(java.lang.String)">KStream details</a>)</p>
+                        (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#to(java.lang.String)">KStream details</a>)</p>
                         <p>When to provide serdes explicitly:</p>
                         <ul class="simple">
                             <li>If you do not specify SerDes explicitly, the default SerDes from the
@@ -3098,7 +3098,7 @@ t=5 (blue), which lead to a merge of sessions and an extension of a session, res
                 </td>
                     <td><p class="first">Write the records to a Kafka topic and create a new stream/table from that topic.
                        Essentially a shorthand for <code class="docutils literal"><span class="pre">KStream#to()</span></code> followed by <code class="docutils literal"><span class="pre">StreamsBuilder#stream()</span></code>; the same applies to tables.
-                        (<a class="reference external" href="../javadocs/org/apache/kafka/streams/kstream/KStream.html#through(java.lang.String)">KStream details</a>)</p>
+                        (<a class="reference external" href="../../../javadoc/org/apache/kafka/streams/kstream/KStream.html#through(java.lang.String)">KStream details</a>)</p>
                         <p>When to provide SerDes explicitly:</p>
                         <ul class="simple">
                             <li>If you do not specify SerDes explicitly, the default SerDes from the
diff --git a/docs/streams/developer-guide/interactive-queries.html b/docs/streams/developer-guide/interactive-queries.html
index a61c288..3c3c20a 100644
--- a/docs/streams/developer-guide/interactive-queries.html
+++ b/docs/streams/developer-guide/interactive-queries.html
@@ -371,7 +371,7 @@ interactive queries</span></p>
                 <p>To enable remote state store discovery in a distributed Kafka Streams application, you must set the <a class="reference internal" href="config-streams.html#streams-developer-guide-required-configs"><span class="std std-ref">configuration property</span></a> in <code class="docutils literal"><span class="pre">StreamsConfig</span></code>.
                     The <code class="docutils literal"><span class="pre">application.server</span></code> property defines a unique <code class="docutils literal"><span class="pre">host:port</span></code> pair that points to the RPC endpoint of the respective instance of a Kafka Streams application.
                     The value of this configuration property will vary across the instances of your application.
-                    When this property is set, Kafka Streams will keep track of the RPC endpoint information for every instance of an application, its state stores, and assigned stream partitions through instances of <a class="reference external" href="../javadocs/org/apache/kafka/streams/state/StreamsMetadata.html">StreamsMetadata</a>.</p>
+                    When this property is set, Kafka Streams will keep track of the RPC endpoint information for every instance of an application, its state stores, and assigned stream partitions through instances of <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/state/StreamsMetadata.html">StreamsMetadata</a>.</p>
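                 <p>As a sketch (the endpoint value and surrounding setup are illustrative; the other required configs are assumed to be set as usual), each instance advertises its own endpoint:</p>
                 <div class="highlight-java"><div class="highlight"><pre>
 Properties props = new Properties();
 // ... application.id, bootstrap.servers, and the other required configs ...
 props.put(StreamsConfig.APPLICATION_SERVER_CONFIG, "host1.example.com:4460");  // unique per instance
 
 KafkaStreams streams = new KafkaStreams(topology, props);  // topology built elsewhere
 </pre></div></div>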
                 <div class="admonition tip">
                     <p><b>Tip</b></p>
                     <p class="last">Consider leveraging the exposed RPC endpoints of your application for further functionality, such as
@@ -417,7 +417,7 @@ interactive queries</span></p>
             </div>
             <div class="section" id="discovering-and-accessing-application-instances-and-their-local-state-stores">
                 <span id="streams-developer-guide-interactive-queries-discover-app-instances-and-stores"></span><h3><a class="toc-backref" href="#id10">Discovering and accessing application instances and their local state stores</a><a class="headerlink" href="#discovering-and-accessing-application-instances-and-their-local-state-stores" title="Permalink to this headline"></a></h3>
-                <p>The following methods return <a class="reference external" href="../javadocs/org/apache/kafka/streams/state/StreamsMetadata.html">StreamsMetadata</a> objects, which provide meta-information about application instances such as their RPC endpoint and locally available state stores.</p>
+                <p>The following methods return <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/state/StreamsMetadata.html">StreamsMetadata</a> objects, which provide meta-information about application instances such as their RPC endpoint and locally available state stores.</p>
                 <ul class="simple">
                     <li><code class="docutils literal"><span class="pre">KafkaStreams#allMetadata()</span></code>: find all instances of this application</li>
                     <li><code class="docutils literal"><span class="pre">KafkaStreams#allMetadataForStore(String</span> <span class="pre">storeName)</span></code>: find those applications instances that manage local instances of the state store &#8220;storeName&#8221;</li>
@@ -426,7 +426,7 @@ interactive queries</span></p>
                 </ul>
                 <div class="admonition attention">
                     <p class="first admonition-title">Attention</p>
-                    <p class="last">If <code class="docutils literal"><span class="pre">application.server</span></code> is not configured for an application instance, then the above methods will not find any <a class="reference external" href="../javadocs/org/apache/kafka/streams/state/StreamsMetadata.html">StreamsMetadata</a> for it.</p>
+                    <p class="last">If <code class="docutils literal"><span class="pre">application.server</span></code> is not configured for an application instance, then the above methods will not find any <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/state/StreamsMetadata.html">StreamsMetadata</a> for it.</p>
                 </div>
                 <p>For example, we can now find the <code class="docutils literal"><span class="pre">StreamsMetadata</span></code> for the state store named &#8220;word-count&#8221; that we defined in the
                     code example shown in the previous section:</p>
diff --git a/docs/streams/developer-guide/processor-api.html b/docs/streams/developer-guide/processor-api.html
index ecc5388..6719ad1 100644
--- a/docs/streams/developer-guide/processor-api.html
+++ b/docs/streams/developer-guide/processor-api.html
@@ -62,7 +62,7 @@
                     You can combine the convenience of the DSL with the power and flexibility of the Processor API as described in the
                     section <a class="reference internal" href="dsl-api.html#streams-developer-guide-dsl-process"><span class="std std-ref">Applying processors and transformers (Processor API integration)</span></a>.</p>
             </div>
-            <p>For a complete list of available API functionality, see the <a class="reference internal" href="../javadocs.html#streams-javadocs"><span class="std std-ref">Kafka Streams API docs</span></a>.</p>
+            <p>For a complete list of available API functionality, see the <a href="../../../javadoc/org/apache/kafka/streams/package-summary.html">Streams</a> API docs.</p>
         </div>
         <div class="section" id="defining-a-stream-processor">
             <span id="streams-developer-guide-stream-processor"></span><h2><a class="toc-backref" href="#id2">Defining a Stream Processor</a><a class="headerlink" href="#defining-a-stream-processor" title="Permalink to this headline"></a></h2>
@@ -204,7 +204,7 @@
                                 space.</li>
                             <li>RocksDB settings can be fine-tuned, see
                                 <a class="reference internal" href="config-streams.html#streams-developer-guide-rocksdb-config"><span class="std std-ref">RocksDB configuration</span></a>.</li>
-                            <li>Available <a class="reference external" href="../javadocs/org/apache/kafka/streams/state/Stores.PersistentKeyValueFactory.html">store variants</a>:
+                            <li>Available <a class="reference external" href="../../../javadoc/org/apache/kafka/streams/state/Stores.PersistentKeyValueFactory.html">store variants</a>:
                                 time window key-value store, session window key-value store.</li>
                         </ul>
                             <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Creating a persistent key-value store:</span>
diff --git a/docs/upgrade.html b/docs/upgrade.html
index ad665ad..128cedb 100644
--- a/docs/upgrade.html
+++ b/docs/upgrade.html
@@ -19,6 +19,77 @@
 
 <script id="upgrade-template" type="text/x-handlebars-template">
 
+<h4><a id="upgrade_1_1_0" href="#upgrade_1_1_0">Upgrading from 0.8.x, 0.9.x, 0.10.0.x, 0.10.1.x, 0.10.2.x, 0.11.0.x or 1.0.x to 1.1.x</a></h4>
+<p>Kafka 1.1.0 introduces wire protocol changes. By following the recommended rolling upgrade plan below,
+    you guarantee no downtime during the upgrade. However, please review the <a href="#upgrade_110_notable">notable changes in 1.1.0</a> before upgrading.
+</p>
+
+<p><b>For a rolling upgrade:</b></p>
+
+<ol>
+    <li> Update server.properties on all brokers and add the following properties (a concrete example follows this list). CURRENT_KAFKA_VERSION refers to the version you
+        are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use. If you have previously
+        overridden the message format version, you should keep its current value. Alternatively, if you are upgrading from a version prior
+        to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to match CURRENT_KAFKA_VERSION.
+        <ul>
+            <li>inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 0.8.2, 0.9.0, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 1.0).</li>
+            <li>log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION  (See <a href="#upgrade_10_performance_impact">potential performance impact
+                following the upgrade</a> for the details on what this configuration does.)</li>
+        </ul>
+        If you are upgrading from 0.11.0.x and you have not overridden the message format, then you only need to override
+        the inter-broker protocol version.
+        <ul>
+            <li>inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 0.8.2, 0.9.0, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 1.0).</li>
+        </ul>
+    </li>
+    <li> Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. </li>
+    <li> Once the entire cluster is upgraded, bump the protocol version by editing <code>inter.broker.protocol.version</code> and setting it to 1.1.</li>
+    <li> Restart the brokers one by one for the new protocol version to take effect. </li>
+    <li> If you have overridden the message format version as instructed above, then you need to do one more rolling restart to
+        upgrade it to its latest version. Once all (or most) consumers have been upgraded to 0.11.0 or later,
+        change log.message.format.version to 1.1 on each broker and restart them one by one. Note that the older Scala consumer
+        does not support the new message format introduced in 0.11, so to avoid the performance cost of down-conversion (or to
+        take advantage of <a href="#upgrade_11_exactly_once_semantics">exactly once semantics</a>), the newer Java consumer must be used.</li>
+</ol>
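+
+<p>For example, a broker upgrading from 0.10.2 that has overridden the message format (the version numbers here are illustrative) would carry the following in <code>server.properties</code> during the first step:</p>
+<pre>
+inter.broker.protocol.version=0.10.2
+log.message.format.version=0.10.2
+</pre>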
+
+<p><b>Additional Upgrade Notes:</b></p>
+
+<ol>
+    <li>If you are willing to accept downtime, you can simply take all the brokers down, update the code and start them back up. They will start
+        with the new protocol by default.</li>
+    <li>Bumping the protocol version and restarting can be done any time after the brokers are upgraded. It does not have to be immediately after.
+        Similarly for the message format version.</li>
+    <li>If you are using Java 8 method references in your Kafka Streams code, you might need to update your code to resolve method ambiguities.
+        Simply hot-swapping the jar file might not work.</li>
+</ol>
+
+<h5><a id="upgrade_110_notable" href="#upgrade_110_notable">Notable changes in 1.1.0</a></h5>
+<ul>
+    <li>The kafka artifact in Maven no longer depends on log4j or slf4j-log4j12. Similarly to the kafka-clients artifact, users
+        can now choose the logging back-end by including the appropriate slf4j module (slf4j-log4j12, logback, etc.). The release
+        tarball still includes log4j and slf4j-log4j12.</li>
+    <li><a href="https://cwiki.apache.org/confluence/x/uaBzB">KIP-225</a> changed the metric "records.lag" to use tags for topic and partition. The original version with the name format "{topic}-{partition}.records-lag" is deprecated and will be removed in 2.0.0.</li>
+    <li>Kafka Streams is more robust against broker communication errors. Instead of stopping the Kafka Streams client with a fatal exception,
+        Kafka Streams tries to self-heal and reconnect to the cluster. Using the new <code>AdminClient</code>, you have better control over how often
+        Kafka Streams retries and can <a href="/{{version}}/documentation/streams/developer-guide/config-streams">configure</a>
+        fine-grained timeouts (instead of the hard-coded retries of older versions); see the sketch after this list.</li>
+    <li>Kafka Streams rebalance time was reduced further, making Kafka Streams more responsive.</li>
+    <li>Kafka Connect now supports message headers in both sink and source connectors, as well as manipulating them via simple message transforms. Connectors must be changed to explicitly use them. A new <code>HeaderConverter</code> is introduced to control how headers are (de)serialized, and the new <code>SimpleHeaderConverter</code> is used by default, which represents header values as strings.</li>
+</ul>
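+
+<p>As a sketch of the finer-grained retry control mentioned above (the values are illustrative, and plain string keys are used because the exact <code>StreamsConfig</code> constant names are an assumption):</p>
+<pre>
+Properties props = new Properties();
+// how often the embedded clients retry transient broker errors, and how long to back off between attempts
+props.put("retries", 10);
+props.put("retry.backoff.ms", 1000L);
+</pre>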
+
+<h5><a id="upgrade_110_new_protocols" href="#upgrade_110_new_protocols">New Protocol Versions</a></h5>
+<ul></ul>
+
+<h5><a id="upgrade_110_streams" href="#upgrade_110_streams">Upgrading a 1.1.0 Kafka Streams Application</a></h5>
+<ul>
+    <li> Upgrading your Streams application from 1.0.0 to 1.1.0 does not require a broker upgrade.
+        A Kafka Streams 1.1.0 application can connect to 1.0, 0.11.0, 0.10.2 and 0.10.1 brokers (it is not possible to connect to 0.10.0 brokers though). </li>
+    <li> See <a href="/{{version}}/documentation/streams/upgrade-guide#streams_api_changes_110">Streams API changes in 1.1.0</a> for more details. </li>
+</ul>
+
 <h4><a id="upgrade_1_0_0" href="#upgrade_1_0_0">Upgrading from 0.8.x, 0.9.x, 0.10.0.x, 0.10.1.x, 0.10.2.x or 0.11.0.x to 1.0.0</a></h4>
 <p>Kafka 1.0.0 introduces wire protocol changes. By following the recommended rolling upgrade plan below,
     you guarantee no downtime during the upgrade. However, please review the <a href="#upgrade_100_notable">notable changes in 1.0.0</a> before upgrading.

-- 
To stop receiving notification emails like this one, please contact
guozhang@apache.org.
