kafka-commits mailing list archives

From guozh...@apache.org
Subject [2/7] kafka-site git commit: HOTFIX: broken /10 docs
Date Wed, 20 Dec 2017 23:51:46 GMT
http://git-wip-us.apache.org/repos/asf/kafka-site/blob/1382654c/10/developer-guide/running-app.html
----------------------------------------------------------------------
diff --git a/10/developer-guide/running-app.html b/10/developer-guide/running-app.html
deleted file mode 100644
index fcd7b72..0000000
--- a/10/developer-guide/running-app.html
+++ /dev/null
@@ -1,197 +0,0 @@
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<script><!--#include virtual="../../js/templateData.js" --></script>
-
-<script id="content-template" type="text/x-handlebars-template">
-  <!-- h1>Developer Guide for Kafka Streams API</h1 -->
-  <div class="sub-nav-sticky">
-    <div class="sticky-top">
-      <!-- div style="height:35px">
-        <a href="/{{version}}/documentation/streams/">Introduction</a>
-        <a class="active-menu-item" href="/{{version}}/documentation/streams/developer-guide">Developer Guide</a>
-        <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
-        <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
-        <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
-      </div -->
-    </div>
-  </div>
-                
-  <div class="section" id="running-streams-applications">
-<span id="streams-developer-guide-execution"></span><h1>Running Streams Applications<a class="headerlink" href="#running-streams-applications" title="Permalink to this headline"></a></h1>
-<p>You can run Java applications that use the Kafka Streams library without any additional configuration or requirements.</p>
-<div class="contents local topic" id="table-of-contents">
-<p class="topic-title first">Table of Contents</p>
-<ul class="simple">
-<li><a class="reference internal" href="#starting-a-kafka-streams-application" id="id3">Starting a Kafka Streams application</a></li>
-<li><a class="reference internal" href="#elastic-scaling-of-your-application" id="id4">Elastic scaling of your application</a><ul>
-<li><a class="reference internal" href="#adding-capacity-to-your-application" id="id5">Adding capacity to your application</a></li>
-<li><a class="reference internal" href="#removing-capacity-from-your-application" id="id6">Removing capacity from your application</a></li>
-<li><a class="reference internal" href="#state-restoration-during-workload-rebalance" id="id7">State restoration during workload rebalance</a></li>
-<li><a class="reference internal" href="#determining-how-many-application-instances-to-run" id="id8">Determining how many application instances to run</a></li>
-</ul>
-</li>
-</ul>
-</div>
-      <div class="section" id="running-streams-applications">
-          <span id="streams-developer-guide-execution"></span><h1>Running Streams Applications<a class="headerlink" href="#running-streams-applications" title="Permalink to this headline"></a></h1>
-          <p>You can run Java applications that use the Kafka Streams library without any additional configuration or requirements. Kafka Streams
-              also provides the ability to receive notification of the various states of the application. The ability to monitor the runtime
-              status is discussed in <a class="reference internal" href="../monitoring.html#streams-monitoring"><span class="std std-ref">the monitoring guide</span></a>.</p>
-          <div class="contents local topic" id="table-of-contents">
-              <p class="topic-title first">Table of Contents</p>
-              <ul class="simple">
-                  <li><a class="reference internal" href="#starting-a-kafka-streams-application" id="id3">Starting a Kafka Streams application</a></li>
-                  <li><a class="reference internal" href="#elastic-scaling-of-your-application" id="id4">Elastic scaling of your application</a><ul>
-                      <li><a class="reference internal" href="#adding-capacity-to-your-application" id="id5">Adding capacity to your application</a></li>
-                      <li><a class="reference internal" href="#removing-capacity-from-your-application" id="id6">Removing capacity from your application</a></li>
-                      <li><a class="reference internal" href="#state-restoration-during-workload-rebalance" id="id7">State restoration during workload rebalance</a></li>
-                      <li><a class="reference internal" href="#determining-how-many-application-instances-to-run" id="id8">Determining how many application instances to run</a></li>
-                  </ul>
-                  </li>
-              </ul>
-          </div>
-          <div class="section" id="starting-a-kafka-streams-application">
-              <span id="streams-developer-guide-execution-starting"></span><h2><a class="toc-backref" href="#id3">Starting a Kafka Streams application</a><a class="headerlink" href="#starting-a-kafka-streams-application" title="Permalink to this headline"></a></h2>
-              <p>You can package your Java application as a fat JAR file and then start the application like this:</p>
-              <div class="highlight-bash"><div class="highlight"><pre><span></span><span class="c1"># Start the application in class `com.example.MyStreamsApp`</span>
-<span class="c1"># from the fat JAR named `path-to-app-fatjar.jar`.</span>
-$ java -cp path-to-app-fatjar.jar com.example.MyStreamsApp
-</pre></div>
-              </div>
-              <p>For more information about how you can package your application in this way, see the
-                  <a class="reference internal" href="../code-examples.html#streams-code-examples"><span class="std std-ref">Streams code examples</span></a>.</p>
-              <p>When you start your application, you are launching a Kafka Streams instance of your application. You can run multiple
-                  instances of your application, and a common scenario is to have several such instances running in
-                  parallel. For more information, see <a class="reference internal" href="../architecture.html#streams-architecture-parallelism-model"><span class="std std-ref">Parallelism Model</span></a>.</p>
-              <p>When the application instance starts running, the defined processor topology will be initialized as one or more stream tasks.
-                  If the processor topology defines any state stores, these are also constructed during the initialization period. For
-                      more information, see the <a class="reference internal" href="#streams-developer-guide-execution-scaling-state-restoration"><span class="std std-ref">State restoration during workload rebalance</span></a> section.</p>
-          </div>
-          <div class="section" id="elastic-scaling-of-your-application">
-              <span id="streams-developer-guide-execution-scaling"></span><h2><a class="toc-backref" href="#id4">Elastic scaling of your application</a><a class="headerlink" href="#elastic-scaling-of-your-application" title="Permalink to this headline"></a></h2>
-              <p>Kafka Streams makes your stream processing applications elastic and scalable.  You can add and remove processing capacity
-                  dynamically during application runtime without any downtime or data loss.  This makes your applications
-                  resilient in the face of failures and allows you to perform maintenance as needed (e.g. rolling upgrades).</p>
-              <p>For more information about this elasticity, see the <a class="reference internal" href="../architecture.html#streams-architecture-parallelism-model"><span class="std std-ref">Parallelism Model</span></a> section. Kafka Streams
-                  leverages the Kafka group management functionality, which is built right into the <a class="reference external" href="https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol">Kafka wire protocol</a>. It is the foundation that enables the
-                  elasticity of Kafka Streams applications: members of a group coordinate and collaborate jointly on the consumption and
-                  processing of data in Kafka.  Additionally, Kafka Streams provides stateful processing and allows for fault-tolerant
-                  state in environments where application instances may come and go at any time.</p>
-              <div class="section" id="adding-capacity-to-your-application">
-                  <h3><a class="toc-backref" href="#id5">Adding capacity to your application</a><a class="headerlink" href="#adding-capacity-to-your-application" title="Permalink to this headline"></a></h3>
-                  <p>If you need more processing capacity for your stream processing application, you can simply start another instance of your stream processing application, e.g. on another machine, in order to scale out.  The instances of your application will become aware of each other and automatically begin to share the processing work.  More specifically, what will be handed over from the existing instances to the new instances is (some of) the stream tasks that have been run by the existing instances.  Moving stream tasks from one instance to another results in moving the processing work plus any internal state of these stream tasks (the state of a stream task will be re-created in the target instance by restoring the state from its corresponding changelog topic).</p>
-                  <p>The various instances of your application each run in their own JVM process, which means that each instance can leverage all the processing capacity that is available to their respective JVM process (minus the capacity that any non-Kafka-Streams part of your application may be using).  This explains why running additional instances will grant your application additional processing capacity.  The exact capacity you will be adding by running a new instance depends of course on the environment in which the new instance runs: available CPU cores, available main memory and Java heap space, local storage, network bandwidth, and so on.  Similarly, if you stop any of the running instances of your application, then you are removing and freeing up the respective processing capacity.</p>
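-                  <p>As a minimal sketch (reusing the hypothetical fat JAR and main class from the earlier example), adding capacity can be as simple as starting one more instance on another machine; all instances must be configured with the same <code class="docutils literal"><span class="pre">application.id</span></code> so that they join the same consumer group:</p>
-                  <pre class="brush: bash;">
-# Sketch only: the JAR path and class name are placeholders carried over from the example above.
-# Started on a second machine, this instance automatically shares the stream tasks with the first one.
-$ java -cp path-to-app-fatjar.jar com.example.MyStreamsApp
-</pre>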
-                  <div class="figure align-center" id="id1">
-                      <a class="reference internal image-reference" href="../../../images/streams-elastic-scaling-1.png"><img alt="../../../images/streams-elastic-scaling-1.png" src="../../../images/streams-elastic-scaling-1.png" style="width: 500pt; height: 400pt;" /></a>
-                      <p class="caption"><span class="caption-text">Before adding capacity: only a single instance of your Kafka Streams application is running.  At this point the corresponding Kafka consumer group of your application contains only a single member (this instance).  All data is being read and processed by this single instance.</span></p>
-                  </div>
-                  <div class="figure align-center" id="id2">
-                      <a class="reference internal image-reference" href="../../../images/streams-elastic-scaling-2.png"><img alt="../../../images/streams-elastic-scaling-2.png" src="../../../images/streams-elastic-scaling-2.png" style="width: 500pt; height: 400pt;" /></a>
-                      <p class="caption"><span class="caption-text">After adding capacity: now two additional instances of your Kafka Streams application are running, and they have automatically joined the application&#8217;s Kafka consumer group for a total of three current members. These three instances are automatically splitting the processing work between each other. The splitting is based on the Kafka topic partitions from which data is being read.</span></p>
-                  </div>
-              </div>
-              <div class="section" id="removing-capacity-from-your-application">
-                  <h3><a class="toc-backref" href="#id6">Removing capacity from your application</a><a class="headerlink" href="#removing-capacity-from-your-application" title="Permalink to this headline"></a></h3>
-                  <p>To remove processing capacity, you can stop running stream processing application instances (e.g., shut down two of
-                      four running instances). The stopped instances will automatically leave the application&#8217;s consumer group, and the
-                      remaining instances of your application will automatically take over the processing work: they take over the stream
-                      tasks that were run by the stopped instances.  Moving stream tasks from one instance to another results in moving the
-                      processing work plus any internal state of these stream tasks. The state of a stream task is recreated in the target
-                      instance from its changelog topic.</p>
-                  <div class="figure align-center">
-                      <a class="reference internal image-reference" href="../../../images/streams-elastic-scaling-3.png"><img alt="../../../images/streams-elastic-scaling-3.png" src="../../../images/streams-elastic-scaling-3.png" style="width: 500pt; height: 400pt;" /></a>
-                  </div>
-              </div>
-              <div class="section" id="state-restoration-during-workload-rebalance">
-                  <span id="streams-developer-guide-execution-scaling-state-restoration"></span><h3><a class="toc-backref" href="#id7">State restoration during workload rebalance</a><a class="headerlink" href="#state-restoration-during-workload-rebalance" title="Permalink to this headline"></a></h3>
-                  <p>When a task is migrated, the task processing state is fully restored before the application instance resumes
-                      processing. This guarantees the correct processing results. In Kafka Streams, state restoration is usually done by
-                      replaying the corresponding changelog topic to reconstruct the state store. To minimize this changelog-based
-                      restoration latency, you can keep standby replicas of local state stores by specifying <code class="docutils literal"><span class="pre">num.standby.replicas</span></code>. When a stream task is
-                      initialized or re-initialized on the application instance, its state store is restored like this:</p>
-                  <ul class="simple">
-                      <li>If no local state store exists, the changelog is replayed from the earliest to the current offset. This reconstructs the local state store to the most recent snapshot.</li>
-                      <li>If a local state store exists, the changelog is replayed from the previously checkpointed offset. The changes are applied and the state is restored to the most recent snapshot. This method takes less time because it is applying a smaller portion of the changelog.</li>
-                  </ul>
-                  <p>For more information, see <a class="reference internal" href="config-streams.html#streams-developer-guide-standby-replicas"><span class="std std-ref">Standby Replicas</span></a>.</p>
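-                  <p>As an illustrative sketch (the application id and bootstrap server below are placeholders, not part of the original example), a standby replica per state store could be enabled like this:</p>
-                  <pre class="brush: java;">
-// Sketch: keep one warm standby replica of each local state store on another instance,
-// so that changelog-based restoration after a rebalance has less to replay.
-Properties settings = new Properties();
-settings.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");            // placeholder id
-settings.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka.example.com:9092"); // placeholder broker
-settings.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);
-StreamsConfig config = new StreamsConfig(settings);
-</pre>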
-              </div>
-              <div class="section" id="determining-how-many-application-instances-to-run">
-                  <h3><a class="toc-backref" href="#id8">Determining how many application instances to run</a><a class="headerlink" href="#determining-how-many-application-instances-to-run" title="Permalink to this headline"></a></h3>
-                  <p>The parallelism of a Kafka Streams application is primarily determined by how many partitions the input topics have. For
-                      example, if your application reads from a single topic that has ten partitions, then you can run up to ten instances
-                      of your application. You can run further instances, but these will be idle.</p>
-                  <p>The number of topic partitions is the upper limit for the parallelism of your Kafka Streams application and for the
-                      number of running instances of your application.</p>
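-                  <p>As a quick check (sketch only; the topic name below is a placeholder), you can describe an input topic to see its partition count, and therefore the maximum number of non-idle instances:</p>
-                  <pre class="brush: bash;">
-# Sketch: "my-input-topic" is a placeholder; the PartitionCount field is the upper limit
-# on the number of application instances that will actually receive work.
-&gt; bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-input-topic
-</pre>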
-                  <p>To achieve balanced workload processing across application instances and to prevent processing hotspots, you should
-                      distribute data and processing workloads:</p>
-                  <ul class="simple">
-                      <li>Data should be equally distributed across topic partitions. For example, two topic partitions with 1 million messages each are better than one partition with 2 million messages and another with none.</li>
-                      <li>Processing workload should be equally distributed across topic partitions. For example, if the time to process messages varies widely, then it is better to spread the processing-intensive messages across partitions rather than storing these messages within the same partition.</li>
-                  </ul>
-</div>
-</div>
-</div>
-
-
-               </div>
-              </div>
-              <div class="pagination">
-                <a href="/{{version}}/documentation/streams/developer-guide/memory-mgmt" class="pagination__btn pagination__btn__prev">Previous</a>
-                <a href="/{{version}}/documentation/streams/developer-guide/manage-topics" class="pagination__btn pagination__btn__next">Next</a>
-              </div>
-                </script>
-
-                <!--#include virtual="../../../includes/_header.htm" -->
-                <!--#include virtual="../../../includes/_top.htm" -->
-                    <div class="content documentation documentation--current">
-                    <!--#include virtual="../../../includes/_nav.htm" -->
-                    <div class="right">
-                    <!--#include virtual="../../../includes/_docs_banner.htm" -->
-                    <ul class="breadcrumbs">
-                    <li><a href="/documentation">Documentation</a></li>
-                    <li><a href="/documentation/streams">Kafka Streams</a></li>
-                    <li><a href="/documentation/streams/developer-guide/">Developer Guide</a></li>
-                </ul>
-                <div class="p-content"></div>
-                    </div>
-                    </div>
-                    <!--#include virtual="../../../includes/_footer.htm" -->
-                    <script>
-                    $(function() {
-                        // Show selected style on nav item
-                        $('.b-nav__streams').addClass('selected');
-
-                        //sticky secondary nav
-                        var $navbar = $(".sub-nav-sticky"),
-                            y_pos = $navbar.offset().top,
-                            height = $navbar.height();
-
-                        $(window).scroll(function() {
-                            var scrollTop = $(window).scrollTop();
-
-                            if (scrollTop > y_pos - height) {
-                                $navbar.addClass("navbar-fixed")
-                            } else if (scrollTop <= y_pos) {
-                                $navbar.removeClass("navbar-fixed")
-                            }
-                        });
-
-                        // Display docs subnav items
-                        $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
-                    });
-              </script>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/1382654c/10/developer-guide/security.html
----------------------------------------------------------------------
diff --git a/10/developer-guide/security.html b/10/developer-guide/security.html
deleted file mode 100644
index ffbb8df..0000000
--- a/10/developer-guide/security.html
+++ /dev/null
@@ -1,176 +0,0 @@
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<script><!--#include virtual="../../js/templateData.js" --></script>
-
-<script id="content-template" type="text/x-handlebars-template">
-  <!-- h1>Developer Guide for Kafka Streams API</h1 -->
-  <div class="sub-nav-sticky">
-    <div class="sticky-top">
-      <!-- div style="height:35px">
-        <a href="/{{version}}/documentation/streams/">Introduction</a>
-        <a class="active-menu-item" href="/{{version}}/documentation/streams/developer-guide">Developer Guide</a>
-        <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
-        <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
-        <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
-      </div -->
-    </div>
-  </div>
-
-    <div class="section" id="streams-security">
-        <span id="streams-developer-guide-security"></span><h1>Streams Security<a class="headerlink" href="#streams-security" title="Permalink to this headline"></a></h1>
-        <div class="contents local topic" id="table-of-contents">
-            <p class="topic-title first">Table of Contents</p>
-            <ul class="simple">
-                <li><a class="reference internal" href="#required-acl-setting-for-secure-kafka-clusters" id="id1">Required ACL setting for secure Kafka clusters</a></li>
-                <li><a class="reference internal" href="#security-example" id="id2">Security example</a></li>
-            </ul>
-        </div>
-        <p>Kafka Streams natively integrates with <a class="reference internal" href="../../kafka/security.html#kafka-security"><span class="std std-ref">Kafka&#8217;s security features</span></a> and supports all of the
-            client-side security features in Kafka.  Streams leverages the <a class="reference internal" href="../../clients/index.html#kafka-clients"><span class="std std-ref">Java Producer and Consumer API</span></a>.</p>
-        <p>To secure your Stream processing applications, configure the security settings in the corresponding Kafka producer
-            and consumer clients, and then specify the corresponding configuration settings in your Kafka Streams application.</p>
-        <p>Kafka supports cluster encryption and authentication, including a mix of authenticated and unauthenticated,
-            and encrypted and non-encrypted clients. Using security is optional.</p>
-        <p>Here are a few of the relevant client-side security features:</p>
-        <dl class="docutils">
-            <dt>Encrypt data-in-transit between your applications and Kafka brokers</dt>
-            <dd>You can enable the encryption of the client-server communication between your applications and the Kafka brokers.
-                For example, you can configure your applications to always use encryption when reading and writing data to and from
-                Kafka. This is critical when reading and writing data across security domains such as internal network, public
-                internet, and partner networks.</dd>
-            <dt>Client authentication</dt>
-            <dd>You can enable client authentication for connections from your application to Kafka brokers. For example, you can
-                define that only specific applications are allowed to connect to your Kafka cluster.</dd>
-            <dt>Client authorization</dt>
-            <dd>You can enable client authorization of read and write operations by your applications. For example, you can define
-                that only specific applications are allowed to read from a Kafka topic.  You can also restrict write access to Kafka
-                topics to prevent data pollution or fraudulent activities.</dd>
-        </dl>
-        <p>For more information about the security features in Apache Kafka, see <a class="reference internal" href="../../kafka/security.html#kafka-security"><span class="std std-ref">Kafka Security</span></a>.</p>
-        <div class="section" id="required-acl-setting-for-secure-kafka-clusters">
-            <span id="streams-developer-guide-security-acls"></span><h2><a class="toc-backref" href="#id1">Required ACL setting for secure Kafka clusters</a><a class="headerlink" href="#required-acl-setting-for-secure-kafka-clusters" title="Permalink to this headline"></a></h2>
-            <p>When applications are run against a secured Kafka cluster, the principal running the application must have the ACL
-                <code class="docutils literal"><span class="pre">--cluster</span> <span class="pre">--operation</span> <span class="pre">Create</span></code> set so that the application has the permissions to create
-                <a class="reference internal" href="manage-topics.html#streams-developer-guide-topics-internal"><span class="std std-ref">internal topics</span></a>.</p>
-        </div>
-        <div class="section" id="security-example">
-            <span id="streams-developer-guide-security-example"></span><h2><a class="toc-backref" href="#id2">Security example</a><a class="headerlink" href="#security-example" title="Permalink to this headline"></a></h2>
-            <p>This example shows how to configure a Kafka Streams application to enable client authentication and to encrypt data-in-transit
-                when communicating with its Kafka cluster.</p>
-            <p>This example assumes that the Kafka brokers in the cluster already have security configured and that the necessary SSL
-                certificates are available to the application at the expected local filesystem locations. For example, if you are using Docker
-                then you must also include these SSL certificates in the correct locations within the Docker image.</p>
-            <p>The snippet below shows the settings to enable client authentication and SSL encryption for data-in-transit between your
-                Kafka Streams application and the Kafka cluster it is reading and writing from:</p>
-            <div class="highlight-bash"><div class="highlight"><pre><span></span><span class="c1"># Essential security settings to enable client authentication and SSL encryption</span>
-bootstrap.servers<span class="o">=</span>kafka.example.com:9093
-security.protocol<span class="o">=</span>SSL
-ssl.truststore.location<span class="o">=</span>/etc/security/tls/kafka.client.truststore.jks
-ssl.truststore.password<span class="o">=</span>test1234
-ssl.keystore.location<span class="o">=</span>/etc/security/tls/kafka.client.keystore.jks
-ssl.keystore.password<span class="o">=</span>test1234
-ssl.key.password<span class="o">=</span>test1234
-</pre></div>
-            </div>
-            <p>Configure these settings in your application&#8217;s <code class="docutils literal"><span class="pre">StreamsConfig</span></code> instance. These settings will encrypt any
-                data-in-transit that is being read from or written to Kafka, and your application will authenticate itself against the
-                Kafka brokers that it is communicating with. Note that this example does not cover client authorization.</p>
-            <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Code of your Java application that uses the Kafka Streams library</span>
-<span class="n">Properties</span> <span class="n">settings</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Properties</span><span class="o">();</span>
-<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">APPLICATION_ID_CONFIG</span><span class="o">,</span> <span class="s">&quot;secure-kafka-streams-app&quot;</span><span class="o">);</span>
-<span class="c1">// Where to find secure Kafka brokers.  Here, it&#39;s on port 9093.</span>
-<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">StreamsConfig</span><span class="o">.</span><span class="na">BOOTSTRAP_SERVERS_CONFIG</span><span class="o">,</span> <span class="s">&quot;kafka.example.com:9093&quot;</span><span class="o">);</span>
-<span class="c1">//</span>
-<span class="c1">// ...further non-security related settings may follow here...</span>
-<span class="c1">//</span>
-<span class="c1">// Security settings.</span>
-<span class="c1">// 1. These settings must match the security settings of the secure Kafka cluster.</span>
-<span class="c1">// 2. The SSL trust store and key store files must be locally accessible to the application.</span>
-<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">CommonClientConfigs</span><span class="o">.</span><span class="na">SECURITY_PROTOCOL_CONFIG</span><span class="o">,</span> <span class="s">&quot;SSL&quot;</span><span class="o">);</span>
-<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">SslConfigs</span><span class="o">.</span><span class="na">SSL_TRUSTSTORE_LOCATION_CONFIG</span><span class="o">,</span> <span class="s">&quot;/etc/security/tls/kafka.client.truststore.jks&quot;</span><span class="o">);</span>
-<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">SslConfigs</span><span class="o">.</span><span class="na">SSL_TRUSTSTORE_PASSWORD_CONFIG</span><span class="o">,</span> <span class="s">&quot;test1234&quot;</span><span class="o">);</span>
-<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">SslConfigs</span><span class="o">.</span><span class="na">SSL_KEYSTORE_LOCATION_CONFIG</span><span class="o">,</span> <span class="s">&quot;/etc/security/tls/kafka.client.keystore.jks&quot;</span><span class="o">);</span>
-<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">SslConfigs</span><span class="o">.</span><span class="na">SSL_KEYSTORE_PASSWORD_CONFIG</span><span class="o">,</span> <span class="s">&quot;test1234&quot;</span><span class="o">);</span>
-<span class="n">settings</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="n">SslConfigs</span><span class="o">.</span><span class="na">SSL_KEY_PASSWORD_CONFIG</span><span class="o">,</span> <span class="s">&quot;test1234&quot;</span><span class="o">);</span>
-<span class="n">StreamsConfig</span> <span class="n">streamsConfiguration</span> <span class="o">=</span> <span class="k">new</span> <span class="n">StreamsConfig</span><span class="o">(</span><span class="n">settings</span><span class="o">);</span>
-</pre></div>
-            </div>
-            <p>If you incorrectly configure a security setting in your application, it will fail at runtime, typically right after you
-                start it.  For example, if you enter an incorrect password for the <code class="docutils literal"><span class="pre">ssl.keystore.password</span></code> setting, an error message
-                similar to this would be logged and then the application would terminate:</p>
-            <div class="highlight-bash"><div class="highlight"><pre><span></span><span class="c1"># Misconfigured ssl.keystore.password</span>
-Exception in thread <span class="s2">&quot;main&quot;</span> org.apache.kafka.common.KafkaException: Failed to construct kafka producer
-<span class="o">[</span>...snip...<span class="o">]</span>
-Caused by: org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException:
-   java.io.IOException: Keystore was tampered with, or password was incorrect
-<span class="o">[</span>...snip...<span class="o">]</span>
-Caused by: java.security.UnrecoverableKeyException: Password verification failed
-</pre></div>
-            </div>
-            <p>Monitor your Kafka Streams application log files for such error messages to spot any misconfigured applications quickly.</p>
-</div>
-</div>
-
-
-               </div>
-              </div>
-              <div class="pagination">
-                <a href="/{{version}}/documentation/streams/developer-guide/manage-topics" class="pagination__btn pagination__btn__prev">Previous</a>
-                <a href="/{{version}}/documentation/streams/developer-guide/app-reset-tool" class="pagination__btn pagination__btn__next">Next</a>
-              </div>
-                </script>
-
-                <!--#include virtual="../../../includes/_header.htm" -->
-                <!--#include virtual="../../../includes/_top.htm" -->
-                    <div class="content documentation documentation--current">
-                    <!--#include virtual="../../../includes/_nav.htm" -->
-                    <div class="right">
-                    <!--#include virtual="../../../includes/_docs_banner.htm" -->
-                    <ul class="breadcrumbs">
-                    <li><a href="/documentation">Documentation</a></li>
-                    <li><a href="/documentation/streams">Kafka Streams</a></li>
-                    <li><a href="/documentation/streams/developer-guide/">Developer Guide</a></li>
-                </ul>
-                <div class="p-content"></div>
-                    </div>
-                    </div>
-                    <!--#include virtual="../../../includes/_footer.htm" -->
-                    <script>
-                    $(function() {
-                        // Show selected style on nav item
-                        $('.b-nav__streams').addClass('selected');
-
-                        //sticky secondary nav
-                        var $navbar = $(".sub-nav-sticky"),
-                            y_pos = $navbar.offset().top,
-                            height = $navbar.height();
-
-                        $(window).scroll(function() {
-                            var scrollTop = $(window).scrollTop();
-
-                            if (scrollTop > y_pos - height) {
-                                $navbar.addClass("navbar-fixed")
-                            } else if (scrollTop <= y_pos) {
-                                $navbar.removeClass("navbar-fixed")
-                            }
-                        });
-
-                        // Display docs subnav items
-                        $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
-                    });
-              </script>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/1382654c/10/developer-guide/write-streams.html
----------------------------------------------------------------------
diff --git a/10/developer-guide/write-streams.html b/10/developer-guide/write-streams.html
deleted file mode 100644
index f884a1c..0000000
--- a/10/developer-guide/write-streams.html
+++ /dev/null
@@ -1,198 +0,0 @@
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<script><!--#include virtual="../../js/templateData.js" --></script>
-
-<script id="content-template" type="text/x-handlebars-template">
-  <!-- h1>Developer Guide for Kafka Streams API</h1 -->
-  <div class="sub-nav-sticky">
-    <!-- div class="sticky-top">
-      <div style="height:35px">
-        <a href="/{{version}}/documentation/streams/">Introduction</a>
-        <a class="active-menu-item" href="/{{version}}/documentation/streams/developer-guide">Developer Guide</a>
-        <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
-        <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
-        <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
-      </div>
-    </div -->
-  </div>
-
-  <div class="section" id="writing-a-streams-application">
-    <span id="streams-write-app"></span><h1>Writing a Streams Application<a class="headerlink" href="#writing-a-streams-application" title="Permalink to this headline"></a></h1>
-      <p class="topic-title first">Table of Contents</p>
-      <ul class="simple">
-          <li><a class="reference internal" href="#libraries-and-maven-artifacts" id="id1">Libraries and Maven artifacts</a></li>
-          <li><a class="reference internal" href="#using-kafka-streams-within-your-application-code" id="id2">Using Kafka Streams within your application code</a></li>
-      </ul>
-    <p>Any Java application that makes use of the Kafka Streams library is considered a Kafka Streams application.
-      The computational logic of a Kafka Streams application is defined as a <a class="reference internal" href="../concepts.html#streams-concepts"><span class="std std-ref">processor topology</span></a>,
-      which is a graph of stream processors (nodes) and streams (edges).</p>
-    <p>You can define the processor topology with the Kafka Streams APIs:</p>
-    <dl class="docutils">
-      <dt><a class="reference internal" href="dsl-api.html#streams-developer-guide-dsl"><span class="std std-ref">Kafka Streams DSL</span></a></dt>
-      <dd>A high-level API that provides the most common data transformation operations such as <code class="docutils literal"><span class="pre">map</span></code>, <code class="docutils literal"><span class="pre">filter</span></code>, <code class="docutils literal"><span class="pre">join</span></code>, and <code class="docutils literal"><span class="pre">aggregations</span></code> out of the box. The DSL is the recommended starting point for developers new to Kafka Streams, and should cover many use cases and stream processing needs.</dd>
-      <dt><a class="reference internal" href="processor-api.html#streams-developer-guide-processor-api"><span class="std std-ref">Processor API</span></a></dt>
-      <dd>A low-level API that lets you add and connect processors as well as interact directly with state stores. The Processor API provides you with even more flexibility than the DSL but at the expense of requiring more manual work on the side of the application developer (e.g., more lines of code).</dd>
-    </dl>
-    <div class="section" id="using-kafka-streams-within-your-application-code">
-      <h2>Using Kafka Streams within your application code<a class="headerlink" href="#using-kafka-streams-within-your-application-code" title="Permalink to this headline"></a></h2>
-      <p>You can call Kafka Streams from anywhere in your application code, but usually these calls are made within the <code class="docutils literal"><span class="pre">main()</span></code> method of
-        your application, or some variant thereof.  The basic elements of defining a processing topology within your application
-        are described below.</p>
-      <p>First, you must create an instance of <code class="docutils literal"><span class="pre">KafkaStreams</span></code>.</p>
-      <ul class="simple">
-        <li>The first argument of the <code class="docutils literal"><span class="pre">KafkaStreams</span></code> constructor takes a topology (either <code class="docutils literal"><span class="pre">StreamsBuilder#build()</span></code> for the
-          <a class="reference internal" href="dsl-api.html#streams-developer-guide-dsl"><span class="std std-ref">DSL</span></a> or <code class="docutils literal"><span class="pre">Topology</span></code> for the
-          <a class="reference internal" href="processor-api.html#streams-developer-guide-processor-api"><span class="std std-ref">Processor API</span></a>) that is used to define a topology.</li>
-        <li>The second argument is an instance of <code class="docutils literal"><span class="pre">StreamsConfig</span></code>, which defines the configuration for this specific topology.</li>
-      </ul>
-      <p>Code example:</p>
-      <div class="highlight-java"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">org.apache.kafka.streams.KafkaStreams</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.StreamsConfig</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.kstream.StreamsBuilder</span><span class="o">;</span>
-<span class="kn">import</span> <span class="nn">org.apache.kafka.streams.processor.Topology</span><span class="o">;</span>
-
-<span class="c1">// Use the builders to define the actual processing topology, e.g. to specify</span>
-<span class="c1">// from which input topics to read, which stream operations (filter, map, etc.)</span>
-<span class="c1">// should be called, and so on.  We will cover this in detail in the subsequent</span>
-<span class="c1">// sections of this Developer Guide.</span>
-
-<span class="n">StreamsBuilder</span> <span class="n">builder</span> <span class="o">=</span> <span class="o">...;</span>  <span class="c1">// when using the DSL</span>
-<span class="n">Topology</span> <span class="n">topology</span> <span class="o">=</span> <span class="n">builder</span><span class="o">.</span><span class="na">build</span><span class="o">();</span>
-<span class="c1">//</span>
-<span class="c1">// OR</span>
-<span class="c1">//</span>
-<span class="n">Topology</span> <span class="n">topology</span> <span class="o">=</span> <span class="o">...;</span> <span class="c1">// when using the Processor API</span>
-
-<span class="c1">// Use the configuration to tell your application where the Kafka cluster is,</span>
-<span class="c1">// which Serializers/Deserializers to use by default, to specify security settings,</span>
-<span class="c1">// and so on.</span>
-<span class="n">StreamsConfig</span> <span class="n">config</span> <span class="o">=</span> <span class="o">...;</span>
-
-<span class="n">KafkaStreams</span> <span class="n">streams</span> <span class="o">=</span> <span class="k">new</span> <span class="n">KafkaStreams</span><span class="o">(</span><span class="n">topology</span><span class="o">,</span> <span class="n">config</span><span class="o">);</span>
-</pre></div>
-      </div>
-      <p>At this point, internal structures are initialized, but the processing is not started yet.
-        You have to explicitly start the Kafka Streams thread by calling the <code class="docutils literal"><span class="pre">KafkaStreams#start()</span></code> method:</p>
-      <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Start the Kafka Streams threads</span>
-<span class="n">streams</span><span class="o">.</span><span class="na">start</span><span class="o">();</span>
-</pre></div>
-      </div>
-      <p>If there are other instances of this stream processing application running elsewhere (e.g., on another machine), Kafka
-        Streams transparently re-assigns tasks from the existing instances to the new instance that you just started.
-        For more information, see <a class="reference internal" href="../architecture.html#streams-architecture-tasks"><span class="std std-ref">Stream Partitions and Tasks</span></a> and <a class="reference internal" href="../architecture.html#streams-architecture-threads"><span class="std std-ref">Threading Model</span></a>.</p>
-      <p>To catch any unexpected exceptions, you can set an <code class="docutils literal"><span class="pre">java.lang.Thread.UncaughtExceptionHandler</span></code> before you start the
-        application.  This handler is called whenever a stream thread is terminated by an unexpected exception:</p>
-      <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Java 8+, using lambda expressions</span>
-<span class="n">streams</span><span class="o">.</span><span class="na">setUncaughtExceptionHandler</span><span class="o">((</span><span class="n">Thread</span> <span class="n">thread</span><span class="o">,</span> <span class="n">Throwable</span> <span class="n">throwable</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="o">{</span>
-  <span class="c1">// here you should examine the throwable/exception and perform an appropriate action!</span>
-<span class="o">});</span>
-
-
-<span class="c1">// Java 7</span>
-<span class="n">streams</span><span class="o">.</span><span class="na">setUncaughtExceptionHandler</span><span class="o">(</span><span class="k">new</span> <span class="n">Thread</span><span class="o">.</span><span class="na">UncaughtExceptionHandler</span><span class="o">()</span> <span class="o">{</span>
-  <span class="kd">public</span> <span class="kt">void</span> <span class="nf">uncaughtException</span><span class="o">(</span><span class="n">Thread</span> <span class="n">thread</span><span class="o">,</span> <span class="n">Throwable</span> <span class="n">throwable</span><span class="o">)</span> <span class="o">{</span>
-    <span class="c1">// here you should examine the throwable/exception and perform an appropriate action!</span>
-  <span class="o">}</span>
-<span class="o">});</span>
-</pre></div>
-      </div>
-      <p>To stop the application instance, call the <code class="docutils literal"><span class="pre">KafkaStreams#close()</span></code> method:</p>
-      <div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Stop the Kafka Streams threads</span>
-<span class="n">streams</span><span class="o">.</span><span class="na">close</span><span class="o">();</span>
-</pre></div>
-      </div>
-      <p>To allow your application to gracefully shut down in response to SIGTERM, it is recommended that you add a shutdown hook
-        and call <code class="docutils literal"><span class="pre">KafkaStreams#close</span></code>.</p>
-      <ul>
-        <li><p class="first">Here is a shutdown hook example in Java 8+:</p>
-          <blockquote>
-            <div><div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Add shutdown hook to stop the Kafka Streams threads.</span>
-<span class="c1">// You can optionally provide a timeout to `close`.</span>
-<span class="n">Runtime</span><span class="o">.</span><span class="na">getRuntime</span><span class="o">().</span><span class="na">addShutdownHook</span><span class="o">(</span><span class="k">new</span> <span class="n">Thread</span><span class="o">(</span><span class="n">streams</span><span class="o">::</span><span class="n">close</span><span class="o">));</span>
-</pre></div>
-            </div>
-            </div></blockquote>
-        </li>
-        <li><p class="first">Here is a shutdown hook example in Java 7:</p>
-          <blockquote>
-            <div><div class="highlight-java"><div class="highlight"><pre><span></span><span class="c1">// Add shutdown hook to stop the Kafka Streams threads.</span>
-<span class="c1">// You can optionally provide a timeout to `close`.</span>
-<span class="n">Runtime</span><span class="o">.</span><span class="na">getRuntime</span><span class="o">().</span><span class="na">addShutdownHook</span><span class="o">(</span><span class="k">new</span> <span class="n">Thread</span><span class="o">(</span><span class="k">new</span> <span class="n">Runnable</span><span class="o">()</span> <span class="o">{</span>
-  <span class="nd">@Override</span>
-  <span class="kd">public</span> <span class="kt">void</span> <span class="nf">run</span><span class="o">()</span> <span class="o">{</span>
-      <span class="n">streams</span><span class="o">.</span><span class="na">close</span><span class="o">();</span>
-  <span class="o">}</span>
-<span class="o">}));</span>
-</pre></div>
-            </div>
-            </div></blockquote>
-        </li>
-      </ul>
-      <p>After an application instance is stopped, Kafka Streams will migrate any tasks that had been running in that instance to the
-        remaining available instances.</p>
-</div>
-</div>
-
-
-               </div>
-              </div>
-  <div class="pagination">
-    <a href="/{{version}}/documentation/streams/developer-guide/" class="pagination__btn pagination__btn__prev">Previous</a>
-    <a href="/{{version}}/documentation/streams/developer-guide/config-streams" class="pagination__btn pagination__btn__next">Next</a>
-  </div>
-</script>
-
-<!--#include virtual="../../../includes/_header.htm" -->
-<!--#include virtual="../../../includes/_top.htm" -->
-<div class="content documentation documentation--current">
-  <!--#include virtual="../../../includes/_nav.htm" -->
-  <div class="right">
-    <!--#include virtual="../../../includes/_docs_banner.htm" -->
-    <ul class="breadcrumbs">
-      <li><a href="/documentation">Documentation</a></li>
-      <li><a href="/documentation/streams">Kafka Streams</a></li>
-      <li><a href="/documentation/streams/developer-guide/">Developer Guide</a></li>
-    </ul>
-    <div class="p-content"></div>
-  </div>
-</div>
-<!--#include virtual="../../../includes/_footer.htm" -->
-<script>
-    $(function() {
-        // Show selected style on nav item
-        $('.b-nav__streams').addClass('selected');
-
-        //sticky secondary nav
-        var $navbar = $(".sub-nav-sticky"),
-            y_pos = $navbar.offset().top,
-            height = $navbar.height();
-
-        $(window).scroll(function() {
-            var scrollTop = $(window).scrollTop();
-
-            if (scrollTop > y_pos - height) {
-                $navbar.addClass("navbar-fixed")
-            } else if (scrollTop <= y_pos) {
-                $navbar.removeClass("navbar-fixed")
-            }
-        });
-
-        // Display docs subnav items
-        $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
-    });
-</script>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/1382654c/10/quickstart.html
----------------------------------------------------------------------
diff --git a/10/quickstart.html b/10/quickstart.html
index 92ab885..063fec0 100644
--- a/10/quickstart.html
+++ b/10/quickstart.html
@@ -14,88 +14,25 @@
  See the License for the specific language governing permissions and
  limitations under the License.
 -->
-<script><!--#include virtual="../js/templateData.js" --></script>
-
-<script id="content-template" type="text/x-handlebars-template">
-
-  <h1>Run Streams Demo Application</h1>
-  <div class="sub-nav-sticky">
-      <div class="sticky-top">
-        <div style="height:35px">
-          <a href="/{{version}}/documentation/streams/">Introduction</a>
-          <a href="/{{version}}/documentation/streams/developer-guide/">Developer Guide</a>
-          <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
-          <a class="active-menu-item" href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
-          <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
-        </div>
-      </div>
-  </div> 
-<p>
-  This tutorial assumes you are starting fresh and have no existing Kafka or ZooKeeper data. However, if you have already started Kafka and
-  ZooKeeper, feel free to skip the first two steps.
-</p>
-
-  <p>
- Kafka Streams is a client library for building mission-critical real-time applications and microservices,
-  where the input and/or output data is stored in Kafka clusters. Kafka Streams combines the simplicity of
-  writing and deploying standard Java and Scala applications on the client side with the benefits of Kafka's
-  server-side cluster technology to make these applications highly scalable, elastic, fault-tolerant, distributed,
- and much more.
-  </p>
-  <p>
-This quickstart example will demonstrate how to run a streaming application coded in this library. Here is the gist
-of the <code><a href="https://github.com/apache/kafka/blob/{{dotVersion}}/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountDemo.java">WordCountDemo</a></code> example code (converted to use Java 8 lambda expressions for easy reading).
-</p>
-<pre class="brush: java;">
-// Serializers/deserializers (serde) for String and Long types
-final Serde&lt;String&gt; stringSerde = Serdes.String();
-final Serde&lt;Long&gt; longSerde = Serdes.Long();
-
-// Construct a `KStream` from the input topic "streams-plaintext-input", where message values
-// represent lines of text (for the sake of this example, we ignore whatever may be stored
-// in the message keys).
-KStream&lt;String, String&gt; textLines = builder.stream("streams-plaintext-input",
-    Consumed.with(stringSerde, stringSerde));
-
-KTable&lt;String, Long&gt; wordCounts = textLines
-    // Split each text line, by whitespace, into words.
-    .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
-
-    // Group the text words as message keys
-    .groupBy((key, value) -> value)
-
-    // Count the occurrences of each word (message key).
-    .count();
 
-// Store the running counts as a changelog stream to the output topic.
-wordCounts.toStream().to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long()));
-</pre>
+<script><!--#include virtual="js/templateData.js" --></script>
 
+<script id="quickstart-template" type="text/x-handlebars-template">
 <p>
-It implements the WordCount
-algorithm, which computes a word occurrence histogram from the input text. However, unlike other WordCount examples
-you might have seen before that operate on bounded data, the WordCount demo application behaves slightly differently because it is
-designed to operate on an <b>infinite, unbounded stream</b> of data. Similar to the bounded variant, it is a stateful algorithm that
-tracks and updates the counts of words. However, since it must assume potentially
-unbounded input data, it will periodically output its current state and results while continuing to process more data
-because it cannot know when it has processed "all" the input data.
-</p>
-<p>
-  As the first step, we will start Kafka (unless you already have it started) and then we will
-  prepare input data to a Kafka topic, which will subsequently be processed by a Kafka Streams application.
+This tutorial assumes you are starting fresh and have no existing Kafka or ZooKeeper data.
+Since Kafka console scripts are different for Unix-based and Windows platforms, on Windows platforms use <code>bin\windows\</code> instead of <code>bin/</code>, and change the script extension to <code>.bat</code>.
 </p>
 
-<h4><a id="quickstart_streams_download" href="#quickstart_streams_download">Step 1: Download the code</a></h4>
+<h4><a id="quickstart_download" href="#quickstart_download">Step 1: Download the code</a></h4>
 
-<a href="https://www.apache.org/dyn/closer.cgi?path=/kafka/{{fullDotVersion}}/kafka_{{scalaVersion}}-{{fullDotVersion}}.tgz" title="Kafka downloads">Download</a> the {{fullDotVersion}} release and un-tar it.
-Note that there are multiple downloadable Scala versions and we choose to use the recommended version ({{scalaVersion}}) here:
+<a href="https://www.apache.org/dyn/closer.cgi?path=/kafka/{{fullDotVersion}}/kafka_2.11-{{fullDotVersion}}.tgz" title="Kafka downloads">Download</a> the {{fullDotVersion}} release and un-tar it.
 
 <pre class="brush: bash;">
-&gt; tar -xzf kafka_{{scalaVersion}}-{{fullDotVersion}}.tgz
-&gt; cd kafka_{{scalaVersion}}-{{fullDotVersion}}
+&gt; tar -xzf kafka_2.11-{{fullDotVersion}}.tgz
+&gt; cd kafka_2.11-{{fullDotVersion}}
 </pre>
 
-<h4><a id="quickstart_streams_startserver" href="#quickstart_streams_startserver">Step 2: Start the Kafka server</a></h4>
+<h4><a id="quickstart_startserver" href="#quickstart_startserver">Step 2: Start the server</a></h4>
 
 <p>
 Kafka uses <a href="https://zookeeper.apache.org/">ZooKeeper</a> so you need to first start a ZooKeeper server if you don't already have one. You can use the convenience script packaged with kafka to get a quick-and-dirty single-node ZooKeeper instance.
@@ -115,273 +52,249 @@ Kafka uses <a href="https://zookeeper.apache.org/">ZooKeeper</a> so you need to
 ...
 </pre>
 
+<h4><a id="quickstart_createtopic" href="#quickstart_createtopic">Step 3: Create a topic</a></h4>
 
-<h4><a id="quickstart_streams_prepare" href="#quickstart_streams_prepare">Step 3: Prepare input topic and start Kafka producer</a></h4>
-
-<!--
-
+<p>Let's create a topic named "test" with a single partition and only one replica:</p>
 <pre class="brush: bash;">
-&gt; echo -e "all streams lead to kafka\nhello kafka streams\njoin kafka summit" > file-input.txt
+&gt; bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
 </pre>
-Or on Windows:
+
+<p>We can now see that topic if we run the list topic command:</p>
 <pre class="brush: bash;">
-&gt; echo all streams lead to kafka> file-input.txt
-&gt; echo hello kafka streams>> file-input.txt
-&gt; echo|set /p=join kafka summit>> file-input.txt
+&gt; bin/kafka-topics.sh --list --zookeeper localhost:2181
+test
 </pre>
+<p>Alternatively, instead of manually creating topics you can also configure your brokers to auto-create topics when a non-existent topic is published to.</p>
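+<p>The broker setting that controls this behavior in <code>config/server.properties</code> is shown below (illustrative; automatic topic creation is already enabled by default):</p>
+<pre class="brush: text;">
+auto.create.topics.enable=true
+</pre>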
 
--->
+<h4><a id="quickstart_send" href="#quickstart_send">Step 4: Send some messages</a></h4>
 
-Next, we create the input topic named <b>streams-plaintext-input</b> and the output topic named <b>streams-wordcount-output</b>:
+<p>Kafka comes with a command line client that will take input from a file or from standard input and send it out as messages to the Kafka cluster. By default, each line will be sent as a separate message.</p>
+<p>
+Run the producer and then type a few messages into the console to send to the server.</p>
 
 <pre class="brush: bash;">
-&gt; bin/kafka-topics.sh --create \
-    --zookeeper localhost:2181 \
-    --replication-factor 1 \
-    --partitions 1 \
-    --topic streams-plaintext-input
-Created topic "streams-plaintext-input".
+&gt; bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
+This is a message
+This is another message
 </pre>
 
-Note: we create the output topic with compaction enabled because the output stream is a changelog stream
-(cf. <a href="#anchor-changelog-output">explanation of application output</a> below).
-
-<pre class="brush: bash;">
-&gt; bin/kafka-topics.sh --create \
-    --zookeeper localhost:2181 \
-    --replication-factor 1 \
-    --partitions 1 \
-    --topic streams-wordcount-output \
-    --config cleanup.policy=compact
-Created topic "streams-wordcount-output".
-</pre>
+<h4><a id="quickstart_consume" href="#quickstart_consume">Step 5: Start a consumer</a></h4>
 
-The created topic can be described with the same <b>kafka-topics</b> tool:
+<p>Kafka also has a command line consumer that will dump out messages to standard output.</p>
 
 <pre class="brush: bash;">
-&gt; bin/kafka-topics.sh --zookeeper localhost:2181 --describe
-
-Topic:streams-plaintext-input	PartitionCount:1	ReplicationFactor:1	Configs:
-    Topic: streams-plaintext-input	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
-Topic:streams-wordcount-output	PartitionCount:1	ReplicationFactor:1	Configs:
-	Topic: streams-wordcount-output	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
+&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
+This is a message
+This is another message
 </pre>
+<p>
+If you have each of the above commands running in a different terminal then you should now be able to type messages into the producer terminal and see them appear in the consumer terminal.
+</p>
+<p>
+All of the command line tools have additional options; running the command with no arguments will display usage information documenting them in more detail.
+</p>
 
-<h4><a id="quickstart_streams_start" href="#quickstart_streams_start">Step 4: Start the Wordcount Application</a></h4>
-
-The following command starts the WordCount demo application:
+<h4><a id="quickstart_multibroker" href="#quickstart_multibroker">Step 6: Setting up a multi-broker cluster</a></h4>
 
+<p>So far we have been running against a single broker, but that's no fun. For Kafka, a single broker is just a cluster of size one, so nothing much changes other than starting a few more broker instances. But just to get a feel for it, let's expand our cluster to three nodes (still all on our local machine).</p>
+<p>
+First we make a config file for each of the brokers (on Windows use the <code>copy</code> command instead):
+</p>
 <pre class="brush: bash;">
-&gt; bin/kafka-run-class.sh org.apache.kafka.streams.examples.wordcount.WordCountDemo
+&gt; cp config/server.properties config/server-1.properties
+&gt; cp config/server.properties config/server-2.properties
 </pre>
 
 <p>
-The demo application will read from the input topic <b>streams-plaintext-input</b>, perform the computations of the WordCount algorithm on each of the read messages,
-and continuously write its current results to the output topic <b>streams-wordcount-output</b>.
-Hence there won't be any STDOUT output except log entries as the results are written back into Kafka.
+Now edit these new files and set the following properties:
 </p>
+<pre class="brush: text;">
 
-Now we can start the console producer in a separate terminal to write some input data to this topic:
+config/server-1.properties:
+    broker.id=1
+    listeners=PLAINTEXT://:9093
+    log.dir=/tmp/kafka-logs-1
 
+config/server-2.properties:
+    broker.id=2
+    listeners=PLAINTEXT://:9094
+    log.dir=/tmp/kafka-logs-2
+</pre>
+<p>The <code>broker.id</code> property is the unique and permanent name of each node in the cluster. We have to override the port and log directory only because we are running these all on the same machine and we want to keep the brokers from all trying to register on the same port or overwrite each other's data.</p>
+<p>
+We already have ZooKeeper and our single node started, so we just need to start the two new nodes:
+</p>
 <pre class="brush: bash;">
-&gt; bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-plaintext-input
+&gt; bin/kafka-server-start.sh config/server-1.properties &amp;
+...
+&gt; bin/kafka-server-start.sh config/server-2.properties &amp;
+...
 </pre>
 
-and inspect the output of the WordCount demo application by reading from its output topic with the console consumer in a separate terminal:
-
+<p>Now create a new topic with a replication factor of three:</p>
 <pre class="brush: bash;">
-&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
-    --topic streams-wordcount-output \
-    --from-beginning \
-    --formatter kafka.tools.DefaultMessageFormatter \
-    --property print.key=true \
-    --property print.value=true \
-    --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
-    --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer
+&gt; bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
 </pre>
 
-
-<h4><a id="quickstart_streams_process" href="#quickstart_streams_process">Step 5: Process some data</a></h4>
-
-Now let's write a message with the console producer into the input topic <b>streams-plaintext-input</b> by entering a single line of text and then hitting &lt;RETURN&gt;.
-This will send a new message to the input topic, where the message key is null and the message value is the string-encoded text line that you just entered
-(in practice, input data for applications will typically be streaming continuously into Kafka, rather than being manually entered as we do in this quickstart):
-
+<p>Okay, but now that we have a cluster, how can we know which broker is doing what? To see that, run the "describe topics" command:</p>
 <pre class="brush: bash;">
-&gt; bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-plaintext-input
-all streams lead to kafka
+&gt; bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
+Topic:my-replicated-topic	PartitionCount:1	ReplicationFactor:3	Configs:
+	Topic: my-replicated-topic	Partition: 0	Leader: 1	Replicas: 1,2,0	Isr: 1,2,0
 </pre>
-
+<p>Here is an explanation of the output. The first line gives a summary of all the partitions; each additional line gives information about one partition. Since we have only one partition for this topic there is only one line.</p>
+<ul>
+  <li>"leader" is the node responsible for all reads and writes for the given partition. Each node will be the leader for a randomly selected portion of the partitions.
+  <li>"replicas" is the list of nodes that replicate the log for this partition regardless of whether they are the leader or even if they are currently alive.
+  <li>"isr" is the set of "in-sync" replicas. This is the subset of the replicas list that is currently alive and caught-up to the leader.
+</ul>
+<p>Note that in my example node 1 is the leader for the only partition of the topic.</p>
 <p>
-This message will be processed by the WordCount application and the following output data will be written to the <b>streams-wordcount-output</b> topic and printed by the console consumer:
+We can run the same command on the original topic we created to see where it is:
 </p>
-
 <pre class="brush: bash;">
-&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
-    --topic streams-wordcount-output \
-    --from-beginning \
-    --formatter kafka.tools.DefaultMessageFormatter \
-    --property print.key=true \
-    --property print.value=true \
-    --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
-    --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer
-
-all	    1
-streams	1
-lead	1
-to	    1
-kafka	1
+&gt; bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
+Topic:test	PartitionCount:1	ReplicationFactor:1	Configs:
+	Topic: test	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
 </pre>
-
+<p>So there is no surprise there&mdash;the original topic has no replicas and is on server 0, the only server in our cluster when we created it.</p>
 <p>
-Here, the first column is the Kafka message key in <code>java.lang.String</code> format and represents a word that is being counted, and the second column is the message value in <code>java.lang.Long</code> format, representing the word's latest count.
+Let's publish a few messages to our new topic:
 </p>
+<pre class="brush: bash;">
+&gt; bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic
+...
+my test message 1
+my test message 2
+^C
+</pre>
+<p>Now let's consume these messages:</p>
+<pre class="brush: bash;">
+&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
+...
+my test message 1
+my test message 2
+^C
+</pre>
 
-Now let's continue writing one more message with the console producer into the input topic <b>streams-plaintext-input</b>.
-Enter the text line "hello kafka streams" and hit &lt;RETURN&gt;.
-Your terminal should look as follows:
+<p>Now let's test out fault-tolerance. Broker 1 was acting as the leader so let's kill it:</p>
+<pre class="brush: bash;">
+&gt; ps aux | grep server-1.properties
+7564 ttys002    0:15.91 /System/Library/Frameworks/JavaVM.framework/Versions/1.8/Home/bin/java...
+&gt; kill -9 7564
+</pre>
 
+On Windows use:
 <pre class="brush: bash;">
-&gt; bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-plaintext-input
-all streams lead to kafka
-hello kafka streams
+&gt; wmic process where "caption = 'java.exe' and commandline like '%server-1.properties%'" get processid
+ProcessId
+6016
+&gt; taskkill /pid 6016 /f
 </pre>
 
-In your other terminal in which the console consumer is running, you will observe that the WordCount application wrote new output data:
+<p>Leadership has switched to one of the followers and node 1 is no longer in the in-sync replica set:</p>
 
 <pre class="brush: bash;">
-&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
-    --topic streams-wordcount-output \
-    --from-beginning \
-    --formatter kafka.tools.DefaultMessageFormatter \
-    --property print.key=true \
-    --property print.value=true \
-    --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
-    --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer
-
-all	    1
-streams	1
-lead	1
-to	    1
-kafka	1
-hello	1
-kafka	2
-streams	2
+&gt; bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
+Topic:my-replicated-topic	PartitionCount:1	ReplicationFactor:3	Configs:
+	Topic: my-replicated-topic	Partition: 0	Leader: 2	Replicas: 1,2,0	Isr: 2,0
+</pre>
+<p>But the messages are still available for consumption even though the leader that took the writes originally is down:</p>
+<pre class="brush: bash;">
+&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
+...
+my test message 1
+my test message 2
+^C
 </pre>
 
-Here the last printed lines <b>kafka 2</b> and <b>streams 2</b> indicate updates to the keys <b>kafka</b> and <b>streams</b> whose counts have been incremented from <b>1</b> to <b>2</b>.
-Whenever you write further input messages to the input topic, you will observe new messages being added to the <b>streams-wordcount-output</b> topic,
-representing the most recent word counts as computed by the WordCount application.
-Let's enter one final input text line "join kafka summit" and hit &lt;RETURN&gt; in the console producer to the input topic <b>streams-plaintext-input</b> before we wrap up this quickstart:
+
+<h4><a id="quickstart_kafkaconnect" href="#quickstart_kafkaconnect">Step 7: Use Kafka Connect to import/export data</a></h4>
+
+<p>Writing data from the console and writing it back to the console is a convenient place to start, but you'll probably want
+to use data from other sources or export data from Kafka to other systems. For many systems, instead of writing custom
+integration code you can use Kafka Connect to import or export data.</p>
+
+<p>Kafka Connect is a tool included with Kafka that imports data into and exports data out of Kafka. It is an extensible tool that runs
+<i>connectors</i>, which implement the custom logic for interacting with an external system. In this quickstart we'll see
+how to run Kafka Connect with simple connectors that import data from a file to a Kafka topic and export data from a
+Kafka topic to a file.</p>
+
+<p>First, we'll start by creating some seed data to test with:</p>
 
 <pre class="brush: bash;">
-&gt; bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-plaintext-input
-all streams lead to kafka
-hello kafka streams
-join kafka summit
+&gt; echo -e "foo\nbar" > test.txt
+</pre>
+Or on Windows:
+<pre class="brush: bash;">
+&gt; echo foo> test.txt
+&gt; echo bar>> test.txt
 </pre>
 
-<a name="anchor-changelog-output"></a>
-The <b>streams-wordcount-output</b> topic will subsequently show the corresponding updated word counts (see last three lines):
+<p>Next, we'll start two connectors running in <i>standalone</i> mode, which means they run in a single, local, dedicated
+process. We provide three configuration files as parameters. The first is always the configuration for the Kafka Connect
+process, containing common configuration such as the Kafka brokers to connect to and the serialization format for data.
+The remaining configuration files each specify a connector to create. These files include a unique connector name, the connector
+class to instantiate, and any other configuration required by the connector.</p>
 
 <pre class="brush: bash;">
-&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
-    --topic streams-wordcount-output \
-    --from-beginning \
-    --formatter kafka.tools.DefaultMessageFormatter \
-    --property print.key=true \
-    --property print.value=true \
-    --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
-    --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer
-
-all	    1
-streams	1
-lead	1
-to	    1
-kafka	1
-hello	1
-kafka	2
-streams	2
-join	1
-kafka	3
-summit	1
+&gt; bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
 </pre>
 
-As one can see, the output of the WordCount application is actually a continuous stream of updates, where each output record (i.e. each line in the original output above) is
-an updated count of a single word, i.e. of a record key such as "kafka". For multiple records with the same key, each later record is an update of the previous one.
-
 <p>
-The two diagrams below illustrate what is essentially happening behind the scenes.
-The first column shows the evolution of the current state of the <code>KTable&lt;String, Long&gt;</code> that is counting word occurrences for <code>count</code>.
-The second column shows the change records that result from state updates to the KTable and that are being sent to the output Kafka topic <b>streams-wordcount-output</b>.
+These sample configuration files, included with Kafka, use the default local cluster configuration you started earlier
+and create two connectors: the first is a source connector that reads lines from an input file and produces each to a Kafka topic
+and the second is a sink connector that reads messages from a Kafka topic and produces each as a line in an output file.
 </p>
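+<p>
+For illustration, a file source connector configuration of this kind has roughly the following shape (the exact contents of the bundled <code>config/connect-file-source.properties</code> may differ slightly):
+</p>
+<pre class="brush: text;">
+name=local-file-source
+connector.class=FileStreamSource
+tasks.max=1
+file=test.txt
+topic=connect-test
+</pre>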
 
-<img src="/{{version}}/images/streams-table-updates-02.png" style="float: right; width: 25%;">
-<img src="/{{version}}/images/streams-table-updates-01.png" style="float: right; width: 25%;">
-
-<p>
-First the text line "all streams lead to kafka" is being processed.
-The <code>KTable</code> is being built up as each new word results in a new table entry (highlighted with a green background), and a corresponding change record is sent to the downstream <code>KStream</code>.
-</p>
 <p>
-When the second text line "hello kafka streams" is processed, we observe, for the first time, that existing entries in the <code>KTable</code> are being updated (here: for the words "kafka" and for "streams"). And again, change records are being sent to the output topic.
+During startup you'll see a number of log messages, including some indicating that the connectors are being instantiated.
+Once the Kafka Connect process has started, the source connector should start reading lines from <code>test.txt</code> and
+producing them to the topic <code>connect-test</code>, and the sink connector should start reading messages from the topic <code>connect-test</code>
+and writing them to the file <code>test.sink.txt</code>. We can verify the data has been delivered through the entire pipeline
+by examining the contents of the output file:
 </p>
+
+
+<pre class="brush: bash;">
+&gt; more test.sink.txt
+foo
+bar
+</pre>
+
 <p>
-And so on (we skip the illustration of how the third line is being processed). This explains why the output topic has the contents we showed above, because it contains the full record of changes.
+Note that the data is being stored in the Kafka topic <code>connect-test</code>, so we can also run a console consumer to see the
+data in the topic (or use custom consumer code to process it):
 </p>
 
+
+<pre class="brush: bash;">
+&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic connect-test --from-beginning
+{"schema":{"type":"string","optional":false},"payload":"foo"}
+{"schema":{"type":"string","optional":false},"payload":"bar"}
+...
+</pre>
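+
+<p>
+If you would rather read the topic programmatically than with the console consumer, a minimal sketch using the Java consumer client might look like this (the group id is an arbitrary illustrative value):
+</p>
+
+<pre class="brush: java;">
+import java.util.Collections;
+import java.util.Properties;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.consumer.ConsumerRecord;
+import org.apache.kafka.clients.consumer.ConsumerRecords;
+import org.apache.kafka.clients.consumer.KafkaConsumer;
+
+Properties props = new Properties();
+props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
+props.put(ConsumerConfig.GROUP_ID_CONFIG, "connect-test-reader");   // illustrative group id
+props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");     // read the topic from the beginning
+props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
+props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
+
+try (KafkaConsumer&lt;String, String&gt; consumer = new KafkaConsumer&lt;&gt;(props)) {
+    consumer.subscribe(Collections.singletonList("connect-test"));
+    while (true) {
+        // Poll for new records and print each message value (the JSON envelope shown above).
+        ConsumerRecords&lt;String, String&gt; records = consumer.poll(1000);
+        for (ConsumerRecord&lt;String, String&gt; record : records)
+            System.out.println(record.value());
+    }
+}
+</pre>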
+
+<p>The connectors continue to process data, so we can add data to the file and see it move through the pipeline:</p>
+
+<pre class="brush: bash;">
+&gt; echo Another line>> test.txt
+</pre>
+
+<p>You should see the line appear in the console consumer output and in the sink file.</p>
+
+<h4><a id="quickstart_kafkastreams" href="#quickstart_kafkastreams">Step 8: Use Kafka Streams to process data</a></h4>
+
 <p>
-Looking beyond the scope of this concrete example, what Kafka Streams is doing here is leveraging the duality between a table and a changelog stream (here: table = the KTable, changelog stream = the downstream KStream): you can publish every change of the table to a stream, and if you consume the entire changelog stream from beginning to end, you can reconstruct the contents of the table.
+  Kafka Streams is a client library for building mission-critical real-time applications and microservices,
+  where the input and/or output data is stored in Kafka clusters. Kafka Streams combines the simplicity of
+  writing and deploying standard Java and Scala applications on the client side with the benefits of Kafka's
+  server-side cluster technology to make these applications highly scalable, elastic, fault-tolerant, distributed,
+  and much more. This <a href="/{{version}}/documentation/streams/quickstart">quickstart example</a> will demonstrate how
+  to run a streaming application coded in this library. 
 </p>
 
-<h4><a id="quickstart_streams_stop" href="#quickstart_streams_stop">Step 6: Teardown the application</a></h4>
 
-<p>You can now stop the console consumer, the console producer, the WordCount application, the Kafka broker and the ZooKeeper server in order via <b>Ctrl-C</b>.</p>
-
- <div class="pagination">
-        <a href="/{{version}}/documentation/streams" class="pagination__btn pagination__btn__prev">Previous</a>
-        <a href="/{{version}}/documentation/streams/tutorial" class="pagination__btn pagination__btn__next">Next</a>
-    </div>
 </script>
 
-<div class="p-quickstart-streams"></div>
-
-<!--#include virtual="../../includes/_header.htm" -->
-<!--#include virtual="../../includes/_top.htm" -->
-<div class="content documentation documentation--current">
-    <!--#include virtual="../../includes/_nav.htm" -->
-    <div class="right">
-        <!--#include virtual="../../includes/_docs_banner.htm" -->
-        <ul class="breadcrumbs">
-            <li><a href="/documentation">Documentation</a></li>
-            <li><a href="/documentation/streams">Kafka Streams</a></li>
-        </ul>
-        <div class="p-content"></div>
-    </div>
-</div>
-<!--#include virtual="../../includes/_footer.htm" -->
-<script>
-$(function() {
-  // Show selected style on nav item
-  $('.b-nav__streams').addClass('selected');
-
-
-     //sticky secondary nav
-    var $navbar = $(".sub-nav-sticky"),
-               y_pos = $navbar.offset().top,
-               height = $navbar.height();
-       
-           $(window).scroll(function() {
-               var scrollTop = $(window).scrollTop();
-           
-               if (scrollTop > y_pos - height) {
-                   $navbar.addClass("navbar-fixed")
-               } else if (scrollTop <= y_pos) {
-                   $navbar.removeClass("navbar-fixed")
-               }
-           });
-
-  // Display docs subnav items
-  $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
-});
-</script>
+<div class="p-quickstart"></div>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/1382654c/10/streams/architecture.html
----------------------------------------------------------------------
diff --git a/10/streams/architecture.html b/10/streams/architecture.html
index efc01bd..35c9168 100644
--- a/10/streams/architecture.html
+++ b/10/streams/architecture.html
@@ -110,7 +110,7 @@
     <p>
         Kafka Streams builds on fault-tolerance capabilities integrated natively within Kafka. Kafka partitions are highly available and replicated; so when stream data is persisted to Kafka it is available
         even if the application fails and needs to re-process it. Tasks in Kafka Streams leverage the fault-tolerance capability
-        offered by the Kafka consumer client to handle failures.
+        offered by the <a href="https://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0.9-consumer-client/">Kafka consumer client</a> to handle failures.
         If a task runs on a machine that fails, Kafka Streams automatically restarts the task in one of the remaining running instances of the application.
     </p>
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/1382654c/10/streams/core-concepts.html
----------------------------------------------------------------------
diff --git a/10/streams/core-concepts.html b/10/streams/core-concepts.html
index 1675c1f..c979ea0 100644
--- a/10/streams/core-concepts.html
+++ b/10/streams/core-concepts.html
@@ -20,16 +20,16 @@
 <script id="content-template" type="text/x-handlebars-template">
     <h1>Core Concepts</h1>
     <div class="sub-nav-sticky">
-      <div class="sticky-top">
-        <div style="height:35px">
-          <a href="/{{version}}/documentation/streams/">Introduction</a>
-          <a href="/{{version}}/documentation/streams/developer-guide/">Developer Guide</a>
-          <a class="active-menu-item" href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
-          <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
-          <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
+        <div class="sticky-top">
+            <div style="height:35px">
+                <a href="/{{version}}/documentation/streams/">Introduction</a>
+                <a href="/{{version}}/documentation/streams/developer-guide">Developer Guide</a>
+                <a class="active-menu-item"  href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
+                <a href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
+                <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
+            </div>
         </div>
-      </div>
-  </div> 
+    </div>
     <p>
         Kafka Streams is a client library for processing and analyzing data stored in Kafka.
         It builds upon important stream processing concepts such as properly distinguishing between event time and processing time, windowing support, and simple yet efficient management and real-time querying of application state.
@@ -188,11 +188,11 @@
 <!--#include virtual="../../includes/_footer.htm" -->
 <script>
 $(function() {
-  // Show selected style on nav item
-  $('.b-nav__streams').addClass('selected');
-
+          // Show selected style on nav item
+          $('.b-nav__streams').addClass('selected');
 
-     //sticky secondary nav
+   
+          //sticky secondary nav
           var $navbar = $(".sub-nav-sticky"),
                y_pos = $navbar.offset().top,
                height = $navbar.height();
@@ -206,8 +206,7 @@ $(function() {
                    $navbar.removeClass("navbar-fixed")
                }
            });
-
-  // Display docs subnav items
-  $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
+           // Display docs subnav items
+           $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
 });
 </script>

