pulsar-commits mailing list archives

From GitBox <...@apache.org>
Subject [GitHub] merlimat closed pull request #2837: Site Update for Pulsar 2.2.0 Release
Date Wed, 24 Oct 2018 23:23:16 GMT
merlimat closed pull request #2837: Site Update for Pulsar 2.2.0 Release
URL: https://github.com/apache/pulsar/pull/2837
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

diff --git a/site2/website/releases.json b/site2/website/releases.json
index 9e4e3284c2..8286738e05 100644
--- a/site2/website/releases.json
+++ b/site2/website/releases.json
@@ -1,4 +1,5 @@
 [
+  "2.2.0",
   "2.1.1-incubating",
   "2.1.0-incubating",
   "2.0.1-incubating",
diff --git a/site2/website/versioned_docs/version-2.2.0/adaptors-kafka.md b/site2/website/versioned_docs/version-2.2.0/adaptors-kafka.md
new file mode 100644
index 0000000000..fa6635de57
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/adaptors-kafka.md
@@ -0,0 +1,262 @@
+---
+id: version-2.2.0-adaptors-kafka
+title: Pulsar adaptor for Apache Kafka
+sidebar_label: Kafka client wrapper
+original_id: adaptors-kafka
+---
+
+
+Pulsar provides an easy option for applications that are currently written using the [Apache Kafka](http://kafka.apache.org) Java client API.
+
+## Using the Pulsar Kafka compatibility wrapper
+
+In an existing application, replace the regular Kafka client dependency with the Pulsar Kafka wrapper. Remove:
+
+```xml
+<dependency>
+  <groupId>org.apache.kafka</groupId>
+  <artifactId>kafka-clients</artifactId>
+  <version>0.10.2.1</version>
+</dependency>
+```
+
+Then include this dependency for the Pulsar Kafka wrapper:
+
+```xml
+<dependency>
+  <groupId>org.apache.pulsar</groupId>
+  <artifactId>pulsar-client-kafka</artifactId>
+  <version>{{pulsar:version}}</version>
+</dependency>
+```
+
+With the new dependency, the existing code should work without any changes. The only
+thing that needs to be adjusted is the configuration: point the producers and consumers
+to the Pulsar service rather than to Kafka, and use a Pulsar topic.
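+
+For example, pointing an existing Kafka producer or consumer at Pulsar typically only
+requires changing the service URL. A minimal sketch, assuming a local standalone broker
+(the URL is illustrative):
+
+```properties
+# Point the Kafka-style client at a Pulsar service instead of a Kafka bootstrap server
+bootstrap.servers=pulsar://localhost:6650
+```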
+
+## Using the Pulsar Kafka compatibility wrapper together with the existing Kafka client
+
+When migrating from Kafka to Pulsar, the application might have to use the original Kafka client
+and the Pulsar Kafka wrapper at the same time. In that case, consider using the
+unshaded Pulsar Kafka client wrapper:
+
+```xml
+<dependency>
+  <groupId>org.apache.pulsar</groupId>
+  <artifactId>pulsar-client-kafka-original</artifactId>
+  <version>{{pulsar:version}}</version>
+</dependency>
+```
+
+When using this dependency, construct producers using `org.apache.kafka.clients.producer.PulsarKafkaProducer`
+instead of `org.apache.kafka.clients.producer.KafkaProducer`, and consumers using `org.apache.kafka.clients.consumer.PulsarKafkaConsumer`.
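+
+For example, a producer could be constructed like this with the unshaded wrapper (a
+sketch; the properties mirror the producer example below):
+
+```java
+Properties props = new Properties();
+props.put("bootstrap.servers", "pulsar://localhost:6650");
+props.put("key.serializer", IntegerSerializer.class.getName());
+props.put("value.serializer", StringSerializer.class.getName());
+
+// Note: PulsarKafkaProducer rather than KafkaProducer
+Producer<Integer, String> producer = new PulsarKafkaProducer<>(props);
+```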
+
+## Producer example
+
+```java
+// Topic needs to be a regular Pulsar topic
+String topic = "persistent://public/default/my-topic";
+
+Properties props = new Properties();
+// Point to a Pulsar service
+props.put("bootstrap.servers", "pulsar://localhost:6650");
+
+props.put("key.serializer", IntegerSerializer.class.getName());
+props.put("value.serializer", StringSerializer.class.getName());
+
+Producer<Integer, String> producer = new KafkaProducer<>(props);
+
+for (int i = 0; i < 10; i++) {
+    producer.send(new ProducerRecord<Integer, String>(topic, i, "hello-" + i));
+    log.info("Message {} sent successfully", i);
+}
+
+producer.close();
+```
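+
+The wrapper also supports the callback variant of `send` (see the compatibility matrix
+below), so the same publish can be made asynchronous. A sketch, reusing `producer`,
+`topic`, and `log` from the example above:
+
+```java
+// Asynchronous send: the callback fires once the broker acknowledges the message
+producer.send(new ProducerRecord<>(topic, 1, "hello"), (metadata, exception) -> {
+    if (exception != null) {
+        log.error("Failed to publish message", exception);
+    } else {
+        log.info("Message published successfully");
+    }
+});
+```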
+
+## Consumer example
+
+```java
+String topic = "persistent://public/default/my-topic";
+
+Properties props = new Properties();
+// Point to a Pulsar service
+props.put("bootstrap.servers", "pulsar://localhost:6650");
+props.put("group.id", "my-subscription-name");
+props.put("enable.auto.commit", "false");
+props.put("key.deserializer", IntegerDeserializer.class.getName());
+props.put("value.deserializer", StringDeserializer.class.getName());
+
+Consumer<Integer, String> consumer = new KafkaConsumer<>(props);
+consumer.subscribe(Arrays.asList(topic));
+
+while (true) {
+    ConsumerRecords<Integer, String> records = consumer.poll(100);
+    records.forEach(record -> {
+        log.info("Received record: {}", record);
+    });
+
+    // Commit last offset
+    consumer.commitSync();
+}
+```
+
+## Complete Examples
+
+You can find the complete producer and consumer examples
+[here](https://github.com/apache/pulsar/tree/master/pulsar-client-kafka-compat/pulsar-client-kafka-tests/src/test/java/org/apache/pulsar/client/kafka/compat/examples).
+
+## Compatibility matrix
+
+Currently the Pulsar Kafka wrapper supports most of the operations offered by the Kafka API.
+
+#### Producer
+
+APIs:
+
+| Producer Method                                                               | Supported | Notes                                                                    |
+|:------------------------------------------------------------------------------|:----------|:-------------------------------------------------------------------------|
+| `Future<RecordMetadata> send(ProducerRecord<K, V> record)`                    | Yes       | Currently no support for explicitly setting the partition id when publishing |
+| `Future<RecordMetadata> send(ProducerRecord<K, V> record, Callback callback)` | Yes       |                                                                          |
+| `void flush()`                                                                | Yes       |                                                                          |
+| `List<PartitionInfo> partitionsFor(String topic)`                             | No        |                                                                          |
+| `Map<MetricName, ? extends Metric> metrics()`                                 | No        |                                                                          |
+| `void close()`                                                                | Yes       |                                                                          |
+| `void close(long timeout, TimeUnit unit)`                                     | Yes       |                                                                          |
+
+Properties:
+
+| Config property                         | Supported | Notes                                                                         |
+|:----------------------------------------|:----------|:------------------------------------------------------------------------------|
+| `acks`                                  | Ignored   | Durability and quorum writes are configured at the namespace level            |
+| `batch.size`                            | Ignored   |                                                                               |
+| `block.on.buffer.full`                  | Yes       | If `true`, block the producer when the buffer is full; otherwise, fail with an error |
+| `bootstrap.servers`                     | Yes       | Needs to point to a single Pulsar service URL                                 |
+| `buffer.memory`                         | Ignored   |                                                                               |
+| `client.id`                             | Ignored   |                                                                               |
+| `compression.type`                      | Yes       | Allows `gzip` and `lz4`. No `snappy`.                                         |
+| `connections.max.idle.ms`               | Ignored   |                                                                               |
+| `interceptor.classes`                   | Ignored   |                                                                               |
+| `key.serializer`                        | Yes       |                                                                               |
+| `linger.ms`                             | Yes       | Controls the group commit time when batching messages                         |
+| `max.block.ms`                          | Ignored   |                                                                               |
+| `max.in.flight.requests.per.connection` | Ignored   | In Pulsar ordering is maintained even with multiple requests in flight        |
+| `max.request.size`                      | Ignored   |                                                                               |
+| `metric.reporters`                      | Ignored   |                                                                               |
+| `metrics.num.samples`                   | Ignored   |                                                                               |
+| `metrics.sample.window.ms`              | Ignored   |                                                                               |
+| `partitioner.class`                     | Ignored   |                                                                               |
+| `receive.buffer.bytes`                  | Ignored   |                                                                               |
+| `reconnect.backoff.ms`                  | Ignored   |                                                                               |
+| `request.timeout.ms`                    | Ignored   |                                                                               |
+| `retries`                               | Ignored   | Pulsar client retries with exponential backoff until the send timeout expires |
+| `send.buffer.bytes`                     | Ignored   |                                                                               |
+| `timeout.ms`                            | Ignored   |                                                                               |
+| `value.serializer`                      | Yes       |                                                                               |
+
+
+#### Consumer
+
+APIs:
+
+| Consumer Method                                                                                         | Supported | Notes |
+|:--------------------------------------------------------------------------------------------------------|:----------|:------|
+| `Set<TopicPartition> assignment()`                                                                      | No        |       |
+| `Set<String> subscription()`                                                                            | Yes       |       |
+| `void subscribe(Collection<String> topics)`                                                             | Yes       |       |
+| `void subscribe(Collection<String> topics, ConsumerRebalanceListener callback)`                         | No        |       |
+| `void assign(Collection<TopicPartition> partitions)`                                                    | No        |       |
+| `void subscribe(Pattern pattern, ConsumerRebalanceListener callback)`                                   | No        |       |
+| `void unsubscribe()`                                                                                    | Yes       |       |
+| `ConsumerRecords<K, V> poll(long timeoutMillis)`                                                        | Yes       |       |
+| `void commitSync()`                                                                                     | Yes       |       |
+| `void commitSync(Map<TopicPartition, OffsetAndMetadata> offsets)`                                       | Yes       |       |
+| `void commitAsync()`                                                                                    | Yes       |       |
+| `void commitAsync(OffsetCommitCallback callback)`                                                       | Yes       |       |
+| `void commitAsync(Map<TopicPartition, OffsetAndMetadata> offsets, OffsetCommitCallback callback)`       | Yes       |       |
+| `void seek(TopicPartition partition, long offset)`                                                      | Yes       |       |
+| `void seekToBeginning(Collection<TopicPartition> partitions)`                                           | Yes       |       |
+| `void seekToEnd(Collection<TopicPartition> partitions)`                                                 | Yes       |       |
+| `long position(TopicPartition partition)`                                                               | Yes       |       |
+| `OffsetAndMetadata committed(TopicPartition partition)`                                                 | Yes       |       |
+| `Map<MetricName, ? extends Metric> metrics()`                                                           | No        |       |
+| `List<PartitionInfo> partitionsFor(String topic)`                                                       | No        |       |
+| `Map<String, List<PartitionInfo>> listTopics()`                                                         | No        |       |
+| `Set<TopicPartition> paused()`                                                                          | No        |       |
+| `void pause(Collection<TopicPartition> partitions)`                                                     | No        |       |
+| `void resume(Collection<TopicPartition> partitions)`                                                    | No        |       |
+| `Map<TopicPartition, OffsetAndTimestamp> offsetsForTimes(Map<TopicPartition, Long> timestampsToSearch)` | No        |       |
+| `Map<TopicPartition, Long> beginningOffsets(Collection<TopicPartition> partitions)`                     | No        |       |
+| `Map<TopicPartition, Long> endOffsets(Collection<TopicPartition> partitions)`                           | No        |       |
+| `void close()`                                                                                          | Yes       |       |
+| `void close(long timeout, TimeUnit unit)`                                                               | Yes       |       |
+| `void wakeup()`                                                                                         | No        |       |
+
+Properties:
+
+| Config property                 | Supported | Notes                                                 |
+|:--------------------------------|:----------|:------------------------------------------------------|
+| `group.id`                      | Yes       | Maps to a Pulsar subscription name                    |
+| `max.poll.records`              | Ignored   |                                                       |
+| `max.poll.interval.ms`          | Ignored   | Messages are "pushed" from broker                     |
+| `session.timeout.ms`            | Ignored   |                                                       |
+| `heartbeat.interval.ms`         | Ignored   |                                                       |
+| `bootstrap.servers`             | Yes       | Needs to point to a single Pulsar service URL         |
+| `enable.auto.commit`            | Yes       |                                                       |
+| `auto.commit.interval.ms`       | Ignored   | With auto-commit, acks are sent immediately to broker |
+| `partition.assignment.strategy` | Ignored   |                                                       |
+| `auto.offset.reset`             | Ignored   |                                                       |
+| `fetch.min.bytes`               | Ignored   |                                                       |
+| `fetch.max.bytes`               | Ignored   |                                                       |
+| `fetch.max.wait.ms`             | Ignored   |                                                       |
+| `metadata.max.age.ms`           | Ignored   |                                                       |
+| `max.partition.fetch.bytes`     | Ignored   |                                                       |
+| `send.buffer.bytes`             | Ignored   |                                                       |
+| `receive.buffer.bytes`          | Ignored   |                                                       |
+| `client.id`                     | Ignored   |                                                       |
+
+
+## Custom Pulsar configurations
+
+You can configure the Pulsar authentication provider directly from the Kafka properties.
+
+### Pulsar client properties
+
+| Config property                        | Default | Notes                                                                                  |
+|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------|
+| [`pulsar.authentication.class`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-org.apache.pulsar.client.api.Authentication-)          |         | Authentication provider to use, e.g. `org.apache.pulsar.client.impl.auth.AuthenticationTls` |
+| [`pulsar.authentication.params.map`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.util.Map-)          |         | Map of parameters for the authentication plugin |
+| [`pulsar.authentication.params.string`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setAuthentication-java.lang.String-java.lang.String-)          |         | String of parameters for the authentication plugin, e.g. `key1:val1,key2:val2` |
+| [`pulsar.use.tls`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTls-boolean-)                       | `false` | Enable TLS transport encryption                                                        |
+| [`pulsar.tls.trust.certs.file.path`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsTrustCertsFilePath-java.lang.String-)   |         | Path for the TLS trust certificate store                                               |
+| [`pulsar.tls.allow.insecure.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setTlsAllowInsecureConnection-boolean-) | `false` | Accept self-signed certificates from brokers                                           |
+| [`pulsar.operation.timeout.ms`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setOperationTimeout-int-java.util.concurrent.TimeUnit-) | `30000` | General operations timeout |
+| [`pulsar.stats.interval.seconds`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setStatsInterval-long-java.util.concurrent.TimeUnit-) | `60` | Pulsar client lib stats printing interval |
+| [`pulsar.num.io.threads`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setIoThreads-int-) | `1` | Number of Netty IO threads to use |
+| [`pulsar.connections.per.broker`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConnectionsPerBroker-int-) | `1` | Max number of connection to open to each broker |
+| [`pulsar.use.tcp.nodelay`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setUseTcpNoDelay-boolean-) | `true` | TCP no-delay |
+| [`pulsar.concurrent.lookup.requests`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setConcurrentLookupRequest-int-) | `50000` | Max number of concurrent topic lookups |
+| [`pulsar.max.number.rejected.request.per.connection`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ClientConfiguration.html#setMaxNumberOfRejectedRequestPerConnection-int-) | `50` | Threshold of errors to forcefully close a connection |
+
+
+### Pulsar producer properties
+
+| Config property                        | Default | Notes                                                                                  |
+|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------|
+| [`pulsar.producer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setProducerName-java.lang.String-) | | Specify producer name |
+| [`pulsar.producer.initial.sequence.id`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setInitialSequenceId-long-) |  | Specify baseline for sequence id for this producer |
+| [`pulsar.producer.max.pending.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessages-int-) | `1000` | Set the max size of the queue holding the messages pending to receive an acknowledgment from the broker.  |
+| [`pulsar.producer.max.pending.messages.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setMaxPendingMessagesAcrossPartitions-int-) | `50000` | Set the number of max pending messages across all the partitions  |
+| [`pulsar.producer.batching.enabled`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingEnabled-boolean-) | `true` | Control whether automatic batching of messages is enabled for the producer |
+| [`pulsar.producer.batching.max.messages`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.html#setBatchingMaxMessages-int-) | `1000` | The maximum number of messages permitted in a batch |
+
+
+### Pulsar consumer properties
+
+| Config property                        | Default | Notes                                                                                  |
+|:---------------------------------------|:--------|:---------------------------------------------------------------------------------------|
+| [`pulsar.consumer.name`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setConsumerName-java.lang.String-) | | Set the consumer name |
+| [`pulsar.consumer.receiver.queue.size`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setReceiverQueueSize-int-) | `1000` | Set the size of the consumer receive queue |
+| [`pulsar.consumer.total.receiver.queue.size.across.partitions`](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setMaxTotalReceiverQueueSizeAcrossPartitions-int-) | `50000` | Set the max total receiver queue size across partitions |
+
diff --git a/site2/website/versioned_docs/version-2.2.0/adaptors-spark.md b/site2/website/versioned_docs/version-2.2.0/adaptors-spark.md
new file mode 100644
index 0000000000..83d27fd0e8
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/adaptors-spark.md
@@ -0,0 +1,67 @@
+---
+id: version-2.2.0-adaptors-spark
+title: Pulsar adaptor for Apache Spark
+sidebar_label: Apache Spark
+original_id: adaptors-spark
+---
+
+The Spark Streaming receiver for Pulsar is a custom receiver that enables Apache [Spark Streaming](https://spark.apache.org/streaming/) to receive data from Pulsar.
+
+An application can receive data in [Resilient Distributed Dataset](https://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds) (RDD) format via the Spark Streaming Pulsar receiver and can process it in a variety of ways.
+
+## Prerequisites
+
+To use the receiver, include a dependency for the `pulsar-spark` library in your Java configuration.
+
+### Maven
+
+If you're using Maven, add this to your `pom.xml`:
+
+```xml
+<!-- in your <properties> block -->
+<pulsar.version>{{pulsar:version}}</pulsar.version>
+
+<!-- in your <dependencies> block -->
+<dependency>
+  <groupId>org.apache.pulsar</groupId>
+  <artifactId>pulsar-spark</artifactId>
+  <version>${pulsar.version}</version>
+</dependency>
+```
+
+### Gradle
+
+If you're using Gradle, add this to your `build.gradle` file:
+
+```groovy
+def pulsarVersion = "{{pulsar:version}}"
+
+dependencies {
+    compile group: 'org.apache.pulsar', name: 'pulsar-spark', version: pulsarVersion
+}
+```
+
+## Usage
+
+Pass an instance of `SparkStreamingPulsarReceiver` to the `receiverStream` method in `JavaStreamingContext`:
+
+```java
+SparkConf conf = new SparkConf().setMaster("local[*]").setAppName("pulsar-spark");
+JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));
+
+ClientConfiguration clientConf = new ClientConfiguration();
+ConsumerConfiguration consConf = new ConsumerConfiguration();
+String url = "pulsar://localhost:6650/";
+String topic = "persistent://public/default/topic1";
+String subs = "sub1";
+
+JavaReceiverInputDStream<byte[]> msgs = jssc
+        .receiverStream(new SparkStreamingPulsarReceiver(clientConf, consConf, url, topic, subs));
+```
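+
+After wiring any processing onto the stream, start the context as with any Spark
+Streaming job. A sketch, continuing the snippet above:
+
+```java
+// For example, print the number of messages received in each batch
+msgs.count().print();
+
+jssc.start();
+jssc.awaitTermination();
+```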
+
+
+## Example
+
+You can find a complete example [here](https://github.com/apache/pulsar/tree/master/pulsar-spark/src/test/java/org/apache/pulsar/spark/example/SparkStreamingPulsarReceiverExample.java).
+This example counts the number of received messages that contain the string "Pulsar".
+
diff --git a/site2/website/versioned_docs/version-2.2.0/adaptors-storm.md b/site2/website/versioned_docs/version-2.2.0/adaptors-storm.md
new file mode 100644
index 0000000000..775be93172
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/adaptors-storm.md
@@ -0,0 +1,105 @@
+---
+id: version-2.2.0-adaptors-storm
+title: Pulsar adaptor for Apache Storm
+sidebar_label: Apache Storm
+original_id: adaptors-storm
+---
+
+Pulsar Storm is an adaptor for integrating with [Apache Storm](http://storm.apache.org/) topologies. It provides core Storm implementations for sending and receiving data.
+
+An application can inject data into a Storm topology via a generic Pulsar spout, as well as consume data from a Storm topology via a generic Pulsar bolt.
+
+## Using the Pulsar Storm Adaptor
+
+Include dependency for Pulsar Storm Adaptor:
+
+```xml
+<dependency>
+  <groupId>org.apache.pulsar</groupId>
+  <artifactId>pulsar-storm</artifactId>
+  <version>${pulsar.version}</version>
+</dependency>
+```
+
+## Pulsar Spout
+
+The Pulsar Spout allows for the data published on a topic to be consumed by a Storm topology. It emits a Storm tuple based on the message received and the `MessageToValuesMapper` provided by the client.
+
+Tuples that fail to be processed by the downstream bolts are re-injected by the spout with an exponential backoff, within a configurable timeout (the default is 60 seconds) or a configurable number of retries, whichever comes first, after which the message is acknowledged by the consumer. Here's an example construction of a spout:
+
+```java
+// Configure a Pulsar Client
+ClientConfiguration clientConf = new ClientConfiguration();
+
+// Configure a Pulsar Consumer
+ConsumerConfiguration consumerConf = new ConsumerConfiguration();  
+
+@SuppressWarnings("serial")
+MessageToValuesMapper messageToValuesMapper = new MessageToValuesMapper() {
+
+    @Override
+    public Values toValues(Message msg) {
+        return new Values(new String(msg.getData()));
+    }
+
+    @Override
+    public void declareOutputFields(OutputFieldsDeclarer declarer) {
+        // declare the output fields
+        declarer.declare(new Fields("string"));
+    }
+};
+
+// Configure a Pulsar Spout
+PulsarSpoutConfiguration spoutConf = new PulsarSpoutConfiguration();
+spoutConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650");
+spoutConf.setTopic("persistent://my-property/usw/my-ns/my-topic1");
+spoutConf.setSubscriptionName("my-subscriber-name1");
+spoutConf.setMessageToValuesMapper(messageToValuesMapper);
+
+// Create a Pulsar Spout
+PulsarSpout spout = new PulsarSpout(spoutConf, clientConf, consumerConf);
+```
+
+## Pulsar Bolt
+
+The Pulsar bolt allows data in a Storm topology to be published on a topic. It publishes messages based on the Storm tuple received and the `TupleToMessageMapper` provided by the client.
+
+A partitioned topic can also be used to publish messages on different partitions. In the implementation of the `TupleToMessageMapper`, provide a "key" in the message; messages with the same key are routed to the same partition. Here's an example bolt:
+
+```java
+// Configure a Pulsar Client
+ClientConfiguration clientConf = new ClientConfiguration();
+
+// Configure a Pulsar Producer  
+ProducerConfiguration producerConf = new ProducerConfiguration();
+
+@SuppressWarnings("serial")
+TupleToMessageMapper tupleToMessageMapper = new TupleToMessageMapper() {
+
+    @Override
+    public Message toMessage(Tuple tuple) {
+        String receivedMessage = tuple.getString(0);
+        // message processing
+        String processedMsg = receivedMessage + "-processed";
+        return MessageBuilder.create().setContent(processedMsg.getBytes()).build();
+    }
+
+    @Override
+    public void declareOutputFields(OutputFieldsDeclarer declarer) {
+        // declare the output fields
+    }
+};
+
+// Configure a Pulsar Bolt
+PulsarBoltConfiguration boltConf = new PulsarBoltConfiguration();
+boltConf.setServiceUrl("pulsar://broker.messaging.usw.example.com:6650");
+boltConf.setTopic("persistent://my-property/usw/my-ns/my-topic2");
+boltConf.setTupleToMessageMapper(tupleToMessageMapper);
+
+// Create a Pulsar Bolt
+PulsarBolt bolt = new PulsarBolt(boltConf, clientConf);
+```
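+
+The spout and bolt can then be wired into a topology in the usual Storm fashion. A
+sketch, using the `spout` and `bolt` created above (component names and parallelism are
+illustrative):
+
+```java
+TopologyBuilder builder = new TopologyBuilder();
+builder.setSpout("pulsar-spout", spout, 1);
+builder.setBolt("pulsar-bolt", bolt, 1).shuffleGrouping("pulsar-spout");
+StormTopology topology = builder.createTopology();
+```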
+
+## Example
+
+You can find a complete example [here](https://github.com/apache/pulsar/tree/master/pulsar-storm/src/test/java/org/apache/pulsar/storm/example/StormExample.java).
diff --git a/site2/website/versioned_docs/version-2.2.0/client-libraries-cpp.md b/site2/website/versioned_docs/version-2.2.0/client-libraries-cpp.md
new file mode 100644
index 0000000000..89a6a4433d
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/client-libraries-cpp.md
@@ -0,0 +1,184 @@
+---
+id: version-2.2.0-client-libraries-cpp
+title: The Pulsar C++ client
+sidebar_label: C++
+original_id: client-libraries-cpp
+---
+
+## Supported platforms
+
+The Pulsar C++ client has been successfully tested on **MacOS** and **Linux**.
+
+## Linux
+
+### Install
+
+> Since the 2.1.0 release, Pulsar ships pre-built RPM and Debian packages. You can choose to download
+> and install those packages instead of building them yourself.
+
+#### RPM
+
+| Link | Crypto files |
+|------|--------------|
+| [client]({{pulsar:rpm:client}}) | [asc]({{pulsar:rpm:client}}.asc), [sha512]({{pulsar:rpm:client}}.sha512) |
+| [client-debuginfo]({{pulsar:rpm:client-debuginfo}}) | [asc]({{pulsar:rpm:client-debuginfo}}.asc),  [sha512]({{pulsar:rpm:client-debuginfo}}.sha512) |
+| [client-devel]({{pulsar:rpm:client-devel}}) | [asc]({{pulsar:rpm:client-devel}}.asc),  [sha512]({{pulsar:rpm:client-devel}}.sha512) |
+
+To install an RPM package, download the RPM packages and install them using the following command:
+
+```bash
+$ rpm -ivh apache-pulsar-client*.rpm
+```
+
+#### DEB
+
+| Link | Crypto files |
+|------|--------------|
+| [client]({{pulsar:deb:client}}) | [asc]({{pulsar:deb:client}}.asc), [sha512]({{pulsar:deb:client}}.sha512) |
+| [client-devel]({{pulsar:deb:client-devel}}) | [asc]({{pulsar:deb:client-devel}}.asc),  [sha512]({{pulsar:deb:client-devel}}.sha512) |
+
+To install a DEB package, download the DEB packages and install them using the following command:
+
+```bash
+$ apt install ./apache-pulsar-client*.deb
+```
+
+### Build
+
+> If you want to build RPM and Debian packages off latest master, you can follow the instructions
+> below to do so. All the instructions are run at the root directory of your cloned Pulsar
+> repo.
+
+There are recipes that build RPM and Debian packages containing a
+statically linked `libpulsar.so` / `libpulsar.a` with all the required
+dependencies.
+
+To build the C++ library packages, first build the Java packages:
+
+```shell
+mvn install -DskipTests
+```
+
+#### RPM
+
+```shell
+pulsar-client-cpp/pkg/rpm/docker-build-rpm.sh
+```
+
+This builds the RPMs inside a Docker container and leaves them
+in `pulsar-client-cpp/pkg/rpm/RPMS/x86_64/`.
+
+| Package name | Content |
+|-----|-----|
+| pulsar-client | Shared library `libpulsar.so` |
+| pulsar-client-devel | Static library `libpulsar.a` and C++ and C headers |
+| pulsar-client-debuginfo | Debug symbols for `libpulsar.so` |
+
+#### Deb
+
+To build Debian packages:
+
+```shell
+pulsar-client-cpp/pkg/deb/docker-build-deb.sh
+```
+
+Debian packages will be created at `pulsar-client-cpp/pkg/deb/BUILD/DEB/`
+
+| Package name | Content |
+|-----|-----|
+| pulsar-client | Shared library `libpulsar.so` |
+| pulsar-client-dev | Static library `libpulsar.a` and C++ and C headers |
+
+## MacOS
+
+Use the [Homebrew](https://brew.sh/)-supplied recipe to build the Pulsar
+client library on MacOS.
+
+```shell
+brew install https://raw.githubusercontent.com/apache/pulsar/master/pulsar-client-cpp/homebrew/libpulsar.rb
+```
+
+If using Python 3 on MacOS, add the flag `--with-python3` to the above command.
+
+This will install the package with the library and headers.
+
+## Connection URLs
+
+
+To connect to Pulsar using client libraries, you need to specify a Pulsar protocol URL.
+
+Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` URI scheme and have a default port of 6650. Here’s an example for localhost:
+
+```http
+pulsar://localhost:6650
+```
+
+A URL for a production Pulsar cluster may look something like this:
+```http
+pulsar://pulsar.us-west.example.com:6650
+```
+
+If you’re using TLS authentication, the URL will look something like this:
+```http
+pulsar+ssl://pulsar.us-west.example.com:6651
+```
+
+## Consumer
+
+```c++
+Client client("pulsar://localhost:6650");
+
+Consumer consumer;
+Result result = client.subscribe("my-topic", "my-subscription-name", consumer);
+if (result != ResultOk) {
+    LOG_ERROR("Failed to subscribe: " << result);
+    return -1;
+}
+
+Message msg;
+
+while (true) {
+    consumer.receive(msg);
+    LOG_INFO("Received: " << msg
+            << "  with payload '" << msg.getDataAsString() << "'");
+
+    consumer.acknowledge(msg);
+}
+
+client.close();
+```
+
+
+## Producer
+
+```c++
+Client client("pulsar://localhost:6650");
+
+Producer producer;
+Result result = client.createProducer("my-topic", producer);
+if (result != ResultOk) {
+    LOG_ERROR("Error creating producer: " << result);
+    return -1;
+}
+
+// Publish 10 messages to the topic
+for (int i = 0; i < 10; i++){
+    Message msg = MessageBuilder().setContent("my-message").build();
+    Result res = producer.send(msg);
+    LOG_INFO("Message sent: " << res);
+}
+client.close();
+```
+
+## Authentication
+
+```cpp
+ClientConfiguration config = ClientConfiguration();
+config.setUseTls(true);
+config.setTlsTrustCertsFilePath("/path/to/cacert.pem");
+config.setTlsAllowInsecureConnection(false);
+config.setAuth(pulsar::AuthTls::create(
+            "/path/to/client-cert.pem", "/path/to/client-key.pem"));
+
+Client client("pulsar+ssl://my-broker.com:6651", config);
+```
diff --git a/site2/website/versioned_docs/version-2.2.0/client-libraries-go.md b/site2/website/versioned_docs/version-2.2.0/client-libraries-go.md
new file mode 100644
index 0000000000..fbfb926611
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/client-libraries-go.md
@@ -0,0 +1,462 @@
+---
+id: version-2.2.0-client-libraries-go
+title: The Pulsar Go client
+sidebar_label: Go
+original_id: client-libraries-go
+---
+
+The Pulsar Go client can be used to create Pulsar [producers](#producers), [consumers](#consumers), and [readers](#readers) in Go (aka Golang).
+
+> #### API docs available as well
+> For standard API docs, consult the [Godoc](https://godoc.org/github.com/apache/pulsar/pulsar-client-go/pulsar).
+
+
+## Installation
+
+### Requirements
+
+The Pulsar Go client library is based on the C++ client library. Follow
+the instructions for the [C++ library](client-libraries-cpp.md) to install the binaries
+through [RPM](client-libraries-cpp.md#rpm), [Deb](client-libraries-cpp.md#deb), or [Homebrew](client-libraries-cpp.md#macos) packages.
+
+### Installing go package
+
+> #### Compatibility Warning
+> The version number of the Go client **must match** the version number of the Pulsar C++ client library.
+
+You can install the `pulsar` library locally using `go get`.  Note that `go get` doesn't support fetching a specific tag - it will always pull in master's version of the Go client.  You'll need a C++ client library that matches master.
+
+```bash
+$ go get -u github.com/apache/pulsar/pulsar-client-go/pulsar
+```
+
+Or you can use [dep](https://github.com/golang/dep) for managing the dependencies.
+
+```bash
+$ dep ensure -add github.com/apache/pulsar/pulsar-client-go/pulsar@v{{pulsar:version}}
+```
+
+Once installed locally, you can import it into your project:
+
+```go
+import "github.com/apache/pulsar/pulsar-client-go/pulsar"
+```
+
+## Connection URLs
+
+To connect to Pulsar using client libraries, you need to specify a [Pulsar protocol](developing-binary-protocol.md) URL.
+
+Pulsar protocol URLs are assigned to specific clusters, use the `pulsar` scheme and have a default port of 6650. Here's an example for `localhost`:
+
+```http
+pulsar://localhost:6650
+```
+
+A URL for a production Pulsar cluster may look something like this:
+
+```http
+pulsar://pulsar.us-west.example.com:6650
+```
+
+If you're using [TLS](security-tls-authentication.md) authentication, the URL will look something like this:
+
+```http
+pulsar+ssl://pulsar.us-west.example.com:6651
+```
+
+## Creating a client
+
+In order to interact with Pulsar, you'll first need a `Client` object. You can create a client object using the `NewClient` function, passing in a `ClientOptions` object (more on configuration [below](#client-configuration)). Here's an example:
+
+
+```go
+import (
+    "log"
+    "runtime"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+        URL: "pulsar://localhost:6650",
+        OperationTimeoutSeconds: 5,
+        MessageListenerThreads: runtime.NumCPU(),
+    })
+
+    if err != nil {
+        log.Fatalf("Could not instantiate Pulsar client: %v", err)
+    }
+}
+```
+
+The following configurable parameters are available for Pulsar clients:
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`URL` | The connection URL for the Pulsar cluster. See [above](#connection-urls) for more info |
+`IOThreads` | The number of threads to use for handling connections to Pulsar [brokers](reference-terminology.md#broker) | 1
+`OperationTimeoutSeconds` | The timeout for some Go client operations (creating producers, subscribing to and unsubscribing from [topics](reference-terminology.md#topic)). Retries will occur until this threshold is reached, at which point the operation will fail. | 30
+`MessageListenerThreads` | The number of threads used by message listeners ([consumers](#consumers) and [readers](#readers)) | 1
+`ConcurrentLookupRequests` | The number of concurrent lookup requests that can be sent on each broker connection. Setting a maximum helps to keep from overloading brokers. You should set values over the default of 5000 only if the client needs to produce and/or subscribe to thousands of Pulsar topics. | 5000
+`Logger` | A custom logger implementation for the client (as a function that takes a log level, file path, line number, and message). All info, warn, and error messages will be routed to this function. | `nil`
+`TLSTrustCertsFilePath` | The file path for the trusted TLS certificate |
+`TLSAllowInsecureConnection` | Whether the client accepts untrusted TLS certificates from the broker | `false`
+`Authentication` | Configure the authentication provider. (default: no authentication). Example: `Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem")` | `nil`
+`StatsIntervalInSeconds` | The interval (in seconds) at which client stats are published | 60
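
The `Logger` hook above is described only in words; here is a minimal, self-contained sketch of a callback with that shape. The `LogLevel` and `LoggerFunc` names below are stand-ins for illustration, not the client's actual type names:

```go
package main

import "fmt"

// Stand-in for the client's log-level type (names here are hypothetical).
type LogLevel int

const (
	LogInfo LogLevel = iota
	LogWarn
	LogError
)

// A logger callback matching the shape described in the table:
// log level, source file path, line number, and message.
type LoggerFunc func(level LogLevel, file string, line int, message string)

// makeCollector returns a logger that appends formatted entries to the
// given slice, e.g. for routing client logs into an application buffer.
func makeCollector(out *[]string) LoggerFunc {
	return func(level LogLevel, file string, line int, message string) {
		*out = append(*out, fmt.Sprintf("[%d] %s:%d %s", level, file, line, message))
	}
}

func main() {
	var entries []string
	logger := makeCollector(&entries)
	logger(LogWarn, "client.go", 42, "connection retry")
	fmt.Println(entries[0]) // prints: [1] client.go:42 connection retry
}
```

A real implementation would typically forward to the application's logging framework instead of collecting entries in a slice.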
+
+## Producers
+
+Pulsar producers publish messages to Pulsar topics. You can [configure](#producer-configuration) Go producers using a `ProducerOptions` object. Here's an example:
+
+```go
+producer, err := client.CreateProducer(pulsar.ProducerOptions{
+    Topic: "my-topic",
+})
+
+if err != nil {
+    log.Fatalf("Could not instantiate Pulsar producer: %v", err)
+}
+
+defer producer.Close()
+
+msg := pulsar.ProducerMessage{
+    Payload: []byte("Hello, Pulsar"),
+}
+
+if err := producer.Send(context.Background(), msg); err != nil {
+    log.Fatalf("Producer could not send message: %v", err)
+}
+```
+
+> #### Blocking operation
+> When you create a new Pulsar producer, the operation will block (waiting on a go channel) until either a producer is successfully created or an error is thrown.
+
+
+### Producer operations
+
+Pulsar Go producers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Fetches the producer's [topic](reference-terminology.md#topic)| `string`
+`Name()` | Fetches the producer's name | `string`
+`Send(context.Context, ProducerMessage) error` | Publishes a [message](#messages) to the producer's topic. This call will block until the message is successfully acknowledged by the Pulsar broker, or an error will be thrown if the timeout set using the `SendTimeout` in the producer's [configuration](#producer-configuration) is exceeded. | `error`
+`SendAsync(context.Context, ProducerMessage, func(ProducerMessage, error))` | Publishes a [message](#messages) to the producer's topic asynchronously. The third argument is a callback function that specifies what happens either when the message is acknowledged or an error is thrown. |
+`Close()` | Closes the producer and releases all resources allocated to it. If `Close()` is called then no more messages will be accepted from the publisher. This method will block until all pending publish requests have been persisted by Pulsar. If an error is thrown, no pending writes will be retried. | `error`
+
+Here's a more involved example usage of a producer:
+
+```go
+import (
+    "context"
+    "fmt"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    // Instantiate a Pulsar client
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+        URL: "pulsar://localhost:6650",
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    // Use the client to instantiate a producer
+    producer, err := client.CreateProducer(pulsar.ProducerOptions{
+        Topic: "my-topic",
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    ctx := context.Background()
+
+    // Send 10 messages synchronously and 10 messages asynchronously
+    for i := 0; i < 10; i++ {
+        // Create a message
+        msg := pulsar.ProducerMessage{
+            Payload: []byte(fmt.Sprintf("message-%d", i)),
+        }
+
+        // Attempt to send the message
+        if err := producer.Send(ctx, msg); err != nil {
+            log.Fatal(err)
+        }
+
+        // Create a different message to send asynchronously
+        asyncMsg := pulsar.ProducerMessage{
+            Payload: []byte(fmt.Sprintf("async-message-%d", i)),
+        }
+
+        // Attempt to send the message asynchronously and handle the response
+        producer.SendAsync(ctx, asyncMsg, func(msg pulsar.ProducerMessage, err error) {
+            if err != nil { log.Fatal(err) }
+
+            fmt.Printf("Message %s successfully published", msg.ID())
+        })
+    }
+}
+```
+
+### Producer configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) to which the producer will publish messages |
+`Name` | A name for the producer. If you don't explicitly assign a name, Pulsar will automatically generate a globally unique name that you can access later using the `Name()` method.  If you choose to explicitly assign a name, it will need to be unique across *all* Pulsar clusters, otherwise the creation operation will throw an error. |
+`SendTimeout` | When publishing a message to a topic, the producer will wait for an acknowledgment from the responsible Pulsar [broker](reference-terminology.md#broker). If a message is not acknowledged within the threshold set by this parameter, an error will be thrown. If you set `SendTimeout` to -1, the timeout will be set to infinity (and thus removed). Removing the send timeout is recommended when using Pulsar's [message de-duplication](cookbooks-deduplication.md) feature. | 30 seconds
+`MaxPendingMessages` | The maximum size of the queue holding pending messages (i.e. messages waiting to receive an acknowledgment from the [broker](reference-terminology.md#broker)). By default, when the queue is full all calls to the `Send` and `SendAsync` methods will fail *unless* `BlockIfQueueFull` is set to `true`. |
+`MaxPendingMessagesAcrossPartitions` | |
+`BlockIfQueueFull` | If set to `true`, the producer's `Send` and `SendAsync` methods will block when the outgoing message queue is full rather than failing and throwing an error (the size of that queue is dictated by the `MaxPendingMessages` parameter); if set to `false` (the default), `Send` and `SendAsync` operations will fail and throw a `ProducerQueueIsFullError` when the queue is full. | `false`
+`MessageRoutingMode` | The message routing logic (for producers on [partitioned topics](concepts-architecture-overview.md#partitioned-topics)). This logic is applied only when no key is set on messages. The available options are: round robin (`pulsar.RoundRobinDistribution`, the default), publishing all messages to a single partition (`pulsar.UseSinglePartition`), or a custom partitioning scheme (`pulsar.CustomPartition`). | `pulsar.RoundRobinDistribution`
+`HashingScheme` | The hashing function that determines the partition on which a particular message is published (partitioned topics only). The available options are: `pulsar.JavaStringHash` (the equivalent of `String.hashCode()` in Java), `pulsar.Murmur3_32Hash` (applies the [Murmur3](https://en.wikipedia.org/wiki/MurmurHash) hashing function), or `pulsar.BoostHash` (applies the hashing function from C++'s [Boost](https://www.boost.org/doc/libs/1_62_0/doc/html/hash.html) library) | `pulsar.JavaStringHash`
+`CompressionType` | The message data compression type used by the producer. The available options are [`LZ4`](https://github.com/lz4/lz4) and [`ZLIB`](https://zlib.net/). | No compression
+`MessageRouter` | By default, Pulsar uses a round-robin routing scheme for [partitioned topics](cookbooks-partitioned.md). The `MessageRouter` parameter enables you to specify custom routing logic via a function that takes the Pulsar message and topic metadata as arguments and returns an integer (the index of the partition to publish to), i.e. a function signature of `func(Message, TopicMetadata) int`. |
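
To make the `func(Message, TopicMetadata) int` contract concrete, here is a self-contained sketch of a key-hash router. The `Message` and `TopicMetadata` interfaces below are simplified stand-ins for the client's own types, so treat this as an illustration of the shape rather than drop-in code:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// Simplified stand-ins for the client's Message and TopicMetadata types.
type Message interface {
	Key() string
}

type TopicMetadata interface {
	NumPartitions() int
}

type keyedMessage struct{ key string }

func (m keyedMessage) Key() string { return m.key }

type metadata struct{ partitions int }

func (md metadata) NumPartitions() int { return md.partitions }

// routeByKey matches the func(Message, TopicMetadata) int shape:
// messages with the same key always land on the same partition.
func routeByKey(msg Message, md TopicMetadata) int {
	h := fnv.New32a()
	h.Write([]byte(msg.Key()))
	return int(h.Sum32() % uint32(md.NumPartitions()))
}

func main() {
	md := metadata{partitions: 4}
	for _, key := range []string{"user-1", "user-2", "user-1"} {
		fmt.Printf("%s -> partition %d\n", key, routeByKey(keyedMessage{key}, md))
	}
}
```

Because routing is a pure function of the key, both `user-1` messages map to the same partition, which preserves per-key ordering.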
+
+## Consumers
+
+Pulsar consumers subscribe to one or more Pulsar topics and listen for incoming messages produced on that topic/those topics. You can [configure](#consumer-configuration) Go consumers using a `ConsumerOptions` object. Here's a basic example that uses channels:
+
+```go
+msgChannel := make(chan pulsar.ConsumerMessage)
+
+consumerOpts := pulsar.ConsumerOptions{
+    Topic:            "my-topic",
+    SubscriptionName: "my-subscription-1",
+    Type:             pulsar.Exclusive,
+    MessageChannel:   msgChannel,
+}
+
+consumer, err := client.Subscribe(consumerOpts)
+
+if err != nil {
+    log.Fatalf("Could not establish subscription: %v", err)
+}
+
+defer consumer.Close()
+
+for cm := range msgChannel {
+    msg := cm.Message
+
+    fmt.Printf("Message ID: %s", msg.ID())
+    fmt.Printf("Message value: %s", string(msg.Payload()))
+
+    consumer.Ack(msg)
+}
+```
+
+> #### Blocking operation
+> When you create a new Pulsar consumer, the operation will block (on a go channel) until either a consumer is successfully created or an error is thrown.
+
+
+### Consumer operations
+
+Pulsar Go consumers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Returns the consumer's [topic](reference-terminology.md#topic) | `string`
+`Subscription()` | Returns the consumer's subscription name | `string`
+`Unsubscribe()` | Unsubscribes the consumer from the assigned topic. Throws an error if the unsubscribe operation is somehow unsuccessful. | `error`
+`Receive(context.Context)` | Receives a single message from the topic. This method blocks until a message is available. | `(Message, error)`
+`Ack(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) | `error`
+`AckID(MessageID)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) a message to the Pulsar [broker](reference-terminology.md#broker) by message ID | `error`
+`AckCumulative(Message)` | [Acknowledges](reference-terminology.md#acknowledgment-ack) *all* the messages in the stream, up to and including the specified message. The `AckCumulative` method will block until the ack has been sent to the broker. After that, the messages will *not* be redelivered to the consumer. Cumulative acking cannot be used with a [shared](concepts-messaging.md#shared) subscription type. | `error`
+`Close()` | Closes the consumer, disabling its ability to receive messages from the broker | `error`
+`RedeliverUnackedMessages()` | Redelivers *all* unacknowledged messages on the topic. In [failover](concepts-messaging.md#failover) mode, this request is ignored if the consumer isn't active on the specified topic; in [shared](concepts-messaging.md#shared) mode, redelivered messages are distributed across all consumers connected to the topic. **Note**: this is a *non-blocking* operation that doesn't throw an error. |
+
+#### Receive example
+
+Here's an example usage of a Go consumer that uses the `Receive()` method to process incoming messages:
+
+```go
+import (
+    "context"
+    "log"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    // Instantiate a Pulsar client
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+            URL: "pulsar://localhost:6650",
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    // Use the client object to instantiate a consumer
+    consumer, err := client.Subscribe(pulsar.ConsumerOptions{
+        Topic:            "my-golang-topic",
+        SubscriptionName: "sub-1",
+        SubscriptionType: pulsar.Exclusive,
+    })
+
+    if err != nil { log.Fatal(err) }
+
+    defer consumer.Close()
+
+    ctx := context.Background()
+
+    // Listen indefinitely on the topic
+    for {
+        msg, err := consumer.Receive(ctx)
+        if err != nil { log.Fatal(err) }
+
+        // Do something with the message
+
+        consumer.Ack(msg)
+    }
+}
+```
+
+### Consumer configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the consumer will establish a subscription and listen for messages |
+`SubscriptionName` | The subscription name for this consumer |
+`Name` | The name of the consumer |
+`AckTimeout` | The timeout after which unacknowledged messages are redelivered to the consumer (a value of 0 disables redelivery on ack timeout) | 0
+`SubscriptionType` | Available options are `Exclusive`, `Shared`, and `Failover` | `Exclusive`
+`MessageChannel` | The Go channel used by the consumer. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
+`ReceiverQueueSize` | Sets the size of the consumer's receiver queue, i.e. the number of messages that can be accumulated by the consumer before the application calls `Receive`. A value higher than the default of 1000 could increase consumer throughput, though at the expense of more memory utilization. | 1000
+`MaxTotalReceiverQueueSizeAcrossPartitions` |Set the max total receiver queue size across partitions. This setting will be used to reduce the receiver queue size for individual partitions if the total exceeds this value | 50000
+
+## Readers
+
+Pulsar readers process messages from Pulsar topics. Readers are different from consumers because with readers you need to explicitly specify which message in the stream you want to begin with (consumers, on the other hand, automatically begin with the most recent unacked message). You can [configure](#reader-configuration) Go readers using a `ReaderOptions` object. Here's an example:
+
+```go
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+    Topic: "my-golang-topic",
+    StartMessageID: pulsar.LatestMessage,
+})
+```
+
+> #### Blocking operation
+> When you create a new Pulsar reader, the operation will block (on a go channel) until either a reader is successfully created or an error is thrown.
+
+
+### Reader operations
+
+Pulsar Go readers have the following methods available:
+
+Method | Description | Return type
+:------|:------------|:-----------
+`Topic()` | Returns the reader's [topic](reference-terminology.md#topic) | `string`
+`Next(context.Context)` | Receives the next message on the topic (analogous to the `Receive` method for [consumers](#consumer-operations)). This method blocks until a message is available. | `(Message, error)`
+`Close()` | Closes the reader, disabling its ability to receive messages from the broker | `error`
+
+#### "Next" example
+
+Here's an example usage of a Go reader that uses the `Next()` method to process incoming messages:
+
+```go
+import (
+    "context"
+    "log"
+
+    "github.com/apache/pulsar/pulsar-client-go/pulsar"
+)
+
+func main() {
+    // Instantiate a Pulsar client
+    client, err := pulsar.NewClient(pulsar.ClientOptions{
+            URL: "pulsar://localhost:6650",
+    })
+
+    if err != nil { log.Fatalf("Could not create client: %v", err) }
+
+    // Use the client to instantiate a reader
+    reader, err := client.CreateReader(pulsar.ReaderOptions{
+        Topic:          "my-golang-topic",
+        StartMessageID: pulsar.EarliestMessage,
+    })
+
+    if err != nil { log.Fatalf("Could not create reader: %v", err) }
+
+    defer reader.Close()
+
+    ctx := context.Background()
+
+    // Listen on the topic for incoming messages
+    for {
+        msg, err := reader.Next(ctx)
+        if err != nil { log.Fatalf("Error reading from topic: %v", err) }
+
+        // Process the message
+    }
+}
+```
+
+In the example above, the reader begins reading from the earliest available message (specified by `pulsar.EarliestMessage`). The reader can also begin reading from the latest message (`pulsar.LatestMessage`) or some other message ID specified by bytes using the `DeserializeMessageID` function, which takes a byte array and returns a `MessageID` object. Here's an example:
+
+```go
+lastSavedId := // Read last saved message id from external store as byte[]
+
+reader, err := client.CreateReader(pulsar.ReaderOptions{
+    Topic:          "my-golang-topic",
+    StartMessageID: DeserializeMessageID(lastSavedId),
+})
+```
+
+### Reader configuration
+
+Parameter | Description | Default
+:---------|:------------|:-------
+`Topic` | The Pulsar [topic](reference-terminology.md#topic) on which the reader will establish a subscription and listen for messages |
+`Name` | The name of the reader |
+`StartMessageID` | The initial reader position, i.e. the message at which the reader begins processing messages. The options are `pulsar.EarliestMessage` (the earliest available message on the topic), `pulsar.LatestMessage` (the latest available message on the topic), or a `MessageID` object for a position that isn't earliest or latest. |
+`MessageChannel` | The Go channel used by the reader. Messages that arrive from the Pulsar topic(s) will be passed to this channel. |
+`ReceiverQueueSize` | Sets the size of the reader's receiver queue, i.e. the number of messages that can be accumulated by the reader before the application calls `Next`. A value higher than the default of 1000 could increase reader throughput, though at the expense of more memory utilization. | 1000
+`SubscriptionRolePrefix` | The subscription role prefix. | `reader`
+
+## Messages
+
+The Pulsar Go client provides a `ProducerMessage` type that you can use to construct messages to produce on Pulsar topics. Here's an example message:
+
+```go
+msg := pulsar.ProducerMessage{
+    Payload: []byte("Here is some message data"),
+    Key: "message-key",
+    Properties: map[string]string{
+        "foo": "bar",
+    },
+    EventTime: time.Now(),
+    ReplicationClusters: []string{"cluster1", "cluster3"},
+}
+
+if err := producer.Send(context.Background(), msg); err != nil {
+    log.Fatalf("Could not publish message due to: %v", err)
+}
+```
+
+The following parameters are available for `ProducerMessage` objects:
+
+Parameter | Description
+:---------|:-----------
+`Payload` | The actual data payload of the message
+`Key` | The optional key associated with the message (particularly useful for things like topic compaction)
+`Properties` | A key-value map (both keys and values must be strings) for any application-specific metadata attached to the message
+`EventTime` | The timestamp associated with the message
+`ReplicationClusters` | The clusters to which this message will be replicated. Pulsar brokers handle message replication automatically; you should only change this setting if you want to override the broker default.
+
+## TLS encryption and authentication
+
+In order to use [TLS encryption](security-tls-transport.md), you'll need to configure your client to do so:
+
+ * Use `pulsar+ssl` URL type
+ * Set `TLSTrustCertsFilePath` to the path to the TLS certs used by your client and the Pulsar broker
+ * Configure `Authentication` option
+
+Here's an example:
+
+```go
+opts := pulsar.ClientOptions{
+    URL: "pulsar+ssl://my-cluster.com:6651",
+    TLSTrustCertsFilePath: "/path/to/certs/my-cert.csr",
+    Authentication: NewAuthenticationTLS("my-cert.pem", "my-key.pem"),
+}
+```
diff --git a/site2/website/versioned_docs/version-2.2.0/client-libraries-python.md b/site2/website/versioned_docs/version-2.2.0/client-libraries-python.md
new file mode 100644
index 0000000000..3c847e951c
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/client-libraries-python.md
@@ -0,0 +1,95 @@
+---
+id: version-2.2.0-client-libraries-python
+title: The Pulsar Python client
+sidebar_label: Python
+original_id: client-libraries-python
+---
+
+The Pulsar Python client library is a wrapper over the existing [C++ client library](client-libraries-cpp.md) and exposes all of the [same features](/api/cpp). You can find the code in the [`python` subdirectory](https://github.com/apache/pulsar/tree/master/pulsar-client-cpp/python) of the C++ client code.
+
+## Installation
+
+You can install the [`pulsar-client`](https://pypi.python.org/pypi/pulsar-client) library either via [PyPI](https://pypi.python.org/pypi), using [pip](#installation-using-pip), or by building the library from source.
+
+### Installation using pip
+
+To install the `pulsar-client` library as a pre-built package using the [pip](https://pip.pypa.io/en/stable/) package manager:
+
+```shell
+$ pip install pulsar-client=={{pulsar:version_number}}
+```
+
+Installation via PyPI is available for the following Python versions:
+
+Platform | Supported Python versions
+:--------|:-------------------------
+MacOS <br /> 10.11 (El Capitan) &mdash; 10.12 (Sierra) &mdash; 10.13 (High Sierra) | 2.7, 3.7
+Linux | 2.7, 3.3, 3.4, 3.5, 3.6, 3.7
+
+### Installing from source
+
+To install the `pulsar-client` library by building from source, follow [these instructions](client-libraries-cpp.md#compilation) and compile the Pulsar C++ client library. That will also build the Python binding for the library.
+
+To install the built Python bindings:
+
+```shell
+$ git clone https://github.com/apache/pulsar
+$ cd pulsar/pulsar-client-cpp/python
+$ sudo python setup.py install
+```
+
+## API Reference
+
+The complete Python API reference is available at [api/python](/api/python).
+
+## Examples
+
+Below you'll find a variety of Python code examples for the `pulsar-client` library.
+
+### Producer example
+
+This creates a Python producer for the `my-topic` topic and sends 10 messages on that topic:
+
+```python
+import pulsar
+
+client = pulsar.Client('pulsar://localhost:6650')
+
+producer = client.create_producer('my-topic')
+
+for i in range(10):
+    producer.send(('Hello-%d' % i).encode('utf-8'))
+
+client.close()
+```
+
+### Consumer example
+
+This creates a consumer with the `my-subscription` subscription on the `my-topic` topic, listens for incoming messages, prints the content and ID of each message that arrives, and acknowledges it to the Pulsar broker:
+
+```python
+consumer = client.subscribe('my-topic', 'my-subscription')
+
+while True:
+    msg = consumer.receive()
+    print("Received message '{}' id='{}'".format(msg.data(), msg.message_id()))
+    consumer.acknowledge(msg)
+
+client.close()
+```
+
+### Reader interface example
+
+You can use the Pulsar Python API to use the Pulsar [reader interface](concepts-clients.md#reader-interface). Here's an example:
+
+```python
+# MessageId taken from a previously fetched message
+msg_id = msg.message_id()
+
+reader = client.create_reader('my-topic', msg_id)
+
+while True:
+    msg = reader.receive()
+    print("Received message '{}' id='{}'".format(msg.data(), msg.message_id()))
+    # No acknowledgment
+```
diff --git a/site2/website/versioned_docs/version-2.2.0/client-libraries-websocket.md b/site2/website/versioned_docs/version-2.2.0/client-libraries-websocket.md
new file mode 100644
index 0000000000..b78752046d
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/client-libraries-websocket.md
@@ -0,0 +1,410 @@
+---
+id: version-2.2.0-client-libraries-websocket
+title: Pulsar's WebSocket API
+sidebar_label: WebSocket
+original_id: client-libraries-websocket
+---
+
+Pulsar's [WebSocket](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) API is meant to provide a simple way to interact with Pulsar using languages that do not have an official [client library](getting-started-clients.md). Through WebSockets you can publish and consume messages and use all the features available in the [Java](client-libraries-java.md), [Python](client-libraries-python.md), and [C++](client-libraries-cpp.md) client libraries.
+
+
+> You can use Pulsar's WebSocket API with any WebSocket client library. See examples for Python and Node.js [below](#client-examples).
+
+## Running the WebSocket service
+
+The standalone variant of Pulsar that we recommend using for [local development](getting-started-standalone.md) already has the WebSocket service enabled.
+
+In non-standalone mode, there are two ways to deploy the WebSocket service:
+
+* [embedded](#embedded-with-a-pulsar-broker) with a Pulsar broker
+* as a [separate component](#as-a-separate-component)
+
+### Embedded with a Pulsar broker
+
+In this mode, the WebSocket service will run within the same HTTP service that's already running in the broker. To enable this mode, set the [`webSocketServiceEnabled`](reference-configuration.md#broker-webSocketServiceEnabled) parameter in the [`conf/broker.conf`](reference-configuration.md#broker) configuration file in your installation.
+
+```properties
+webSocketServiceEnabled=true
+```
+
+### As a separate component
+
+In this mode, the WebSocket service will be run from a Pulsar [broker](reference-terminology.md#broker) as a separate service. Configuration for this mode is handled in the [`conf/websocket.conf`](reference-configuration.md#websocket) configuration file. You'll need to set *at least* the following parameters:
+
+* [`globalZookeeperServers`](reference-configuration.md#websocket-globalZookeeperServers)
+* [`webServicePort`](reference-configuration.md#websocket-webServicePort)
+* [`clusterName`](reference-configuration.md#websocket-clusterName)
+
+Here's an example:
+
+```properties
+globalZookeeperServers=zk1:2181,zk2:2181,zk3:2181
+webServicePort=8080
+clusterName=my-cluster
+```
+
+### Starting the WebSocket service
+
+When the configuration is set, you can start the service using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) tool:
+
+```shell
+$ bin/pulsar-daemon start websocket
+```
+
+## API Reference
+
+Pulsar's WebSocket API offers three endpoints: one for [producing](#producer-endpoint) messages, one for [consuming](#consumer-endpoint) messages, and one for [reading](#reader-endpoint) messages.
+
+All exchanges via the WebSocket API use JSON.
+
+### Producer endpoint
+
+The producer endpoint requires you to specify a tenant, namespace, and topic in the URL:
+
+```http
+ws://broker-service-url:8080/ws/v2/producer/persistent/:tenant/:namespace/:topic 
+```
+
+##### Query param
+
+Key | Type | Required? | Explanation
+:---|:-----|:----------|:-----------
+`sendTimeoutMillis` | long | no | Send timeout (default: 30 secs)
+`batchingEnabled` | boolean | no | Enable batching of messages (default: false)
+`batchingMaxMessages` | int | no | Maximum number of messages permitted in a batch (default: 1000)
+`maxPendingMessages` | int | no | Maximum size of the internal queue holding pending messages (default: 1000)
+`batchingMaxPublishDelay` | long | no | Time period within which the messages will be batched (default: 10ms)
+`messageRoutingMode` | string | no | Message [routing mode](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/ProducerConfiguration.MessageRoutingMode.html) for the partitioned producer: `SinglePartition`, `RoundRobinPartition`
+`compressionType` | string | no | Compression [type](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/CompressionType.html): `LZ4`, `ZLIB`
+`producerName` | string | no | Specify the name for the producer. Pulsar enforces that only one producer with a given name can be publishing on a topic at a time
+`initialSequenceId` | long | no | Set the baseline for the sequence ids for messages published by the producer.
+`hashingScheme` | string | no | [Hashing function](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ProducerConfiguration.HashingScheme.html) to use when publishing on a partitioned topic: `JavaStringHash`, `Murmur3_32Hash`
+
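These parameters are passed as an ordinary query string appended to the endpoint URL. A minimal sketch in Python (the tenant, namespace, topic, and parameter values here are placeholders for illustration):

```python
from urllib.parse import urlencode

# Hypothetical producer endpoint; tenant/namespace/topic are placeholders
base = "ws://localhost:8080/ws/v2/producer/persistent/public/default/my-topic"

params = {
    "sendTimeoutMillis": 30000,   # give up on sends after 30 seconds
    "batchingEnabled": "true",    # turn on client-side batching
    "batchingMaxMessages": 500,   # cap each batch at 500 messages
}

producer_url = base + "?" + urlencode(params)
print(producer_url)
```

The resulting URL is what you pass to your WebSocket client when opening the producer session.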
+
+#### Publishing a message
+
+```json
+{
+  "payload": "SGVsbG8gV29ybGQ=",
+  "properties": {"key1": "value1", "key2": "value2"},
+  "context": "1"
+}
+```
+
+Key | Type | Required? | Explanation
+:---|:-----|:----------|:-----------
+`payload` | string | yes | Base-64 encoded payload
+`properties` | key-value pairs | no | Application-defined properties
+`context` | string | no | Application-defined request identifier
+`key` | string | no | For partitioned topics, decides which partition to use
+`replicationClusters` | array | no | Restrict replication to this list of [clusters](reference-terminology.md#cluster), specified by name
+
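The `payload` field is the Base64 encoding of the raw message bytes, so the client must encode before publishing and decode after receiving. A quick Python sketch of the round trip:

```python
import base64
import json

raw = "Hello World"
# Encode the UTF-8 bytes of the message as Base64 text
payload = base64.b64encode(raw.encode("utf-8")).decode("ascii")

message = json.dumps({
    "payload": payload,          # "SGVsbG8gV29ybGQ=", as in the example above
    "properties": {"key1": "value1"},
    "context": "1",
})

# Decoding recovers the original string
assert base64.b64decode(payload).decode("utf-8") == raw
```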
+
+##### Example success response
+
+```json
+{
+   "result": "ok",
+   "messageId": "CAAQAw==",
+   "context": "1"
+ }
+```
+##### Example failure response
+
+```json
+ {
+   "result": "send-error:3",
+   "errorMsg": "Failed to de-serialize from JSON",
+   "context": "1"
+ }
+```
+
+Key | Type | Required? | Explanation
+:---|:-----|:----------|:-----------
+`result` | string | yes | `ok` if successful or an error message if unsuccessful
+`messageId` | string | yes | Message ID assigned to the published message
+`context` | string | no | Application-defined request identifier
+
+
+### Consumer endpoint
+
+The consumer endpoint requires you to specify a tenant, namespace, and topic, as well as a subscription, in the URL:
+
+```http
+ws://broker-service-url:8080/ws/v2/consumer/persistent/:tenant/:namespace/:topic/:subscription
+```
+
+##### Query param
+
+Key | Type | Required? | Explanation
+:---|:-----|:----------|:-----------
+`ackTimeoutMillis` | long | no | Set the timeout for unacked messages (default: 0)
+`subscriptionType` | string | no | [Subscription type](https://pulsar.apache.org/api/client/index.html?org/apache/pulsar/client/api/SubscriptionType.html): `Exclusive`, `Failover`, `Shared`
+`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000)
+`consumerName` | string | no | Consumer name
+`priorityLevel` | int | no | Define a [priority](http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/ConsumerConfiguration.html#setPriorityLevel-int-) for the consumer
+
+##### Receiving messages
+
+The server pushes messages to the client over the WebSocket session:
+
+```json
+{
+  "messageId": "CAAQAw==",
+  "payload": "SGVsbG8gV29ybGQ=",
+  "properties": {"key1": "value1", "key2": "value2"},
+  "publishTime": "2016-08-30 16:45:57.785"
+}
+```
+
+Key | Type | Required? | Explanation
+:---|:-----|:----------|:-----------
+`messageId` | string | yes | Message ID
+`payload` | string | yes | Base-64 encoded payload
+`publishTime` | string | yes | Publish timestamp
+`properties` | key-value pairs | no | Application-defined properties
+`key` | string | no |  Original routing key set by producer
+
+#### Acknowledging the message
+
+The consumer needs to acknowledge successful processing of a message so that
+the Pulsar broker can delete it.
+
+```json
+{
+  "messageId": "CAAQAw=="
+}
+```
+
+Key | Type | Required? | Explanation
+:---|:-----|:----------|:-----------
+`messageId`| string | yes | Message ID of the processed message
+
+
+### Reader endpoint
+
+The reader endpoint requires you to specify a tenant, namespace, and topic in the URL:
+
+```http
+ws://broker-service-url:8080/ws/v2/reader/persistent/:tenant/:namespace/:topic
+```
+
+##### Query param
+
+Key | Type | Required? | Explanation
+:---|:-----|:----------|:-----------
+`readerName` | string | no | Reader name
+`receiverQueueSize` | int | no | Size of the consumer receive queue (default: 1000)
+`messageId` | int or enum | no | Message ID to start from, `earliest` or `latest` (default: `latest`)
+
+##### Receiving messages
+
+The server pushes messages to the client over the WebSocket session:
+
+```json
+{
+  "messageId": "CAAQAw==",
+  "payload": "SGVsbG8gV29ybGQ=",
+  "properties": {"key1": "value1", "key2": "value2"},
+  "publishTime": "2016-08-30 16:45:57.785"
+}
+```
+
+Key | Type | Required? | Explanation
+:---|:-----|:----------|:-----------
+`messageId` | string | yes | Message ID
+`payload` | string | yes | Base-64 encoded payload
+`publishTime` | string | yes | Publish timestamp
+`properties` | key-value pairs | no | Application-defined properties
+`key` | string | no |  Original routing key set by producer
+
+#### Acknowledging the message
+
+The reader needs to acknowledge successful processing of a message so that the
+Pulsar WebSocket service can update the count of pending messages.
+If you don't send acknowledgements, the Pulsar WebSocket service stops sending messages after the pending-message limit is reached.
+
+```json
+{
+  "messageId": "CAAQAw=="
+}
+```
+
+Key | Type | Required? | Explanation
+:---|:-----|:----------|:-----------
+`messageId`| string | yes | Message ID of the processed message
+
+
+### Error codes
+
+In case of error, the server closes the WebSocket session using one of the
+following error codes:
+
+Error Code | Error Message
+:----------|:-------------
+1 | Failed to create producer
+2 | Failed to subscribe
+3 | Failed to deserialize from JSON
+4 | Failed to serialize to JSON
+5 | Failed to authenticate client
+6 | Client is not authorized
+7 | Invalid payload encoding
+8 | Unknown error
+
+> The application is responsible for re-establishing a new WebSocket session after a backoff period.
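There is no built-in reconnection; a common pattern (shown here as a sketch, not part of the Pulsar API) is exponential backoff with a cap, where `connect` stands in for whatever call your WebSocket client library uses to open the session:

```python
import time

def backoff_delays(base=1.0, cap=30.0):
    """Yield exponentially growing reconnect delays, capped at `cap` seconds."""
    delay = base
    while True:
        yield min(delay, cap)
        delay *= 2

def run_with_reconnect(connect, max_attempts=5):
    """Retry `connect()` with backoff; return its result on the first success."""
    delays = backoff_delays()
    for _ in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            time.sleep(next(delays))
    raise RuntimeError("could not re-establish the WebSocket session")
```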
+
+## Client examples
+
+Below you'll find code examples for the Pulsar WebSocket API in [Python](#python) and [Node.js](#nodejs).
+
+### Python
+
+This example uses the [`websocket-client`](https://pypi.python.org/pypi/websocket-client) package. You can install it using [pip](https://pypi.python.org/pypi/pip):
+
+```shell
+$ pip install websocket-client
+```
+
+You can also download it from [PyPI](https://pypi.python.org/pypi/websocket-client).
+
+#### Python producer
+
+Here's an example Python producer that sends a simple message to a Pulsar [topic](reference-terminology.md#topic):
+
+```python
+import websocket, base64, json
+
+TOPIC = 'ws://localhost:8080/ws/v2/producer/persistent/public/default/my-topic'
+
+ws = websocket.create_connection(TOPIC)
+
+# Send one message as JSON
+ws.send(json.dumps({
+    'payload' : base64.b64encode(b'Hello World').decode('ascii'),
+    'properties': {
+        'key1' : 'value1',
+        'key2' : 'value2'
+    },
+    'context' : '5'
+}))
+
+response = json.loads(ws.recv())
+if response['result'] == 'ok':
+    print('Message published successfully')
+else:
+    print('Failed to publish message:', response)
+ws.close()
+```
+
+#### Python consumer
+
+Here's an example Python consumer that listens on a Pulsar topic and prints the message ID whenever a message arrives:
+
+```python
+import websocket, base64, json
+
+TOPIC = 'ws://localhost:8080/ws/v2/consumer/persistent/public/default/my-topic/my-sub'
+
+ws = websocket.create_connection(TOPIC)
+
+while True:
+    msg = json.loads(ws.recv())
+    if not msg: break
+
+    print("Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload']).decode('utf-8')))
+
+    # Acknowledge successful processing
+    ws.send(json.dumps({'messageId' : msg['messageId']}))
+
+ws.close()
+```
+
+#### Python reader
+
+Here's an example Python reader that listens on a Pulsar topic and prints the message ID whenever a message arrives:
+
+```python
+import websocket, base64, json
+
+TOPIC = 'ws://localhost:8080/ws/v2/reader/persistent/public/default/my-topic'
+
+ws = websocket.create_connection(TOPIC)
+
+while True:
+    msg = json.loads(ws.recv())
+    if not msg: break
+
+    print("Received: {} - payload: {}".format(msg, base64.b64decode(msg['payload']).decode('utf-8')))
+
+    # Acknowledge successful processing
+    ws.send(json.dumps({'messageId' : msg['messageId']}))
+
+ws.close()
+```
+
+### Node.js
+
+This example uses the [`ws`](https://websockets.github.io/ws/) package. You can install it using [npm](https://www.npmjs.com/):
+
+```shell
+$ npm install ws
+```
+
+#### Node.js producer
+
+Here's an example Node.js producer that sends a simple message to a Pulsar topic:
+
+```javascript
+var WebSocket = require('ws'),
+    topic = "ws://localhost:8080/ws/v2/producer/persistent/my-tenant/my-ns/my-topic1",
+    ws = new WebSocket(topic);
+
+var message = {
+  "payload" : Buffer.from("Hello World").toString('base64'),
+  "properties": {
+    "key1" : "value1",
+    "key2" : "value2"
+  },
+  "context" : "1"
+};
+
+ws.on('open', function() {
+  // Send one message
+  ws.send(JSON.stringify(message));
+});
+
+ws.on('message', function(message) {
+  console.log('received ack: %s', message);
+});
+```
+
+#### Node.js consumer
+
+Here's an example Node.js consumer that listens on the same topic used by the producer above:
+
+```javascript
+var WebSocket = require('ws'),
+    topic = "ws://localhost:8080/ws/v2/consumer/persistent/my-tenant/my-ns/my-topic1/my-sub",
+    ws = new WebSocket(topic);
+
+ws.on('message', function(message) {
+    var receiveMsg = JSON.parse(message);
+    console.log('Received: %s - payload: %s', message, Buffer.from(receiveMsg.payload, 'base64').toString());
+    var ackMsg = {"messageId" : receiveMsg.messageId};
+    ws.send(JSON.stringify(ackMsg));
+});
+```
+
+#### Node.js reader
+
+Here's an example Node.js reader that reads from the same topic and acknowledges each message back to the WebSocket service:
+```javascript
+var WebSocket = require('ws'),
+    topic = "ws://localhost:8080/ws/v2/reader/persistent/my-tenant/my-ns/my-topic1",
+    ws = new WebSocket(topic);
+
+ws.on('message', function(message) {
+    var receiveMsg = JSON.parse(message);
+    console.log('Received: %s - payload: %s', message, Buffer.from(receiveMsg.payload, 'base64').toString());
+    var ackMsg = {"messageId" : receiveMsg.messageId};
+    ws.send(JSON.stringify(ackMsg));
+});
+```
diff --git a/site2/website/versioned_docs/version-2.2.0/cookbooks-tiered-storage.md b/site2/website/versioned_docs/version-2.2.0/cookbooks-tiered-storage.md
new file mode 100644
index 0000000000..406d56452e
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/cookbooks-tiered-storage.md
@@ -0,0 +1,221 @@
+---
+id: version-2.2.0-cookbooks-tiered-storage
+title: Tiered Storage
+sidebar_label: Tiered Storage
+original_id: cookbooks-tiered-storage
+---
+
+Pulsar's **Tiered Storage** feature allows older backlog data to be offloaded to long term storage, thereby freeing up space in BookKeeper and reducing storage costs. This cookbook walks you through using tiered storage in your Pulsar cluster.
+
+Tiered storage currently uses [Apache jclouds](https://jclouds.apache.org) to support
+[Amazon S3](https://aws.amazon.com/s3/) and [Google Cloud Storage](https://cloud.google.com/storage/) (GCS)
+for long-term storage. With jclouds, it is easy to add support for more
+[cloud storage providers](https://jclouds.apache.org/reference/providers/#blobstore-providers) in the future.
+
+## When should I use Tiered Storage?
+
+Tiered storage should be used when you have a topic for which you want to keep a very long backlog for a long time. For example, if you have a topic containing user actions which you use to train your recommendation systems, you may want to keep that data for a long time, so that if you change your recommendation algorithm you can rerun it against your full user history.
+
+## The offloading mechanism
+
+A topic in Pulsar is backed by a log, known as a managed ledger, composed of an ordered list of segments. Pulsar only ever writes to the final segment of the log; all previous segments are sealed and the data within them is immutable. This is known as a segment-oriented architecture.
+
+![Tiered storage](assets/pulsar-tiered-storage.png "Tiered Storage")
+
+The Tiered Storage offloading mechanism takes advantage of this segment-oriented architecture. When offloading is requested, the segments of the log are copied, one by one, to tiered storage. All segments of the log, apart from the segment currently being written to, can be offloaded.
+
+On the broker, the administrator must configure the bucket and credentials for the cloud storage service.
+The configured bucket must exist before attempting to offload; if it does not, the offload operation fails.
+
+Pulsar uses multipart objects to upload the segment data, and it is possible for a broker to crash while uploading.
+We recommend adding a lifecycle rule to your bucket that expires incomplete multipart uploads after a day or two, to avoid
+being charged for abandoned uploads.
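As a sketch of such a rule (the bucket name and expiry window are placeholders; check the current S3 lifecycle documentation before relying on the exact shape), the configuration can be written as JSON and applied with the AWS CLI:

```python
import json

# Lifecycle rule that aborts incomplete multipart uploads after two days,
# so abandoned offload attempts do not accrue storage charges.
lifecycle = {
    "Rules": [
        {
            "ID": "abort-incomplete-offload-uploads",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to the whole bucket
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 2},
        }
    ]
}

with open("lifecycle.json", "w") as f:
    json.dump(lifecycle, f, indent=2)

# Then, for example:
#   aws s3api put-bucket-lifecycle-configuration \
#       --bucket pulsar-topic-offload --lifecycle-configuration file://lifecycle.json
```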
+
+## Configuring the offload driver
+
+Offloading is configured in ```broker.conf```. 
+
+At a minimum, the administrator must configure the driver, the bucket, and the authentication credentials.
+There are also other knobs to configure, such as the bucket region and the maximum block size in backing storage.
+
+The following driver types are currently supported:
+
+- `aws-s3`: [Simple Cloud Storage Service](https://aws.amazon.com/s3/)
+- `google-cloud-storage`: [Google Cloud Storage](https://cloud.google.com/storage/)
+
+> Driver names are case-insensitive. There is a third driver type, `s3`, which is identical to `aws-s3`,
+> except that it requires you to specify an endpoint URL using `s3ManagedLedgerOffloadServiceEndpoint`. This is useful when
+> using an S3-compatible data store other than AWS.
+
+```conf
+managedLedgerOffloadDriver=aws-s3
+```
+
+### "aws-s3" Driver configuration
+
+#### Bucket and Region
+
+Buckets are the basic containers that hold your data.
+Everything that you store in Cloud Storage must be contained in a bucket.
+You can use buckets to organize your data and control access to your data,
+but unlike directories and folders, you cannot nest buckets.
+
+```conf
+s3ManagedLedgerOffloadBucket=pulsar-topic-offload
+```
+
+The bucket region is the region where the bucket is located. Configuring it is
+recommended but not required; if it is not set, the default region is used.
+
+With AWS S3, the default region is `US East (N. Virginia)`. Page
+[AWS Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html) contains more information.
+
+```conf
+s3ManagedLedgerOffloadRegion=eu-west-3
+```
+
+#### Authentication with AWS
+
+To access AWS S3, Pulsar needs to authenticate with it.
+Pulsar does not provide any direct means of configuring AWS authentication,
+but relies on the mechanisms supported by the
+[DefaultAWSCredentialsProviderChain](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html).
+
+Once you have created a set of credentials in the AWS IAM console, they can be configured in a number of ways.
+
+1. Set the environment variables **AWS_ACCESS_KEY_ID** and **AWS_SECRET_ACCESS_KEY** in ```conf/pulsar_env.sh```.
+
+```bash
+export AWS_ACCESS_KEY_ID=ABC123456789
+export AWS_SECRET_ACCESS_KEY=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c
+```
+
+> The `export` keyword is important so that the variables are made available in the environment of spawned processes.
+
+
+2. Add the Java system properties *aws.accessKeyId* and *aws.secretKey* to **PULSAR_EXTRA_OPTS** in `conf/pulsar_env.sh`.
+
+```bash
+PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} ${PULSAR_MEM} ${PULSAR_GC} -Daws.accessKeyId=ABC123456789 -Daws.secretKey=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c -Dio.netty.leakDetectionLevel=disabled -Dio.netty.recycler.maxCapacity.default=1000 -Dio.netty.recycler.linkCapacity=1024"
+```
+
+3. Set the access credentials in ```~/.aws/credentials```.
+
+```conf
+[default]
+aws_access_key_id=ABC123456789
+aws_secret_access_key=ded7db27a4558e2ea8bbf0bf37ae0e8521618f366c
+```
+
+If you are running in EC2 you can also use instance profile credentials, provided through the EC2 metadata service, but that is out of scope for this cookbook.
+
+> The broker must be restarted for credentials specified in `conf/pulsar_env.sh` to take effect.
+
+#### Configuring the size of block read/write
+
+Pulsar also provides some knobs to configure the size of requests sent to AWS S3.
+
+- ```s3ManagedLedgerOffloadMaxBlockSizeInBytes```  configures the maximum size of
+  a "part" sent during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
+- ```s3ManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for
+  each individual read when reading back data from AWS S3. Default is 1MB.
+
+In both cases, these should not be touched unless you know what you are doing.
+
+### "google-cloud-storage" Driver configuration
+
+Buckets are the basic containers that hold your data. Everything that you store in
+Cloud Storage must be contained in a bucket. You can use buckets to organize your data and
+control access to your data, but unlike directories and folders, you cannot nest buckets.
+
+```conf
+gcsManagedLedgerOffloadBucket=pulsar-topic-offload
+```
+
+The bucket region is the region where the bucket is located. Configuring it is
+recommended but not required; if it is not set, the default region is used.
+
+In GCS, buckets are created by default in the `us` multi-regional location;
+the [Bucket Locations](https://cloud.google.com/storage/docs/bucket-locations) page contains more information.
+
+```conf
+gcsManagedLedgerOffloadRegion=europe-west3
+```
+
+#### Authentication with GCS
+
+The administrator needs to configure `gcsManagedLedgerOffloadServiceAccountKeyFile` in `broker.conf`
+for the broker to be able to access the GCS service. `gcsManagedLedgerOffloadServiceAccountKeyFile` is
+a JSON file containing the GCS credentials of a service account.
+The [Service Accounts section of this page](https://support.google.com/googleapi/answer/6158849) explains
+how to create this key file for authentication. More information about Google Cloud IAM
+is available [here](https://cloud.google.com/storage/docs/access-control/iam).
+
+These are the usual steps to create the authentication file:
+1. Open the API Console Credentials page.
+2. If it's not already selected, select the project that you're creating credentials for.
+3. To set up a new service account, click New credentials and then select Service account key.
+4. Choose the service account to use for the key.
+5. Download the service account's public/private key as a JSON file that can be loaded by a Google API client library.
+
+```conf
+gcsManagedLedgerOffloadServiceAccountKeyFile="/Users/hello/Downloads/project-804d5e6a6f33.json"
+```
+
+#### Configuring the size of block read/write
+
+Pulsar also provides some knobs to configure the size of requests sent to GCS.
+
+- ```gcsManagedLedgerOffloadMaxBlockSizeInBytes``` configures the maximum size of a "part" sent
+  during a multipart upload. This cannot be smaller than 5MB. Default is 64MB.
+- ```gcsManagedLedgerOffloadReadBufferSizeInBytes``` configures the block size for each individual
+  read when reading back data from GCS. Default is 1MB.
+
+In both cases, these should not be touched unless you know what you are doing.
+
+## Configuring offload to run automatically
+
+Namespace policies can be configured to offload data automatically once a threshold is reached. The threshold is based on the size of data that the topic has stored on the Pulsar cluster. Once the topic reaches the threshold, an offload operation is triggered. Setting a negative threshold disables automatic offloading, while setting it to 0 causes the broker to offload data as soon as it possibly can.
+
+```bash
+$ bin/pulsar-admin namespaces set-offload-threshold --size 10M my-tenant/my-namespace
+```
+
+> Automatic offload runs when a new segment is added to the topic's log. If you set a threshold on a namespace but few messages are being produced to the topic, offload will not be triggered until the current segment is full.
+
+
+## Triggering offload manually
+
+Offloading can be triggered manually through a REST endpoint on the Pulsar broker. A CLI command is provided that calls this REST endpoint for you.
+
+When triggering offload, you must specify the maximum size, in bytes, of the backlog that will be retained locally in BookKeeper. The offload mechanism offloads segments from the start of the topic backlog until this condition is met.
+
+```bash
+$ bin/pulsar-admin topics offload --size-threshold 10M my-tenant/my-namespace/topic1
+Offload triggered for persistent://my-tenant/my-namespace/topic1 for messages before 2:0:-1
+```
+
+The command that triggers an offload does not wait for the offload operation to complete. To check the status of the offload, use `offload-status`:
+
+```bash
+$ bin/pulsar-admin topics offload-status my-tenant/my-namespace/topic1
+Offload is currently running
+```
+
+To wait for the offload to complete, add the `-w` flag:
+
+```bash
+$ bin/pulsar-admin topics offload-status -w my-tenant/my-namespace/topic1
+Offload was a success
+```
+
+If there is an error offloading, the error is propagated to the `offload-status` command:
+
+```bash
+$ bin/pulsar-admin topics offload-status persistent://public/default/topic1                                                                                                       
+Error in offload
+null
+
+Reason: Error offloading: org.apache.bookkeeper.mledger.ManagedLedgerException: java.util.concurrent.CompletionException: com.amazonaws.services.s3.model.AmazonS3Exception: Anonymous users cannot initiate multipart uploads.  Please authenticate. (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 798758DE3F1776DF; S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=), S3 Extended Request ID: dhBFz/lZm1oiG/oBEepeNlhrtsDlzoOhocuYMpKihQGXe6EG8puRGOkK6UwqzVrMXTWBxxHcS+g=
+```
+
diff --git a/site2/website/versioned_docs/version-2.2.0/deploy-aws.md b/site2/website/versioned_docs/version-2.2.0/deploy-aws.md
new file mode 100644
index 0000000000..eea38571da
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/deploy-aws.md
@@ -0,0 +1,227 @@
+---
+id: version-2.2.0-deploy-aws
+title: Deploying a Pulsar cluster on AWS using Terraform and Ansible
+sidebar_label: Amazon Web Services
+original_id: deploy-aws
+---
+
+> For instructions on deploying a single Pulsar cluster manually rather than using Terraform and Ansible, see [Deploying a Pulsar cluster on bare metal](deploy-bare-metal.md). For instructions on manually deploying a multi-cluster Pulsar instance, see [Deploying a Pulsar instance on bare metal](deploy-bare-metal-multi-cluster.md).
+
+One of the easiest ways to get a Pulsar [cluster](reference-terminology.md#cluster) running on [Amazon Web Services](https://aws.amazon.com/) (AWS) is to use the [Terraform](https://terraform.io) infrastructure provisioning tool and the [Ansible](https://www.ansible.com) server automation tool. Terraform can create the resources necessary to run the Pulsar cluster---[EC2](https://aws.amazon.com/ec2/) instances, networking and security infrastructure, etc.---while Ansible can install and run Pulsar on the provisioned resources.
+
+## Requirements and setup
+
+In order to install a Pulsar cluster on AWS using Terraform and Ansible, you'll need:
+
+* An [AWS account](https://aws.amazon.com/account/) and the [`aws`](https://aws.amazon.com/cli/) command-line tool
+* Python and [pip](https://pip.pypa.io/en/stable/)
+* The [`terraform-inventory`](https://github.com/adammck/terraform-inventory) tool, which enables Ansible to use Terraform artifacts
+
+You'll also need to make sure that you're currently logged into your AWS account via the `aws` tool:
+
+```bash
+$ aws configure
+```
+
+## Installation
+
+You can install Ansible on Linux or macOS using pip.
+
+```bash
+$ pip install ansible
+```
+
+You can install Terraform using the instructions [here](https://www.terraform.io/intro/getting-started/install.html).
+
+You'll also need to have the Terraform and Ansible configurations for Pulsar locally on your machine. They're contained in Pulsar's [GitHub repository](https://github.com/apache/pulsar), which you can fetch using Git:
+
+```bash
+$ git clone https://github.com/apache/pulsar
+$ cd pulsar/deployment/terraform-ansible/aws
+```
+
+## SSH setup
+
+> If you already have an SSH key and would like to use it, you can skip generating a new key and instead update the `private_key_file` setting
+> in the `ansible.cfg` file and the `public_key_path` setting in the `terraform.tfvars` file.
+>
+> For example, if you already have a private SSH key in `~/.ssh/pulsar_aws` and a public key in `~/.ssh/pulsar_aws.pub`,
+> you can do the following:
+>
+> 1. Update `ansible.cfg` with the following value:
+>
+> ```shell
+> private_key_file=~/.ssh/pulsar_aws
+> ```
+>
+> 2. Update `terraform.tfvars` with the following value:
+>
+> ```shell
+> public_key_path=~/.ssh/pulsar_aws.pub
+> ```
+
+In order to create the necessary AWS resources using Terraform, you'll need to create an SSH key. To create a private SSH key in `~/.ssh/id_rsa` and a public key in `~/.ssh/id_rsa.pub`:
+
+```bash
+$ ssh-keygen -t rsa
+```
+
+Do *not* enter a passphrase (hit **Enter** when prompted instead). To verify that a key has been created:
+
+```bash
+$ ls ~/.ssh
+id_rsa               id_rsa.pub
+```
+
+## Creating AWS resources using Terraform
+
+To get started building AWS resources with Terraform, you'll need to install all Terraform dependencies:
+
+```bash
+$ terraform init
+# This will create a .terraform folder
+```
+
+Once you've done that, you can apply the default Terraform configuration:
+
+```bash
+$ terraform apply
+```
+
+You should then see this prompt:
+
+```bash
+Do you want to perform these actions?
+  Terraform will perform the actions described above.
+  Only 'yes' will be accepted to approve.
+
+  Enter a value:
+```
+
+Type `yes` and hit **Enter**. Applying the configuration could take several minutes. When it's finished, you should see `Apply complete!` along with some other information, including the number of resources created.
+
+### Applying a non-default configuration
+
+You can apply a non-default Terraform configuration by changing the values in the `terraform.tfvars` file. The following variables are available:
+
+Variable name | Description | Default
+:-------------|:------------|:-------
+`public_key_path` | The path of the public key that you've generated. | `~/.ssh/id_rsa.pub`
+`region` | The AWS region in which the Pulsar cluster will run | `us-west-2`
+`availability_zone` | The AWS availability zone in which the Pulsar cluster will run | `us-west-2a`
+`aws_ami` | The [Amazon Machine Image](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) (AMI) that will be used by the cluster | `ami-9fa343e7`
+`num_zookeeper_nodes` | The number of [ZooKeeper](https://zookeeper.apache.org) nodes in the ZooKeeper cluster | 3
+`num_bookie_nodes` | The number of bookies that will run in the cluster | 3
+`num_broker_nodes` | The number of Pulsar brokers that will run in the cluster | 2
+`num_proxy_nodes` | The number of Pulsar proxies that will run in the cluster | 1
+`base_cidr_block` | The root [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) that will be used by network assets for the cluster | `10.0.0.0/16`
+`instance_types` | The EC2 instance types to be used. This variable is a map with four keys: `zookeeper` for the ZooKeeper instances, `bookie` for the BookKeeper bookies, and `broker` and `proxy` for the Pulsar brokers and proxies | `t2.small` (ZooKeeper), `i3.xlarge` (BookKeeper), and `c5.2xlarge` (brokers/proxies)
+
+### What is installed
+
+When you run the Ansible playbook, the following AWS resources will be used:
+
+* 9 total [Elastic Compute Cloud](https://aws.amazon.com/ec2) (EC2) instances running the [ami-9fa343e7](https://access.redhat.com/articles/3135091) Amazon Machine Image (AMI), which runs [Red Hat Enterprise Linux (RHEL) 7.4](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/7.4_release_notes/index). By default, that includes:
+  * 3 small VMs for ZooKeeper ([t2.small](https://www.ec2instances.info/?selected=t2.small) instances)
+  * 3 larger VMs for BookKeeper [bookies](reference-terminology.md#bookie) ([i3.xlarge](https://www.ec2instances.info/?selected=i3.xlarge) instances)
+  * 2 larger VMs for Pulsar [brokers](reference-terminology.md#broker) ([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instances)
+  * 1 larger VM for the Pulsar [proxy](reference-terminology.md#proxy) ([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instance)
+* An EC2 [security group](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html)
+* A [virtual private cloud](https://aws.amazon.com/vpc/) (VPC) for security
+* An [API Gateway](https://aws.amazon.com/api-gateway/) for connections from the outside world
+* A [route table](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html) for the Pulsar cluster's VPC
+* A [subnet](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html) for the VPC
+
+All EC2 instances for the cluster will run in the [us-west-2](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) region.
+
+### Fetching your Pulsar connection URL
+
+When you apply the Terraform configuration by running `terraform apply`, Terraform will output a value for the `pulsar_service_url`. It should look something like this:
+
+```
+pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650
+```
+
+You can fetch that value at any time by running `terraform output pulsar_service_url` or by parsing the `terraform.tfstate` file (which is JSON, even though the filename doesn't reflect that):
+
+```bash
+$ jq .modules[0].outputs.pulsar_service_url.value terraform.tfstate
+```
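For illustration, the same lookup can be done in a few lines of Python. The state excerpt below is a hypothetical sample matching the `jq` path above; real state files contain many more fields, and their exact layout varies with the Terraform version:

```python
import json

# Hypothetical excerpt of a terraform.tfstate file, trimmed down to the
# fields addressed by the jq path shown above.
sample_state_json = """
{
  "modules": [
    {
      "outputs": {
        "pulsar_service_url": {
          "value": "pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650"
        }
      }
    }
  ]
}
"""

def pulsar_service_url(state: dict) -> str:
    # Same lookup as: jq .modules[0].outputs.pulsar_service_url.value
    return state["modules"][0]["outputs"]["pulsar_service_url"]["value"]

state = json.loads(sample_state_json)
print(pulsar_service_url(state))
```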
+
+### Destroying your cluster
+
+At any point, you can destroy all AWS resources associated with your cluster using Terraform's `destroy` command:
+
+```bash
+$ terraform destroy
+```
+
+## Setup Disks
+
+Before you run the Pulsar playbook, you need to mount the disks to the correct directories on the bookie nodes.
+Since different instance types have different disk layouts, if you change the `instance_types` in your Terraform
+config, you also need to update the task defined in the `setup-disk.yaml` file.
+
+To set up disks on the bookie nodes, use this command:
+
+```bash
+$ ansible-playbook \
+  --user='ec2-user' \
+  --inventory=`which terraform-inventory` \
+  setup-disk.yaml
+```
+
+After running this command, the disks will be mounted under `/mnt/journal` (the journal disk) and `/mnt/storage` (the ledger disk).
+It is important to run this command only once! If you run it again after you have run the Pulsar playbook,
+it could erase your disks and cause the bookies to fail to start up.
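The caution above matters because the disk-setup task is destructive when repeated. Purely as an illustration (this is not part of the playbook), one way such a task could guard against re-formatting is to check a mount table first; the `sample_mounts` text below stands in for the contents of `/proc/mounts` on a bookie host:

```python
# Hypothetical guard sketch: skip formatting a device if its target mount
# point already appears in the mount table.
def is_mounted(mounts_text: str, mount_point: str) -> bool:
    """Return True if mount_point appears as a mounted filesystem."""
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[1] == mount_point:
            return True
    return False

# Stand-in for /proc/mounts contents after a successful first run.
sample_mounts = """\
/dev/nvme0n1 /mnt/journal ext4 rw,relatime 0 0
/dev/nvme1n1 /mnt/storage ext4 rw,relatime 0 0
"""

for target in ("/mnt/journal", "/mnt/storage"):
    if is_mounted(sample_mounts, target):
        print(f"{target} already mounted; skipping format")
```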
+
+## Running the Pulsar playbook
+
+Once you've created the necessary AWS resources using Terraform, you can install and run Pulsar on the Terraform-created EC2 instances using Ansible. To do so, use this command:
+
+```bash
+$ ansible-playbook \
+  --user='ec2-user' \
+  --inventory=`which terraform-inventory` \
+  ../deploy-pulsar.yaml
+```
+
+If you've created a private SSH key at a location different from `~/.ssh/id_rsa`, you can specify the different location using the `--private-key` flag:
+
+```bash
+$ ansible-playbook \
+  --user='ec2-user' \
+  --inventory=`which terraform-inventory` \
+  --private-key="~/.ssh/some-non-default-key" \
+  ../deploy-pulsar.yaml
+```
+
+## Accessing the cluster
+
+You can now access your running Pulsar cluster using the unique Pulsar connection URL for your cluster, which you can obtain using the instructions [above](#fetching-your-pulsar-connection-url).
+
+For a quick demonstration of accessing the cluster, we can use the Python client for Pulsar and the Python shell. First, install the Pulsar Python module using pip:
+
+```bash
+$ pip install pulsar-client
+```
+
+Now, open up the Python shell using the `python` command:
+
+```bash
+$ python
+```
+
+Once in the shell, run the following:
+
+```python
+>>> import pulsar
+>>> client = pulsar.Client('pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650')
+# Make sure to use your connection URL
+>>> producer = client.create_producer('persistent://public/default/test-topic')
+>>> producer.send('Hello world')
+>>> client.close()
+```
+
+If all of these commands are successful, your cluster can now be used by Pulsar clients!
+
diff --git a/site2/website/versioned_docs/version-2.2.0/deploy-bare-metal-multi-cluster.md b/site2/website/versioned_docs/version-2.2.0/deploy-bare-metal-multi-cluster.md
new file mode 100644
index 0000000000..bf6f0fb78c
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/deploy-bare-metal-multi-cluster.md
@@ -0,0 +1,409 @@
+---
+id: version-2.2.0-deploy-bare-metal-multi-cluster
+title: Deploying a multi-cluster on bare metal
+sidebar_label: Bare metal multi-cluster
+original_id: deploy-bare-metal-multi-cluster
+---
+
+> ### Tips
+>
+> 1. Single-cluster Pulsar installations should be sufficient for all but the most ambitious use cases. If you're interested in experimenting with
+> Pulsar or using it in a startup or on a single team, we recommend opting for a single cluster. For instructions on deploying a single cluster,
+> see the guide [here](deploy-bare-metal.md).
+>
+> 2. If you want to use all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you need to download the `apache-pulsar-io-connectors`
+> package and make sure it is installed under the `connectors` directory in the Pulsar directory on every broker node, or on every function-worker node if you
+> run a separate cluster of function workers for [Pulsar Functions](functions-overview.md).
+
+A Pulsar *instance* consists of multiple Pulsar clusters working in unison. Clusters can be distributed across data centers or geographical regions and can replicate amongst themselves using [geo-replication](administration-geo.md). Deploying a multi-cluster Pulsar instance involves the following basic steps:
+
+* Deploying two separate [ZooKeeper](#deploying-zookeeper) quorums: a [local](#deploying-local-zookeeper) quorum for each cluster in the instance and a [configuration store](#deploying-the-configuration-store) quorum for instance-wide tasks
+* Initializing [cluster metadata](#cluster-metadata-initialization) for each cluster
+* Deploying a [BookKeeper cluster](#deploying-bookkeeper) of bookies in each Pulsar cluster
+* Deploying [brokers](#deploying-brokers) in each Pulsar cluster
+
+If you're deploying a single Pulsar cluster, see the [Clusters and Brokers](getting-started-standalone.md#starting-the-cluster) guide.
+
+> #### Running Pulsar locally or on Kubernetes?
+> This guide shows you how to deploy Pulsar in production in a non-Kubernetes environment. If you'd like to run a standalone Pulsar cluster on a single machine for development purposes, see the [Setting up a local cluster](getting-started-standalone.md) guide. If you're looking to run Pulsar on [Kubernetes](https://kubernetes.io), see the [Pulsar on Kubernetes](deploy-kubernetes.md) guide, which includes sections on running Pulsar on Kubernetes on [Google Kubernetes Engine](deploy-kubernetes#pulsar-on-google-kubernetes-engine) and on [Amazon Web Services](deploy-kubernetes#pulsar-on-amazon-web-services).
+
+## System requirement
+Pulsar is currently available for **macOS** and **Linux**. In order to use Pulsar, you'll need to install [Java 8](http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html).
+
+## Installing Pulsar
+
+To get started running Pulsar, download a binary tarball release in one of the following ways:
+
+* by clicking the link below and downloading the release from an Apache mirror:
+
+  * <a href="pulsar:binary_release_url" download>Pulsar {{pulsar:version}} binary release</a>
+
+* from the Pulsar [downloads page](pulsar:download_page_url)
+* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+* using [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  $ wget 'https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=pulsar/pulsar-{{pulsar:version}}/apache-pulsar-{{pulsar:version}}-bin.tar.gz' -O apache-pulsar-{{pulsar:version}}-bin.tar.gz
+  ```
+
+Once the tarball is downloaded, untar it and `cd` into the resulting directory:
+
+```bash
+$ tar xvfz apache-pulsar-{{pulsar:version}}-bin.tar.gz
+$ cd apache-pulsar-{{pulsar:version}}
+```
+
+## What your package contains
+
+The Pulsar binary package initially contains the following directories:
+
+Directory | Contains
+:---------|:--------
+`bin` | Pulsar's [command-line tools](reference-cli-tools.md), such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](reference-pulsar-admin.md)
+`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more
+`examples` | A Java JAR file containing example [Pulsar Functions](functions-overview.md)
+`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar
+`licenses` | License files, in `.txt` form, for various components of the Pulsar codebase
+
+These directories will be created once you begin running Pulsar:
+
+Directory | Contains
+:---------|:--------
+`data` | The data storage directory used by ZooKeeper and BookKeeper
+`instances` | Artifacts created for [Pulsar Functions](functions-overview.md)
+`logs` | Logs created by the installation
+
+
+## Deploying ZooKeeper
+
+Each Pulsar instance relies on two separate ZooKeeper quorums.
+
+* [Local ZooKeeper](#deploying-local-zookeeper) operates at the cluster level and provides cluster-specific configuration management and coordination. Each Pulsar cluster needs to have a dedicated ZooKeeper cluster.
+* [Global ZooKeeper](#deploying-the-configuration-store) operates at the instance level and provides configuration management for the entire system (and thus across clusters). The global ZooKeeper quorum can be provided by an independent cluster of machines or by the same machines used by local ZooKeeper.
+
+### Deploying local ZooKeeper
+
+ZooKeeper manages a variety of essential coordination- and configuration-related tasks for Pulsar.
+
+Deploying a Pulsar instance requires you to stand up one local ZooKeeper cluster *per Pulsar cluster*. 
+
+To begin, add all ZooKeeper servers to the quorum configuration specified in the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a `server.N` line for each node in the cluster to the configuration, where `N` is the number of the ZooKeeper node. Here's an example for a three-node cluster:
+
+```properties
+server.1=zk1.us-west.example.com:2888:3888
+server.2=zk2.us-west.example.com:2888:3888
+server.3=zk3.us-west.example.com:2888:3888
+```
+
+On each host, you need to specify the ID of the node in each node's `myid` file, which is in each server's `data/zookeeper` folder by default (this can be changed via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).
+
+> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed info on `myid` and more.
+
+On a ZooKeeper server at `zk1.us-west.example.com`, for example, you could set the `myid` value like this:
+
+```shell
+$ mkdir -p data/zookeeper
+$ echo 1 > data/zookeeper/myid
+```
+
+On `zk2.us-west.example.com` the command would be `echo 2 > data/zookeeper/myid` and so on.
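To illustrate the relationship between the hostnames and the `myid` values (purely a sketch, not a Pulsar or ZooKeeper tool), you could derive each server's ID from hostnames of the form `zkN.<cluster>.example.com`, as used in the `zookeeper.conf` example above:

```python
import re

# Illustrative helper: derive the ZooKeeper myid from a hostname of the
# form zkN.<cluster>.example.com. The value returned is what would be
# written to data/zookeeper/myid on that host.
def myid_for(hostname: str) -> int:
    m = re.match(r"zk(\d+)\.", hostname)
    if m is None:
        raise ValueError(f"unexpected ZooKeeper hostname: {hostname}")
    return int(m.group(1))

for host in ("zk1.us-west.example.com",
             "zk2.us-west.example.com",
             "zk3.us-west.example.com"):
    print(host, "->", myid_for(host))
```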
+
+Once each server has been added to the `zookeeper.conf` configuration and has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```shell
+$ bin/pulsar-daemon start zookeeper
+```
+
+### Deploying the configuration store 
+
+The ZooKeeper cluster configured and started up in the section above is a *local* ZooKeeper cluster used to manage a single Pulsar cluster. In addition to a local cluster, however, a full Pulsar instance also requires a configuration store for handling some instance-level configuration and coordination tasks.
+
+If you're deploying a [single-cluster](#single-cluster-pulsar-instance) instance, then you will not need a separate cluster for the configuration store. If, however, you're deploying a [multi-cluster](#multi-cluster-pulsar-instance) instance, then you should stand up a separate ZooKeeper cluster for configuration tasks.
+
+#### Single-cluster Pulsar instance
+
+If your Pulsar instance will consist of just one cluster, then you can deploy a configuration store on the same machines as the local ZooKeeper quorum but running on different TCP ports.
+
+To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers used by the local quorum to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method as for [local ZooKeeper](#deploying-local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). Here's an example that uses port 2184 for a three-node ZooKeeper cluster:
+
+```properties
+clientPort=2184
+server.1=zk1.us-west.example.com:2185:2186
+server.2=zk2.us-west.example.com:2185:2186
+server.3=zk3.us-west.example.com:2185:2186
+```
+
+As before, create a `myid` file for each server in `data/global-zookeeper/myid`.
+
+#### Multi-cluster Pulsar instance
+
+When deploying a global Pulsar instance, with clusters distributed across different geographical regions, the global ZooKeeper serves as a highly available and strongly consistent metadata store that can tolerate failures and partitions spanning whole regions.
+
+The key here is to make sure the ZooKeeper quorum members are spread across at least 3
+regions, with the servers in all other regions running as observers.
+
+Again, given the very low expected load on the global ZooKeeper servers, we can
+share the same hosts used for the local ZooKeeper quorum.
+
+For example, let's assume a Pulsar instance with the following clusters: `us-west`,
+`us-east`, `us-central`, `eu-central`, and `ap-south`. Also assume that each cluster
+has its own local ZooKeeper servers named like this:
+
+```
+zk[1-3].${CLUSTER}.example.com
+```
+
+In this scenario we want to pick the quorum participants from a few clusters and
+let all the others be ZooKeeper observers. For example, to form a seven-server quorum, we
+can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.
+
+This will guarantee that writes to global ZooKeeper will be possible even if one
+of these regions is unreachable.
+
+The ZooKeeper configuration on all the servers will look like this:
+
+```properties
+clientPort=2184
+server.1=zk1.us-west.example.com:2185:2186
+server.2=zk2.us-west.example.com:2185:2186
+server.3=zk3.us-west.example.com:2185:2186
+server.4=zk1.us-central.example.com:2185:2186
+server.5=zk2.us-central.example.com:2185:2186
+server.6=zk3.us-central.example.com:2185:2186:observer
+server.7=zk1.us-east.example.com:2185:2186
+server.8=zk2.us-east.example.com:2185:2186
+server.9=zk3.us-east.example.com:2185:2186:observer
+server.10=zk1.eu-central.example.com:2185:2186:observer
+server.11=zk2.eu-central.example.com:2185:2186:observer
+server.12=zk3.eu-central.example.com:2185:2186:observer
+server.13=zk1.ap-south.example.com:2185:2186:observer
+server.14=zk2.ap-south.example.com:2185:2186:observer
+server.15=zk3.ap-south.example.com:2185:2186:observer
+```
+
+Additionally, ZK observers will need to have:
+
+```properties
+peerType=observer
+```
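You can sanity-check the failure-tolerance claim above with a few lines of arithmetic: only non-observer servers vote, and writes need a strict majority of the voters. A quick sketch based on the layout above:

```python
# Voting (non-observer) servers per region, taken from the example
# configuration above; servers marked :observer do not count.
participants = {
    "us-west": 3,      # server.1-3
    "us-central": 2,   # server.4-5 (server.6 is an observer)
    "us-east": 2,      # server.7-8 (server.9 is an observer)
}

total = sum(participants.values())   # 7 voting members
majority = total // 2 + 1            # strict majority: 4 votes

# Losing any single region must still leave a majority of voters.
for region, count in participants.items():
    remaining = total - count
    status = "kept" if remaining >= majority else "LOST"
    print(f"lose {region}: {remaining}/{total} voters left -> quorum {status}")
```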
+
+##### Starting the service
+
+Once your global ZooKeeper configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon):
+
+```shell
+$ bin/pulsar-daemon start global-zookeeper
+```
+
+## Cluster metadata initialization
+
+Once you've set up the cluster-specific ZooKeeper and configuration store quorums for your instance, there is some metadata that needs to be written to ZooKeeper for each cluster in your instance. **It only needs to be written once**.
+
+You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. Here's an example:
+
+```shell
+$ bin/pulsar initialize-cluster-metadata \
+  --cluster us-west \
+  --zookeeper zk1.us-west.example.com:2181 \
+  --configuration-store zk1.us-west.example.com:2184 \
+  --web-service-url http://pulsar.us-west.example.com:8080/ \
+  --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
+  --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
+  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/
+```
+
+As you can see from the example above, the following needs to be specified:
+
+* The name of the cluster
+* The local ZooKeeper connection string for the cluster
+* The configuration store connection string for the entire instance
+* The web service URL for the cluster
+* A broker service URL enabling interaction with the [brokers](reference-terminology.md#broker) in the cluster
+
+If you're using [TLS](security-tls-transport.md), you'll also need to specify a TLS web service URL for the cluster as well as a TLS broker service URL for the brokers in the cluster.
+
+Make sure to run `initialize-cluster-metadata` for each cluster in your instance.
+
+## Deploying BookKeeper
+
+BookKeeper provides [persistent message storage](concepts-architecture-overview.md#persistent-storage) for Pulsar.
+
+Each Pulsar cluster needs to have its own cluster of bookies. The BookKeeper cluster shares a local ZooKeeper quorum with the Pulsar cluster.
+
+### Configuring bookies
+
+BookKeeper bookies can be configured using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important aspect of configuring each bookie is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the Pulsar cluster's local ZooKeeper.
+
+### Starting up bookies
+
+You can start up a bookie in two ways: in the foreground, using the [`bookkeeper`](reference-cli-tools.md#bookkeeper) CLI tool, or as a background daemon.
+
+To start up a bookie as a background daemon, use [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon):
+
+```shell
+$ bin/pulsar-daemon start bookie
+```
+
+You can verify that the bookie is working properly using the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):
+
+```shell
+$ bin/bookkeeper shell bookiesanity
+```
+
+This will create a new ledger on the local bookie, write a few entries, read them back, and finally delete the ledger.
+
+### Hardware considerations
+
+Bookie hosts are responsible for storing message data on disk. In order for bookies to provide optimal performance, it's essential that they have a suitable hardware configuration. There are two key dimensions to bookie hardware capacity:
+
+* Disk I/O capacity (read/write)
+* Storage capacity
+
+Message entries written to bookies are always synced to disk before returning an acknowledgement to the Pulsar broker. To ensure low write latency, BookKeeper is
+designed to use multiple devices:
+
+* A **journal** to ensure durability. For sequential writes, it's critical to have fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
+* A **ledger storage device** is where data is stored until all consumers have acknowledged the message. Writes happen in the background, so write I/O is not a big concern. Reads happen sequentially most of the time, and the backlog is read back only when a consumer falls behind and drains it. To store large amounts of data, a typical configuration involves multiple HDDs with a RAID controller.
+
+
+
+## Deploying brokers
+
+Once you've set up ZooKeeper, initialized cluster metadata, and spun up BookKeeper bookies, you can deploy brokers.
+
+### Broker configuration
+
+Brokers can be configured using the [`conf/broker.conf`](reference-configuration.md#broker) configuration file.
+
+The most important element of broker configuration is ensuring that each broker is aware of its local ZooKeeper quorum as well as the global ZooKeeper quorum. Make sure that you set the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) parameter to reflect the local quorum and the [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameter to reflect the configuration store quorum (although you'll need to specify only those ZooKeeper servers located in the same cluster).
+
+You also need to specify the name of the [cluster](reference-terminology.md#cluster) to which the broker belongs using the [`clusterName`](reference-configuration.md#broker-clusterName) parameter.
+
+Here's an example configuration:
+
+```properties
+# Local ZooKeeper servers
+zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+
+# Configuration store quorum connection string.
+configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
+
+clusterName=us-west
+```
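As a quick illustration of how a `key=value` configuration like the one above could be checked programmatically (a sketch only; Pulsar itself parses these files internally), here's a minimal parser that verifies the three parameters discussed in this section are set:

```python
# Minimal parser for Pulsar-style .conf files (key=value lines, with
# '#' comments and blank lines ignored).
def parse_properties(text: str) -> dict:
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

# The broker configuration example from this section.
broker_conf = """
# Local ZooKeeper servers
zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181

# Configuration store quorum connection string.
configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184

clusterName=us-west
"""

props = parse_properties(broker_conf)
for required in ("zookeeperServers", "configurationStoreServers", "clusterName"):
    assert required in props, f"missing {required}"
print(props["clusterName"])
```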
+
+### Broker hardware
+
+Pulsar brokers do not require any special hardware since they don't use the local disk. Fast CPUs and 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) are recommended since the software can take full advantage of that.
+
+### Starting the broker service
+
+You can start a broker in the background using [nohup](https://en.wikipedia.org/wiki/Nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```shell
+$ bin/pulsar-daemon start broker
+```
+
+You can also start brokers in the foreground using [`pulsar broker`](reference-cli-tools.md#pulsar-broker):
+
+```shell
+$ bin/pulsar broker
+```
+
+## Service discovery
+
+[Clients](getting-started-clients.md) connecting to Pulsar brokers need to be able to communicate with an entire Pulsar instance using a single URL. Pulsar provides a built-in service discovery mechanism that you can set up using the instructions [immediately below](#service-discovery-setup).
+
+You can also use your own service discovery system if you'd like. If you use your own system, there is just one requirement: when a client performs an HTTP request to an [endpoint](reference-configuration.md) for a Pulsar cluster, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means.
+
+> #### Service discovery already provided by many scheduling systems
+> Many large-scale deployment systems, such as [Kubernetes](deploy-kubernetes.md), have service discovery systems built in. If you're running Pulsar on such a system, you may not need to provide your own service discovery mechanism.
+
+
+### Service discovery setup
+
+The service discovery mechanism included with Pulsar maintains a list of active brokers, stored in ZooKeeper, and supports lookups via HTTP as well as Pulsar's [binary protocol](developing-binary-protocol.md).
+
+To get started setting up Pulsar's built-in service discovery, you need to change a few parameters in the [`conf/discovery.conf`](reference-configuration.md#service-discovery) configuration file. Set the [`zookeeperServers`](reference-configuration.md#service-discovery-zookeeperServers) parameter to the cluster's ZooKeeper quorum connection string and the [`configurationStoreServers`](reference-configuration.md#service-discovery-configurationStoreServers) setting to the [configuration
+store](reference-terminology.md#configuration-store) quorum connection string.
+
+```properties
+# Zookeeper quorum connection string
+zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+
+# Global configuration store connection string
+configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
+```
+
+To start the discovery service:
+
+```shell
+$ bin/pulsar-daemon start discovery
+```
+
+
+
+## Admin client and verification
+
+At this point your Pulsar instance should be ready to use. You can now configure client machines that can serve as [administrative clients](admin-api-overview.md) for each cluster. You can use the [`conf/client.conf`](reference-configuration.md#client) configuration file to configure admin clients.
+
+The most important thing is that you point the [`serviceUrl`](reference-configuration.md#client-serviceUrl) parameter to the correct service URL for the cluster:
+
+```properties
+serviceUrl=http://pulsar.us-west.example.com:8080/
+```
+
+## Provisioning new tenants
+
+Pulsar was built as a fundamentally multi-tenant system.
+
+To allow a new tenant to use the system, you need to create one. You can create a new tenant using the [`pulsar-admin`](reference-pulsar-admin.md#tenants-create) CLI tool:
+
+```shell
+$ bin/pulsar-admin tenants create test-tenant \
+  --allowed-clusters us-west \
+  --admin-roles test-admin-role
+```
+
+This will allow users who identify with the role `test-admin-role` to administer the configuration for the tenant `test-tenant`, which will only be allowed to use the cluster `us-west`. From now on, this tenant will be able to self-manage its resources.
+
+Once a tenant has been created, you will need to create [namespaces](reference-terminology.md#namespace) for topics within that tenant.
+
+The first step is to create a namespace. A namespace is an administrative unit that can contain many topics. A common practice is to create a namespace for each different use case from a single tenant.
+
+```shell
+$ bin/pulsar-admin namespaces create test-tenant/ns1
+```
+
+### Testing producer and consumer
+
+Everything is now ready to send and receive messages. The quickest way to test
+the system is through the `pulsar-perf` client tool.
+
+Let's use a topic in the namespace we just created. Topics are automatically
+created the first time a producer or a consumer tries to use them.
+
+The topic name in this case could be:
+
+```http
+persistent://test-tenant/ns1/my-topic
+```
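For illustration (this helper is not part of the Pulsar client API), the structure of a fully qualified topic name can be broken down like this:

```python
# Illustrative sketch: split a fully qualified topic name into its parts,
# following the persistent://tenant/namespace/topic scheme used above.
def parse_topic(topic: str):
    scheme, _, rest = topic.partition("://")
    if scheme not in ("persistent", "non-persistent"):
        raise ValueError(f"unexpected topic domain: {scheme}")
    tenant, namespace, name = rest.split("/", 2)
    return scheme, tenant, namespace, name

print(parse_topic("persistent://test-tenant/ns1/my-topic"))
```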
+
+Start a consumer that creates a subscription on the topic and waits for messages:
+
+```shell
+$ bin/pulsar-perf consume persistent://test-tenant/ns1/my-topic
+```
+
+Start a producer that publishes messages at a fixed rate and reports stats every
+10 seconds:
+
+```shell
+$ bin/pulsar-perf produce persistent://test-tenant/ns1/my-topic
+```
+
+To report the topic stats:
+
+```shell
+$ bin/pulsar-admin persistent stats persistent://test-tenant/ns1/my-topic
+```
diff --git a/site2/website/versioned_docs/version-2.2.0/deploy-bare-metal.md b/site2/website/versioned_docs/version-2.2.0/deploy-bare-metal.md
new file mode 100644
index 0000000000..bf0a711507
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/deploy-bare-metal.md
@@ -0,0 +1,357 @@
+---
+id: version-2.2.0-deploy-bare-metal
+title: Deploying a cluster on bare metal
+sidebar_label: Bare metal
+original_id: deploy-bare-metal
+---
+
+
+> ### Tips
+>
+> 1. Single-cluster Pulsar installations should be sufficient for all but the most ambitious use cases. If you're interested in experimenting with
+> Pulsar or using it in a startup or on a single team, we recommend opting for a single cluster. If you do need to run a multi-cluster Pulsar instance,
+> however, see the guide [here](deploy-bare-metal-multi-cluster.md).
+>
+> 2. If you want to use all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you need to download the `apache-pulsar-io-connectors`
+> package and make sure it is installed under the `connectors` directory in the Pulsar directory on every broker node, or on every function-worker node if you
+> run a separate cluster of function workers for [Pulsar Functions](functions-overview.md).
+
+Deploying a Pulsar cluster involves doing the following (in order):
+
+* Deploying a [ZooKeeper](#deploying-a-zookeeper-cluster) cluster (optional)
+* Initializing [cluster metadata](#initializing-cluster-metadata)
+* Deploying a [BookKeeper](#deploying-a-bookkeeper-cluster) cluster
+* Deploying one or more Pulsar [brokers](#deploying-pulsar-brokers)
+
+## Preparation
+
+### Requirements
+
+> If you already have an existing ZooKeeper cluster and would like to reuse it, you don't need to prepare the machines
+> for running ZooKeeper.
+
+To run Pulsar on bare metal, you will need:
+
+* At least 6 Linux machines or VMs
+  * 3 running [ZooKeeper](https://zookeeper.apache.org)
+  * 3 running a Pulsar broker and a [BookKeeper](https://bookkeeper.apache.org) bookie
+* A single [DNS](https://en.wikipedia.org/wiki/Domain_Name_System) name covering all of the Pulsar broker hosts
+
+Each machine in your cluster will need to have [Java 8](http://www.oracle.com/technetwork/java/javase/downloads/index.html) or higher installed.
+
+Here's a diagram showing the basic setup:
+
+![alt-text](assets/pulsar-basic-setup.png)
+
+In this diagram, connecting clients need to be able to communicate with the Pulsar cluster using a single URL, in this case `pulsar-cluster.acme.com`, that abstracts over all of the message-handling brokers. Pulsar message brokers run on machines alongside BookKeeper bookies; brokers and bookies, in turn, rely on ZooKeeper.
+
+### Hardware considerations
+
+When deploying a Pulsar cluster, we have some basic recommendations that you should keep in mind when capacity planning.
+
+#### ZooKeeper
+
+For machines running ZooKeeper, we recommend using lighter-weight machines or VMs. Pulsar uses ZooKeeper only for periodic coordination- and configuration-related tasks, *not* for basic operations. If you're running Pulsar on [Amazon Web Services](https://aws.amazon.com/) (AWS), for example, a [t2.small](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html) instance would likely suffice.
+
+#### Bookies & Brokers
+
+For machines running a bookie and a Pulsar broker, we recommend using more powerful machines. For an AWS deployment, for example, [i3.4xlarge](https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/) instances may be appropriate. On those machines we also recommend:
+
+* Fast CPUs and 10Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) (for Pulsar brokers)
+* Small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache (for BookKeeper bookies)
+
+## Installing the Pulsar binary package
+
+> You'll need to install the Pulsar binary package on *each machine in the cluster*, including machines running [ZooKeeper](#deploying-a-zookeeper-cluster) and [BookKeeper](#deploying-a-bookkeeper-cluster).
+
+To get started deploying a Pulsar cluster on bare metal, you'll need to download a binary tarball release in one of the following ways:
+
+* By clicking on the link directly below, which will automatically trigger a download:
+  * <a href="pulsar:binary_release_url" download>Pulsar {{pulsar:version}} binary release</a>
+* From the Pulsar [downloads page](pulsar:download_page_url)
+* From the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest) on [GitHub](https://github.com)
+* Using [wget](https://www.gnu.org/software/wget):
+
+```bash
+$ wget pulsar:binary_release_url
+```
+
+Once you've downloaded the tarball, untar it and `cd` into the resulting directory:
+
+```bash
+$ tar xvzf apache-pulsar-{{pulsar:version}}-bin.tar.gz
+$ cd apache-pulsar-{{pulsar:version}}
+```
+
+The untarred directory contains the following subdirectories:
+
+Directory | Contains
+:---------|:--------
+`bin` | Pulsar's [command-line tools](reference-cli-tools.md), such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](reference-pulsar-admin.md)
+`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more
+`data` | The data storage directory used by ZooKeeper and BookKeeper.
+`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar.
+`logs` | Logs created by the installation.
+
+## Installing Builtin Connectors (optional)
+
+> Since release `2.1.0-incubating`, Pulsar ships a separate binary distribution containing all the `builtin` connectors.
+> If you would like to enable those `builtin` connectors, follow the instructions below; otherwise you can
+> skip this section for now.
+
+To get started using builtin connectors, you'll need to download the connectors tarball release on every broker node in
+one of the following ways:
+
+* by clicking the link below and downloading the release from an Apache mirror:
+
+  * <a href="pulsar:connector_release_url" download>Pulsar IO Connectors {{pulsar:version}} release</a>
+
+* from the Pulsar [downloads page](pulsar:download_page_url)
+* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+* using [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  $ wget pulsar:connector_release_url
+  ```
+
+Once the tarball is downloaded, untar the io-connectors package in the Pulsar directory and copy the extracted
+connectors into a `connectors` subdirectory:
+
+```bash
+$ tar xvfz apache-pulsar-io-connectors-{{pulsar:version}}-bin.tar.gz
+
+# you will find a directory named `apache-pulsar-io-connectors-{{pulsar:version}}` in the pulsar directory
+# then copy the connectors
+
+$ mv apache-pulsar-io-connectors-{{pulsar:version}}/connectors connectors
+
+$ ls connectors
+pulsar-io-aerospike-{{pulsar:version}}.nar
+pulsar-io-cassandra-{{pulsar:version}}.nar
+pulsar-io-kafka-{{pulsar:version}}.nar
+pulsar-io-kinesis-{{pulsar:version}}.nar
+pulsar-io-rabbitmq-{{pulsar:version}}.nar
+pulsar-io-twitter-{{pulsar:version}}.nar
+...
+```
+
+## Deploying a ZooKeeper cluster
+
+> If you already have an existing ZooKeeper cluster and would like to use it, you can skip this section.
+
+[ZooKeeper](https://zookeeper.apache.org) manages a variety of essential coordination- and configuration-related tasks for Pulsar. To deploy a Pulsar cluster you'll need to deploy ZooKeeper first (before all other components). We recommend deploying a 3-node ZooKeeper cluster. Pulsar does not make heavy use of ZooKeeper, so more lightweight machines or VMs should suffice for running ZooKeeper.
+
+To begin, add all ZooKeeper servers to the configuration specified in [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) (in the Pulsar directory you created [above](#installing-the-pulsar-binary-package)). Here's an example:
+
+```properties
+server.1=zk1.us-west.example.com:2888:3888
+server.2=zk2.us-west.example.com:2888:3888
+server.3=zk3.us-west.example.com:2888:3888
+```
+
+On each host, you need to specify the ID of the node in each node's `myid` file, which is in each server's `data/zookeeper` folder by default (this can be changed via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).
+
+> See the [Multi-server setup guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup) in the ZooKeeper documentation for detailed info on `myid` and more.
+
+On a ZooKeeper server at `zk1.us-west.example.com`, for example, you could set the `myid` value like this:
+
+```bash
+$ mkdir -p data/zookeeper
+$ echo 1 > data/zookeeper/myid
+```
+
+On `zk2.us-west.example.com` the command would be `echo 2 > data/zookeeper/myid` and so on.
+
+Once each server has been added to the `zookeeper.conf` configuration and has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the background, using nohup) with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+$ bin/pulsar-daemon start zookeeper
+```
+
+## Initializing cluster metadata
+
+Once you've deployed ZooKeeper for your cluster, there is some metadata that needs to be written to ZooKeeper for each cluster in your instance. It only needs to be written **once**.
+
+You can initialize this metadata using the [`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata) command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. This command can be run on any machine in your ZooKeeper cluster. Here's an example:
+
+```shell
+$ bin/pulsar initialize-cluster-metadata \
+  --cluster pulsar-cluster-1 \
+  --zookeeper zk1.us-west.example.com:2181 \
+  --configuration-store zk1.us-west.example.com:2181 \
+  --web-service-url http://pulsar.us-west.example.com:8080 \
+  --web-service-url-tls https://pulsar.us-west.example.com:8443 \
+  --broker-service-url pulsar://pulsar.us-west.example.com:6650 \
+  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651
+```
+
+As you can see from the example above, the following needs to be specified:
+
+Flag | Description
+:----|:-----------
+`--cluster` | A name for the cluster
+`--zookeeper` | A "local" ZooKeeper connection string for the cluster. This connection string only needs to include *one* machine in the ZooKeeper cluster.
+`--configuration-store` | The configuration store connection string for the entire instance. As with the `--zookeeper` flag, this connection string only needs to include *one* machine in the ZooKeeper cluster.
+`--web-service-url` | The web service URL for the cluster, plus a port. This URL should be a standard DNS name. The default port is 8080 (we don't recommend using a different port).
+`--web-service-url-tls` | If you're using [TLS](security-tls-transport.md), you'll also need to specify a TLS web service URL for the cluster. The default port is 8443 (we don't recommend using a different port).
+`--broker-service-url` | A broker service URL enabling interaction with the brokers in the cluster. This URL should use the same DNS name as the web service URL but should use the `pulsar` scheme instead. The default port is 6650 (we don't recommend using a different port).
+`--broker-service-url-tls` | If you're using [TLS](security-tls-transport.md), you'll also need to specify a TLS broker service URL for the brokers in the cluster. The default port is 6651 (we don't recommend using a different port).
+
+## Deploying a BookKeeper cluster
+
+[BookKeeper](https://bookkeeper.apache.org) handles all persistent data storage in Pulsar. You will need to deploy a cluster of BookKeeper bookies to use Pulsar. We recommend running a **3-bookie BookKeeper cluster**.
+
+BookKeeper bookies can be configured using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important step in configuring bookies for our purposes here is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) is set to the connection string for the ZooKeeper cluster. Here's an example:
+
+```properties
+zkServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+```
+
+Once you've appropriately modified the `zkServers` parameter, you can provide any other configuration modifications you need. You can find a full listing of the available BookKeeper configuration parameters [here](reference-configuration.md#bookkeeper), although we would recommend consulting the [BookKeeper documentation](http://bookkeeper.apache.org/docs/latest/reference/config/) for a more in-depth guide.
+
+> ##### NOTES
+>
+> Since the Pulsar 2.1.0 release, Pulsar supports [stateful functions](functions-state.md) for Pulsar Functions. If you would like to enable that feature,
+> you need to enable the table service in BookKeeper by adding the following setting to the `conf/bookkeeper.conf` file.
+>
+> ```conf
+> extraServerComponents=org.apache.bookkeeper.stream.server.StreamStorageLifecycleComponent
+> ```
+
+Once you've applied the desired configuration in `conf/bookkeeper.conf`, you can start up a bookie on each of your BookKeeper hosts. You can start up each bookie either in the background, using [nohup](https://en.wikipedia.org/wiki/Nohup), or in the foreground.
+
+To start the bookie in the background, use the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+$ bin/pulsar-daemon start bookie
+```
+
+To start the bookie in the foreground:
+
+```bash
+$ bin/bookkeeper bookie
+```
+
+You can verify that a bookie is working properly by running the `bookiesanity` command for the [BookKeeper shell](reference-cli-tools.md#shell) on it:
+
+```bash
+$ bin/bookkeeper shell bookiesanity
+```
+
+This will create an ephemeral BookKeeper ledger on the local bookie, write a few entries, read them back, and finally delete the ledger.
+
+After you have started all the bookies, you can run the `simpletest` command for the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node to
+verify that all the bookies in the cluster are up and running.
+
+```bash
+$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum <num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>
+```
+
+This command will create a `num-bookies` sized ledger on the cluster, write a few entries, and finally delete the ledger.
+
+
+## Deploying Pulsar brokers
+
+Pulsar brokers are the last thing you need to deploy in your Pulsar cluster. Brokers handle Pulsar messages and provide Pulsar's administrative interface. We recommend running **3 brokers**, one for each machine that's already running a BookKeeper bookie.
+
+### Configuring Brokers
+
+The most important element of broker configuration is ensuring that each broker is aware of the ZooKeeper cluster that you've deployed. Make sure that the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) and [`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers) parameters are set correctly. In this case, since we only have one cluster and no separate configuration store, the `configurationStoreServers` parameter points to the same servers as `zookeeperServers`.
+
+```properties
+zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+configurationStoreServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+```
+
+You also need to specify the cluster name (matching the name that you provided when [initializing the cluster's metadata](#initializing-cluster-metadata)):
+
+```properties
+clusterName=pulsar-cluster-1
+```
+
+### Enabling Pulsar Functions (optional)
+
+If you want to enable [Pulsar Functions](functions-overview.md), follow the instructions below:
+
+1. Edit `conf/broker.conf` to enable the function worker by setting `functionsWorkerEnabled` to `true`.
+
+    ```conf
+    functionsWorkerEnabled=true
+    ```
+
+2. Edit `conf/functions_worker.yml` and set `pulsarFunctionsCluster` to the cluster name that you provided when [initializing the cluster's metadata](#initializing-cluster-metadata). 
+
+    ```conf
+    pulsarFunctionsCluster=pulsar-cluster-1
+    ```
+
+### Starting Brokers
+
+You can then provide any other configuration changes that you'd like in the [`conf/broker.conf`](reference-configuration.md#broker) file. Once you've decided on a configuration, you can start up the brokers for your Pulsar cluster. Like ZooKeeper and BookKeeper, brokers can be started either in the foreground or in the background, using nohup.
+
+You can start a broker in the foreground using the [`pulsar broker`](reference-cli-tools.md#pulsar-broker) command:
+
+```bash
+$ bin/pulsar broker
+```
+
+You can start a broker in the background using the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+$ bin/pulsar-daemon start broker
+```
+
+Once you've successfully started up all the brokers you intend to use, your Pulsar cluster should be ready to go!
+
+## Connecting to the running cluster
+
+Once your Pulsar cluster is up and running, you should be able to connect with it using Pulsar clients. One such client is the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool, which is included with the Pulsar binary package. The `pulsar-client` tool can publish messages to and consume messages from Pulsar topics and thus provides a simple way to make sure that your cluster is running properly.
+
+To use the `pulsar-client` tool, first modify the client configuration file in [`conf/client.conf`](reference-configuration.md#client) in your binary package. You'll need to change the values for `webServiceUrl` and `brokerServiceUrl`, substituting `localhost` (which is the default), with the DNS name that you've assigned to your broker/bookie hosts. Here's an example:
+
+```properties
+webServiceUrl=http://us-west.example.com:8080/
+brokerServiceUrl=pulsar://us-west.example.com:6650/
+```
+
+Once you've done that, you can publish a message to a Pulsar topic:
+
+```bash
+$ bin/pulsar-client produce \
+  persistent://public/default/test \
+  -n 1 \
+  -m "Hello, Pulsar"
+```
+
+> You may need to use a different cluster name in the topic if you specified a cluster name different from `pulsar-cluster-1`.
+
+This will publish a single message to the Pulsar topic.
+
+## Running Functions
+
+> If you have [enabled](#enabling-pulsar-functions-optional) Pulsar Functions, you can also try out Pulsar Functions now.
+
+Create an ExclamationFunction named `exclamation`:
+
+```bash
+bin/pulsar-admin functions create \
+  --jar examples/api-examples.jar \
+  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
+  --inputs persistent://public/default/exclamation-input \
+  --output persistent://public/default/exclamation-output \
+  --tenant public \
+  --namespace default \
+  --name exclamation
+```
+
+Check if the function is running as expected by [triggering](functions-deploying.md#triggering-pulsar-functions) the function.
+
+```bash
+bin/pulsar-admin functions trigger --name exclamation --trigger-value "hello world"
+```
+
+You should see output like the following:
+
+```shell
+hello world!
+```
diff --git a/site2/website/versioned_docs/version-2.2.0/deploy-dcos.md b/site2/website/versioned_docs/version-2.2.0/deploy-dcos.md
new file mode 100644
index 0000000000..8c00b33609
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/deploy-dcos.md
@@ -0,0 +1,183 @@
+---
+id: version-2.2.0-deploy-dcos
+title: Deploying Pulsar on DC/OS
+sidebar_label: DC/OS
+original_id: deploy-dcos
+---
+
+> ### Tips
+>
+> If you want to enable all builtin [Pulsar IO](io-overview.md) connectors in your Pulsar deployment, you can choose to use `apachepulsar/pulsar-all` image instead of
+> `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).
+
+[DC/OS](https://dcos.io/) (the <strong>D</strong>ata<strong>C</strong>enter <strong>O</strong>perating <strong>S</strong>ystem) is a distributed operating system used for deploying and managing applications and systems on [Apache Mesos](http://mesos.apache.org/). DC/OS is an open-source tool created and maintained by [Mesosphere](https://mesosphere.com/).
+
+Apache Pulsar is available as a [Marathon Application Group](https://mesosphere.github.io/marathon/docs/application-groups.html), which runs multiple applications as manageable sets.
+
+## Prerequisites
+
+In order to run Pulsar on DC/OS, you will need the following:
+
+* DC/OS version [1.9](https://docs.mesosphere.com/1.9/) or higher
+* A [DC/OS cluster](https://docs.mesosphere.com/1.9/installing/) with at least three agent nodes
+* The [DC/OS CLI tool](https://docs.mesosphere.com/1.9/cli/install/) installed
+* The [`PulsarGroups.json`](https://github.com/apache/pulsar/blob/master/deployment/dcos/PulsarGroups.json) configuration file from the Pulsar GitHub repo.
+
+  ```bash
+  $ curl -O https://raw.githubusercontent.com/apache/pulsar/master/deployment/dcos/PulsarGroups.json
+  ```
+
+Each node in the DC/OS-managed Mesos cluster must have at least:
+
+* 4 CPU
+* 4 GB of memory
+* 60 GB of total persistent disk
+
+Alternatively, you can change the configuration in `PulsarGroups.json` to match your DC/OS cluster's resources.
+
+## Deploy Pulsar using the DC/OS command interface
+
+You can deploy Pulsar on DC/OS using this command:
+
+```bash
+$ dcos marathon group add PulsarGroups.json
+```
+
+This command will deploy Docker container instances in three groups, which together comprise a Pulsar cluster:
+
+* 3 bookies (1 [bookie](reference-terminology.md#bookie) on each agent node and 1 [bookie recovery](http://bookkeeper.apache.org/docs/latest/admin/autorecovery/) instance)
+* 3 Pulsar [brokers](reference-terminology.md#broker) (1 broker on each node and 1 admin instance)
+* 1 [Prometheus](http://prometheus.io/) instance and 1 [Grafana](https://grafana.com/) instance
+
+
+> When running DC/OS, a ZooKeeper cluster is already running at `master.mesos:2181`, thus there's no need to install or start up ZooKeeper separately.
+
+After executing the `dcos` command above, click on the **Services** tab in the DC/OS [GUI interface](https://docs.mesosphere.com/latest/gui/), which you can access at [http://m1.dcos](http://m1.dcos) in this example. You should see several applications in the process of deploying.
+
+![DC/OS command executed](assets/dcos_command_execute.png)
+
+![DC/OS command executed2](assets/dcos_command_execute2.png)
+
+## The BookKeeper group
+
+To monitor the status of the BookKeeper cluster deployment, click on the **bookkeeper** group in the parent **pulsar** group.
+
+![DC/OS bookkeeper status](assets/dcos_bookkeeper_status.png)
+
+At this point, 3 [bookies](reference-terminology.md#bookie) should be shown as green, which means that they have been deployed successfully and are now running.
+ 
+![DC/OS bookkeeper running](assets/dcos_bookkeeper_run.png)
+ 
+You can also click into each bookie instance to get more detailed info, such as the bookie running log.
+
+![DC/OS bookie log](assets/dcos_bookie_log.png)
+
+To display information about the BookKeeper in ZooKeeper, you can visit [http://m1.dcos/exhibitor](http://m1.dcos/exhibitor). In this example, there are 3 bookies under the `available` directory.
+
+![DC/OS bookkeeper in zk](assets/dcos_bookkeeper_in_zookeeper.png)
+
+## The Pulsar broker Group
+
+Similar to the BookKeeper group above, click into the **brokers** group to check the status of the Pulsar brokers.
+
+![DC/OS broker status](assets/dcos_broker_status.png)
+
+![DC/OS broker running](assets/dcos_broker_run.png)
+
+You can also click into each broker instance to get more detailed info, such as the broker running log.
+
+![DC/OS broker log](assets/dcos_broker_log.png)
+
+Broker cluster information in ZooKeeper is also available through the web UI. In this example, you can see that the `loadbalance` and `managed-ledgers` directories have been created.
+
+![DC/OS broker in zk](assets/dcos_broker_in_zookeeper.png)
+
+## Monitor Group
+
+The **monitor** group consists of Prometheus and Grafana.
+
+![DC/OS monitor status](assets/dcos_monitor_status.png)
+
+### Prometheus
+
+Click into the instance of `prom` to get the endpoint of Prometheus, which is `192.168.65.121:9090` in this example.
+
+![DC/OS prom endpoint](assets/dcos_prom_endpoint.png)
+
+If you click that endpoint, you'll see the Prometheus dashboard. The [http://192.168.65.121:9090/targets](http://192.168.65.121:9090/targets) URL will display all the bookies and brokers.
+
+![DC/OS prom targets](assets/dcos_prom_targets.png)
+
+### Grafana
+
+Click into `grafana` to get the endpoint for Grafana, which is `192.168.65.121:3000` in this example.
+ 
+![DC/OS grafana endpoint](assets/dcos_grafana_endpoint.png)
+
+If you click that endpoint, you can access the Grafana dashboard.
+
+![DC/OS grafana targets](assets/dcos_grafana_dashboard.png)
+
+## Run a simple Pulsar consumer and producer on DC/OS
+
+Now that we have a fully deployed Pulsar cluster, we can run a simple consumer and producer to show Pulsar on DC/OS in action.
+
+### Download and prepare the Pulsar Java tutorial
+
+There's a [Pulsar Java tutorial](https://github.com/streamlio/pulsar-java-tutorial) repo that you can clone. This repo contains a simple Pulsar consumer and producer (more info can be found in the repo's `README` file).
+
+```bash
+$ git clone https://github.com/streamlio/pulsar-java-tutorial
+```
+
+Change the `SERVICE_URL` from `pulsar://localhost:6650` to `pulsar://a1.dcos:6650` in both [`ConsumerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ConsumerTutorial.java) and [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java).
+The `pulsar://a1.dcos:6650` endpoint is for the broker service. Endpoint details for each broker instance can be fetched from the DC/OS GUI. `a1.dcos` is a DC/OS client agent, which runs a broker. This can also be replaced by the client agent IP address.
+
+Now, change the message number from 10 to 10000000 in the main method of [`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java) so that it produces more messages.
+
+Then compile the project code with the following command:
+
+```bash
+$ mvn clean package
+```
+
+### Run the consumer and producer
+
+Execute this command to run the consumer:
+
+```bash
+$ mvn exec:java -Dexec.mainClass="tutorial.ConsumerTutorial"
+```
+
+Execute this command to run the producer:
+
+```bash
+$ mvn exec:java -Dexec.mainClass="tutorial.ProducerTutorial"
+```
+
+You can see the producer producing messages and the consumer consuming messages through the DC/OS GUI.
+
+![DC/OS pulsar producer](assets/dcos_producer.png)
+
+![DC/OS pulsar consumer](assets/dcos_consumer.png)
+
+### View Grafana metric output
+
+While the producer and consumer are running, you can access running metrics information from Grafana.
+
+![DC/OS pulsar dashboard](assets/dcos_metrics.png)
+
+
+## Uninstall Pulsar
+
+You can shut down and uninstall the `pulsar` application from DC/OS at any time in two ways:
+
+1. Using the DC/OS GUI, you can choose **Delete** at the right end of the Pulsar group.
+
+    ![DC/OS pulsar uninstall](assets/dcos_uninstall.png)
+
+2. You can use the following command:
+
+    ```bash
+    $ dcos marathon group remove /pulsar
+    ```
diff --git a/site2/website/versioned_docs/version-2.2.0/developing-binary-protocol.md b/site2/website/versioned_docs/version-2.2.0/developing-binary-protocol.md
new file mode 100644
index 0000000000..2ede951817
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/developing-binary-protocol.md
@@ -0,0 +1,553 @@
+---
+id: version-2.2.0-develop-binary-protocol
+title: Pulsar binary protocol specification
+sidebar_label: Binary protocol
+original_id: develop-binary-protocol
+---
+
+Pulsar uses a custom binary protocol for communications between producers/consumers and brokers. This protocol is designed to support required features, such as acknowledgements and flow control, while ensuring maximum transport and implementation efficiency.
+
+Clients and brokers exchange *commands* with each other. Commands are formatted as binary [protocol buffer](https://developers.google.com/protocol-buffers/) (aka *protobuf*) messages. The format of protobuf commands is specified in the [`PulsarApi.proto`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto) file and also documented in the [Protobuf interface](#protobuf-interface) section below.
+
+> ### Connection sharing
+> Commands for different producers and consumers can be interleaved and sent through the same connection without restriction.
+
+All commands associated with Pulsar's protocol are contained in a
+[`BaseCommand`](#pulsar.proto.BaseCommand) protobuf message that includes a [`Type`](#pulsar.proto.Type) [enum](https://developers.google.com/protocol-buffers/docs/proto#enum) with all possible subcommands as optional fields. `BaseCommand` messages can specify only one subcommand.
+
+## Framing
+
+Since protobuf doesn't provide any sort of message frame, all messages in the Pulsar protocol are prepended with a 4-byte field that specifies the size of the frame. The maximum allowable size of a single frame is 5 MB.
+
+The Pulsar protocol allows for two types of commands:
+
+1. **Simple commands** that do not carry a message payload.
+2. **Payload commands** that bear a payload that is used when publishing or delivering messages. In payload commands, the protobuf command data is followed by protobuf [metadata](#message-metadata) and then the payload, which is passed in raw format outside of protobuf. All sizes are passed as 4-byte unsigned big endian integers.
+
+> Message payloads are passed in raw format rather than protobuf format for efficiency reasons.
+
+### Simple commands
+
+Simple (payload-free) commands have this basic structure:
+
+| Component   | Description                                                                             | Size (in bytes) |
+|:------------|:----------------------------------------------------------------------------------------|:----------------|
+| totalSize   | The size of the frame, counting everything that comes after it (in bytes)               | 4               |
+| commandSize | The size of the protobuf-serialized command                                             | 4               |
+| message     | The protobuf message serialized in a raw binary format (rather than in protobuf format) |                 |
+
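+The layout above can be sketched in a few lines of Python. This is an illustrative sketch, not part of any Pulsar client; `command_bytes` stands in for a protobuf-serialized `BaseCommand`:
+
+```python
+import struct
+
+def encode_simple_frame(command_bytes: bytes) -> bytes:
+    """Frame a payload-free command: [totalSize][commandSize][command].
+    Both size fields are 4-byte unsigned big-endian integers, and
+    totalSize counts everything that comes after it."""
+    command_size = len(command_bytes)
+    return struct.pack(">II", 4 + command_size, command_size) + command_bytes
+
+def decode_simple_frame(frame: bytes) -> bytes:
+    """Inverse of encode_simple_frame: return the raw command bytes."""
+    total_size, command_size = struct.unpack(">II", frame[:8])
+    assert total_size == 4 + command_size, "malformed frame"
+    assert total_size <= 5 * 1024 * 1024, "frame exceeds the 5 MB limit"
+    return frame[8:8 + command_size]
+```
+
+A round trip through both functions is a quick sanity check of the layout.
+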
+### Payload commands
+
+Payload commands have this basic structure:
+
+| Component    | Description                                                                                 | Size (in bytes) |
+|:-------------|:--------------------------------------------------------------------------------------------|:----------------|
+| totalSize    | The size of the frame, counting everything that comes after it (in bytes)                   | 4               |
+| commandSize  | The size of the protobuf-serialized command                                                 | 4               |
+| message      | The protobuf message serialized in a raw binary format (rather than in protobuf format)     |                 |
+| magicNumber  | A 2-byte magic number (`0x0e01`) identifying the current format                             | 2               |
+| checksum     | A [CRC32-C checksum](http://www.evanjones.ca/crc32c.html) of everything that comes after it | 4               |
+| metadataSize | The size of the message [metadata](#message-metadata)                                       | 4               |
+| metadata     | The message [metadata](#message-metadata) stored as a binary protobuf message               |                 |
+| payload      | Anything left in the frame is considered the payload and can include any sequence of bytes  |                 |
+
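+To make the checksum region concrete, here is a similar illustrative Python sketch of the payload-command layout. The bitwise CRC-32C implementation is for demonstration only (real clients use optimized or hardware-accelerated implementations), and `command` and `metadata` stand in for serialized protobuf messages:
+
+```python
+import struct
+
+def crc32c(data: bytes) -> int:
+    """Bitwise CRC-32C (Castagnoli, reflected polynomial 0x82F63B78)."""
+    crc = 0xFFFFFFFF
+    for byte in data:
+        crc ^= byte
+        for _ in range(8):
+            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
+    return crc ^ 0xFFFFFFFF
+
+def encode_payload_frame(command: bytes, metadata: bytes, payload: bytes) -> bytes:
+    """[totalSize][commandSize][command][magic 0x0e01][checksum]
+    [metadataSize][metadata][payload] -- the checksum covers everything
+    that comes after the checksum field itself."""
+    checked = struct.pack(">I", len(metadata)) + metadata + payload
+    body = (struct.pack(">I", len(command)) + command
+            + b"\x0e\x01"
+            + struct.pack(">I", crc32c(checked))
+            + checked)
+    return struct.pack(">I", len(body)) + body
+```
+
+Note that the checksum covers the metadata size, metadata, and payload, so corruption in any of those regions is detected before the payload reaches the application.
+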
+## Message metadata
+
+Message metadata is stored alongside the application-specified payload as a serialized protobuf message. Metadata is created by the producer and passed on unchanged to the consumer.
+
+| Field                                | Description                                                                                                                                                                                                                                               |
+|:-------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `producer_name`                      | The name of the producer that published the message                                                                                                                                                                                         |
+| `sequence_id`                        | The sequence ID of the message, assigned by the producer                                                                                                                                                                                     |
+| `publish_time`                       | The publish timestamp in Unix time (i.e. as the number of milliseconds since January 1st, 1970 in UTC)                                                                                                                                                    |
+| `properties`                         | A sequence of key/value pairs (using the [`KeyValue`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/proto/PulsarApi.proto#L32) message). These are application-defined keys and values with no special meaning to Pulsar. |
+| `replicated_from` *(optional)*       | Indicates that the message has been replicated and specifies the name of the [cluster](reference-terminology.md#cluster) where the message was originally published                                                                                                             |
+| `partition_key` *(optional)*         | While publishing on a partitioned topic, if the key is present, the hash of the key is used to determine which partition to choose                                                                                                                        |
+| `compression` *(optional)*           | Signals that payload has been compressed and with which compression library                                                                                                                                                                               |
+| `uncompressed_size` *(optional)*     | If compression is used, the producer must fill the uncompressed size field with the original payload size                                                                                                                                                 |
+| `num_messages_in_batch` *(optional)* | If this message is really a [batch](#batch-messages) of multiple entries, this field must be set to the number of messages in the batch                                                                                                                   |
+
+### Batch messages
+
+When using batch messages, the payload contains a list of entries, each with
+its own metadata, defined by the `SingleMessageMetadata` object.
+
+
+For a single batch, the payload format will look like this:
+
+
+| Field         | Description                                                 |
+|:--------------|:------------------------------------------------------------|
+| metadataSizeN | The size of the single message metadata serialized Protobuf |
+| metadataN     | Single message metadata                                     |
+| payloadN      | Message payload passed by application                       |
+
+Each metadata field looks like this:
+
+| Field                      | Description                                             |
+|:---------------------------|:--------------------------------------------------------|
+| properties                 | Application-defined properties                          |
+| partition key *(optional)* | Key to indicate the hashing to a particular partition   |
+| payload_size               | Size of the payload for the single message in the batch |
+
+When compression is enabled, the whole batch will be compressed at once.
+
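+Packing a batch payload can be sketched the same way (illustrative Python, with `metadata` standing in for a serialized `SingleMessageMetadata` message):
+
+```python
+import struct
+
+def pack_batch(entries):
+    """Concatenate (metadata, payload) pairs as
+    [metadataSizeN][metadataN][payloadN] ... per the table above."""
+    out = b""
+    for metadata, payload in entries:
+        out += struct.pack(">I", len(metadata)) + metadata + payload
+    return out
+
+def unpack_batch(buf: bytes, payload_sizes) -> list:
+    """Split a batch payload back into (metadata, payload) pairs.
+    A real consumer reads each payload size from the payload_size field
+    of the entry's SingleMessageMetadata; here the sizes are passed
+    explicitly because this sketch does not parse protobuf."""
+    entries, pos = [], 0
+    for size in payload_sizes:
+        (md_size,) = struct.unpack_from(">I", buf, pos)
+        pos += 4
+        metadata, pos = buf[pos:pos + md_size], pos + md_size
+        payload, pos = buf[pos:pos + size], pos + size
+        entries.append((metadata, payload))
+    return entries
+```
+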
+## Interactions
+
+### Connection establishment
+
+After opening a TCP connection to a broker, typically on port 6650, the client
+is responsible for initiating the session.
+
+![Connect interaction](assets/binary-protocol-connect.png)
+
+After receiving a `Connected` response from the broker, the client can
+consider the connection ready to use. If the broker fails to validate
+the client's authentication, it will instead reply with an `Error` command and
+close the TCP connection.
+
+Example:
+
+```protobuf
+message CommandConnect {
+  "client_version" : "Pulsar-Client-Java-v1.15.2",
+  "auth_method_name" : "my-authentication-plugin",
+  "auth_data" : "my-auth-data",
+  "protocol_version" : 6
+}
+```
+
+Fields:
+ * `client_version` → String based identifier. Format is not enforced
+ * `auth_method_name` → *(optional)* Name of the authentication plugin if auth
+   enabled
+ * `auth_data` → *(optional)* Plugin specific authentication data
+ * `protocol_version` → Indicates the protocol version supported by the
+   client. Broker will not send commands introduced in newer revisions of the
+   protocol. Broker might be enforcing a minimum version
+
+```protobuf
+message CommandConnected {
+  "server_version" : "Pulsar-Broker-v1.15.2",
+  "protocol_version" : 6
+}
+```
+
+Fields:
+ * `server_version` → String identifier of the broker version
+ * `protocol_version` → Protocol version supported by the broker. The client
+   must not attempt to send commands introduced in newer revisions of the
+   protocol
+
+### Keep Alive
+
+To identify prolonged network partitions between clients and brokers, or cases
+in which a machine crashes without interrupting the TCP connection on the remote
+end (e.g. power outage, kernel panic, hard reboot), there is a
+mechanism to probe the availability status of the remote peer.
+
+Both clients and brokers send `Ping` commands periodically and will
+close the socket if a `Pong` response is not received within a timeout (the
+default used by the broker is 60 seconds).
+
+A valid implementation of a Pulsar client is not required to send `Ping`
+probes, though it is required to reply promptly after receiving one from the
+broker in order to prevent the remote side from forcibly closing the TCP connection.
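
The timeout bookkeeping described above can be sketched as follows (illustrative Python, not part of any Pulsar client; the 60-second default mirrors the broker's):

```python
import time

class KeepAlive:
    """Sketch of the keep-alive logic: each side periodically sends Ping
    and closes the socket if no Pong arrives within the timeout."""

    def __init__(self, timeout_seconds=60):
        self.timeout = timeout_seconds
        self.last_pong = time.monotonic()

    def on_pong_received(self):
        # Any Pong from the peer resets the liveness clock.
        self.last_pong = time.monotonic()

    def should_close(self, now=None):
        # Close the connection when the peer has been silent past the timeout.
        now = time.monotonic() if now is None else now
        return (now - self.last_pong) > self.timeout
```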
+
+
+### Producer
+
+In order to send messages, a client needs to establish a producer. When creating
+a producer, the broker first verifies that this particular client is
+authorized to publish on the topic.
+
+Once the client gets confirmation of the producer creation, it can publish
+messages to the broker, referring to the producer id negotiated during creation.
+
+![Producer interaction](assets/binary-protocol-producer.png)
+
+##### Command Producer
+
+```protobuf
+message CommandProducer {
+  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
+  "producer_id" : 1,
+  "request_id" : 1
+}
+```
+
+Parameters:
+ * `topic` → Complete name of the topic on which to create the producer
+ * `producer_id` → Client-generated producer identifier. Needs to be unique
+    within the same connection
+ * `request_id` → Identifier for this request. Used to match the response with
+    the originating request. Needs to be unique within the same connection
+ * `producer_name` → *(optional)* If a producer name is specified, that name will
+    be used; otherwise the broker will generate a unique name. The generated
+    producer name is guaranteed to be globally unique. Implementations are
+    expected to let the broker generate a new producer name when the producer
+    is initially created, then reuse it when recreating the producer after
+    reconnections.
+
+The broker will reply with either `ProducerSuccess` or `Error` commands.
+
+##### Command ProducerSuccess
+
+```protobuf
+message CommandProducerSuccess {
+  "request_id" :  1,
+  "producer_name" : "generated-unique-producer-name"
+}
+```
+
+Parameters:
+ * `request_id` → Original id of the `CommandProducer` request
+ * `producer_name` → Generated globally unique producer name, or the name
+    specified by the client, if any.
+
+##### Command Send
+
+Command `Send` is used to publish a new message within the context of an
+already existing producer. This command is used in a frame that includes the
+command as well as the message payload, for which the complete format is
+specified in the [payload commands](#payload-commands) section.
+
+```protobuf
+message CommandSend {
+  "producer_id" : 1,
+  "sequence_id" : 0,
+  "num_messages" : 1
+}
+```
+
+Parameters:
+ * `producer_id` → id of an existing producer
+ * `sequence_id` → each message has an associated sequence id, which is expected
+   to be implemented with a counter starting at 0. The `SendReceipt` that
+   acknowledges the effective publishing of a message will refer to it by
+   its sequence id.
+ * `num_messages` → *(optional)* Used when publishing a batch of messages at
+   once.
+
+##### Command SendReceipt
+
+After a message has been persisted on the configured number of replicas, the
+broker will send the acknowledgment receipt to the producer.
+
+
+```protobuf
+message CommandSendReceipt {
+  "producer_id" : 1,
+  "sequence_id" : 0,
+  "message_id" : {
+    "ledgerId" : 123,
+    "entryId" : 456
+  }
+}
+```
+
+Parameters:
+ * `producer_id` → id of the producer originating the send request
+ * `sequence_id` → sequence id of the published message
+ * `message_id` → message id assigned by the system to the published message.
+   Unique within a single cluster. The message id is composed of 2 longs,
+   `ledgerId` and `entryId`, reflecting that this unique id is assigned when
+   appending to a BookKeeper ledger
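
A producer implementation typically keeps the in-flight messages keyed by sequence id and resolves each entry when the matching `SendReceipt` arrives. A minimal sketch (illustrative Python, not actual client code):

```python
class PendingSends:
    """Correlates SendReceipt commands with in-flight messages by
    sequence id, as described in the Send / SendReceipt commands above."""

    def __init__(self):
        self.next_sequence_id = 0   # counter starting at 0
        self.pending = {}           # sequence_id -> payload awaiting a receipt

    def send(self, payload):
        # Assign the next sequence id and remember the message until acked.
        seq = self.next_sequence_id
        self.next_sequence_id += 1
        self.pending[seq] = payload
        return seq

    def on_send_receipt(self, sequence_id, message_id):
        # The receipt refers to the message by its sequence id.
        payload = self.pending.pop(sequence_id)
        return payload, message_id
```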
+
+
+##### Command CloseProducer
+
+**Note**: *This command can be sent by either producer or broker*.
+
+When receiving a `CloseProducer` command, the broker will stop accepting any
+more messages for the producer, wait until all pending messages are persisted
+and then reply `Success` to the client.
+
+The broker can send a `CloseProducer` command to the client when it's performing
+a graceful failover (e.g. the broker is being restarted, or the topic is being
+unloaded by the load balancer to be transferred to a different broker).
+
+When receiving the `CloseProducer`, the client is expected to go through the
+service discovery lookup again and recreate the producer. The TCP
+connection is not affected.
+
+### Consumer
+
+A consumer is used to attach to a subscription and consume messages from it.
+After every reconnection, a client needs to subscribe to the topic. If a
+subscription is not already there, a new one will be created.
+
+![Consumer](assets/binary-protocol-consumer.png)
+
+#### Flow control
+
+After the consumer is ready, the client needs to *give permission* to the
+broker to push messages. This is done with the `Flow` command.
+
+A `Flow` command gives additional *permits* to send messages to the consumer.
+A typical consumer implementation will use a queue to accumulate these messages
+before the application is ready to consume them.
+
+After the application has dequeued a number of messages, the consumer will
+send additional permits to allow the broker to push more messages.
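
The permit accounting can be sketched as follows (illustrative Python; the "re-grant after half the queue is consumed" policy is an assumption about a typical client implementation, not something mandated by the protocol):

```python
class ConsumerFlow:
    """Sketch of Flow-command permit accounting: the broker may push one
    message per granted permit; the consumer re-grants permits as the
    application dequeues messages from its local queue."""

    def __init__(self, receiver_queue_size=1000):
        self.queue_size = receiver_queue_size
        self.available_permits = 0
        self.dequeued_since_last_flow = 0

    def initial_flow(self):
        # On subscribe, grant permits for a full receiver queue.
        self.available_permits = self.queue_size
        return self.queue_size  # messagePermits for the first Flow command

    def on_message_pushed(self):
        # Each pushed message consumes one permit.
        self.available_permits -= 1

    def on_application_dequeue(self):
        # Once half the queue has been consumed, grant more permits.
        self.dequeued_since_last_flow += 1
        if self.dequeued_since_last_flow >= self.queue_size // 2:
            permits = self.dequeued_since_last_flow
            self.dequeued_since_last_flow = 0
            self.available_permits += permits
            return permits  # send a Flow command with this many permits
        return 0  # no Flow command needed yet
```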
+
+##### Command Subscribe
+
+```protobuf
+message CommandSubscribe {
+  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
+  "subscription" : "my-subscription-name",
+  "subType" : "Exclusive",
+  "consumer_id" : 1,
+  "request_id" : 1
+}
+```
+
+Parameters:
+ * `topic` → Complete name of the topic on which to create the consumer
+ * `subscription` → Subscription name
+ * `subType` → Subscription type: Exclusive, Shared, Failover
+ * `consumer_id` → Client generated consumer identifier. Needs to be unique
+    within the same connection
+ * `request_id` → Identifier for this request. Used to match the response with
+    the originating request. Needs to be unique within the same connection
+ * `consumer_name` → *(optional)* Clients can specify a consumer name. This
+    name can be used to track a particular consumer in the stats. Also, in the
+    Failover subscription type, the name is used to decide which consumer is
+    elected as *master* (the one receiving messages): consumers are sorted by
+    their consumer name and the first one is elected master.
+
+##### Command Flow
+
+```protobuf
+message CommandFlow {
+  "consumer_id" : 1,
+  "messagePermits" : 1000
+}
+```
+
+Parameters:
+* `consumer_id` → Id of an already established consumer
+* `messagePermits` → Number of additional permits to grant to the broker for
+    pushing more messages
+
+##### Command Message
+
+Command `Message` is used by the broker to push messages to an existing consumer,
+within the limits of the given permits.
+
+
+This command is used in a frame that includes the message payload as well, for
+which the complete format is specified in the [payload commands](#payload-commands)
+section.
+
+```protobuf
+message CommandMessage {
+  "consumer_id" : 1,
+  "message_id" : {
+    "ledgerId" : 123,
+    "entryId" : 456
+  }
+}
+```
+
+
+##### Command Ack
+
+An `Ack` is used to signal to the broker that a given message has been
+successfully processed by the application and can be discarded by the broker.
+
+In addition, the broker will also maintain the consumer position based on the
+acknowledged messages.
+
+```protobuf
+message CommandAck {
+  "consumer_id" : 1,
+  "ack_type" : "Individual",
+  "message_id" : {
+    "ledgerId" : 123,
+    "entryId" : 456
+  }
+}
+```
+
+Parameters:
+ * `consumer_id` → Id of an already established consumer
+ * `ack_type` → Type of acknowledgment: `Individual` or `Cumulative`
+ * `message_id` → Id of the message to acknowledge
+ * `validation_error` → *(optional)* Indicates that the consumer has discarded
+   the messages due to: `UncompressedSizeCorruption`,
+   `DecompressionError`, `ChecksumMismatch`, `BatchDeSerializeError`
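
The difference between the two ack types can be sketched as follows (illustrative Python, not actual broker code; message ids are modeled as `(ledgerId, entryId)` tuples, which compare in the correct order):

```python
class SubscriptionCursor:
    """Sketch of how a broker can maintain the consumer position from
    Ack commands: Individual acks accumulate in a set, while a
    Cumulative ack advances the position past everything up to and
    including the given message id."""

    def __init__(self):
        self.mark_delete_position = None  # last cumulatively-acked id
        self.individually_acked = set()

    def ack(self, ack_type, message_id):
        if ack_type == "Cumulative":
            # Everything up to and including message_id is acknowledged.
            self.mark_delete_position = message_id
            self.individually_acked = {
                m for m in self.individually_acked if m > message_id
            }
        else:  # "Individual"
            self.individually_acked.add(message_id)
```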
+
+##### Command CloseConsumer
+
+***Note***: *This command can be sent by either consumer or broker*.
+
+This command behaves the same as [`CloseProducer`](#command-closeproducer).
+
+##### Command RedeliverUnacknowledgedMessages
+
+A consumer can ask the broker to redeliver some or all of the pending messages
+that were pushed to that particular consumer and not yet acknowledged.
+
+The protobuf object accepts a list of message ids that the consumer wants to
+be redelivered. If the list is empty, the broker will redeliver all the
+pending messages.
+
+On redelivery, messages can be sent to the same consumer or, in the case of a
+shared subscription, spread across all available consumers.
+
+
+##### Command ReachedEndOfTopic
+
+This is sent by a broker to a particular consumer, whenever the topic
+has been "terminated" and all the messages on the subscription were
+acknowledged.
+
+The client should use this command to notify the application that no more
+messages are coming from the consumer.
+
+##### Command ConsumerStats
+
+This command is sent by the client to retrieve Subscriber and Consumer level
+stats from the broker.
+
+Parameters:
+ * `request_id` → Id of the request, used to correlate the request
+      and the response.
+ * `consumer_id` → Id of an already established consumer.
+
+##### Command ConsumerStatsResponse
+
+This is the broker's response to the `ConsumerStats` request by the client.
+It contains the Subscriber and Consumer level stats of the `consumer_id` sent in the request.
+If the `error_code` or the `error_message` field is set, it indicates that the request has failed.
+
+##### Command Unsubscribe
+
+This command is sent by the client to unsubscribe the `consumer_id` from the associated topic.
+
+Parameters:
+ * `request_id` → Id of the request.
+ * `consumer_id` → Id of an already established consumer which needs to unsubscribe.
+
+
+## Service discovery
+
+### Topic lookup
+
+Topic lookup needs to be performed each time a client needs to create or
+reconnect a producer or a consumer. Lookup is used to discover which particular
+broker is serving the topic we are about to use.
+
+Lookup can be done with a REST call as described in the
+[admin API](admin-api-persistent-topics.md#lookup-of-topic)
+docs.
+
+Since Pulsar 1.16 it is also possible to perform the lookup within the binary
+protocol.
+
+For the sake of example, let's assume we have a service discovery component
+running at `pulsar://broker.example.com:6650`.
+
+Individual brokers will be running at `pulsar://broker-1.example.com:6650`,
+`pulsar://broker-2.example.com:6650`, ...
+
+A client can use a connection to the discovery service host to issue a
+`LookupTopic` command. The response can either be a broker hostname to
+connect to, or a broker hostname against which to retry the lookup.
+
+The `LookupTopic` command has to be used in a connection that has already
+gone through the `Connect` / `Connected` initial handshake.
+
+![Topic lookup](assets/binary-protocol-topic-lookup.png)
+
+```protobuf
+message CommandLookupTopic {
+  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
+  "request_id" : 1,
+  "authoritative" : false
+}
+```
+
+Fields:
+ * `topic` → Topic name to look up
+ * `request_id` → Id of the request that will be passed with its response
+ * `authoritative` → The initial lookup request should use `false`. When
+   following a redirect response, the client should pass the same value
+   contained in the response
+
+##### LookupTopicResponse
+
+Example of response with successful lookup:
+
+```protobuf
+message CommandLookupTopicResponse {
+  "request_id" : 1,
+  "response" : "Connect",
+  "brokerServiceUrl" : "pulsar://broker-1.example.com:6650",
+  "brokerServiceUrlTls" : "pulsar+ssl://broker-1.example.com:6651",
+  "authoritative" : true
+}
+```
+
+Example of lookup response with redirection:
+
+```protobuf
+message CommandLookupTopicResponse {
+  "request_id" : 1,
+  "response" : "Redirect",
+  "brokerServiceUrl" : "pulsar://broker-2.example.com:6650",
+  "brokerServiceUrlTls" : "pulsar+ssl://broker-2.example.com:6651",
+  "authoritative" : true
+}
+```
+
+In this second case, we need to reissue the `LookupTopic` command request
+to `broker-2.example.com` and this broker will be able to give a definitive
+answer to the lookup request.
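
The redirect-following loop can be sketched like this (illustrative Python; `connect` stands in for whatever mechanism the client uses to obtain a connection to a given service URL, and is an assumption of this sketch):

```python
def lookup_topic(connect, service_url, topic, max_redirects=20):
    """Follow Redirect responses, passing back the returned
    `authoritative` flag, until some broker answers Connect."""
    authoritative = False  # initial lookup request uses false
    for _ in range(max_redirects):
        resp = connect(service_url).lookup(topic, authoritative)
        if resp["response"] == "Connect":
            # Definitive answer: this is the broker serving the topic.
            return resp["brokerServiceUrl"]
        # Redirect: reissue the lookup against the indicated broker,
        # carrying over the authoritative flag from the response.
        service_url = resp["brokerServiceUrl"]
        authoritative = resp["authoritative"]
    raise RuntimeError("too many lookup redirects")
```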
+
+### Partitioned topics discovery
+
+Partitioned topics metadata discovery is used to find out if a topic is a
+"partitioned topic" and how many partitions were set up.
+
+If the topic is marked as "partitioned", the client is expected to create
+multiple producers or consumers, one for each partition, using the `partition-X`
+suffix.
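
For example, a client that learns a topic has N partitions can derive the per-partition topic names like this (sketch; the `-partition-X` suffix convention is as described above):

```python
def partition_topic_names(topic, partitions):
    """Expand a partitioned topic into per-partition topic names; a
    partition count of 0 means the topic is not partitioned."""
    if partitions == 0:
        return [topic]
    return ["%s-partition-%d" % (topic, i) for i in range(partitions)]
```

The client then creates one producer or consumer per entry in the returned list.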
+
+This information only needs to be retrieved the first time a producer or
+consumer is created. There is no need to do this after reconnections.
+
+The discovery of partitioned topics metadata works very similarly to the topic
+lookup. The client sends a request to the service discovery address and the
+response will contain the actual metadata.
+
+##### Command PartitionedTopicMetadata
+
+```protobuf
+message CommandPartitionedTopicMetadata {
+  "topic" : "persistent://my-property/my-cluster/my-namespace/my-topic",
+  "request_id" : 1
+}
+```
+
+Fields:
+ * `topic` → The topic for which to check the partition metadata
+ * `request_id` → Id of the request that will be passed with its response
+
+
+##### Command PartitionedTopicMetadataResponse
+
+Example of response with metadata:
+
+```protobuf
+message CommandPartitionedTopicMetadataResponse {
+  "request_id" : 1,
+  "response" : "Success",
+  "partitions" : 32
+}
+```
+
+## Protobuf interface
+
+All Pulsar's Protobuf definitions can be found {@inject: github:here:/pulsar-common/src/main/proto/PulsarApi.proto}.
diff --git a/site2/website/versioned_docs/version-2.2.0/developing-cpp.md b/site2/website/versioned_docs/version-2.2.0/developing-cpp.md
new file mode 100644
index 0000000000..d47c409378
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/developing-cpp.md
@@ -0,0 +1,101 @@
+---
+id: version-2.2.0-develop-cpp
+title: Building Pulsar C++ client
+sidebar_label: Building Pulsar C++ client
+original_id: develop-cpp
+---
+
+## Supported platforms
+
+The Pulsar C++ client has been successfully tested on **MacOS** and **Linux**.
+
+## System requirements
+
+You need to have the following installed to use the C++ client:
+
+* [CMake](https://cmake.org/)
+* [Boost](http://www.boost.org/)
+* [Protocol Buffers](https://developers.google.com/protocol-buffers/) 2.6
+* [Log4CXX](https://logging.apache.org/log4cxx)
+* [libcurl](https://curl.haxx.se/libcurl/)
+* [Google Test](https://github.com/google/googletest)
+* [JsonCpp](https://github.com/open-source-parsers/jsoncpp)
+
+## Compilation
+
+There are separate compilation instructions for [MacOS](#macos) and [Linux](#linux). For both systems, start by cloning the Pulsar repository:
+
+```shell
+$ git clone https://github.com/apache/pulsar
+```
+
+### Linux
+
+First, install all of the necessary dependencies:
+
+```shell
+$ apt-get install cmake libssl-dev libcurl4-openssl-dev liblog4cxx-dev \
+  libprotobuf-dev libboost-all-dev google-mock libgtest-dev libjsoncpp-dev
+```
+
+Then compile and install [Google Test](https://github.com/google/googletest):
+
+```shell
+# libgtest-dev version is 1.8.0 or above
+$ cd /usr/src/googletest
+$ sudo cmake .
+$ sudo make
+$ sudo cp ./googlemock/libgmock.a ./googlemock/gtest/libgtest.a /usr/lib/
+
+# libgtest-dev version is less than 1.8.0
+$ cd /usr/src/gtest
+$ sudo cmake .
+$ sudo make
+$ sudo cp libgtest.a /usr/lib
+
+$ cd /usr/src/gmock
+$ sudo cmake .
+$ sudo make
+$ sudo cp libgmock.a /usr/lib
+```
+
+Finally, compile the Pulsar client library for C++ inside the Pulsar repo:
+
+```shell
+$ cd pulsar-client-cpp
+$ cmake .
+$ make
+```
+
+The resulting files, `libpulsar.so` and `libpulsar.a`, will be placed in the `lib` folder of the repo while two tools, `perfProducer` and `perfConsumer`, will be placed in the `perf` directory.
+
+### MacOS
+
+First, install all of the necessary dependencies:
+
+```shell
+# OpenSSL installation
+$ brew install openssl
+$ export OPENSSL_INCLUDE_DIR=/usr/local/opt/openssl/include/
+$ export OPENSSL_ROOT_DIR=/usr/local/opt/openssl/
+
+# Protocol Buffers installation
+$ brew tap homebrew/versions
+$ brew install protobuf260
+$ brew install boost
+$ brew install log4cxx
+
+# Google Test installation
+$ git clone https://github.com/google/googletest.git
+$ cd googletest
+$ cmake .
+$ make install
+```
+
+Then compile the Pulsar client library in the repo that you cloned:
+
+```shell
+$ cd pulsar-client-cpp
+$ cmake .
+$ make
+```
diff --git a/site2/website/versioned_docs/version-2.2.0/developing-load-manager.md b/site2/website/versioned_docs/version-2.2.0/developing-load-manager.md
new file mode 100644
index 0000000000..18c782f472
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/developing-load-manager.md
@@ -0,0 +1,215 @@
+---
+id: version-2.2.0-develop-load-manager
+title: Modular load manager
+sidebar_label: Modular load manager
+original_id: develop-load-manager
+---
+
+The *modular load manager*, implemented in [`ModularLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/ModularLoadManagerImpl.java), is a flexible alternative to the previously implemented load manager, [`SimpleLoadManagerImpl`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/SimpleLoadManagerImpl.java), which attempts to simplify how load is managed while also providing abstractions so that complex load management strategies may be implemented.
+
+## Usage
+
+There are two ways that you can enable the modular load manager:
+
+1. Change the value of the `loadManagerClassName` parameter in `conf/broker.conf` from `org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl` to `org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl`.
+2. Use the `pulsar-admin` tool. Here's an example:
+
+   ```shell
+   $ pulsar-admin brokers update-dynamic-config \
+     --config loadManagerClassName \
+     --value org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl
+   ```
+
+   You can use the same method to change back to the original value. In either case, any mistake in specifying the load manager will cause Pulsar to default to `SimpleLoadManagerImpl`.
+
+## Verification
+
+There are a few different ways to determine which load manager is being used:
+
+1. Use `pulsar-admin` to examine the `loadManagerClassName` element:
+
+   ```shell
+   $ bin/pulsar-admin brokers get-all-dynamic-config
+   {
+     "loadManagerClassName" : "org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl"
+   }
+   ```
+
+   If there is no `loadManagerClassName` element, then the default load manager is used.
+
+2. Consult a ZooKeeper load report. With the modular load manager, the load report in `/loadbalance/brokers/...` will have many differences. For example, the `systemResourceUsage` sub-elements (`bandwidthIn`, `bandwidthOut`, etc.) are now all at the top level. Here is an example load report from the modular load manager:
+
+    ```json
+    {
+      "bandwidthIn": {
+        "limit": 10240000.0,
+        "usage": 4.256510416666667
+      },
+      "bandwidthOut": {
+        "limit": 10240000.0,
+        "usage": 5.287239583333333
+      },
+      "bundles": [],
+      "cpu": {
+        "limit": 2400.0,
+        "usage": 5.7353247655435915
+      },
+      "directMemory": {
+        "limit": 16384.0,
+        "usage": 1.0
+      }
+    }
+    ```
+
+    With the simple load manager, the load report in `/loadbalance/brokers/...` will look like this:
+
+    ```json
+    {
+      "systemResourceUsage": {
+        "bandwidthIn": {
+          "limit": 10240000.0,
+          "usage": 0.0
+        },
+        "bandwidthOut": {
+          "limit": 10240000.0,
+          "usage": 0.0
+        },
+        "cpu": {
+          "limit": 2400.0,
+          "usage": 0.0
+        },
+        "directMemory": {
+          "limit": 16384.0,
+          "usage": 1.0
+        },
+        "memory": {
+          "limit": 8192.0,
+          "usage": 3903.0
+        }
+      }
+    }
+    ```
+
+3. The command-line [broker monitor](reference-cli-tools.md#monitor-brokers) will have a different output format depending on which load manager implementation is being used.
+
+    Here is an example from the modular load manager:
+
+    ```
+    ===================================================================================================================
+    ||SYSTEM         |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
+    ||               |0.00           |48.33          |0.01           |0.00           |0.00           |48.33          ||
+    ||COUNT          |TOPIC          |BUNDLE         |PRODUCER       |CONSUMER       |BUNDLE +       |BUNDLE -       ||
+    ||               |4              |4              |0              |2              |4              |0              ||
+    ||LATEST         |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
+    ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
+    ||SHORT          |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
+    ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
+    ||LONG           |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
+    ||               |0.00           |0.00           |0.00           |0.00           |0.00           |0.00           ||
+    ===================================================================================================================
+    ```
+
+    Here is an example from the simple load manager:
+
+    ```
+    ===================================================================================================================
+    ||COUNT          |TOPIC          |BUNDLE         |PRODUCER       |CONSUMER       |BUNDLE +       |BUNDLE -       ||
+    ||               |4              |4              |0              |2              |0              |0              ||
+    ||RAW SYSTEM     |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
+    ||               |0.25           |47.94          |0.01           |0.00           |0.00           |47.94          ||
+    ||ALLOC SYSTEM   |CPU %          |MEMORY %       |DIRECT %       |BW IN %        |BW OUT %       |MAX %          ||
+    ||               |0.20           |1.89           |               |1.27           |3.21           |3.21           ||
+    ||RAW MSG        |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
+    ||               |0.00           |0.00           |0.00           |0.01           |0.01           |0.01           ||
+    ||ALLOC MSG      |MSG/S IN       |MSG/S OUT      |TOTAL          |KB/S IN        |KB/S OUT       |TOTAL          ||
+    ||               |54.84          |134.48         |189.31         |126.54         |320.96         |447.50         ||
+    ===================================================================================================================
+    ```
+
+It is important to note that the modular load manager is _centralized_, meaning that all requests to assign a bundle---whether it's been seen before or whether this is the first time---only get handled by the _lead_ broker (which can change over time). To determine the current lead broker, examine the `/loadbalance/leader` node in ZooKeeper.
+
+## Implementation
+
+### Data
+
+The data monitored by the modular load manager is contained in the [`LoadData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/LoadData.java) class.
+Here, the available data is subdivided into the bundle data and the broker data.
+
+#### Broker
+
+The broker data is contained in the [`BrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BrokerData.java) class. It is further subdivided into two parts,
+one being the local data which every broker individually writes to ZooKeeper, and the other being the historical broker
+data which is written to ZooKeeper by the leader broker.
+
+##### Local Broker Data
+The local broker data is contained in the class [`LocalBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-common/src/main/java/org/apache/pulsar/policies/data/loadbalancer/LocalBrokerData.java) and provides information about the following resources:
+
+* CPU usage
+* JVM heap memory usage
+* Direct memory usage
+* Bandwidth in/out usage
+* Most recent total message rate in/out across all bundles
+* Total number of topics, bundles, producers, and consumers
+* Names of all bundles assigned to this broker
+* Most recent changes in bundle assignments for this broker
+
+The local broker data is updated periodically according to the service configuration
+`loadBalancerReportUpdateMaxIntervalMinutes`. After any broker updates its local broker data, the leader broker will
+receive the update immediately via a ZooKeeper watch, where the local data is read from the ZooKeeper node
+`/loadbalance/brokers/<broker host/port>`.
+
+##### Historical Broker Data
+
+The historical broker data is contained in the [`TimeAverageBrokerData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/TimeAverageBrokerData.java) class.
+
+In order to reconcile the need to make good decisions in a steady-state scenario and make reactive decisions in a critical scenario, the historical data is split into two parts: the short-term data for reactive decisions, and the long-term data for steady-state decisions. Both time frames maintain the following information:
+
+* Message rate in/out for the entire broker
+* Message throughput in/out for the entire broker
+
+Unlike the bundle data, the broker data does not maintain samples for the global broker message rates and throughputs, which are not expected to remain steady as bundles are removed or added. Instead, this data is aggregated over the short-term and long-term data for the bundles. See the section on bundle data to understand how that data is collected and maintained.
+
+The historical broker data is updated for each broker in memory by the leader broker whenever any broker writes their local data to ZooKeeper. Then, the historical data is written to ZooKeeper by the leader broker periodically according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.
+
+##### Bundle Data
+
+The bundle data is contained in the [`BundleData`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/BundleData.java) class. Like the historical broker data, the bundle data is split into a short-term and a long-term time frame. Each time frame maintains the following information:
+
+* Message rate in/out for this bundle
+* Message throughput in/out for this bundle
+* Current number of samples for this bundle
+
+The time frames are implemented by maintaining the average of these values over a set, limited number of samples, where
+the samples are obtained through the message rate and throughput values in the local data. Thus, if the update interval
+for the local data is 2 minutes, the number of short samples is 10 and the number of long samples is 1000, the
+short-term data is maintained over a period of `10 samples * 2 minutes / sample = 20 minutes`, while the long-term
+data is similarly over a period of 2000 minutes. Whenever there are not enough samples to satisfy a given time frame,
+the average is taken only over the existing samples. When no samples are available, default values are assumed until
+they are overwritten by the first sample. Currently, the default values are:
+
+* Message rate in/out: 50 messages per second both ways
+* Message throughput in/out: 50 KB per second both ways
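
The bounded-sample averaging and the default values can be sketched as follows (illustrative Python; with a 2-minute update interval and 10 short samples, the short-term window spans 20 minutes, as computed above):

```python
from collections import deque

class TimeAverage:
    """Average over at most `max_samples` recent samples; with no
    samples yet, a configured default value is assumed."""

    def __init__(self, max_samples, default):
        self.samples = deque(maxlen=max_samples)  # old samples fall off
        self.default = default

    def add_sample(self, value):
        self.samples.append(value)

    def average(self):
        if not self.samples:
            return self.default  # e.g. 50 msg/s until the first sample
        # With fewer samples than the window, average only what exists.
        return sum(self.samples) / len(self.samples)
```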
+
+The bundle data is updated in memory on the leader broker whenever any broker writes their local data to ZooKeeper.
+Then, the bundle data is written to ZooKeeper by the leader broker periodically at the same time as the historical
+broker data, according to the configuration `loadBalancerResourceQuotaUpdateIntervalMinutes`.
+
+### Traffic Distribution
+
+The modular load manager uses the abstraction provided by [`ModularLoadManagerStrategy`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/ModularLoadManagerStrategy.java) to make decisions about bundle assignment. The strategy makes a decision by considering the service configuration, the entire load data, and the bundle data for the bundle to be assigned. Currently, the only supported strategy is [`LeastLongTermMessageRate`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/loadbalance/impl/LeastLongTermMessageRate.java), though soon users will have the ability to inject their own strategies if desired.
+
+#### Least Long Term Message Rate Strategy
+
+As its name suggests, the least long term message rate strategy attempts to distribute bundles across brokers so that
+the message rate in the long-term time window for each broker is roughly the same. However, simply balancing load based
+on message rate does not handle the issue of asymmetric resource burden per message on each broker. Thus, the system
+resource usages, which are CPU, memory, direct memory, bandwidth in, and bandwidth out, are also considered in the
+assignment process. This is done by weighting the final message rate according to
+`1 / (overload_threshold - max_usage)`, where `overload_threshold` corresponds to the configuration
+`loadBalancerBrokerOverloadedThresholdPercentage` and `max_usage` is the maximum proportion among the system resources
+that is being utilized by the candidate broker. This multiplier ensures that machines that are more heavily taxed
+by the same message rates will receive less load. In particular, it tries to ensure that if one machine is overloaded,
+then all machines are approximately overloaded. In the case in which a broker's max usage exceeds the overload
+threshold, that broker is not considered for bundle assignment. If all brokers are overloaded, the bundle is randomly
+assigned.
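
The weighting can be illustrated numerically (sketch; the threshold is expressed here as a fraction, and the 0.85 default is an assumption for illustration only):

```python
def weighted_message_rate(long_term_rate, max_usage, overload_threshold=0.85):
    """Weight a broker's long-term message rate by
    1 / (overload_threshold - max_usage): brokers closer to overload
    appear more loaded. Returns None when the broker exceeds the
    threshold and must be excluded from bundle assignment."""
    if max_usage >= overload_threshold:
        return None  # broker is overloaded; not a candidate
    return long_term_rate / (overload_threshold - max_usage)
```

With the same raw message rate, a broker at 75% max usage looks several times more loaded than one at 35%, so the strategy steers new bundles toward the less taxed machine.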
+
diff --git a/site2/website/versioned_docs/version-2.2.0/developing-schema.md b/site2/website/versioned_docs/version-2.2.0/developing-schema.md
new file mode 100644
index 0000000000..498bddfc89
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/developing-schema.md
@@ -0,0 +1,58 @@
+---
+id: version-2.2.0-develop-schema
+title: Custom schema storage
+sidebar_label: Custom schema storage
+original_id: develop-schema
+---
+
+By default, Pulsar stores data type [schemas](concepts-schema-registry.md) in [Apache BookKeeper](https://bookkeeper.apache.org) (which is deployed alongside Pulsar). You can, however, use another storage system if you wish. This doc walks you through creating your own schema storage implementation.
+
+In order to use a non-default (i.e. non-BookKeeper) storage system for Pulsar schemas, you need to implement two Java interfaces: [`SchemaStorage`](#schemastorage-interface) and [`SchemaStorageFactory`](#schemastoragefactory-interface).
+
+## SchemaStorage interface
+
+The `SchemaStorage` interface has the following methods:
+
+```java
+public interface SchemaStorage {
+    // How schemas are updated
+    CompletableFuture<SchemaVersion> put(String key, byte[] value, byte[] hash);
+
+    // How schemas are fetched from storage
+    CompletableFuture<StoredSchema> get(String key, SchemaVersion version);
+
+    // How schemas are deleted
+    CompletableFuture<SchemaVersion> delete(String key);
+
+    // Utility method for converting a schema version byte array to a SchemaVersion object
+    SchemaVersion versionFromBytes(byte[] version);
+
+    // Startup behavior for the schema storage client
+    void start() throws Exception;
+
+    // Shutdown behavior for the schema storage client
+    void close() throws Exception;
+}
+```
+
+> For a full-fledged example schema storage implementation, see the [`BookkeeperSchemaStorage`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorage.java) class.
+
+## SchemaStorageFactory interface 
+
+```java
+public interface SchemaStorageFactory {
+    @NotNull
+    SchemaStorage create(PulsarService pulsar) throws Exception;
+}
+```
+
+> For a full-fledged example schema storage factory implementation, see the [`BookkeeperSchemaStorageFactory`](https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/schema/BookkeeperSchemaStorageFactory.java) class.
+
+## Deployment
+
+In order to use your custom schema storage implementation, you'll need to:
+
+1. Package the implementation in a [JAR](https://docs.oracle.com/javase/tutorial/deployment/jar/basicsindex.html) file.
+1. Add that jar to the `lib` folder in your Pulsar [binary or source distribution](getting-started-standalone.md#installing-pulsar).
+1. Change the `schemaRegistryStorageClassName` configuration in [`broker.conf`](reference-configuration.md#broker) to your custom factory class (i.e. the `SchemaStorageFactory` implementation, not the `SchemaStorage` implementation).
+1. Start up Pulsar.
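For example, the configuration change in step 3 is a single line in `broker.conf` (the factory class name below is a hypothetical placeholder):

```properties
# Point the broker at your SchemaStorageFactory implementation
schemaRegistryStorageClassName=com.example.schema.CustomSchemaStorageFactory
```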
diff --git a/site2/website/versioned_docs/version-2.2.0/functions-api.md b/site2/website/versioned_docs/version-2.2.0/functions-api.md
new file mode 100644
index 0000000000..a6950c1915
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/functions-api.md
@@ -0,0 +1,721 @@
+---
+id: version-2.2.0-functions-api
+title: The Pulsar Functions API
+sidebar_label: API
+original_id: functions-api
+---
+
+[Pulsar Functions](functions-overview.md) provides an easy-to-use API that developers can use to create and manage processing logic for the Apache Pulsar messaging system. With Pulsar Functions, you can write functions of any level of complexity in [Java](#functions-for-java) or [Python](#functions-for-python) and run them in conjunction with a Pulsar cluster without needing to run a separate stream processing engine.
+
+> For a more in-depth overview of the Pulsar Functions feature, see the [Pulsar Functions overview](functions-overview.md).
+
+## Core programming model
+
+Pulsar Functions provide a wide range of functionality but are based on a very simple programming model. You can think of Pulsar Functions as lightweight processes that
+
+* consume messages from one or more Pulsar topics and then
+* apply some user-defined processing logic to each incoming message. That processing logic could be just about anything you want, including
+  * producing the resulting, processed message on another Pulsar topic, or
+  * doing something else with the message, such as writing results to an external database.
+
+You could use Pulsar Functions, for example, to set up the following processing chain:
+
+* A [Python](#functions-for-python) function listens on the `raw-sentences` topic and "[sanitizes](#example-function)" incoming strings (removing extraneous whitespace and converting all characters to lower case) and then publishes the results to a `sanitized-sentences` topic
+* A [Java](#functions-for-java) function listens on the `sanitized-sentences` topic, counts the number of times each word appears within a specified time window, and publishes the results to a `results` topic
+* Finally, a Python function listens on the `results` topic and writes the results to a MySQL table
+
+### Example function
+
+Here's an example "input sanitizer" function written in Python and stored in a `sanitizer.py` file:
+
+```python
+def clean_string(s):
+    return s.strip().lower()
+
+def process(input):
+    return clean_string(input)
+```
+
+Some things to note about this Pulsar Function:
+
+* There is no client, producer, or consumer object involved. All message "plumbing" is already taken care of for you, enabling you to worry only about processing logic.
+* No topics, subscription types, tenants, or namespaces are specified in the function logic itself. Instead, topics are specified upon [deployment](#example-deployment). This means that you can use and re-use Pulsar Functions across topics, tenants, and namespaces without needing to hard-code those attributes.
+
+### Example deployment
+
+Deploying Pulsar Functions is handled by the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool, in particular the [`functions`](reference-pulsar-admin.md#functions) command. Here's an example command that would run our [sanitizer](#example-function) function from above in [local run](functions-deploying.md#local-run-mode) mode:
+
+```bash
+$ bin/pulsar-admin functions localrun \
+  --py sanitizer.py \          # The Python file with the function's code
+  --classname sanitizer \      # The class or function holding the processing logic
+  --tenant public \            # The function's tenant (derived from the topic name by default)
+  --namespace default \        # The function's namespace (derived from the topic name by default)
+  --name sanitizer-function \  # The name of the function (the class name by default)
+  --inputs dirty-strings-in \  # The input topic(s) for the function
+  --output clean-strings-out \ # The output topic for the function
+  --log-topic sanitizer-logs   # The topic to which all functions logs are published
+```
+
+For instructions on running functions in your Pulsar cluster, see the [Deploying Pulsar Functions](functions-deploying.md) guide.
+
+### Available APIs
+
+In both Java and Python, you have two options for writing Pulsar Functions:
+
+Interface | Description | Use cases
+:---------|:------------|:---------
+Language-native interface | No Pulsar-specific libraries or special dependencies required (only core libraries from Java/Python) | Functions that don't require access to the function's [context](#context)
+Pulsar Function SDK for Java/Python | Pulsar-specific libraries that provide a range of functionality not provided by "native" interfaces | Functions that require access to the function's [context](#context)
+
+In Python, for example, this language-native function, which adds an exclamation point to all incoming strings and publishes the resulting string to a topic, would have no external dependencies:
+
+```python
+def process(input):
+    return "{}!".format(input)
+```
+
+This function, however, would use the Pulsar Functions [SDK for Python](#python-sdk-functions):
+
+```python
+from pulsar import Function
+
+class DisplayFunctionName(Function):
+    def process(self, input, context):
+        function_name = context.function_name()
+        return "The function processing this message has the name {0}".format(function_name)
+```
+
+### Functions, Messages and Message Types
+
+Pulsar Functions can take byte arrays as inputs and spit out byte arrays as output. However, in languages that support typed interfaces (just Java at the moment), you can write typed functions as well. In this scenario, there are two ways to bind messages to types:
+* [Schema Registry](#schema-registry)
+* [SerDe](#serde)
+
+### Schema Registry
+Pulsar has a built-in [Schema Registry](concepts-schema-registry) and comes bundled with a variety of popular schema types (Avro, JSON, and protobuf). Pulsar Functions can leverage existing schema information from input topics to derive the input type. The same applies to the output topic as well.
+
+### SerDe
+
+SerDe stands for **Ser**ialization and **De**serialization. All Pulsar Functions use SerDe for message handling. How SerDe works by default depends on the language you're using for a particular function:
+
+* In [Python](#python-serde), the default SerDe is identity, meaning that whatever the function returns is passed through unchanged, with no type conversion applied
+* In [Java](#java-serde), a number of commonly used types (`String`s, `Integer`s, etc.) are supported by default
+
+In both languages, however, you can write your own custom SerDe logic for more complex, application-specific types. See the docs for [Java](#java-serde) and [Python](#python-serde) for language-specific instructions.
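As a rough Python illustration of these two behaviors (the class names below are stand-ins, not Pulsar's actual `pulsar.functions.serde` classes), identity SerDe passes data through unchanged, while a pickle-based SerDe round-trips arbitrary Python objects:

```python
import pickle

class IdentitySerDeSketch:
    """Identity: data passes through unchanged in both directions."""
    def serialize(self, obj):
        return obj

    def deserialize(self, data):
        return data

class PickleSerDeSketch:
    """Round-trips arbitrary Python objects via the pickle module."""
    def serialize(self, obj):
        return pickle.dumps(obj)

    def deserialize(self, data):
        return pickle.loads(data)

identity = IdentitySerDeSketch()
roundtrip_bytes = identity.deserialize(identity.serialize(b"hello"))

pickled = PickleSerDeSketch()
roundtrip_obj = pickled.deserialize(pickled.serialize({"word": "verdure"}))
```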
+
+### Context
+
+Both the [Java](#java-sdk-functions) and [Python](#python-sdk-functions) SDKs provide access to a **context object** that can be used by the function. This context object provides a wide variety of information and functionality to the function:
+
+* The name and ID of the Pulsar Function
+* The message ID of each message. Each Pulsar message is automatically assigned an ID.
+* The name of the topic on which the message was sent
+* The names of all input topics as well as the output topic associated with the function
+* The name of the class used for [SerDe](#serde)
+* The [tenant](reference-terminology.md#tenant) and namespace associated with the function
+* The ID of the Pulsar Functions instance running the function
+* The version of the function
+* The [logger object](functions-overview.md#logging) used by the function, which can be used to create function log messages
+* Access to arbitrary [user config](#user-config) values supplied via the CLI
+* An interface for recording [metrics](functions-metrics.md)
+* An interface for storing and retrieving state in [state storage](functions-overview.md#state-storage)
+
+### User config
+
+When you run or update Pulsar Functions created using the [SDK](#available-apis), you can pass arbitrary key/values to them via the command line with the `--user-config` flag. Key/values must be specified as JSON. Here's an example of a function creation command that passes a user config key/value to a function:
+
+```bash
+$ bin/pulsar-admin functions create \
+  --name word-filter \
+  # Other function configs
+  --user-config '{"forbidden-word":"rosebud"}'
+```
+
+If the function were a Python function, that config value could be accessed like this:
+
+```python
+from pulsar import Function
+
+class WordFilter(Function):
+    def process(self, input, context):
+        forbidden_word = context.user_config()["forbidden-word"]
+
+        # Don't publish the message if it contains the user-supplied
+        # forbidden word
+        if forbidden_word in input:
+            pass
+        # Otherwise publish the message
+        else:
+            return input
+```
+
+## Functions for Java
+
+Writing Pulsar Functions in Java involves implementing one of two interfaces:
+
+* The [`java.util.function.Function`](https://docs.oracle.com/javase/8/docs/api/java/util/function/Function.html) interface
+* The {@inject: javadoc:Function:/pulsar-functions/org/apache/pulsar/functions/api/Function} interface. This interface works much like the `java.util.function.Function` interface, but with the important difference that it provides a {@inject: javadoc:Context:/pulsar-functions/org/apache/pulsar/functions/api/Context} object that you can use in a [variety of ways](#context)
+
+### Getting started
+
+In order to write Pulsar Functions in Java, you'll need to install the proper [dependencies](#dependencies) and package your function [as a JAR](#packaging).
+
+#### Dependencies
+
+How you get started writing Pulsar Functions in Java depends on which API you're using:
+
+* If you're writing a [Java native function](#java-native-functions), you won't need any external dependencies.
+* If you're writing a [Java SDK function](#java-sdk-functions), you'll need to import the `pulsar-functions-api` library.
+
+  Here's an example for a Maven `pom.xml` configuration file:
+
+  ```xml
+  <dependency>
+      <groupId>org.apache.pulsar</groupId>
+      <artifactId>pulsar-functions-api</artifactId>
+      <version>2.1.1-incubating</version>
+  </dependency>
+  ```
+
+  Here's an example for a Gradle `build.gradle` configuration file:
+
+  ```groovy
+  dependencies {
+    compile group: 'org.apache.pulsar', name: 'pulsar-functions-api', version: '2.1.1-incubating'
+  }
+  ```
+
+#### Packaging
+
+Whether you're writing Java Pulsar Functions using the [native](#java-native-functions) `java.util.function.Function` interface or using the [Java SDK](#java-sdk-functions), you'll need to package your function(s) as a "fat" JAR.
+
+> #### Starter repo
+> If you'd like to get up and running quickly, you can use [this repo](https://github.com/streamlio/pulsar-functions-java-starter), which contains the necessary Maven configuration to build a fat JAR as well as some example functions.
+
+### Java native functions
+
+If your function doesn't require access to its [context](#context), you can create a Pulsar Function by implementing the [`java.util.function.Function`](https://docs.oracle.com/javase/8/docs/api/java/util/function/Function.html) interface, which has this very simple, single-method signature:
+
+```java
+public interface Function<I, O> {
+    O apply(I input);
+}
+```
+
+Here's an example function that takes a string as its input, adds an exclamation point to the end of the string, and then publishes the resulting string:
+
+```java
+import java.util.function.Function;
+
+public class ExclamationFunction implements Function<String, String> {
+    @Override
+    public String apply(String input) {
+        return String.format("%s!", input);
+    }
+}
+```
+
+In general, you should use native functions when you don't need access to the function's [context](#context). If you *do* need access to the function's context, then we recommend using the [Pulsar Functions Java SDK](#java-sdk-functions).
+
+#### Java native examples
+
+There is one example Java native function in this {@inject: github:folder:/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples}:
+
+* {@inject: github:`JavaNativeExclamationFunction`:/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/JavaNativeExclamationFunction.java}
+
+### Java SDK functions
+
+To get started developing Pulsar Functions using the Java SDK, you'll need to add a dependency on the `pulsar-functions-api` artifact to your project. Instructions can be found [above](#dependencies).
+
+> An easy way to get up and running with Pulsar Functions in Java is to clone the [`pulsar-functions-java-starter`](https://github.com/streamlio/pulsar-functions-java-starter) repo and follow the instructions there.
+
+
+#### Java SDK examples
+
+There are several example Java SDK functions in this {@inject: github:folder:/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples}:
+
+Function name | Description
+:-------------|:-----------
+[`ContextFunction`](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/ContextFunction.java) | Illustrates [context](#context)-specific functionality like [logging](#java-logging) and [metrics](#java-metrics)
+[`WordCountFunction`](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/WordCountFunction.java) | Illustrates usage of Pulsar Function [state-storage](functions-overview.md#state-storage)
+[`ExclamationFunction`](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/ExclamationFunction.java) | A basic string manipulation function for the Java SDK
+[`LoggingFunction`](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/LoggingFunction.java) | A function that shows how [logging](#java-logging) works for Java
+[`PublishFunction`](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/PublishFunction.java) | Publishes results to a topic specified in the function's [user config](#java-user-config) (rather than on the function's output topic)
+[`UserConfigFunction`](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/UserConfigFunction.java) | A function that consumes [user-supplied configuration](#java-user-config) values
+[`UserMetricFunction`](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/UserMetricFunction.java) | A function that records metrics
+[`VoidFunction`](https://github.com/apache/pulsar/blob/master/pulsar-functions/java-examples/src/main/java/org/apache/pulsar/functions/api/examples/VoidFunction.java)  | A simple [void function](#void-functions)
+
+### Java context object
+
+The {@inject: javadoc:Context:/pulsar-functions/org/apache/pulsar/functions/api/Context} interface provides a number of methods that you can use to access the function's [context](#context). The various method signatures for the `Context` interface are listed below:
+
+```java
+public interface Context {
+    Record<?> getCurrentRecord();
+    Collection<String> getInputTopics();
+    String getOutputTopic();
+    String getOutputSchemaType();
+    String getTenant();
+    String getNamespace();
+    String getFunctionName();
+    String getFunctionId();
+    String getInstanceId();
+    String getFunctionVersion();
+    Logger getLogger();
+    void incrCounter(String key, long amount);
+    long getCounter(String key);
+    void putState(String key, ByteBuffer value);
+    ByteBuffer getState(String key);
+    Map<String, Object> getUserConfigMap();
+    Optional<Object> getUserConfigValue(String key);
+    Object getUserConfigValueOrDefault(String key, Object defaultValue);
+    void recordMetric(String metricName, double value);
+    <O> CompletableFuture<Void> publish(String topicName, O object, String schemaOrSerdeClassName);
+    <O> CompletableFuture<Void> publish(String topicName, O object);
+}
+```
+
+Here's an example function that uses several methods available via the `Context` object:
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+import org.slf4j.Logger;
+
+import java.util.stream.Collectors;
+
+public class ContextFunction implements Function<String, Void> {
+    public Void process(String input, Context context) {
+        Logger LOG = context.getLogger();
+        String inputTopics = context.getInputTopics().stream().collect(Collectors.joining(", "));
+        String functionName = context.getFunctionName();
+
+        String logMessage = String.format("A message with a value of \"%s\" has arrived on one of the following topics: %s\n",
+                input,
+                inputTopics);
+
+        LOG.info(logMessage);
+
+        String metricName = String.format("function-%s-messages-received", functionName);
+        context.recordMetric(metricName, 1);
+
+        return null;
+    }
+}
+```
+
+### Void functions
+
+Pulsar Functions can publish results to an output topic, but this isn't required. You can also have functions that simply produce a log, write results to a database, etc. Here's a function that writes a simple log every time a message is received:
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+import org.slf4j.Logger;
+
+public class LogFunction implements Function<String, Void> {
+    public Void process(String input, Context context) {
+        Logger LOG = context.getLogger();
+        LOG.info("The following message was received: {}", input);
+        return null;
+    }
+}
+```
+
+> When using Java functions in which the output type is `Void`, the function must *always* return `null`.
+
+### Java SerDe
+
+Pulsar Functions use [SerDe](#serde) when publishing data to and consuming data from Pulsar topics. When you're writing Pulsar Functions in Java, the following basic Java types are built in and supported by default:
+
+* `String`
+* `Double`
+* `Integer`
+* `Float`
+* `Long`
+* `Short`
+* `Byte`
+
+Both built-in and custom types are supported. To use a custom type, you need to supply your own SerDe by implementing this interface:
+
+```java
+public interface SerDe<T> {
+    T deserialize(byte[] input);
+    byte[] serialize(T input);
+}
+```
+
+#### Java SerDe example
+
+Imagine that you're writing Pulsar Functions in Java that are processing tweet objects. Here's a simple example `Tweet` class:
+
+```java
+public class Tweet {
+    private String username;
+    private String tweetContent;
+
+    public Tweet(String username, String tweetContent) {
+        this.username = username;
+        this.tweetContent = tweetContent;
+    }
+
+    // Standard setters and getters
+}
+```
+
+In order to be able to pass `Tweet` objects directly between Pulsar Functions, you'll need to provide a custom SerDe class. In the example below, `Tweet` objects are basically strings in which the username and tweet content are separated by a `|`.
+
+```java
+package com.example.serde;
+
+import org.apache.pulsar.functions.api.SerDe;
+
+import java.util.regex.Pattern;
+
+public class TweetSerde implements SerDe<Tweet> {
+    public Tweet deserialize(byte[] input) {
+        String s = new String(input);
+        String[] fields = s.split(Pattern.quote("|"));
+        return new Tweet(fields[0], fields[1]);
+    }
+
+    public byte[] serialize(Tweet input) {
+        return String.format("%s|%s", input.getUsername(), input.getTweetContent()).getBytes();
+    }
+}
+```
+
+To apply this custom SerDe to a particular Pulsar Function, you would need to:
+
+* Package the `Tweet` and `TweetSerde` classes into a JAR
+* Specify a path to the JAR and SerDe class name when deploying the function
+
+Here's an example [`create`](reference-pulsar-admin.md#create-1) operation:
+
+```bash
+$ bin/pulsar-admin functions create \
+  --jar /path/to/your.jar \
+  --output-serde-classname com.example.serde.TweetSerde \
+  # Other function attributes
+```
+
+> #### Custom SerDe classes must be packaged with your function JARs
+> Pulsar does not store your custom SerDe classes separately from your Pulsar Functions. That means that you'll need to always include your SerDe classes in your function JARs. If not, Pulsar will return an error.
+
+### Java logging
+
+Pulsar Functions that use the [Java SDK](#java-sdk-functions) have access to an [SLF4J](https://www.slf4j.org/) [`Logger`](https://www.slf4j.org/api/org/slf4j/Logger.html) object that can be used to produce logs at the chosen log level. Here's a simple example function that logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`:
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+import org.slf4j.Logger;
+
+public class LoggingFunction implements Function<String, Void> {
+    @Override
+    public Void process(String input, Context context) {
+        Logger LOG = context.getLogger();
+        String messageId = context.getCurrentRecord().toString();
+
+        if (input.contains("danger")) {
+            LOG.warn("A warning was received in message {}", messageId);
+        } else {
+            LOG.info("Message {} received\nContent: {}", messageId, input);
+        }
+
+        return null;
+    }
+}
+```
+
+If you want your function to produce logs, you need to specify a log topic when creating or running the function. Here's an example:
+
+```bash
+$ bin/pulsar-admin functions create \
+  --jar my-functions.jar \
+  --classname my.package.LoggingFunction \
+  --log-topic persistent://public/default/logging-function-logs \
+  # Other function configs
+```
+
+Now, all logs produced by the `LoggingFunction` above can be accessed via the `persistent://public/default/logging-function-logs` topic.
+
+### Java user config
+
+The Java SDK's [`Context`](#context) object enables you to access key/value pairs provided to the Pulsar Function via the command line (as JSON). Here's an example function creation command that passes a key/value pair:
+
+```bash
+$ bin/pulsar-admin functions create \
+  # Other function configs
+  --user-config '{"word-of-the-day":"verdure"}'
+```
+
+To access that value in a Java function:
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+import org.slf4j.Logger;
+
+import java.util.Optional;
+
+public class UserConfigFunction implements Function<String, Void> {
+    @Override
+    public Void process(String input, Context context) {
+        Logger LOG = context.getLogger();
+        Optional<Object> wotd = context.getUserConfigValue("word-of-the-day");
+        if (wotd.isPresent()) {
+            LOG.info("The word of the day is {}", wotd.get());
+        } else {
+            LOG.warn("No word of the day provided");
+        }
+        return null;
+    }
+}
+```
+
+The `UserConfigFunction` function will log the string `"The word of the day is verdure"` every time the function is invoked (i.e. every time a message arrives). The `word-of-the-day` user config will be changed only when the function is updated with a new config value via the command line.
+
+You can also access the entire user config map or set a default value in case no value is present:
+
+```java
+// Get the whole config map
+Map<String, Object> allConfigs = context.getUserConfigMap();
+
+// Get value or resort to default
+Object wotd = context.getUserConfigValueOrDefault("word-of-the-day", "perspicacious");
+```
+
+> For all key/value pairs passed to Java Pulsar Functions, both the key *and* the value are `String`s. If you'd like the value to be of a different type, you will need to deserialize from the `String` type.
+
+### Java metrics
+
+You can record metrics using the [`Context`](#context) object on a per-key basis. You can, for example, set a metric for the key `process-count` and a different metric for the key `elevens-count` every time the function processes a message. Here's an example:
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+
+public class MetricRecorderFunction implements Function<Integer, Void> {
+    @Override
+    public Void process(Integer input, Context context) {
+        // Records the metric 1 every time a message arrives
+        context.recordMetric("hit-count", 1);
+
+        // Records the metric only if the arriving number equals 11
+        if (input == 11) {
+            context.recordMetric("elevens-count", 1);
+        }
+
+        return null;
+    }
+}
+```
+
+> For instructions on reading and using metrics, see the [Monitoring](deploy-monitoring.md) guide.
+
+
+## Functions for Python
+
+Writing Pulsar Functions in Python entails implementing one of two things:
+
+* A `process` function that takes an input (message data from the function's input topic(s)), applies some kind of logic to it, and either returns an object (to be published to the function's output topic) or `pass`es and thus doesn't produce a message
+* A `Function` class that has a `process` method that provides a message input to process and a [context](#context) object
+
+### Getting started
+
+Regardless of which [deployment mode](functions-deploying.md) you're using, the [`pulsar-client`](/api/python) Python library must be installed on any machine that's running Pulsar Functions written in Python.
+
+That could be your local machine for [local run mode](functions-deploying.md#local-run-mode) or a machine running a Pulsar [broker](reference-terminology.md#broker) for [cluster mode](functions-deploying.md#cluster-mode). To install the library using pip:
+
+```bash
+$ pip install pulsar-client
+```
+
+### Packaging
+
+At the moment, the code for Pulsar Functions written in Python must be contained within a single Python file. In the future, Pulsar Functions may support other packaging formats, such as [**P**ython **EX**ecutables](https://github.com/pantsbuild/pex) (PEXes).
+
+### Python native functions
+
+If your function doesn't require access to its [context](#context), you can create a Pulsar Function by implementing a `process` function, which provides a single input object that you can process however you wish. Here's an example function that takes a string as its input, adds an exclamation point at the end of the string, and then publishes the resulting string:
+
+```python
+def process(input):
+    return "{0}!".format(input)
+```
+
+In general, you should use native functions when you don't need access to the function's [context](#context). If you *do* need access to the function's context, then we recommend using the [Pulsar Functions Python SDK](#python-sdk-functions).
+
+#### Python native examples
+
+There is one example Python native function in this {@inject: github:folder:/pulsar-functions/python-examples}:
+
+* {@inject: github:`native_exclamation_function.py`:/pulsar-functions/python-examples/native_exclamation_function.py}
+
+### Python SDK functions
+
+To get started developing Pulsar Functions using the Python SDK, you'll need to install the [`pulsar-client`](/api/python) library using the instructions [above](#getting-started).
+
+#### Python SDK examples
+
+There are several example Python functions in this {@inject: github:folder:/pulsar-functions/python-examples}:
+
+Function file | Description
+:-------------|:-----------
+[`exclamation_function.py`](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/exclamation_function.py) | Adds an exclamation point at the end of each incoming string
+[`logging_function.py`](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/logging_function.py) | Logs each incoming message
+[`thumbnailer.py`](https://github.com/apache/pulsar/blob/master/pulsar-functions/python-examples/thumbnailer.py) | Takes image data as input and outputs a 128x128 thumbnail of each image
+
+#### Python context object
+
+The [`Context`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/context.py) class provides a number of methods that you can use to access the function's [context](#context). The various methods for the `Context` class are listed below:
+
+Method | What it provides
+:------|:----------------
+`get_message_id` | The message ID of the message being processed
+`get_current_message_topic_name` | The topic of the message currently being processed
+`get_function_tenant` | The tenant under which the current Pulsar Function runs
+`get_function_namespace` | The namespace under which the current Pulsar Function runs
+`get_function_name` | The name of the current Pulsar Function
+`get_function_id` | The ID of the current Pulsar Function
+`get_instance_id` | The ID of the current Pulsar Functions instance
+`get_function_version` | The version of the current Pulsar Function
+`get_logger` | A logger object that can be used for [logging](#python-logging)
+`get_user_config_value` | Returns the value of a [user-defined config](#python-user-config) (or `None` if the config doesn't exist)
+`get_user_config_map` | Returns the entire user-defined config as a dict
+`record_metric` | Records a per-key [metric](#python-metrics)
+`publish` | Publishes a message to the specified Pulsar topic
+`get_output_serde_class_name` | The name of the output [SerDe](#python-serde) class
+`ack` | [Acks](reference-terminology.md#acknowledgment-ack) the message being processed to Pulsar
+
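To see how these methods fit together, here's an illustrative sketch that runs a function body against a minimal stand-in `Context` (the stub below exists only for local experimentation and is not the real SDK class; the topic name and config key are made up):

```python
import logging

class StubContext(object):
    """Minimal stand-in exposing a few of the Context methods listed above."""
    def __init__(self, user_config):
        self._user_config = user_config
        self._metrics = {}
        self._published = []

    def get_logger(self):
        return logging.getLogger("stub-function")

    def get_user_config_value(self, key):
        # Returns None if the config doesn't exist, like the real method
        return self._user_config.get(key)

    def record_metric(self, key, value):
        self._metrics[key] = self._metrics.get(key, 0) + value

    def publish(self, topic, message):
        self._published.append((topic, message))

# A function body written against the Context interface:
def process(input, context):
    context.record_metric('seen', 1)
    suffix = context.get_user_config_value('suffix') or '!'
    context.publish('persistent://public/default/out', input + suffix)

ctx = StubContext({'suffix': '?'})
process('hello', ctx)
```

A stub like this makes it possible to unit-test function logic without a running Pulsar cluster.
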
+### Python SerDe
+
+Pulsar Functions use [SerDe](#serialization-and-deserialization-serde) when publishing data to and consuming data from Pulsar topics (this is true of both [native](#python-native-functions) functions and [SDK](#python-sdk-functions) functions). You can specify the SerDe when [creating](functions-deploying.md#cluster-mode) or [running](functions-deploying.md#local-run-mode) functions. Here's an example:
+
+```bash
+$ bin/pulsar-admin functions create \
+  --tenant public \
+  --namespace default \
+  --name my_function \
+  --py my_function.py \
+  --classname my_function.MyFunction \
+  --custom-serde-inputs '{"input-topic-1":"Serde1","input-topic-2":"Serde2"}' \
+  --output-serde-classname Serde3 \
+  --output output-topic-1
+```
+
+In this case, there are two input topics, `input-topic-1` and `input-topic-2`, each of which is mapped to a different SerDe class (the map must be specified as a JSON string). The output topic, `output-topic-1`, uses the `Serde3` class for SerDe. At the moment, all Pulsar Function logic, including the processing function and SerDe classes, must be contained within a single Python file.
+
+When using Pulsar Functions for Python, you essentially have three SerDe options:
+
+1. You can use the [`IdentitySerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L70), which leaves the data unchanged. The `IdentitySerDe` is the **default**. Creating or running a function without explicitly specifying a SerDe means that this option is used.
+2. You can use the [`PickleSerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L62), which uses Python's [`pickle`](https://docs.python.org/3/library/pickle.html) for SerDe.
+3. You can create a custom SerDe class by implementing the baseline [`SerDe`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L50) class, which has just two methods: [`serialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L53) for converting the object into bytes, and [`deserialize`](https://github.com/apache/pulsar/blob/master/pulsar-client-cpp/python/pulsar/functions/serde.py#L58) for converting bytes into an object of the required application-specific type.
+
+The table below shows when you should use each SerDe:
+
+SerDe option | When to use
+:------------|:-----------
+`IdentitySerDe` | When you're working with simple types like strings, Booleans, integers, and the like
+`PickleSerDe` | When you're working with complex, application-specific types and are comfortable with `pickle`'s "best effort" approach
+Custom SerDe | When you require explicit control over SerDe, potentially for performance or data compatibility purposes
+
+#### Python SerDe example
+
+Imagine that you're writing Pulsar Functions in Python that are processing tweet objects. Here's a simple `Tweet` class:
+
+```python
+class Tweet(object):
+    def __init__(self, username, tweet_content):
+        self.username = username
+        self.tweet_content = tweet_content
+```
+
+In order to use this class in Pulsar Functions, you'd have two options:
+
+1. You could specify `PickleSerDe`, which would apply the [`pickle`](https://docs.python.org/3/library/pickle.html) library's SerDe
+1. You could create your own SerDe class. Here's a simple example:
+
+  ```python
+  from pulsar import SerDe
+
+  class TweetSerDe(SerDe):
+      def serialize(self, input):
+          # Encode the tweet as "username|content" bytes
+          return "{0}|{1}".format(input.username, input.tweet_content).encode('utf-8')
+
+      def deserialize(self, input_bytes):
+          username, content = input_bytes.decode('utf-8').split('|', 1)
+          return Tweet(username, content)
+  ```
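
A custom SerDe can be sanity-checked outside Pulsar by round-tripping an object through `serialize` and `deserialize`. Here's a self-contained sketch (the `SerDe` base class below is a stand-in so the snippet runs without the Pulsar client installed):

```python
class SerDe(object):
    """Stand-in for pulsar.SerDe, for local testing only."""
    def serialize(self, input):
        raise NotImplementedError
    def deserialize(self, input_bytes):
        raise NotImplementedError

class Tweet(object):
    def __init__(self, username, tweet_content):
        self.username = username
        self.tweet_content = tweet_content

class TweetSerDe(SerDe):
    def serialize(self, input):
        # Encode the tweet as "username|content" bytes
        return "{0}|{1}".format(input.username, input.tweet_content).encode('utf-8')

    def deserialize(self, input_bytes):
        username, content = input_bytes.decode('utf-8').split('|', 1)
        return Tweet(username, content)

serde = TweetSerDe()
restored = serde.deserialize(serde.serialize(Tweet("jane", "hello pulsar")))
```

Round-tripping like this catches encoding mistakes before the function is deployed.
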
+
+### Python logging
+
+Pulsar Functions that use the [Python SDK](#python-sdk-functions) have access to a logging object that can be used to produce logs at the chosen log level. Here's a simple example function that logs either a `WARNING`- or `INFO`-level log based on whether the incoming string contains the word `danger`:
+
+```python
+from pulsar import Function
+
+class LoggingFunction(Function):
+    def process(self, input, context):
+        logger = context.get_logger()
+        msg_id = context.get_message_id()
+        if 'danger' in input:
+            logger.warn("A warning was received in message {0}".format(msg_id))
+        else:
+            logger.info("Message {0} received\nContent: {1}".format(msg_id, input))
+```
+
+If you want your function to produce logs on a Pulsar topic, you need to specify a **log topic** when creating or running the function. Here's an example:
+
+```bash
+$ bin/pulsar-admin functions create \
+  --py logging_function.py \
+  --classname logging_function.LoggingFunction \
+  --log-topic logging-function-logs \
+  # Other function configs
+```
+
+Now, all logs produced by the `LoggingFunction` above can be accessed via the `logging-function-logs` topic.
+
+### Python user config
+
+The Python SDK's [`Context`](#context) object enables you to access key/value pairs provided to the Pulsar Function via the command line (as JSON). Here's an example function creation command that passes a key/value pair:
+
+```bash
+$ bin/pulsar-admin functions create \
+  # Other function configs \
+  --user-config '{"word-of-the-day":"verdure"}'
+```
+
+To access that value in a Python function:
+
+```python
+from pulsar import Function
+
+class UserConfigFunction(Function):
+    def process(self, input, context):
+        logger = context.get_logger()
+        wotd = context.get_user_config_value('word-of-the-day')
+        if wotd is None:
+            logger.warn('No word of the day provided')
+        else:
+            logger.info("The word of the day is {0}".format(wotd))
+```
+
+### Python metrics
+
+You can record metrics using the [`Context`](#context) object on a per-key basis. You can, for example, set a metric for the key `process-count` and a different metric for the key `elevens-count` every time the function processes a message. Here's an example:
+
+```python
+from pulsar import Function
+
+class MetricRecorderFunction(Function):
+    def process(self, input, context):
+        context.record_metric('hit-count', 1)
+
+        if input == 11:
+            context.record_metric('elevens-count', 1)
+```
diff --git a/site2/website/versioned_docs/version-2.2.0/functions-deploying.md b/site2/website/versioned_docs/version-2.2.0/functions-deploying.md
new file mode 100644
index 0000000000..3852d17140
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/functions-deploying.md
@@ -0,0 +1,227 @@
+---
+id: version-2.2.0-functions-deploying
+title: Deploying and managing Pulsar Functions
+sidebar_label: Deploying functions
+original_id: functions-deploying
+---
+
+At the moment, there are two deployment modes available for Pulsar Functions:
+
+Mode | Description
+:----|:-----------
+Local run mode | The function runs in your local environment, for example on your laptop
+Cluster mode | The function runs *inside of* your Pulsar cluster, on the same machines as your Pulsar brokers
+
+> #### Contributing new deployment modes
+> The Pulsar Functions feature was designed with extensibility in mind, and other deployment options will be available in the future. If you'd like to add a new deployment option, we recommend getting in touch with the Pulsar developer community at [dev@pulsar.apache.org](mailto:dev@pulsar.apache.org).
+
+## Requirements
+
+In order to deploy and manage Pulsar Functions, you need to have a Pulsar cluster running. There are several options for this:
+
+* You can run a [standalone cluster](getting-started-standalone.md) locally on your own machine
+* You can deploy a Pulsar cluster on [Kubernetes](deploy-kubernetes.md), [Amazon Web Services](deploy-aws.md), [bare metal](deploy-bare-metal.md), [DC/OS](deploy-dcos.md), and more
+
+If you're running a non-[standalone](reference-terminology.md#standalone) cluster, you'll need to obtain the service URL for the cluster. How you obtain the service URL will depend on how you deployed your Pulsar cluster.
+
+## Command-line interface
+
+Pulsar Functions are deployed and managed using the [`pulsar-admin functions`](reference-pulsar-admin.md#functions) interface, which contains commands such as [`create`](reference-pulsar-admin.md#functions-create) for deploying functions in [cluster mode](#cluster-mode), [`trigger`](reference-pulsar-admin.md#trigger) for [triggering](#triggering-pulsar-functions) functions, [`list`](reference-pulsar-admin.md#list-2) for listing deployed functions, and several others.
+
+### Fully Qualified Function Name (FQFN)
+
+Each Pulsar Function has a **Fully Qualified Function Name** (FQFN) that consists of three elements: the function's tenant, namespace, and function name. FQFNs look like this:
+
+```http
+tenant/namespace/name
+```
+
+FQFNs enable you to, for example, create multiple functions with the same name provided that they're in different namespaces.
+
+### Default arguments
+
+When managing Pulsar Functions, you'll need to specify a variety of information about those functions, including tenant, namespace, input and output topics, etc. There are some parameters, however, that have default values that will be supplied if omitted. The table below lists the defaults:
+
+Parameter | Default
+:---------|:-------
+Function name | Whichever value is specified for the class name (minus org, library, etc.). The flag `--classname org.example.MyFunction`, for example, would give the function a name of `MyFunction`.
+Tenant | Derived from the input topics' names. If the input topics are under the `marketing` tenant---i.e. the topic names have the form `persistent://marketing/{namespace}/{topicName}`---then the tenant will be `marketing`.
+Namespace | Derived from the input topics' names. If the input topics are under the `asia` namespace under the `marketing` tenant---i.e. the topic names have the form `persistent://marketing/asia/{topicName}`, then the namespace will be `asia`.
+Output topic | `{input topic}-{function name}-output`. A function with an input topic name of `incoming` and a function name of `exclamation`, for example, would have an output topic of `incoming-exclamation-output`.
+Subscription type | For at-least-once and at-most-once [processing guarantees](functions-guarantees.md), the [`SHARED`](concepts-messaging.md#shared) subscription type is applied by default; for effectively-once guarantees, [`FAILOVER`](concepts-messaging.md#failover) is applied
+Processing guarantees | [`ATLEAST_ONCE`](functions-guarantees.md)
+Pulsar service URL | `pulsar://localhost:6650`
+
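The derivation rules in the table can be sketched as a small helper (illustrative only, not Pulsar's actual implementation; it assumes topic names of the form `persistent://{tenant}/{namespace}/{topicName}`):

```python
def derive_defaults(classname, input_topic):
    # Function name: the class name minus the org/library prefix
    name = classname.split('.')[-1]
    # Tenant and namespace: taken from the input topic's name
    tenant, namespace, topic = input_topic[len('persistent://'):].split('/')
    # Output topic: {input topic}-{function name}-output
    output = '{0}-{1}-output'.format(topic, name)
    return name, tenant, namespace, output

defaults = derive_defaults('org.example.MyFunction',
                           'persistent://marketing/asia/incoming')
```
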
+#### Example use of defaults
+
+Take this `create` command:
+
+```bash
+$ bin/pulsar-admin functions create \
+  --jar my-pulsar-functions.jar \
+  --classname org.example.MyFunction \
+  --inputs my-function-input-topic1,my-function-input-topic2
+```
+
+The created function would have default values supplied for the function name (`MyFunction`), tenant (`public`), namespace (`default`), subscription type (`SHARED`), processing guarantees (`ATLEAST_ONCE`), and Pulsar service URL (`pulsar://localhost:6650`).
+
+## Local run mode
+
+If you run a Pulsar Function in **local run** mode, it will run on the machine from which the command is run (this could be your laptop, an [AWS EC2](https://aws.amazon.com/ec2/) instance, etc.). Here's an example [`localrun`](reference-pulsar-admin.md#localrun) command:
+
+```bash
+$ bin/pulsar-admin functions localrun \
+  --py myfunc.py \
+  --classname myfunc.SomeFunction \
+  --inputs persistent://public/default/input-1 \
+  --output persistent://public/default/output-1
+```
+
+By default, the function will connect to a Pulsar cluster running on the same machine, via a local [broker](reference-terminology.md#broker) service URL of `pulsar://localhost:6650`. If you'd like to use local run mode to run a function but connect it to a non-local Pulsar cluster, you can specify a different broker URL using the `--broker-service-url` flag. Here's an example:
+
+```bash
+$ bin/pulsar-admin functions localrun \
+  --broker-service-url pulsar://my-cluster-host:6650 \
+  # Other function parameters
+```
+
+## Cluster mode
+
+When you run a Pulsar Function in **cluster mode**, the function code will be uploaded to a Pulsar broker and run *alongside the broker* rather than in your [local environment](#local-run-mode). You can run a function in cluster mode using the [`create`](reference-pulsar-admin.md#create-1) command. Here's an example:
+
+```bash
+$ bin/pulsar-admin functions create \
+  --py myfunc.py \
+  --classname myfunc.SomeFunction \
+  --inputs persistent://public/default/input-1 \
+  --output persistent://public/default/output-1
+```
+
+### Updating cluster mode functions
+
+You can use the [`update`](reference-pulsar-admin.md#update-1) command to update a Pulsar Function running in cluster mode. This command, for example, would update the function created in the section [above](#cluster-mode):
+
+```bash
+$ bin/pulsar-admin functions update \
+  --py myfunc.py \
+  --classname myfunc.SomeFunction \
+  --inputs persistent://public/default/new-input-topic \
+  --output persistent://public/default/new-output-topic
+```
+
+### Parallelism
+
+Pulsar Functions run as processes called **instances**. When you run a Pulsar Function, it runs as a single instance by default (and in [local run mode](#local-run-mode) you can *only* run a single instance of a function).
+
+You can also specify the *parallelism* of a function, i.e. the number of instances to run, when you create the function. You can set the parallelism factor using the `--parallelism` flag of the [`create`](reference-pulsar-admin.md#functions-create) command. Here's an example:
+
+```bash
+$ bin/pulsar-admin functions create \
+  --parallelism 3 \
+  # Other function info
+```
+
+You can adjust the parallelism of an already created function using the [`update`](reference-pulsar-admin.md#update-1) interface.
+
+```bash
+$ bin/pulsar-admin functions update \
+  --parallelism 5 \
+  # Other function parameters
+```
+
+If you're specifying a function's configuration via YAML, use the `parallelism` parameter. Here's an example config file:
+
+```yaml
+# function-config.yaml
+parallelism: 3
+inputs:
+- persistent://public/default/input-1
+output: persistent://public/default/output-1
+# other parameters
+```
+
+And here's the corresponding update command:
+
+```bash
+$ bin/pulsar-admin functions update \
+  --function-config-file function-config.yaml
+```
+
+### Function instance resources
+
+When you run Pulsar Functions in [cluster run](#cluster-mode) mode, you can specify the resources that are assigned to each function [instance](#parallelism):
+
+Resource | Specified as... | Runtimes
+:--------|:----------------|:--------
+CPU | The number of cores | Docker (coming soon)
+RAM | The number of bytes | Process, Docker
+Disk space | The number of bytes | Docker
+
+Here's an example function creation command that allocates 8 cores, 8 GB of RAM, and 10 GB of disk space to a function:
+
+```bash
+$ bin/pulsar-admin functions create \
+  --jar target/my-functions.jar \
+  --classname org.example.functions.MyFunction \
+  --cpu 8 \
+  --ram 8589934592 \
+  --disk 10737418240
+```
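
The byte counts in the command above correspond to 8 GiB of RAM and 10 GiB of disk, which you can compute rather than memorize:

```python
GIB = 1024 ** 3  # bytes per gibibyte

ram_bytes = 8 * GIB    # 8589934592, as passed to --ram
disk_bytes = 10 * GIB  # 10737418240, as passed to --disk
```
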
+
+> #### Resources are *per instance*
+> The resources that you apply to a given Pulsar Function are applied to each [instance](#parallelism) of the function. If you apply 8 GB of RAM to a function with a parallelism of 5, for example, then you are applying 40 GB of RAM total for the function. You should always make sure to factor parallelism---i.e. the number of instances---into your resource calculations.
+
+## Triggering Pulsar Functions
+
+If a Pulsar Function is running in [cluster mode](#cluster-mode), you can **trigger** it at any time using the command line. Triggering a function means that you send a message with a specific value to the function and get the function's output (if any) via the command line.
+
+> Triggering a function is ultimately no different from invoking a function by producing a message on one of the function's input topics. The [`pulsar-admin functions trigger`](reference-pulsar-admin.md#trigger) command is essentially a convenient mechanism for sending messages to functions without needing to use the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool or a language-specific client library.
+
+To show an example of function triggering, let's start with a simple [Python function](functions-api.md#functions-for-python) that returns a string based on the input:
+
+```python
+# myfunc.py
+def process(input):
+    return "This function has been triggered with a value of {0}".format(input)
+```
+
+Let's run that function in [cluster mode](#cluster-mode):
+
+```bash
+$ bin/pulsar-admin functions create \
+  --tenant public \
+  --namespace default \
+  --name myfunc \
+  --py myfunc.py \
+  --classname myfunc \
+  --inputs persistent://public/default/in \
+  --output persistent://public/default/out
+```
+
+Now let's make a consumer listen on the output topic for messages coming from the `myfunc` function using the [`pulsar-client consume`](reference-cli-tools.md#consume) command:
+
+```bash
+$ bin/pulsar-client consume persistent://public/default/out \
+  --subscription-name my-subscription \
+  --num-messages 0 # Listen indefinitely
+```
+
+Now let's trigger that function:
+
+```bash
+$ bin/pulsar-admin functions trigger \
+  --tenant public \
+  --namespace default \
+  --name myfunc \
+  --trigger-value "hello world"
+```
+
+The consumer listening on the output topic should then produce this in its logs:
+
+```
+----- got message -----
+This function has been triggered with a value of hello world
+```
+
+> #### Topic info not required
+> In the `trigger` command above, you may have noticed that you only need to specify basic information about the function (tenant, namespace, and name). To trigger the function, you didn't need to know the function's input topic(s).
diff --git a/site2/website/versioned_docs/version-2.2.0/functions-guarantees.md b/site2/website/versioned_docs/version-2.2.0/functions-guarantees.md
new file mode 100644
index 0000000000..ecea376f8f
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/functions-guarantees.md
@@ -0,0 +1,42 @@
+---
+id: version-2.2.0-functions-guarantees
+title: Processing guarantees
+sidebar_label: Processing guarantees
+original_id: functions-guarantees
+---
+
+Pulsar Functions provides three different messaging semantics that you can apply to any function:
+
+Delivery semantics | Description
+:------------------|:-------
+**At-most-once** delivery | Each message that is sent to the function is processed once or not at all (hence the "at most")
+**At-least-once** delivery | Each message that is sent to the function could be processed more than once (hence the "at least")
+**Effectively-once** delivery | Each message that is sent to the function will have one output associated with it
+
+## Applying processing guarantees to a function
+
+You can set the processing guarantees for a Pulsar Function when you create it. This [`pulsar-admin functions create`](reference-pulsar-admin.md#create-1) command, for example, would apply effectively-once guarantees to the function:
+
+```bash
+$ bin/pulsar-admin functions create \
+  --processing-guarantees EFFECTIVELY_ONCE \
+  # Other function configs
+```
+
+The available options are:
+
+* `ATMOST_ONCE`
+* `ATLEAST_ONCE`
+* `EFFECTIVELY_ONCE`
+
+> By default, Pulsar Functions provide at-least-once delivery guarantees. So if you create a function without supplying a value for the `--processing-guarantees` flag, then the function will provide at-least-once guarantees.
+
+## Updating the processing guarantees of a function
+
+You can change the processing guarantees applied to a function once it's already been created using the [`update`](reference-pulsar-admin.md#update-1) command. Here's an example:
+
+```bash
+$ bin/pulsar-admin functions update \
+  --processing-guarantees ATMOST_ONCE \
+  # Other function configs
+```
diff --git a/site2/website/versioned_docs/version-2.2.0/functions-overview.md b/site2/website/versioned_docs/version-2.2.0/functions-overview.md
new file mode 100644
index 0000000000..bb7fd1b7ec
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/functions-overview.md
@@ -0,0 +1,452 @@
+---
+id: version-2.2.0-functions-overview
+title: Pulsar Functions overview
+sidebar_label: Overview
+original_id: functions-overview
+---
+
+**Pulsar Functions** are lightweight compute processes that
+
+* consume messages from one or more Pulsar topics,
+* apply a user-supplied processing logic to each message,
+* publish the results of the computation to another topic
+
+Here's an example Pulsar Function for Java (using the [native interface](functions-api.md#java-native-functions)):
+
+```java
+import java.util.function.Function;
+
+public class ExclamationFunction implements Function<String, String> {
+    @Override
+    public String apply(String input) { return String.format("%s!", input); }
+}
+```
+
+Here's an equivalent function in Python (also using the [native interface](functions-api.md#python-native-functions)):
+
+```python
+def process(input):
+    return "{0}!".format(input)
+```
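
Because a native function is just an ordinary function, it can be exercised directly, without any Pulsar machinery:

```python
def process(input):
    return "{0}!".format(input)

result = process("hello world")  # "hello world!"
```
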
+
+Functions are executed each time a message is published to the input topic. If a function is listening on the topic `tweet-stream`, for example, then the function would be run each time a message is published to that topic.
+
+## Goals
+
+The core goal behind Pulsar Functions is to enable you to easily create processing logic of any level of complexity without needing to deploy a separate neighboring system (such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://apache.github.io/incubator-heron), [Apache Flink](https://flink.apache.org/), etc.). Pulsar Functions is essentially ready-made compute infrastructure at your disposal as part of your Pulsar messaging system. This core goal is tied to a series of other goals:
+
+* Developer productivity ([language-native](#language-native-functions) vs. [Pulsar Functions SDK](#the-pulsar-functions-sdk) functions)
+* Easy troubleshooting
+* Operational simplicity (no need for an external processing system)
+
+## Inspirations
+
+The Pulsar Functions feature was inspired by (and takes cues from) several systems and paradigms:
+
+* Stream processing engines such as [Apache Storm](http://storm.apache.org/), [Apache Heron](https://apache.github.io/incubator-heron), and [Apache Flink](https://flink.apache.org)
+* "Serverless" and "Function as a Service" (FaaS) cloud platforms like [Amazon Web Services Lambda](https://aws.amazon.com/lambda/), [Google Cloud Functions](https://cloud.google.com/functions/), and [Azure Cloud Functions](https://azure.microsoft.com/en-us/services/functions/)
+
+Pulsar Functions could be described as
+
+* [Lambda](https://aws.amazon.com/lambda/)-style functions that are
+* specifically designed to use Pulsar as a message bus
+
+## Programming model
+
+The core programming model behind Pulsar Functions is very simple:
+
+* Functions receive messages from one or more **input [topics](reference-terminology.md#topic)**. Every time a message is received, the function can do a variety of things:
+  * Apply some processing logic to the input and write output to:
+    * An **output topic** in Pulsar
+    * [Apache BookKeeper](#state-storage)
+  * Write logs to a **log topic** (potentially for debugging purposes)
+  * Increment a [counter](#word-count-example)
+
+![Pulsar Functions core programming model](assets/pulsar-functions-overview.png)
+
+### Word count example
+
+If you were to implement the classic word count example using Pulsar Functions, it might look something like this:
+
+![Pulsar Functions word count example](assets/pulsar-functions-word-count.png)
+
+If you were writing the function in [Java](functions-api.md#functions-for-java) using the [Pulsar Functions SDK for Java](functions-api.md#java-sdk-functions), you could write the function like this...
+
+```java
+package org.example.functions;
+
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+
+import java.util.Arrays;
+
+public class WordCountFunction implements Function<String, Void> {
+    // This function is invoked every time a message is published to the input topic
+    @Override
+    public Void process(String input, Context context) {
+        Arrays.asList(input.split(" ")).forEach(word -> {
+            String counterKey = word.toLowerCase();
+            context.incrCounter(counterKey, 1);
+        });
+        return null;
+    }
+}
+```
+
+...and then [deploy it](#cluster-run-mode) in your Pulsar cluster using the [command line](#command-line-interface) like this:
+
+```bash
+$ bin/pulsar-admin functions create \
+  --jar target/my-jar-with-dependencies.jar \
+  --classname org.example.functions.WordCountFunction \
+  --tenant public \
+  --namespace default \
+  --name word-count \
+  --inputs persistent://public/default/sentences \
+  --output persistent://public/default/count
+```
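
The counting step itself is independent of the SDK. Here's the same tokenize, lowercase, and count logic as a standalone Python sketch:

```python
from collections import Counter

def count_words(sentence):
    # Split on spaces and lowercase each word, as in the function above
    return Counter(word.lower() for word in sentence.split(" "))

counts = count_words("Hello hello Pulsar")
```

In the real function, each per-word count would be accumulated via the context's counter rather than an in-memory `Counter`.
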
+
+### Content-based routing example
+
+The use cases for Pulsar Functions are essentially endless, but let's dig into a more sophisticated example that involves content-based routing.
+
+Imagine a function that takes items (strings) as input and publishes them to either a fruits or vegetables topic, depending on the item. Or, if an item is neither a fruit nor a vegetable, a warning is logged to a [log topic](#logging). Here's a visual representation:
+
+![Pulsar Functions routing example](assets/pulsar-functions-routing-example.png)
+
+If you were implementing this routing functionality in Python, it might look something like this:
+
+```python
+from pulsar import Function
+
+class RoutingFunction(Function):
+    def __init__(self):
+        self.fruits_topic = "persistent://public/default/fruits"
+        self.vegetables_topic = "persistent://public/default/vegetables"
+
+    def is_fruit(self, item):
+        return item in ["apple", "orange", "pear", "other fruits..."]
+
+    def is_vegetable(self, item):
+        return item in ["carrot", "lettuce", "radish", "other vegetables..."]
+
+    def process(self, item, context):
+        if self.is_fruit(item):
+            context.publish(self.fruits_topic, item)
+        elif self.is_vegetable(item):
+            context.publish(self.vegetables_topic, item)
+        else:
+            warning = "The item {0} is neither a fruit nor a vegetable".format(item)
+            context.get_logger().warn(warning)
+```
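
The routing decision can be exercised on its own with a stub context that records what would be published (a local-testing sketch, not part of the SDK; topic names match the example above):

```python
class StubContext(object):
    """Records publishes and warnings instead of talking to Pulsar."""
    def __init__(self):
        self.published = []
        self.warnings = []

    def publish(self, topic, message):
        self.published.append((topic, message))

    def get_logger(self):
        stub = self
        class _Logger(object):
            def warn(self, msg):
                stub.warnings.append(msg)
        return _Logger()

FRUITS_TOPIC = "persistent://public/default/fruits"
VEGETABLES_TOPIC = "persistent://public/default/vegetables"

def route(item, context):
    # Same routing decision as the function above
    if item in ["apple", "orange", "pear"]:
        context.publish(FRUITS_TOPIC, item)
    elif item in ["carrot", "lettuce", "radish"]:
        context.publish(VEGETABLES_TOPIC, item)
    else:
        context.get_logger().warn(
            "The item {0} is neither a fruit nor a vegetable".format(item))

ctx = StubContext()
for item in ["apple", "carrot", "pencil"]:
    route(item, ctx)
```
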
+
+## Command-line interface
+
+Pulsar Functions are managed using the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool (in particular the [`functions`](reference-pulsar-admin.md#functions) command). Here's an example command that would run a function in [local run mode](#local-run-mode):
+
+```bash
+$ bin/pulsar-admin functions localrun \
+  --inputs persistent://public/default/test_src \
+  --output persistent://public/default/test_result \
+  --jar examples/api-examples.jar \
+  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction
+```
+
+## Fully Qualified Function Name (FQFN)
+
+Each Pulsar Function has a **Fully Qualified Function Name** (FQFN) that consists of three elements: the function's tenant, namespace, and function name. FQFNs look like this:
+
+```http
+tenant/namespace/name
+```
+
+FQFNs enable you to, for example, create multiple functions with the same name provided that they're in different namespaces.
+
+## Configuration
+
+Pulsar Functions can be configured in two ways:
+
+* Via [command-line arguments](#command-line-interface) passed to the [`pulsar-admin functions`](reference-pulsar-admin.md#functions) interface
+* Via [YAML](http://yaml.org/) configuration files
+
+If you're supplying a YAML configuration, you must specify a path to the file on the command line. Here's an example:
+
+```bash
+$ bin/pulsar-admin functions create \
+  --function-config-file ./my-function.yaml
+```
+
+And here's an example `my-function.yaml` file:
+
+```yaml
+name: my-function
+tenant: public
+namespace: default
+jar: ./target/my-functions.jar
+className: org.example.pulsar.functions.MyFunction
+inputs:
+- persistent://public/default/test_src
+output: persistent://public/default/test_result
+```
+
+You can also mix and match configuration methods by specifying some function attributes via the CLI and others via YAML configuration.
+
+## Supported languages
+
+Pulsar Functions can currently be written in [Java](functions-api.md#functions-for-java) and [Python](functions-api.md#functions-for-python). Support for additional languages is coming soon.
+
+## The Pulsar Functions API
+
+The Pulsar Functions API enables you to create processing logic that is:
+
+* Type safe. Pulsar Functions can process raw bytes or more complex, application-specific types.
+* Based on SerDe (**Ser**ialization/**De**serialization). A variety of types are supported "out of the box" but you can also create your own custom SerDe logic.
+
+### Function context
+
+Each Pulsar Function created using the [Pulsar Functions SDK](#the-pulsar-functions-sdk) has access to a context object that provides:
+
+1. A wide variety of information about the function, including:
+  * The name of the function
+  * The tenant and namespace of the function
+  * [User-supplied configuration](#user-configuration) values
+2. Special functionality, including:
+  * The ability to produce [logs](#logging) to a specified logging topic
+  * The ability to produce [metrics](#metrics)
+
+### Language-native functions
+
+Both Java and Python support writing "native" functions, i.e. Pulsar Functions with no dependencies.
+
+The benefit of native functions is that they don't have any dependencies beyond what's already available in Java/Python "out of the box." The downside is that they don't provide access to the function's [context](#function-context), which is necessary for a variety of functionality, including [logging](#logging), [user configuration](#user-configuration), and more.
+
+## The Pulsar Functions SDK
+
+If you'd like a Pulsar Function to have access to a [context object](#function-context), you can use the **Pulsar Functions SDK**, available for both [Java](functions-api.md#functions-for-java) and [Python](functions-api.md#functions-for-python).
+
+### Java
+
+Here's an example Java function that uses information about its context:
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+import org.slf4j.Logger;
+
+public class ContextAwareFunction implements Function<String, Void> {
+    @Override
+    public Void process(String input, Context context) {
+        Logger LOG = context.getLogger();
+        String functionTenant = context.getTenant();
+        String functionNamespace = context.getNamespace();
+        String functionName = context.getName();
+        LOG.info("Function tenant/namespace/name: {}/{}/{}", functionTenant, functionNamespace, functionName);
+        return null;
+    }
+}
+```
+
+### Python
+
+Here's an example Python function that uses information about its context:
+
+```python
+from pulsar import Function
+
+class ContextAwareFunction(Function):
+    def process(self, input, context):
+        log = context.get_logger()
+        function_tenant = context.get_function_tenant()
+        function_namespace = context.get_function_namespace()
+        function_name = context.get_function_name()
+        log.info("Function tenant/namespace/name: {0}/{1}/{2}".format(function_tenant, function_namespace, function_name))
+```
+
+## Deployment
+
+The Pulsar Functions feature was built to support a variety of deployment options. At the moment, there are two ways to run Pulsar Functions:
+
+Deployment mode | Description
+:---------------|:-----------
+[Local run mode](#local-run-mode) | The function runs in your local environment, for example on your laptop
+[Cluster mode](#cluster-run-mode) | The function runs *inside of* your Pulsar cluster, on the same machines as your Pulsar [brokers](reference-terminology.md#broker)
+
+### Local run mode
+
+If you run a Pulsar Function in **local run** mode, it will run on the machine from which the command is run (this could be your laptop, an [AWS EC2](https://aws.amazon.com/ec2/) instance, etc.). Here's an example [`localrun`](reference-pulsar-admin.md#localrun) command:
+
+```bash
+$ bin/pulsar-admin functions localrun \
+  --py myfunc.py \
+  --classname myfunc.SomeFunction \
+  --inputs persistent://public/default/input-1 \
+  --output persistent://public/default/output-1
+```
+
+By default, the function will connect to a Pulsar cluster running on the same machine, via a local broker service URL of `pulsar://localhost:6650`. If you'd like to use local run mode to run a function but connect it to a non-local Pulsar cluster, you can specify a different broker URL using the `--broker-service-url` flag. Here's an example:
+
+```bash
+$ bin/pulsar-admin functions localrun \
+  --broker-service-url pulsar://my-cluster-host:6650 \
+  # Other function parameters
+```
+
+### Cluster run mode
+
+When you run a Pulsar Function in **cluster mode**, the function code will be uploaded to a Pulsar broker and run *alongside the broker* rather than in your [local environment](#local-run-mode). You can run a function in cluster mode using the [`create`](reference-pulsar-admin.md#create-1) command. Here's an example:
+
+```bash
+$ bin/pulsar-admin functions create \
+  --py myfunc.py \
+  --classname myfunc.SomeFunction \
+  --inputs persistent://public/default/input-1 \
+  --output persistent://public/default/output-1
+```
+
+This command will upload `myfunc.py` to Pulsar, which will use the code to start one [or more](#parallelism) instances of the function.
+
+### Parallelism
+
+By default, only one **instance** of a Pulsar Function runs when you create and run it in [cluster run mode](#cluster-run-mode). You can also, however, run multiple instances in parallel. You can specify the number of instances when you create the function, or update an existing single-instance function with a new parallelism factor.
+
+This command, for example, would create and run a function with a parallelism of 5 (i.e. 5 instances):
+
+```bash
+$ bin/pulsar-admin functions create \
+  --name parallel-fun \
+  --tenant public \
+  --namespace default \
+  --py func.py \
+  --classname func.ParallelFunction \
+  --parallelism 5
+```
+
+### Function instance resources
+
+When you run Pulsar Functions in [cluster run](#cluster-run-mode) mode, you can specify the resources that are assigned to each function [instance](#parallelism):
+
+Resource | Specified as... | Runtimes
+:--------|:----------------|:--------
+CPU | The number of cores | Docker (coming soon)
+RAM | The number of bytes | Process, Docker
+Disk space | The number of bytes | Docker
+
+Here's an example function creation command that allocates 8 cores, 8 GB of RAM, and 10 GB of disk space to a function:
+
+```bash
+$ bin/pulsar-admin functions create \
+  --jar target/my-functions.jar \
+  --classname org.example.functions.MyFunction \
+  --cpu 8 \
+  --ram 8589934592 \
+  --disk 10737418240
+```
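+
+The byte values passed to `--ram` and `--disk` above are simply gibibytes expressed in bytes. A quick sanity check of the arithmetic (plain Python, purely illustrative):
+
+```python
+def gibibytes(n):
+    """Convert GiB to bytes, the unit that --ram and --disk expect."""
+    return n * 1024 ** 3
+
+ram_bytes = gibibytes(8)    # 8589934592, as used for --ram above
+disk_bytes = gibibytes(10)  # 10737418240, as used for --disk above
+```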
+
+For more information on resources, see the [Deploying and Managing Pulsar Functions](functions-deploying.md#resources) documentation.
+
+### Logging
+
+Pulsar Functions created using the [Pulsar Functions SDK](#the-pulsar-functions-sdk) can send logs to a log topic that you specify as part of the function's configuration. The function created using the command below, for example, would produce all logs on the `persistent://public/default/my-func-1-log` topic:
+
+```bash
+$ bin/pulsar-admin functions create \
+  --name my-func-1 \
+  --log-topic persistent://public/default/my-func-1-log \
+  # Other configs
+```
+
+Here's an example [Java function](functions-api.md#java-logging) that logs at different log levels based on the function's input:
+
+```java
+import org.apache.pulsar.functions.api.Context;
+import org.apache.pulsar.functions.api.Function;
+import org.slf4j.Logger;
+
+public class LoggerFunction implements Function<String, Void> {
+    @Override
+    public Void process(String input, Context context) {
+        Logger LOG = context.getLogger();
+        if (input.length() <= 100) {
+            LOG.info("This string has a length of {}", input);
+        } else {
+            LOG.warn("This string is getting too long! It has {} characters", input);
+        }
+        return null;
+    }
+}
+```
+
+### User configuration
+
+Pulsar Functions can be passed arbitrary key-value pairs via the command line (both keys and values must be strings). This set of key-value pairs is called the function's **user configuration**. User configurations must be supplied as JSON strings.
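+
+Because both keys and values are strings, the user configuration is effectively a flat string-to-string map. A minimal sketch of how such a JSON string parses (plain Python, purely illustrative):
+
+```python
+import json
+
+# The value passed to --user-config must be a JSON object whose
+# keys and values are both strings.
+raw = '{"key1": "value-1", "key2": "value-2"}'
+user_config = json.loads(raw)
+
+assert all(isinstance(k, str) and isinstance(v, str)
+           for k, v in user_config.items())
+```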
+
+Here's an example of passing a user configuration to a function:
+
+```bash
+$ bin/pulsar-admin functions create \
+  --user-config '{"key1":"value-1","key2":"value-2"}' \
+  # Other configs
+```
+
+Here's an example of a function that accesses that config map:
+
+```java
+public class ConfigMapFunction implements Function<String, Void> {
+    @Override
+    public Void process(String input, Context context) {
+        // getUserConfigValue returns an Optional<String>; get() throws if the key is absent
+        String val1 = context.getUserConfigValue("key1").get();
+        String val2 = context.getUserConfigValue("key2").get();
+        context.getLogger().info("The user-supplied values are {} and {}", val1, val2);
+        return null;
+    }
+}
+```
+
+### Triggering Pulsar Functions
+
+Pulsar Functions running in [cluster mode](#cluster-run-mode) can be [triggered](functions-deploying.md#triggering-pulsar-functions) via the [command line](#command-line-interface). With triggering you can easily pass a specific value to a function and get the function's return value *without* needing to worry about creating a client, sending a message to the right input topic, etc. Triggering can be very useful for---but is by no means limited to---testing and debugging purposes.
+
+> Triggering a function is ultimately no different from invoking a function by producing a message on one of the function's input topics. The [`pulsar-admin functions trigger`](reference-pulsar-admin.md#trigger) command is essentially a convenient mechanism for sending messages to functions without needing to use the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool or a language-specific client library.
+
+Let's take an example Pulsar Function written in Python (using the [native interface](functions-api.md#python-native-functions)) that simply reverses string inputs:
+
+```python
+def process(input):
+    return input[::-1]
+```
+
+If that function were running in a Pulsar cluster, it could be triggered like this:
+
+```bash
+$ bin/pulsar-admin functions trigger \
+  --tenant public \
+  --namespace default \
+  --name reverse-func \
+  --trigger-value "snoitcnuf raslup ot emoclew"
+```
+
+That should return `welcome to pulsar functions` as the console output.
+
+> Instead of passing in a string via the CLI, you can also trigger a Pulsar Function with the contents of a file using the `--triggerFile` flag.
+
+## Processing guarantees
+
+The Pulsar Functions feature provides three different messaging semantics that you can apply to any function:
+
+Delivery semantics | Description
+:------------------|:-------
+**At-most-once** delivery | Each message sent to the function is processed at most once: it is either processed successfully or not at all, with no redelivery on failure (hence the "at most")
+**At-least-once** delivery | Each message that is sent to the function could be processed more than once (hence the "at least")
+**Effectively-once** delivery | Each message that is sent to the function will have one output associated with it
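+
+One way to build intuition for effectively-once: it can be approximated on top of at-least-once delivery by deduplicating on a message ID, so that redeliveries never produce a second output. A minimal sketch of that idea (plain Python; this is not how Pulsar implements it internally):
+
+```python
+def process_effectively_once(messages, handler):
+    """Apply handler once per message ID, even if the transport
+    redelivers (at-least-once) the same message."""
+    seen = set()
+    outputs = []
+    for msg_id, payload in messages:
+        if msg_id in seen:
+            continue  # duplicate redelivery: skip, no second output
+        seen.add(msg_id)
+        outputs.append(handler(payload))
+    return outputs
+
+# "m1" is redelivered, but only one output is produced for it
+out = process_effectively_once(
+    [("m1", "a"), ("m2", "b"), ("m1", "a")], str.upper)
+```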
+
+This command, for example, would run a function in [cluster mode](#cluster-run-mode) with effectively-once guarantees applied:
+
+```bash
+$ bin/pulsar-admin functions create \
+  --name my-effectively-once-function \
+  --processing-guarantees EFFECTIVELY_ONCE \
+  # Other function configs
+```
+
+## Metrics
+
+Pulsar Functions that use the [Pulsar Functions SDK](#the-pulsar-functions-sdk) can publish metrics to Pulsar. For more information, see [Metrics for Pulsar Functions](functions-metrics.md).
+
+## State storage
+
+Pulsar Functions use [Apache BookKeeper](https://bookkeeper.apache.org) as a state storage interface. All Pulsar installations, including local standalone installations, include a deployment of BookKeeper bookies.
diff --git a/site2/website/versioned_docs/version-2.2.0/functions-quickstart.md b/site2/website/versioned_docs/version-2.2.0/functions-quickstart.md
new file mode 100644
index 0000000000..c7fa9bc0d8
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/functions-quickstart.md
@@ -0,0 +1,266 @@
+---
+id: version-2.2.0-functions-quickstart
+title: Getting started with Pulsar Functions
+sidebar_label: Getting started
+original_id: functions-quickstart
+---
+
+This tutorial will walk you through running a [standalone](reference-terminology.md#standalone) Pulsar [cluster](reference-terminology.md#cluster) on your machine and then running your first Pulsar Functions using that cluster. The first function will run in local run mode (outside your Pulsar [cluster](reference-terminology.md#cluster)), while the second will run in cluster mode (inside your cluster).
+
+> In local run mode, your Pulsar Function will communicate with your Pulsar cluster but will run outside of the cluster.
+
+## Prerequisites
+
+In order to follow along with this tutorial, you'll need to have [Maven](https://maven.apache.org/download.cgi) installed on your machine.
+
+## Run a standalone Pulsar cluster
+
+In order to run our Pulsar Functions, we'll need to run a Pulsar cluster locally first. The easiest way to do that is to run Pulsar in [standalone](reference-terminology.md#standalone) mode. Follow these steps to start up a standalone cluster:
+
+```bash
+$ wget pulsar:binary_release_url
+$ tar xvfz apache-pulsar-{{pulsar:version}}-bin.tar.gz
+$ cd apache-pulsar-{{pulsar:version}}
+$ bin/pulsar standalone \
+  --advertised-address 127.0.0.1
+```
+
+When running Pulsar in standalone mode, the `public` tenant and `default` namespace will be created automatically for you. That tenant and namespace will be used throughout this tutorial.
+
+## Run a Pulsar Function in local run mode
+
+Let's start with a simple function that takes a string as input from a Pulsar topic, adds an exclamation point to the end of the string, and then publishes that new string to another Pulsar topic. Here's the code for the function:
+
+```java
+package org.apache.pulsar.functions.api.examples;
+
+import java.util.function.Function;
+
+public class ExclamationFunction implements Function<String, String> {
+    @Override
+    public String apply(String input) {
+        return String.format("%s!", input);
+    }
+}
+```
+
+A JAR file containing this and several other functions (written in Java) is included with the binary distribution you downloaded above (in the `examples` folder). To run the function in local mode, i.e. on our laptop but outside our Pulsar cluster:
+
+```bash
+$ bin/pulsar-admin functions localrun \
+  --jar examples/api-examples.jar \
+  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
+  --inputs persistent://public/default/exclamation-input \
+  --output persistent://public/default/exclamation-output \
+  --name exclamation
+```
+
+> #### Multiple input topics allowed
+>
+> In the example above, a single topic was specified using the `--inputs` flag. You can also specify multiple input topics as a comma-separated list using the same flag. Here's an example:
+>
+> ```bash
+> --inputs topic1,topic2
+> ```
+
+We can open up another shell and use the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool to listen for messages on the output topic:
+
+```bash
+$ bin/pulsar-client consume persistent://public/default/exclamation-output \
+  --subscription-name my-subscription \
+  --num-messages 0
+```
+
+> Setting the `--num-messages` flag to 0 means that the consumer will listen on the topic indefinitely (rather than only accepting a certain number of messages).
+
+With a listener up and running, we can open up another shell and produce a message on the input topic that we specified:
+
+```bash
+$ bin/pulsar-client produce persistent://public/default/exclamation-input \
+  --num-produce 1 \
+  --messages "Hello world"
+```
+
+In the output, you should see the following:
+
+```
+----- got message -----
+Hello world!
+```
+
+Success! As you can see, the message has been successfully processed by the exclamation function. To shut down the function, simply hit **Ctrl+C**.
+
+Here's what happened:
+
+* The `Hello world` message that we published to the input topic (`persistent://public/default/exclamation-input`) was passed to the exclamation function that we ran on our machine
+* The exclamation function processed the message (providing a result of `Hello world!`) and published the result to the output topic (`persistent://public/default/exclamation-output`).
+* If our exclamation function *hadn't* been running, Pulsar would have durably stored the message data published to the input topic in [Apache BookKeeper](https://bookkeeper.apache.org) until a consumer consumed and acknowledged the message
+
+## Run a Pulsar Function in cluster mode
+
+[Local run mode](#run-a-pulsar-function-in-local-run-mode) is useful for development and experimentation, but if you want to use Pulsar Functions in a real Pulsar deployment, you'll want to run them in **cluster mode**. In this mode, Pulsar Functions run *inside* your Pulsar cluster and are managed using the same [`pulsar-admin functions`](reference-pulsar-admin.md#functions) interface that we've been using thus far.
+
+This command, for example, would deploy the same exclamation function we ran locally above *in our Pulsar cluster* (rather than outside it):
+
+```bash
+$ bin/pulsar-admin functions create \
+  --jar examples/api-examples.jar \
+  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
+  --inputs persistent://public/default/exclamation-input \
+  --output persistent://public/default/exclamation-output \
+  --name exclamation
+```
+
+You should see `Created successfully` in the output. Now, let's see a list of functions running in our cluster:
+
+```bash
+$ bin/pulsar-admin functions list \
+  --tenant public \
+  --namespace default
+```
+
+We should see just the `exclamation` function listed there. We can also check the status of our deployed function using the `getstatus` command:
+
+```bash
+$ bin/pulsar-admin functions getstatus \
+  --tenant public \
+  --namespace default \
+  --name exclamation
+```
+
+You should see this JSON output:
+
+```json
+{
+  "functionStatusList": [
+    {
+      "running": true,
+      "instanceId": "0"
+    }
+  ]
+}
+```
+
+As we can see, (a) the instance is currently running and (b) there is one instance, with an ID of 0, running. We can get other information about the function (topics, tenant, namespace, etc.) using the `get` command instead of `getstatus`:
+
+```bash
+$ bin/pulsar-admin functions get \
+  --tenant public \
+  --namespace default \
+  --name exclamation
+```
+
+You should see this JSON output:
+
+```json
+{
+  "tenant": "public",
+  "namespace": "default",
+  "name": "exclamation",
+  "className": "org.apache.pulsar.functions.api.examples.ExclamationFunction",
+  "output": "persistent://public/default/exclamation-output",
+  "autoAck": true,
+  "inputs": [
+    "persistent://public/default/exclamation-input"
+  ],
+  "parallelism": 1
+}
+```
+
+As we can see, the parallelism of the function is 1, meaning that only one instance of the function is running in our cluster. Let's update our function to a parallelism of 3 using the `update` command:
+
+```bash
+$ bin/pulsar-admin functions update \
+  --jar examples/api-examples.jar \
+  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
+  --inputs persistent://public/default/exclamation-input \
+  --output persistent://public/default/exclamation-output \
+  --tenant public \
+  --namespace default \
+  --name exclamation \
+  --parallelism 3
+```
+
+You should see `Updated successfully` in the output. If you run the `get` command from above for the function, you can see that the parallelism has increased to 3, meaning that there are now three instances of the function running in our cluster:
+
+```json
+{
+  "tenant": "public",
+  "namespace": "default",
+  "name": "exclamation",
+  "className": "org.apache.pulsar.functions.api.examples.ExclamationFunction",
+  "output": "persistent://public/default/exclamation-output",
+  "autoAck": true,
+  "inputs": [
+    "persistent://public/default/exclamation-input"
+  ],
+  "parallelism": 3
+}
+```
+
+Finally, we can shut down our running function using the `delete` command:
+
+```bash
+$ bin/pulsar-admin functions delete \
+  --tenant public \
+  --namespace default \
+  --name exclamation
+```
+
+If you see `Deleted successfully` in the output, then you've successfully run, updated, and shut down a Pulsar Function running in cluster mode. Congrats! Now, let's go even further and run a brand new function in the next section.
+
+## Writing and running a new function
+
+> In order to write and run the [Python](functions-api.md#functions-for-python) function below, you'll need to install a few dependencies:
+> ```bash
+> $ pip install pulsar-client
+> ```
+
+In the above examples, we ran and managed a pre-written Pulsar Function and saw how it worked. To really get our hands dirty, let's write our own function from scratch, using the Python API. This simple function will also take a string as input, but it will reverse the string and publish the resulting reversed string to the specified topic.
+
+First, create a new Python file:
+
+```bash
+$ touch reverse.py
+```
+
+In that file, add the following:
+
+```python
+def process(input):
+    return input[::-1]
+```
+
+Here, the `process` method defines the processing logic of the Pulsar Function. It simply uses some Python slice magic to reverse each incoming string. Now, we can deploy the function using `create`:
+
+```bash
+$ bin/pulsar-admin functions create \
+  --py reverse.py \
+  --classname reverse \
+  --inputs persistent://public/default/backwards \
+  --output persistent://public/default/forwards \
+  --tenant public \
+  --namespace default \
+  --name reverse
+```
+
+If you see `Created successfully`, the function is ready to accept incoming messages. Because the function is running in cluster mode, we can **trigger** the function using the [`trigger`](reference-pulsar-admin.md#trigger) command. This command will send a message that we specify to the function and also give us the function's output. Here's an example:
+
+```bash
+$ bin/pulsar-admin functions trigger \
+  --name reverse \
+  --tenant public \
+  --namespace default \
+  --trigger-value "sdrawrof won si tub sdrawkcab saw gnirts sihT"
+```
+
+You should get this output:
+
+```
+This string was backwards but is now forwards
+```
+
+Once again, success! We created a brand new Pulsar Function, deployed it in our Pulsar standalone cluster in [cluster mode](#run-a-pulsar-function-in-cluster-mode) and successfully triggered the function. If you're ready for more, check out one of these docs:
+
+* [The Pulsar Functions API](functions-api.md)
+* [Deploying Pulsar Functions](functions-deploying.md)
diff --git a/site2/website/versioned_docs/version-2.2.0/getting-started-clients.md b/site2/website/versioned_docs/version-2.2.0/getting-started-clients.md
new file mode 100644
index 0000000000..4cba67300f
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/getting-started-clients.md
@@ -0,0 +1,58 @@
+---
+id: version-2.2.0-client-libraries
+title: Pulsar client libraries
+sidebar_label: Client libraries
+original_id: client-libraries
+---
+
+Pulsar currently has client libraries available for the following languages:
+
+* [Java](#java-client)
+* [Go](#go-client)
+* [Python](#python-client)
+* [C++](#c-client)
+
+## Java client
+
+For a tutorial on using the Pulsar Java client to produce and consume messages, see [The Pulsar Java client](client-libraries-java.md).
+
+There are also two independent sets of Javadoc API docs available:
+
+Library | Purpose
+:-------|:-------
+[`org.apache.pulsar.client.api`](/api/client) | The [Pulsar Java client](client-libraries-java.md) for producing and consuming messages on Pulsar topics
+[`org.apache.pulsar.client.admin`](/api/admin) | The Java client for the [Pulsar admin interface](admin-api-overview.md)
+
+
+## Go client
+
+For a tutorial on using the Pulsar Go client, see [The Pulsar Go client](client-libraries-go.md).
+
+
+## Python client
+
+For a tutorial on using the Pulsar Python client, see [The Pulsar Python client](client-libraries-python.md).
+
+There are also [pdoc](https://github.com/BurntSushi/pdoc)-generated API docs for the Python client [here](/api/python).
+
+## C++ client
+
+For a tutorial on using the Pulsar C++ client, see [The Pulsar C++ client](client-libraries-cpp.md).
+
+There are also [Doxygen](http://www.stack.nl/~dimitri/doxygen/)-generated API docs for the C++ client [here](/api/cpp).
+
+## Feature Matrix
+
+A matrix listing feature support across the different client languages in Pulsar master can be found [here](https://github.com/apache/pulsar/wiki/Client-Features-Matrix).
+
+## Third-party Clients
+
+Besides the officially released clients, there are also multiple community projects developing Pulsar clients in other languages.
+
+> If you have developed a Pulsar client that isn't listed here, feel free to submit a pull request to add it to the list below.
+
+| Language | Project | Maintainer | License | Description |
+|----------|---------|------------|---------|-------------|
+| Go | [pulsar-client-go](https://github.com/Comcast/pulsar-client-go) | [Comcast](https://github.com/Comcast) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | A native golang client |
+| Go | [go-pulsar](https://github.com/t2y/go-pulsar) | [t2y](https://github.com/t2y) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | |
+| Scala | [pulsar4s](https://github.com/sksamuel/pulsar4s) | [sksamuel](https://github.com/sksamuel) | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) | Idiomatic, typesafe, and reactive Scala client for Apache Pulsar |
diff --git a/site2/website/versioned_docs/version-2.2.0/getting-started-standalone.md b/site2/website/versioned_docs/version-2.2.0/getting-started-standalone.md
new file mode 100644
index 0000000000..b22210989f
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/getting-started-standalone.md
@@ -0,0 +1,198 @@
+---
+id: version-2.2.0-standalone
+title: Setting up a local standalone cluster
+sidebar_label: Run Pulsar locally
+original_id: standalone
+---
+
+For the purposes of local development and testing, you can run Pulsar in standalone mode on your own machine. Standalone mode includes a Pulsar broker as well as the necessary ZooKeeper and BookKeeper components running inside of a single Java Virtual Machine (JVM) process.
+
+> #### Pulsar in production? 
+> If you're looking to run a full production Pulsar installation, see the [Deploying a Pulsar instance](deploy-bare-metal.md) guide.
+
+## Run Pulsar Standalone Manually
+
+### System requirements
+
+Pulsar is currently available for **MacOS** and **Linux**. In order to use Pulsar, you'll need to install [Java 8](http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html).
+
+
+### Installing Pulsar
+
+To get started running Pulsar, download a binary tarball release in one of the following ways:
+
+* by clicking the link below and downloading the release from an Apache mirror:
+
+  * <a href="pulsar:binary_release_url" download>Pulsar {{pulsar:version}} binary release</a>
+
+* from the Pulsar [downloads page](pulsar:download_page_url)
+* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+* using [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  $ wget pulsar:binary_release_url
+  ```
+
+Once the tarball is downloaded, untar it and `cd` into the resulting directory:
+
+```bash
+$ tar xvfz apache-pulsar-{{pulsar:version}}-bin.tar.gz
+$ cd apache-pulsar-{{pulsar:version}}
+```
+
+### What your package contains
+
+The Pulsar binary package initially contains the following directories:
+
+Directory | Contains
+:---------|:--------
+`bin` | Pulsar's command-line tools, such as [`pulsar`](reference-cli-tools.md#pulsar) and [`pulsar-admin`](reference-pulsar-admin.md)
+`conf` | Configuration files for Pulsar, including for [broker configuration](reference-configuration.md#broker), [ZooKeeper configuration](reference-configuration.md#zookeeper), and more
+`examples` | A Java JAR file containing example [Pulsar Functions](functions-overview.md)
+`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used by Pulsar
+`licenses` | License files, in `.txt` form, for various components of the Pulsar [codebase](developing-codebase.md)
+
+These directories will be created once you begin running Pulsar:
+
+Directory | Contains
+:---------|:--------
+`data` | The data storage directory used by ZooKeeper and BookKeeper
+`instances` | Artifacts created for [Pulsar Functions](functions-overview.md)
+`logs` | Logs created by the installation
+
+
+### Installing Builtin Connectors
+
+Since release `2.1.0-incubating`, Pulsar releases a separate binary distribution, containing all the `builtin` connectors.
+If you would like to enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:
+
+* by clicking the link below and downloading the release from an Apache mirror:
+
+  * <a href="pulsar:connector_release_url" download>Pulsar IO Connectors {{pulsar:version}} release</a>
+
+* from the Pulsar [downloads page](pulsar:download_page_url)
+* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+* using [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  $ wget pulsar:connector_release_url
+  ```
+
+Once the tarball is downloaded, untar the io-connectors package and copy the connectors into a `connectors` directory
+in the pulsar directory:
+
+```bash
+$ tar xvfz /path/to/apache-pulsar-io-connectors-{{pulsar:version}}-bin.tar.gz
+
+# you will find a directory named `apache-pulsar-io-connectors-{{pulsar:version}}` in the pulsar directory;
+# copy its connectors into a `connectors` directory:
+
+$ cp -r apache-pulsar-io-connectors-{{pulsar:version}}/connectors connectors
+
+$ ls connectors
+pulsar-io-aerospike-{{pulsar:version}}.nar
+pulsar-io-cassandra-{{pulsar:version}}.nar
+pulsar-io-kafka-{{pulsar:version}}.nar
+pulsar-io-kinesis-{{pulsar:version}}.nar
+pulsar-io-rabbitmq-{{pulsar:version}}.nar
+pulsar-io-twitter-{{pulsar:version}}.nar
+...
+```
+
+> #### NOTES
+>
+> If you are running Pulsar in a bare metal cluster, you need to make sure `connectors` tarball is unzipped in every broker's pulsar directory
+> (or in every function-worker's pulsar directory if you are running a separate worker cluster for Pulsar functions).
+> 
+> If you are [running Pulsar in Docker](getting-started-docker.md) or deploying Pulsar using a docker image (e.g. [K8S](deploy-kubernetes.md) or [DCOS](deploy-dcos.md)),
+> you can use `apachepulsar/pulsar-all` image instead of `apachepulsar/pulsar` image. `apachepulsar/pulsar-all` image has already bundled [all builtin connectors](io-overview.md#working-with-connectors).
+
+### Starting the cluster
+
+Once you have an up-to-date local copy of the release, you can start up a local cluster using the [`pulsar`](reference-cli-tools.md#pulsar) command, which is stored in the `bin` directory, and specifying that you want to start up Pulsar in standalone mode:
+
+```bash
+$ bin/pulsar standalone
+```
+
+If Pulsar has been successfully started, you should see `INFO`-level log messages like this:
+
+```bash
+2017-06-01 14:46:29,192 - INFO  - [main:WebSocketService@95] - Global Zookeeper cache started
+2017-06-01 14:46:29,192 - INFO  - [main:AuthenticationService@61] - Authentication is disabled
+2017-06-01 14:46:29,192 - INFO  - [main:WebSocketService@108] - Pulsar WebSocket Service started
+```
+
+> #### Automatically created namespace
+> When you start a local standalone cluster, Pulsar will automatically create a `public/default` [namespace](concepts-messaging.md#namespaces) that you can use for development purposes. All Pulsar topics are managed within namespaces. For more info, see [Topics](concepts-messaging.md#topics).
+
+## Run Pulsar Standalone in Docker
+
+Alternatively, you can run Pulsar standalone locally in Docker:
+
+```bash
+docker run -it -p 80:80 -p 8080:8080 -p 6650:6650 apachepulsar/pulsar-standalone
+```
+
+The command forwards the following ports to localhost:
+
+- 80: the port for the Pulsar dashboard
+- 8080: the HTTP service URL for the Pulsar service
+- 6650: the binary protocol service URL for the Pulsar service
+
+After the Docker container is running, you can access the dashboard at http://localhost.
+
+## Testing your cluster setup
+
+Pulsar provides a CLI tool called [`pulsar-client`](reference-cli-tools.md#pulsar-client) that enables you to do things like send messages to a Pulsar topic in a running cluster. This command will send a simple message saying `hello-pulsar` to the `my-topic` topic:
+
+```bash
+$ bin/pulsar-client produce my-topic --messages "hello-pulsar"
+```
+
+If the message has been successfully published to the topic, you should see a confirmation like this in the `pulsar-client` logs:
+
+```
+13:09:39.356 [main] INFO  org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced
+```
+
+
+> #### No need to explicitly create new topics
+> You may have noticed that we did not explicitly create the `my-topic` topic to which we sent the `hello-pulsar` message. If you attempt to write a message to a topic that does not yet exist, Pulsar will automatically create that topic for you.
+
+## Using Pulsar clients locally
+
+Pulsar currently offers client libraries for [Java](client-libraries-java.md), [Python](client-libraries-python.md), and [C++](client-libraries-cpp.md). If you're running a local standalone cluster, you can use one of these root URLs for interacting with your cluster:
+
+* `http://localhost:8080`
+* `pulsar://localhost:6650`
+
+Here's an example producer for a Pulsar topic using the [Java](client-libraries-java.md) client:
+
+```java
+String localClusterUrl = "pulsar://localhost:6650";
+
+PulsarClient client = PulsarClient.builder().serviceUrl(localClusterUrl).build();
+Producer<byte[]> producer = client.newProducer().topic("my-topic").create();
+producer.send("hello-pulsar".getBytes());
+```
+
+Here's an example [Python](client-libraries-python.md) producer:
+
+```python
+import pulsar
+
+client = pulsar.Client('pulsar://localhost:6650')
+producer = client.create_producer('my-topic')
+```
+
+Finally, here's an example [C++](client-libraries-cpp.md) producer:
+
+```cpp
+Client client("pulsar://localhost:6650");
+Producer producer;
+Result result = client.createProducer("my-topic", producer);
+if (result != ResultOk) {
+    LOG_ERROR("Error creating producer: " << result);
+    return -1;
+}
+```
diff --git a/site2/website/versioned_docs/version-2.2.0/io-managing.md b/site2/website/versioned_docs/version-2.2.0/io-managing.md
new file mode 100644
index 0000000000..31848dce74
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/io-managing.md
@@ -0,0 +1,162 @@
+---
+id: version-2.2.0-io-managing
+title: Managing Connectors
+sidebar_label: Managing Connectors
+original_id: io-managing
+---
+
+This section describes how to manage Pulsar IO connectors in a Pulsar cluster. You will learn how to:
+
+- Deploy builtin connectors
+- Monitor and update running connectors with Pulsar Admin CLI
+- Deploy customized connectors
+- Upgrade a connector
+
+## Using Builtin Connectors
+
+Pulsar bundles several [builtin connectors](io-overview.md#working-with-connectors) for moving data in and out
+of commonly used systems, such as databases and messaging systems. Getting set up to use these builtin connectors is simple: follow
+the [instructions](getting-started-standalone.md#installing-builtin-connectors) on installing builtin connectors. After setup, all
+the builtin connectors are automatically discovered by Pulsar brokers (or function workers), so no additional installation steps are
+required.
+
+## Configuring Connectors
+
+Configuring Pulsar IO connectors is straightforward: you provide a yaml configuration file when you [run connectors](#running-connectors).
+The yaml configuration file tells Pulsar where to locate the sources and sinks and how to connect those sources and sinks with Pulsar topics.
+
+Below is an example yaml configuration file for Cassandra Sink:
+
+```yaml
+tenant: public
+namespace: default
+name: cassandra-test-sink
+...
+# cassandra specific config
+configs:
+    roots: "localhost:9042"
+    keyspace: "pulsar_test_keyspace"
+    columnFamily: "pulsar_test_table"
+    keyname: "key"
+    columnName: "col"
+```
+
+This example yaml tells Pulsar which Cassandra cluster to connect to, which `keyspace` and `columnFamily` to use in Cassandra for collecting data,
+and how to map a Pulsar message into the Cassandra table key and columns.
+
+For details, consult the documentation for [individual connectors](io-overview.md#working-with-connectors).
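To make the mapping concrete, here is a minimal Python sketch (illustrative only, not part of Pulsar) of how a sink configured as above could turn a Pulsar message into a CQL statement. The `build_insert` helper, and using the message key and payload as the key and column values, are assumptions for illustration:

```python
# Illustrative sketch: how a Cassandra sink config maps a Pulsar
# message (key + payload) onto a CQL INSERT statement.
# "build_insert" is a hypothetical helper, not a Pulsar API.

config = {
    "roots": "localhost:9042",
    "keyspace": "pulsar_test_keyspace",
    "columnFamily": "pulsar_test_table",
    "keyname": "key",
    "columnName": "col",
}

def build_insert(config, message_key, payload):
    """Build the CQL statement a sink like this could execute."""
    return (
        f"INSERT INTO {config['keyspace']}.{config['columnFamily']} "
        f"({config['keyname']}, {config['columnName']}) "
        f"VALUES ('{message_key}', '{payload}')"
    )

print(build_insert(config, "key-0", "key-0"))
```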
+
+## Running Connectors
+
+Pulsar connectors can be managed using the [`source`](reference-pulsar-admin.md#source) and [`sink`](reference-pulsar-admin.md#sink) commands of the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool.
+
+### Running sources
+
+You can submit a source to be run in an existing Pulsar cluster using a command of this form:
+
+```bash
+$ ./bin/pulsar-admin source create --classname  <classname> --archive <jar-location> --tenant <tenant> --namespace <namespace> --name <source-name> --destination-topic-name <output-topic>
+```
+
+Here’s an example command:
+
+```bash
+bin/pulsar-admin source create --classname org.apache.pulsar.io.twitter.TwitterFireHose --archive ~/application.jar --tenant test --namespace ns1 --name twitter-source --destination-topic-name twitter_data
+```
+
+Instead of submitting a source to run on an existing Pulsar cluster, you can alternatively run a source as a process on your local machine:
+
+```bash
+bin/pulsar-admin source localrun --classname  org.apache.pulsar.io.twitter.TwitterFireHose --archive ~/application.jar --tenant test --namespace ns1 --name twitter-source --destination-topic-name twitter_data
+```
+
+If you are submitting a built-in source, you don't need to specify `--classname` and `--archive`.
+You can simply specify the source type with `--source-type`. The command to submit a built-in source has
+the following form:
+
+```bash
+./bin/pulsar-admin source create \
+    --tenant <tenant> \
+    --namespace <namespace> \
+    --name <source-name> \
+    --destination-topic-name <input-topics> \
+    --source-type <source-type>
+```
+
+Here's an example to submit a Kafka source:
+
+```bash
+./bin/pulsar-admin source create \
+    --tenant test-tenant \
+    --namespace test-namespace \
+    --name test-kafka-source \
+    --destination-topic-name pulsar_sink_topic \
+    --source-type kafka
+```
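The rule above (`--classname`/`--archive` for a custom source, `--source-type` for a builtin one) can be sketched as a small Python helper. This helper is illustrative only, not part of Pulsar; it just assembles the argument list for `pulsar-admin`:

```python
# Illustrative helper that assembles the pulsar-admin argument list for
# "source create": --source-type alone identifies a builtin connector,
# otherwise --classname and --archive point at a custom implementation.

def source_create_args(tenant, namespace, name, destination_topic,
                       source_type=None, classname=None, archive=None):
    args = [
        "source", "create",
        "--tenant", tenant,
        "--namespace", namespace,
        "--name", name,
        "--destination-topic-name", destination_topic,
    ]
    if source_type is not None:
        # builtin source: the type alone identifies the connector
        args += ["--source-type", source_type]
    else:
        # custom source: point at the implementation class and archive
        args += ["--classname", classname, "--archive", archive]
    return args

print(" ".join(source_create_args(
    "test-tenant", "test-namespace", "test-kafka-source",
    "pulsar_sink_topic", source_type="kafka")))
```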
+
+### Running Sinks
+
+You can submit a sink to be run in an existing Pulsar cluster using a command of this form:
+
+```bash
+./bin/pulsar-admin sink create --classname <classname> --archive <jar-location> --tenant <tenant> --namespace <namespace> --name <sink-name> --inputs <input-topics>
+```
+
+Here’s an example command:
+
+```bash
+./bin/pulsar-admin sink create --classname  org.apache.pulsar.io.cassandra --archive ~/application.jar --tenant test --namespace ns1 --name cassandra-sink --inputs test_topic
+```
+
+Instead of submitting a sink to run on an existing Pulsar cluster, you can alternatively run a sink as a process on your local machine:
+
+```bash
+./bin/pulsar-admin sink localrun --classname  org.apache.pulsar.io.cassandra --archive ~/application.jar --tenant test --namespace ns1 --name cassandra-sink --inputs test_topic
+```
+
+If you are submitting a built-in sink, you don't need to specify `--classname` and `--archive`.
+You can simply specify the sink type with `--sink-type`. The command to submit a built-in sink has
+the following form:
+
+```bash
+./bin/pulsar-admin sink create \
+    --tenant <tenant> \
+    --namespace <namespace> \
+    --name <sink-name> \
+    --inputs <input-topics> \
+    --sink-type <sink-type>
+```
+
+Here's an example to submit a Cassandra sink:
+
+```bash
+./bin/pulsar-admin sink create \
+    --tenant test-tenant \
+    --namespace test-namespace \
+    --name test-cassandra-sink \
+    --inputs pulsar_input_topic \
+    --sink-type cassandra
+```
+
+## Monitoring Connectors
+
+Since Pulsar IO connectors run as [Pulsar Functions](functions-overview.md), you can use the [`functions`](reference-pulsar-admin.md#functions) commands
+available in the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool.
+
+### Retrieve Connector Metadata
+
+```bash
+bin/pulsar-admin functions get \
+    --tenant <tenant> \
+    --namespace <namespace> \
+    --name <connector-name>
+```
+
+### Retrieve Connector Running Status
+
+```bash
+bin/pulsar-admin functions getstatus \
+    --tenant <tenant> \
+    --namespace <namespace> \
+    --name <connector-name>
+```
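The status command prints JSON like the example shown earlier in this guide, which makes it easy to check connector health in a script. A minimal sketch (the output shape here mirrors the examples in this guide; treat the exact field names as version-dependent):

```python
import json

# Check connector health from the JSON that
# "pulsar-admin functions getstatus" prints.

status_json = """
{
  "functionStatusList": [
    {"running": true, "instanceId": "0",
     "workerId": "c-standalone-fw-localhost-6750"}
  ]
}
"""

def all_instances_running(status):
    """True only if at least one instance exists and all are running."""
    instances = status.get("functionStatusList", [])
    return bool(instances) and all(i.get("running") for i in instances)

status = json.loads(status_json)
print(all_instances_running(status))
```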
diff --git a/site2/website/versioned_docs/version-2.2.0/io-overview.md b/site2/website/versioned_docs/version-2.2.0/io-overview.md
new file mode 100644
index 0000000000..56d17bd86c
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/io-overview.md
@@ -0,0 +1,40 @@
+---
+id: version-2.2.0-io-overview
+title: Pulsar IO Overview
+sidebar_label: Overview
+original_id: io-overview
+---
+
+Messaging systems are most powerful when you can easily use them in conjunction with external systems like databases and other messaging systems. **Pulsar IO** is a feature of Pulsar that enables you to easily create, deploy, and manage Pulsar **connectors** that interact with external systems, such as [Apache Cassandra](https://cassandra.apache.org), [Aerospike](https://www.aerospike.com), and many others.
+
+> #### Pulsar IO and Pulsar Functions
+> Under the hood, Pulsar IO connectors are specialized [Pulsar Functions](functions-overview.md) purpose-built to interface with external systems. The [administrative interface](io-quickstart.md) for Pulsar IO is, in fact, quite similar to that of Pulsar Functions.
+
+## Sources and sinks
+
+Pulsar IO connectors come in two types:
+
+* **Sources** feed data *into* Pulsar from other systems. Common sources include other messaging systems and "firehose"-style data pipeline APIs.
+* **Sinks** are fed data *from* Pulsar. Common sinks include other messaging systems and SQL and NoSQL databases.
+
+This diagram illustrates the relationship between sources, sinks, and Pulsar:
+
+![Pulsar IO diagram](assets/pulsar-io.png "Pulsar IO connectors (sources and sinks)")
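As a rough mental model (plain Python, not the actual Pulsar IO API, which is Java-based), a source pushes records from an external system into a topic, and a sink drains a topic into an external system:

```python
from collections import deque

# Conceptual model only: a "topic" as a queue, a source feeding it
# from an external system, a sink draining it into another system.
# Real Pulsar IO connectors implement Java interfaces instead.

topic = deque()

def run_source(external_records, topic):
    """Source: external system -> Pulsar topic."""
    for record in external_records:
        topic.append(record)

def run_sink(topic, external_store):
    """Sink: Pulsar topic -> external system."""
    while topic:
        external_store.append(topic.popleft())

db_rows = []
run_source(["event-1", "event-2"], topic)
run_sink(topic, db_rows)
print(db_rows)
```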
+
+## Working with connectors
+
+Pulsar IO connectors can be managed via the [`pulsar-admin`](reference-pulsar-admin.md) CLI tool, in particular the [`source`](reference-pulsar-admin.md#source) and [`sink`](reference-pulsar-admin.md#sink) commands.
+
+> For a guide to managing connectors in your Pulsar installation, see the [Getting started with Pulsar IO](io-quickstart.md)
+
+The following connectors are currently available for Pulsar:
+
+|Name|Java Class|Documentation|
+|---|---|---|
+|[Aerospike sink](https://www.aerospike.com/)|[`org.apache.pulsar.io.aerospike.AerospikeSink`](https://github.com/apache/pulsar/blob/master/pulsar-io/aerospike/src/main/java/org/apache/pulsar/io/aerospike/AerospikeStringSink.java)|[Documentation](io-aerospike.md)|
+|[Cassandra sink](https://cassandra.apache.org)|[`org.apache.pulsar.io.cassandra.CassandraSink`](https://github.com/apache/pulsar/blob/master/pulsar-io/cassandra/src/main/java/org/apache/pulsar/io/cassandra/CassandraStringSink.java)|[Documentation](io-cassandra.md)|
+|[Kafka source](https://kafka.apache.org)|[`org.apache.pulsar.io.kafka.KafkaSource`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaStringSource.java)|[Documentation](io-kafka.md#source)|
+|[Kafka sink](https://kafka.apache.org)|[`org.apache.pulsar.io.kafka.KafkaSink`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaStringSink.java)|[Documentation](io-kafka.md#sink)|
+|[Kinesis sink](https://aws.amazon.com/kinesis/)|[`org.apache.pulsar.io.kinesis.KinesisSink`](https://github.com/apache/pulsar/blob/master/pulsar-io/kinesis/src/main/java/org/apache/pulsar/io/kinesis/KinesisSink.java)|[Documentation](io-kinesis.md#sink)|
+|[RabbitMQ source](https://www.rabbitmq.com)|[`org.apache.pulsar.io.rabbitmq.RabbitMQSource`](https://github.com/apache/pulsar/blob/master/pulsar-io/rabbitmq/src/main/java/org/apache/pulsar/io/rabbitmq/RabbitMQSource.java)|[Documentation](io-rabbitmq.md#source)|
+|[Twitter Firehose source](https://developer.twitter.com/en/docs)|[`org.apache.pulsar.io.twitter.TwitterFireHose`](https://github.com/apache/pulsar/blob/master/pulsar-io/twitter/src/main/java/org/apache/pulsar/io/twitter/TwitterFireHose.java)|[Documentation](io-twitter.md#source)|
diff --git a/site2/website/versioned_docs/version-2.2.0/io-quickstart.md b/site2/website/versioned_docs/version-2.2.0/io-quickstart.md
new file mode 100644
index 0000000000..b917f79c77
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/io-quickstart.md
@@ -0,0 +1,400 @@
+---
+id: version-2.2.0-io-quickstart
+title: "Tutorial: Connecting Pulsar with Apache Cassandra"
+sidebar_label: Getting started
+original_id: io-quickstart
+---
+
+This tutorial provides a hands-on look at how you can move data out of Pulsar without writing a single line of code.
+It is helpful to review the [concepts](io-overview.md) for Pulsar I/O in tandem with running the steps in this guide
+to gain a deeper understanding. At the end of this tutorial, you will be able to:
+
+- Connect your Pulsar cluster with your Cassandra cluster
+
+> #### Tip
+>
+> 1. These instructions assume you are running Pulsar in [standalone mode](getting-started-standalone.md). However, all
+> the commands used in this tutorial can be used in a multi-node Pulsar cluster without any changes.
+>
+> 2. All the commands are assumed to be run from the root directory of a Pulsar binary distribution.
+
+## Installing Pulsar
+
+To get started running Pulsar, download a binary tarball release in one of the following ways:
+
+* by clicking the link below and downloading the release from an Apache mirror:
+
+  * <a href="pulsar:binary_release_url" download>Pulsar {{pulsar:version}} binary release</a>
+
+* from the Pulsar [downloads page](pulsar:download_page_url)
+* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+* using [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  $ wget pulsar:binary_release_url
+  ```
+
+Once the tarball is downloaded, untar it and `cd` into the resulting directory:
+
+```bash
+$ tar xvfz apache-pulsar-{{pulsar:version}}-bin.tar.gz
+$ cd apache-pulsar-{{pulsar:version}}
+```
+
+## Installing Builtin Connectors
+
+Since release `2.1.0-incubating`, Pulsar releases a separate binary distribution, containing all the `builtin` connectors.
+If you would like to enable those `builtin` connectors, you can download the connectors tarball release in one of the following ways:
+
+* by clicking the link below and downloading the release from an Apache mirror:
+
+  * <a href="pulsar:connector_release_url" download>Pulsar IO Connectors {{pulsar:version}} release</a>
+
+* from the Pulsar [downloads page](pulsar:download_page_url)
+* from the Pulsar [releases page](https://github.com/apache/pulsar/releases/latest)
+* using [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  $ wget pulsar:connector_release_url
+  ```
+
+Once the tarball is downloaded, untar the io-connectors package in the pulsar directory and copy the connectors into a `connectors`
+directory in the pulsar directory:
+
+```bash
+$ tar xvfz /path/to/apache-pulsar-io-connectors-{{pulsar:version}}-bin.tar.gz
+
+// you will find a directory named `apache-pulsar-io-connectors-{{pulsar:version}}` in the pulsar directory
+// then copy the connectors
+
+$ cp -r apache-pulsar-io-connectors-{{pulsar:version}}/connectors connectors
+
+$ ls connectors
+pulsar-io-aerospike-{{pulsar:version}}.nar
+pulsar-io-cassandra-{{pulsar:version}}.nar
+pulsar-io-kafka-{{pulsar:version}}.nar
+pulsar-io-kinesis-{{pulsar:version}}.nar
+pulsar-io-rabbitmq-{{pulsar:version}}.nar
+pulsar-io-twitter-{{pulsar:version}}.nar
+...
+```
+
+
+## Start Pulsar Service
+
+```bash
+bin/pulsar standalone
+```
+
+All the components of a Pulsar service start in order. You can curl the Pulsar service endpoints to make sure the service is up and running correctly.
+
+1. Check the Pulsar binary protocol port.
+
+```bash
+telnet localhost 6650
+```
+
+2. Check the Pulsar Functions cluster.
+
+```bash
+curl -s http://localhost:8080/admin/v2/functions/cluster
+```
+
+Example output:
+```shell
+[{"workerId":"c-standalone-fw-localhost-6750","workerHostname":"localhost","port":6750}]
+```
+
+3. Make sure the public tenant and default namespace exist.
+
+```bash
+curl -s http://localhost:8080/admin/v2/namespaces/public
+```
+
+Example output:
+```shell
+["public/default","public/functions"]
+```
+
+4. Make sure all builtin connectors are listed as available.
+
+```bash
+curl -s http://localhost:8080/admin/v2/functions/connectors
+```
+
+Example output:
+```json
+[{"name":"aerospike","description":"Aerospike database sink","sinkClass":"org.apache.pulsar.io.aerospike.AerospikeStringSink"},{"name":"cassandra","description":"Writes data into Cassandra","sinkClass":"org.apache.pulsar.io.cassandra.CassandraStringSink"},{"name":"kafka","description":"Kafka source and sink connector","sourceClass":"org.apache.pulsar.io.kafka.KafkaStringSource","sinkClass":"org.apache.pulsar.io.kafka.KafkaBytesSink"},{"name":"kinesis","description":"Kinesis sink connector","sinkClass":"org.apache.pulsar.io.kinesis.KinesisSink"},{"name":"rabbitmq","description":"RabbitMQ source connector","sourceClass":"org.apache.pulsar.io.rabbitmq.RabbitMQSource"},{"name":"twitter","description":"Ingest data from Twitter firehose","sourceClass":"org.apache.pulsar.io.twitter.TwitterFireHose"}]
+```
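The connectors listing can also be checked programmatically. A sketch assuming the output shape shown above (the sample here is abbreviated from that output):

```python
import json

# Parse the connectors listing returned by the REST endpoint above and
# collect the available connector names.

connectors_json = """
[
  {"name": "cassandra", "description": "Writes data into Cassandra",
   "sinkClass": "org.apache.pulsar.io.cassandra.CassandraStringSink"},
  {"name": "kafka", "description": "Kafka source and sink connector",
   "sourceClass": "org.apache.pulsar.io.kafka.KafkaStringSource",
   "sinkClass": "org.apache.pulsar.io.kafka.KafkaBytesSink"}
]
"""

available = {c["name"] for c in json.loads(connectors_json)}
print(available)
```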
+
+If an error occurs while starting the Pulsar service, you should see an exception in the terminal where you are running `pulsar standalone`,
+or you can check the logs in the `logs` directory under the Pulsar directory.
+
+## Connect Pulsar to Apache Cassandra
+
+> Make sure you have Docker available on your machine. If you don't have Docker installed, you can follow the [instructions](https://docs.docker.com/docker-for-mac/install/).
+
+We use the `cassandra` Docker image to start a single-node Cassandra cluster in Docker.
+
+### Setup the Cassandra Cluster
+
+#### Start a Cassandra Cluster
+
+```bash
+docker run -d --rm --name=cassandra -p 9042:9042 cassandra
+```
+
+Before moving to the next steps, make sure the Cassandra cluster is up and running.
+
+1. Make sure the Docker container is running.
+
+```bash
+docker ps
+```
+
+2. Check the Cassandra logs to make sure the Cassandra process is running as expected.
+
+```bash
+docker logs cassandra
+```
+
+3. Check the cluster status
+
+```bash
+docker exec cassandra nodetool status
+```
+
+Example output:
+```
+Datacenter: datacenter1
+=======================
+Status=Up/Down
+|/ State=Normal/Leaving/Joining/Moving
+--  Address     Load       Tokens       Owns (effective)  Host ID                               Rack
+UN  172.17.0.2  103.67 KiB  256          100.0%            af0e4b2f-84e0-4f0b-bb14-bd5f9070ff26  rack1
+```
+
+#### Create keyspace and table
+
+We use `cqlsh` to connect to the Cassandra cluster to create the keyspace and table.
+
+```bash
+$ docker exec -ti cassandra cqlsh localhost
+Connected to Test Cluster at localhost:9042.
+[cqlsh 5.0.1 | Cassandra 3.11.2 | CQL spec 3.4.4 | Native protocol v4]
+Use HELP for help.
+cqlsh>
+```
+
+All the following commands are executed in `cqlsh`.
+
+##### Create keyspace `pulsar_test_keyspace`
+
+```bash
+cqlsh> CREATE KEYSPACE pulsar_test_keyspace WITH replication = {'class':'SimpleStrategy', 'replication_factor':1};
+```
+
+##### Create table `pulsar_test_table`
+
+```bash
+cqlsh> USE pulsar_test_keyspace;
+cqlsh:pulsar_test_keyspace> CREATE TABLE pulsar_test_table (key text PRIMARY KEY, col text);
+```
+
+### Configure a Cassandra Sink
+
+Now that we have a Cassandra cluster running locally, we can configure a Cassandra sink connector.
+The Cassandra sink connector reads messages from a Pulsar topic and writes them into a Cassandra table.
+
+In order to run a Cassandra sink connector, you need to prepare a yaml config file with the information the Pulsar IO
+runtime needs to know: for example, how Pulsar IO can find the Cassandra cluster, and which keyspace and table
+Pulsar IO should use for writing Pulsar messages.
+
+Create a file `examples/cassandra-sink.yml` with the following content:
+
+```yaml
+configs:
+    roots: "localhost:9042"
+    keyspace: "pulsar_test_keyspace"
+    columnFamily: "pulsar_test_table"
+    keyname: "key"
+    columnName: "col"
+```
+
+To learn more about Cassandra Connector, see [Cassandra Connector](io-cassandra.md).
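Before submitting the sink, it can help to sanity-check the config file. A minimal Python sketch: the required-key list here simply mirrors the fields used in this tutorial and is an assumption, not an official schema:

```python
# Minimal sanity check for the sink config used in this tutorial.
# The set of required keys mirrors the fields shown above; it is not
# an official Pulsar schema.

REQUIRED_KEYS = {"roots", "keyspace", "columnFamily", "keyname", "columnName"}

def missing_keys(configs):
    """Return the required keys absent from the given config mapping."""
    return REQUIRED_KEYS - set(configs)

configs = {
    "roots": "localhost:9042",
    "keyspace": "pulsar_test_keyspace",
    "columnFamily": "pulsar_test_table",
    "keyname": "key",
    "columnName": "col",
}

print(missing_keys(configs))  # empty set when the config is complete
```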
+
+### Submit a Cassandra Sink
+
+Pulsar provides the [CLI](reference-cli-tools.md) for running and managing Pulsar I/O connectors.
+
+We can run the following command to submit a sink connector of type `cassandra` with the config file `examples/cassandra-sink.yml`.
+
+```shell
+bin/pulsar-admin sink create \
+    --tenant public \
+    --namespace default \
+    --name cassandra-test-sink \
+    --sink-type cassandra \
+    --sink-config-file examples/cassandra-sink.yml \
+    --inputs test_cassandra
+```
+
+Once the command is executed, Pulsar creates a sink connector named `cassandra-test-sink`. The sink connector runs
+as a Pulsar Function and writes the messages produced to the topic `test_cassandra` into the Cassandra table `pulsar_test_table`.
+
+### Inspect the Cassandra Sink
+
+Since an IO connector runs as a [Pulsar Function](functions-overview.md), you can use the [functions CLI](reference-pulsar-admin.md#functions)
+to inspect and manage the IO connectors.
+
+#### Retrieve Sink Info
+
+```bash
+bin/pulsar-admin functions get \
+    --tenant public \
+    --namespace default \
+    --name cassandra-test-sink
+```
+
+Example output:
+
+```shell
+{
+  "tenant": "public",
+  "namespace": "default",
+  "name": "cassandra-test-sink",
+  "className": "org.apache.pulsar.functions.api.utils.IdentityFunction",
+  "autoAck": true,
+  "parallelism": 1,
+  "source": {
+    "topicsToSerDeClassName": {
+      "test_cassandra": ""
+    }
+  },
+  "sink": {
+    "configs": "{\"roots\":\"cassandra\",\"keyspace\":\"pulsar_test_keyspace\",\"columnFamily\":\"pulsar_test_table\",\"keyname\":\"key\",\"columnName\":\"col\"}",
+    "builtin": "cassandra"
+  },
+  "resources": {}
+}
+```
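The useful bits of this output can be extracted programmatically; note that the inner sink `configs` field is itself a JSON-encoded string. A sketch over an abbreviated copy of the output above:

```python
import json

# Pull the input topics and the inner sink configuration out of the
# "functions get" output (abbreviated from the example above). The
# "configs" field is a JSON string nested inside the outer JSON.

sink_info = json.loads("""
{
  "name": "cassandra-test-sink",
  "source": {"topicsToSerDeClassName": {"test_cassandra": ""}},
  "sink": {
    "configs": "{\\"keyspace\\":\\"pulsar_test_keyspace\\",\\"columnFamily\\":\\"pulsar_test_table\\"}",
    "builtin": "cassandra"
  }
}
""")

input_topics = list(sink_info["source"]["topicsToSerDeClassName"])
sink_configs = json.loads(sink_info["sink"]["configs"])  # decode nested JSON
print(input_topics, sink_info["sink"]["builtin"], sink_configs["keyspace"])
```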
+
+#### Check Sink Running Status
+
+```bash
+bin/pulsar-admin functions getstatus \
+    --tenant public \
+    --namespace default \
+    --name cassandra-test-sink
+```
+
+Example output:
+
+```shell
+{
+  "functionStatusList": [
+    {
+      "running": true,
+      "instanceId": "0",
+      "metrics": {
+        "metrics": {
+          "__total_processed__": {},
+          "__total_successfully_processed__": {},
+          "__total_system_exceptions__": {},
+          "__total_user_exceptions__": {},
+          "__total_serialization_exceptions__": {},
+          "__avg_latency_ms__": {}
+        }
+      },
+      "workerId": "c-standalone-fw-localhost-6750"
+    }
+  ]
+}
+```
+
+### Verify the Cassandra Sink
+
+Now let's produce some messages to `test_cassandra`, the input topic of the Cassandra sink.
+
+```bash
+for i in {0..9}; do bin/pulsar-client produce -m "key-$i" -n 1 test_cassandra; done
+```
+
+Inspect the sink running status again. You should see that the 10 messages have been processed by the Cassandra sink.
+
+```bash
+bin/pulsar-admin functions getstatus \
+    --tenant public \
+    --namespace default \
+    --name cassandra-test-sink
+```
+
+Example output:
+
+```shell
+{
+  "functionStatusList": [
+    {
+      "running": true,
+      "numProcessed": "11",
+      "numSuccessfullyProcessed": "11",
+      "lastInvocationTime": "1532031040117",
+      "instanceId": "0",
+      "metrics": {
+        "metrics": {
+          "__total_processed__": {
+            "count": 5.0,
+            "sum": 5.0,
+            "max": 5.0
+          },
+          "__total_successfully_processed__": {
+            "count": 5.0,
+            "sum": 5.0,
+            "max": 5.0
+          },
+          "__total_system_exceptions__": {},
+          "__total_user_exceptions__": {},
+          "__total_serialization_exceptions__": {},
+          "__avg_latency_ms__": {}
+        }
+      },
+      "workerId": "c-standalone-fw-localhost-6750"
+    }
+  ]
+}
+```
+
+Finally, let's inspect the results in Cassandra using `cqlsh`:
+
+```bash
+docker exec -ti cassandra cqlsh localhost
+```
+
+Select the rows from the Cassandra table `pulsar_test_table`:
+
+```bash
+cqlsh> use pulsar_test_keyspace;
+cqlsh:pulsar_test_keyspace> select * from pulsar_test_table;
+
+ key    | col
+--------+--------
+  key-5 |  key-5
+  key-0 |  key-0
+  key-9 |  key-9
+  key-2 |  key-2
+  key-1 |  key-1
+  key-3 |  key-3
+  key-6 |  key-6
+  key-7 |  key-7
+  key-4 |  key-4
+  key-8 |  key-8
+```
+
+### Delete the Cassandra Sink
+
+```shell
+bin/pulsar-admin sink delete \
+    --tenant public \
+    --namespace default \
+    --name cassandra-test-sink
+```
diff --git a/site2/website/versioned_docs/version-2.2.0/reference-configuration.md b/site2/website/versioned_docs/version-2.2.0/reference-configuration.md
new file mode 100644
index 0000000000..ea4da1b10c
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/reference-configuration.md
@@ -0,0 +1,468 @@
+---
+id: version-2.2.0-reference-configuration
+title: Pulsar configuration
+sidebar_label: Pulsar configuration
+original_id: reference-configuration
+---
+
+<style type="text/css">
+  table{
+    font-size: 80%;
+  }
+</style>
+
+
+Pulsar configuration can be managed via a series of configuration files contained in the [`conf`](https://github.com/apache/pulsar/tree/master/conf) directory of a Pulsar [installation](getting-started-standalone.md):
+
+* [BookKeeper](#bookkeeper)
+* [Broker](#broker)
+* [Client](#client)
+* [Service discovery](#service-discovery)
+* [Log4j](#log4j)
+* [Log4j shell](#log4j-shell)
+* [Standalone](#standalone)
+* [WebSocket](#websocket)
+* [ZooKeeper](#zookeeper)
+
+## BookKeeper
+
+BookKeeper is a replicated log storage system that Pulsar uses for durable storage of all messages.
+
+
+|Name|Description|Default|
+|---|---|---|
+|bookiePort|The port on which the bookie server listens.|3181|
+|allowLoopback|Whether the bookie is allowed to use a loopback interface as its primary interface (i.e. the interface used to establish its identity). By default, loopback interfaces are not allowed as the primary interface. Using a loopback interface as the primary interface usually indicates a configuration error. For example, it’s fairly common in some VPS setups to not configure a hostname or to have the hostname resolve to `127.0.0.1`. If this is the case, then all bookies in the cluster will establish their identities as `127.0.0.1:3181` and only one will be able to join the cluster. For VPSs configured like this, you should explicitly set the listening interface.|false|
+|listeningInterface|The network interface on which the bookie listens. If not set, the bookie will listen on all interfaces.|eth0|
+|journalDirectory|The directory where Bookkeeper outputs its write-ahead log (WAL)|data/bookkeeper/journal|
+|ledgerDirectories|The directory where Bookkeeper outputs ledger snapshots. This could define multiple directories to store snapshots separated by comma, for example `ledgerDirectories=/tmp/bk1-data,/tmp/bk2-data`. Ideally, ledger dirs and the journal dir are each in a different device, which reduces the contention between random I/O and sequential write. It is possible to run with a single disk, but performance will be significantly lower.|data/bookkeeper/ledgers|
+|ledgerManagerType|The type of ledger manager used to manage how ledgers are stored, managed, and garbage collected. See [BookKeeper Internals](http://bookkeeper.apache.org/docs/latest/getting-started/concepts) for more info.|hierarchical|
+|zkLedgersRootPath|The root ZooKeeper path used to store ledger metadata. This parameter is used by the ZooKeeper-based ledger manager as a root znode to store all ledgers.|/ledgers|
+|ledgerStorageClass|Ledger storage implementation class|org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage|
+|entryLogFilePreallocationEnabled|Enable or disable entry logger preallocation|true|
+|logSizeLimit|Max file size of the entry logger, in bytes. A new entry log file will be created when the old one reaches the file size limitation.|2147483648|
+|minorCompactionThreshold|Threshold of minor compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a minor compaction. If set to less than zero, the minor compaction is disabled.|0.2|
+|minorCompactionInterval|Time interval to run minor compaction, in seconds. If set to less than zero, the minor compaction is disabled.|3600|
+|majorCompactionThreshold|The threshold of major compaction. Entry log files whose remaining size percentage reaches below this threshold will be compacted in a major compaction. Those entry log files whose remaining size percentage is still higher than the threshold will never be compacted. If set to less than zero, the major compaction is disabled.|0.5|
+|majorCompactionInterval|The time interval to run major compaction, in seconds. If set to less than zero, the major compaction is disabled.|86400|
+|compactionMaxOutstandingRequests|Sets the maximum number of entries that can be compacted without flushing. When compacting, the entries are written to the entrylog and the new offsets are cached in memory. Once the entrylog is flushed the index is updated with the new offsets. This parameter controls the number of entries added to the entrylog before a flush is forced. A higher value for this parameter means more memory will be used for offsets. Each offset consists of 3 longs. This parameter should not be modified unless you’re fully aware of the consequences.|100000|
+|compactionRate|The rate at which compaction will read entries, in adds per second.|1000|
+|isThrottleByBytes|Throttle compaction by bytes or by entries.|false|
+|compactionRateByEntries|The rate at which compaction will read entries, in adds per second.|1000|
+|compactionRateByBytes|Set the rate at which compaction will read entries. The unit is bytes added per second.|1000000|
+|journalMaxSizeMB|Max file size of journal file, in megabytes. A new journal file will be created when the old one reaches the file size limitation.|2048|
+|journalMaxBackups|The max number of old journal files to keep. Keeping a number of old journal files helps data recovery in special cases.|5|
+|journalPreAllocSizeMB|How much space to pre-allocate at a time in the journal, in megabytes.|16|
+|journalWriteBufferSizeKB|The size of the write buffers used for the journal, in kilobytes.|64|
+|journalRemoveFromPageCache|Whether pages should be removed from the page cache after force write.|true|
+|journalAdaptiveGroupWrites|Whether to group journal force writes, which optimizes group commit for higher throughput.|true|
+|journalMaxGroupWaitMSec|The maximum latency to impose on a journal write to achieve grouping.|1|
+|journalAlignmentSize|All the journal writes and commits should be aligned to given size|4096|
+|journalBufferedWritesThreshold|Maximum writes to buffer to achieve grouping|524288|
+|journalFlushWhenQueueEmpty|If we should flush the journal when journal queue is empty|false|
+|numJournalCallbackThreads|The number of threads that should handle journal callbacks|8|
+|rereplicationEntryBatchSize|The number of max entries to keep in fragment for re-replication|5000|
+|gcWaitTime|Interval until the next garbage collection is triggered, in milliseconds. Since garbage collection runs in the background, too frequent gc will hurt performance. It is better to use a higher gc interval if there is enough disk capacity.|900000|
+|gcOverreplicatedLedgerWaitTime|How long the interval to trigger next garbage collection of overreplicated ledgers, in milliseconds. This should not be run very frequently since we read the metadata for all the ledgers on the bookie from zk.|86400000|
+|flushInterval|How long the interval to flush ledger index pages to disk, in milliseconds. Flushing index files will introduce much random disk I/O. If separating journal dir and ledger dirs each on different devices, flushing would not affect performance. But if putting journal dir and ledger dirs on same device, performance degrade significantly on too frequent flushing. You can consider increment flush interval to get better performance, but you need to pay more time on bookie server restart after failure.|60000|
+|bookieDeathWatchInterval|Interval to watch whether bookie is dead or not, in milliseconds|1000|
+|zkServers|A list of one of more servers on which zookeeper is running. The server list can be comma separated values, for example: zkServers=zk1:2181,zk2:2181,zk3:2181.|localhost:2181|
+|zkTimeout|ZooKeeper client session timeout, in milliseconds. The bookie server will exit if it receives SESSION_EXPIRED because it was partitioned off from ZooKeeper for longer than the session timeout. JVM garbage collection and disk I/O can cause SESSION_EXPIRED; increasing this value can help avoid this issue.|30000|
+|serverTcpNoDelay|This setting is used to enable/disable Nagle's algorithm, which is a means of improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. If you are sending many small messages, such that more than one can fit in a single IP packet, setting server.tcpnodelay to false to enable the Nagle algorithm can provide better performance.|true|
+|openFileLimit|Max number of ledger index files that can be opened in the bookie server. If the number of ledger index files reaches this limit, the bookie server starts to swap some ledgers from memory to disk. Too frequent swapping will affect performance. You can tune this number to gain performance according to your requirements.|0|
+|pageSize|Size of an index page in the ledger cache, in bytes. A larger index page can improve performance when writing pages to disk, which is efficient when you have a small number of ledgers and these ledgers have a similar number of entries. If you have a large number of ledgers and each ledger has fewer entries, a smaller index page improves memory usage.|8192|
+|pageLimit|How many index pages are provided in the ledger cache. If the number of index pages reaches this limit, the bookie server starts to swap some ledgers from memory to disk. You can increase this value when you find swapping has become more frequent, but make sure pageLimit*pageSize is not more than the JVM max memory limit, otherwise you will get an OutOfMemoryException. In general, increasing pageLimit and using a smaller index page gains better performance in the case of a large number of ledgers with fewer entries. If pageLimit is -1, the bookie server will use 1/3 of the JVM memory to compute the limit on the number of index pages.|0|
+|readOnlyModeEnabled|If all configured ledger directories are full, support only read requests from clients. If `readOnlyModeEnabled=true`, then when all ledger disks are full the bookie will be converted to read-only mode and serve only read requests. Otherwise the bookie will be shut down.|true|
+|diskUsageThreshold|For each ledger dir, the maximum disk space that can be used. Default is 0.95f, i.e. at most 95% of the disk can be used, after which nothing will be written to that partition. If all ledger dir partitions are full, the bookie will turn to read-only mode if `readOnlyModeEnabled=true` is set, else it will shut down. Valid values are between 0 and 1 (exclusive).|0.95|
+|diskCheckInterval|Disk check interval in milli seconds, interval to check the ledger dirs usage.|10000|
+|auditorPeriodicCheckInterval|Interval at which the auditor will do a check of all ledgers in the cluster. By default this runs once a week. The interval is set in seconds. To disable the periodic check completely, set this to 0. Note that periodic checking will put extra load on the cluster, so it should not be run more frequently than once a day.|604800|
+|auditorPeriodicBookieCheckInterval|The interval between auditor bookie checks. The auditor bookie check examines ledger metadata to see which bookies should contain entries for each ledger. If a bookie that should contain entries is unavailable, then the ledger containing that entry is marked for recovery. Setting this to 0 disables the periodic check. Bookie checks will still run when a bookie fails. The interval is specified in seconds.|86400|
+|numAddWorkerThreads|Number of threads that handle write requests. If zero, writes are handled directly by Netty threads.|0|
+|numReadWorkerThreads|Number of threads that handle read requests. If zero, reads are handled directly by Netty threads.|8|
+|maxPendingReadRequestsPerThread|If read worker threads are enabled, limit the number of pending requests to keep the executor queue from growing indefinitely.|2500|
+|readBufferSizeBytes|The number of bytes we should use as capacity for BufferedReadChannel.|4096|
+|writeBufferSizeBytes|The number of bytes used as capacity for the write buffer|65536|
+|useHostNameAsBookieID|Whether the bookie should use its hostname to register with the coordination service (e.g. the ZooKeeper service). When false, the bookie uses its IP address for the registration.|false|
+|statsProviderClass||org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider|
+|prometheusStatsHttpPort||8000|
+|dbStorage_writeCacheMaxSizeMb|Size of the write cache, in MB. Memory is allocated from JVM direct memory. The write cache is used to buffer entries before flushing into the entry log. For good performance, it should be big enough to hold a substantial amount of entries in the flush interval.|512|
+|dbStorage_readAheadCacheMaxSizeMb|Size of the read cache, in MB. Memory is allocated from JVM direct memory. This read cache is pre-filled by read-ahead whenever a cache miss happens.|256|
+|dbStorage_readAheadCacheBatchSize|How many entries to pre-fill in the cache after a read cache miss|1000|
+|dbStorage_rocksDB_blockCacheSize|Size of RocksDB block-cache. For best performance, this cache should be big enough to hold a significant portion of the index database which can reach ~2GB in some cases|268435456|
+|dbStorage_rocksDB_writeBufferSizeMB||64|
+|dbStorage_rocksDB_sstSizeInMB||64|
+|dbStorage_rocksDB_blockSize||65536|
+|dbStorage_rocksDB_bloomFilterBitsPerKey||10|
+|dbStorage_rocksDB_numLevels||-1|
+|dbStorage_rocksDB_numFilesInLevel0||4|
+|dbStorage_rocksDB_maxSizeInLevel1MB||256|
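+
+As a sanity check on the ledger-cache settings above, the product pageLimit*pageSize bounds the index-page cache and must stay well below the JVM max memory. A hypothetical `conf/bookkeeper.conf` fragment (values are illustrative, not recommendations):
+
+```properties
+# 4096 index pages * 8192 bytes/page = 32 MB of ledger index cache
+pageSize=8192
+pageLimit=4096
+# pageLimit=-1 would instead cap the cache at 1/3 of JVM memory
+```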
+
+
+
+## Broker
+
+Pulsar brokers are responsible for handling incoming messages from producers, dispatching messages to consumers, replicating data between clusters, and more.
+
+|Name|Description|Default|
+|---|---|---|
+|enablePersistentTopics|  Whether persistent topics are enabled on the broker |true|
+|enableNonPersistentTopics| Whether non-persistent topics are enabled on the broker |true|
+|functionsWorkerEnabled|  Whether the Pulsar Functions worker service is enabled in the broker  |false|
+|zookeeperServers|  Zookeeper quorum connection string  ||
+|globalZookeeperServers|  Global Zookeeper quorum connection string || 
+|brokerServicePort| Broker data port  |6650|
+|brokerServicePortTls|  Broker data port for TLS  |6651|
+|webServicePort|  Port to use to serve HTTP requests  |8080|
+|webServicePortTls| Port to use to serve HTTPS requests |8443|
+|webSocketServiceEnabled| Enable the WebSocket API service in broker  |false|
+|bindAddress| Hostname or IP address the service binds on, default is 0.0.0.0.  |0.0.0.0|
+|advertisedAddress| Hostname or IP address the service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used.  ||
+|clusterName| Name of the cluster to which this broker belongs ||
+|brokerDeduplicationEnabled|  Sets the default behavior for message deduplication in the broker. If enabled, the broker will reject messages that were already stored in the topic. This setting can be overridden on a per-namespace basis.  |false|
+|brokerDeduplicationMaxNumberOfProducers| The maximum number of producers for which information will be stored for deduplication purposes.  |10000|
+|brokerDeduplicationEntriesInterval|  The number of entries after which a deduplication informational snapshot is taken. A larger interval will lead to fewer snapshots being taken, though this would also lengthen the topic recovery time (the time required for entries published after the snapshot to be replayed). |1000|
+|brokerDeduplicationProducerInactivityTimeoutMinutes| The time of inactivity (in minutes) after which the broker will discard deduplication information related to a disconnected producer. |360|
+|zooKeeperSessionTimeoutMillis| Zookeeper session timeout in milliseconds |30000|
+|brokerShutdownTimeoutMs| Time to wait for broker graceful shutdown. After this time elapses, the process will be killed  |60000|
+|backlogQuotaCheckEnabled|  Enable backlog quota check. Enforces action on topic when the quota is reached  |true|
+|backlogQuotaCheckIntervalInSeconds|  How often to check for topics that have reached the quota |60|
+|backlogQuotaDefaultLimitGB|  Default per-topic backlog quota limit |10|
+|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics  |true|
+|brokerDeleteInactiveTopicsFrequencySeconds|  How often to check for inactive topics  |60|
+|messageExpiryCheckIntervalInMinutes| How frequently to proactively check and purge expired messages  |5|
+|brokerServiceCompactionMonitorIntervalInSeconds| Interval between checks to see if topics with compaction policies need to be compacted  |60|
+|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding cursor and dispatching messages when active consumer is changed.  |1000|
+|clientLibraryVersionCheckEnabled|  Enable check for minimum allowed client library version |false|
+|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information  |true|
+|statusFilePath|  Path for the file used to determine the rotation status for the broker when responding to service discovery health checks ||
+|preferLaterVersions| If true, (and ModularLoadManagerImpl is being used), the load manager will attempt to use only brokers running the latest software version (to minimize impact to bundles)  |false|
+|tlsEnabled|  Enable TLS  |false|
+|tlsCertificateFilePath|  Path for the TLS certificate file ||
+|tlsKeyFilePath|  Path for the TLS private key file ||
+|tlsTrustCertsFilePath| Path for the trusted TLS certificate file ||
+|tlsAllowInsecureConnection|  Accept untrusted TLS certificate from client  |false|
+|maxUnackedMessagesPerConsumer| Max number of unacknowledged messages a consumer on a shared subscription is allowed to receive. Once this limit is reached, the broker stops sending messages to the consumer until the consumer starts acknowledging messages. A value of 0 disables the unacked-message limit check, and the consumer can receive messages without any restriction  |50000|
+|maxUnackedMessagesPerSubscription| Max number of unacknowledged messages allowed per shared subscription. Once this limit is reached, the broker stops dispatching messages to all consumers of the subscription until consumers start acknowledging messages and the unacked count drops to limit/2. A value of 0 disables the unacked-message limit check, and the dispatcher can dispatch messages without any restriction  |200000|
+|maxConcurrentLookupRequest|  Max number of concurrent lookup requests the broker allows, to throttle heavy incoming lookup traffic |50000|
+|maxConcurrentTopicLoadRequest| Max number of concurrent topic-loading requests the broker allows, to control the number of ZooKeeper operations |5000|
+|authenticationEnabled| Enable authentication |false|
+|authenticationProviders| Authentication provider name list: a comma-separated list of class names  ||
+|authorizationEnabled|  Enforce authorization |false|
+|superUserRoles|  Role names that are treated as “super-user”, meaning they will be able to do all admin operations and publish/consume from all topics ||
+|brokerClientAuthenticationPlugin|  Authentication settings of the broker itself. Used when the broker connects to other brokers, either in same or other clusters  ||
+|brokerClientAuthenticationParameters|||
+|athenzDomainNames| Supported Athenz provider domain names(comma separated) for authentication  ||
+|bookkeeperClientAuthenticationPlugin|  Authentication plugin to use when connecting to bookies ||
+|bookkeeperClientAuthenticationParametersName|  BookKeeper auth plugin implementation-specific parameters name and values  ||
+|bookkeeperClientAuthenticationParameters|||   
+|bookkeeperClientTimeoutInSeconds|  Timeout for BK add / read operations  |30|
+|bookkeeperClientSpeculativeReadTimeoutInMillis|  Speculative reads are initiated if a read request doesn’t complete within a certain time. A value of 0 disables speculative reads |0|
+|bookkeeperClientHealthCheckEnabled|  Enable bookies health check. Bookies that have more than the configured number of failure within the interval will be quarantined for some time. During this period, new ledgers won’t be created on these bookies  |true|
+|bookkeeperClientHealthCheckIntervalSeconds||60|
+|bookkeeperClientHealthCheckErrorThresholdPerInterval||5|
+|bookkeeperClientHealthCheckQuarantineTimeInSeconds ||1800|
+|bookkeeperClientRackawarePolicyEnabled|  Enable the rack-aware bookie selection policy. BookKeeper will choose bookies from different racks when forming a new bookie ensemble  |true|
+|bookkeeperClientIsolationGroups| Enable bookie isolation by specifying a list of bookie groups to choose from. Any bookie outside the specified groups will not be used by the broker  ||
+|managedLedgerDefaultEnsembleSize|  Number of bookies to use when creating a ledger |2|
+|managedLedgerDefaultWriteQuorum| Number of copies to store for each message  |2|
+|managedLedgerDefaultAckQuorum| Number of guaranteed copies (acks to wait before write is complete) |2|
+|managedLedgerCacheSizeMB|  Amount of memory to use for caching data payload in managed ledger. This memory is allocated from JVM direct memory and it’s shared across all the topics running in the same broker  |1024|
+|managedLedgerCacheEvictionWatermark| Threshold to which bring down the cache level when eviction is triggered  |0.9|
+|managedLedgerDefaultMarkDeleteRateLimit| Rate limit the amount of writes per second generated by consumer acking the messages  |1.0|
+|managedLedgerMaxEntriesPerLedger|  Max number of entries to append to a ledger before triggering a rollover. A ledger rollover is triggered on these conditions: <ul><li>Either the max rollover time has been reached</li><li>or max entries have been written to the ledger and at least the min rollover time has passed</li></ul>|50000|
+|managedLedgerMinLedgerRolloverTimeMinutes| Minimum time between ledger rollover for a topic  |10|
+|managedLedgerMaxLedgerRolloverTimeMinutes| Maximum time before forcing a ledger rollover for a topic |240|
+|managedLedgerCursorMaxEntriesPerLedger|  Max number of entries to append to a cursor ledger  |50000|
+|managedLedgerCursorRolloverTimeInSeconds|  Max time before triggering a rollover on a cursor ledger  |14400|
+|managedLedgerMaxUnackedRangesToPersist|  Max number of “acknowledgment holes” that are going to be persistently stored. When acknowledging out of order, a consumer will leave holes that are supposed to be quickly filled by acking all the messages. The information of which messages are acknowledged is persisted by compressing in “ranges” of messages that were acknowledged. After the max number of ranges is reached, the information will only be tracked in memory and messages will be redelivered in case of crashes.  |1000|
+|autoSkipNonRecoverableData|  Skip reading non-recoverable/unreadable data ledgers in the managed ledger’s list. This helps when data ledgers get corrupted in BookKeeper and the managed cursor is stuck at that ledger. |false|
+|loadBalancerEnabled| Enable load balancer  |true|
+|loadBalancerPlacementStrategy| Strategy to assign a new bundle |weightedRandomSelection|
+|loadBalancerReportUpdateThresholdPercentage| Percentage of change to trigger load report update  |10|
+|loadBalancerReportUpdateMaxIntervalMinutes|  maximum interval to update load report  |15|
+|loadBalancerHostUsageCheckIntervalMinutes| Frequency of report to collect  |1|
+|loadBalancerSheddingIntervalMinutes| Load shedding interval. Broker periodically checks whether some traffic should be offload from some over-loaded broker to other under-loaded brokers  |30|
+|loadBalancerSheddingGracePeriodMinutes|  Prevent the same topics from being shed and moved to another broker more than once within this timeframe |30|
+|loadBalancerBrokerMaxTopics| Usage threshold to allocate max number of topics to broker  |50000|
+|loadBalancerBrokerUnderloadedThresholdPercentage|  Usage threshold to determine a broker as under-loaded |1|
+|loadBalancerBrokerOverloadedThresholdPercentage| Usage threshold to determine a broker as over-loaded  |85|
+|loadBalancerResourceQuotaUpdateIntervalMinutes|  Interval to update namespace bundle resource quota |15|
+|loadBalancerBrokerComfortLoadLevelPercentage|  Usage threshold to determine a broker is having just right level of load  |65|
+|loadBalancerAutoBundleSplitEnabled|  Enable/disable namespace bundle auto-split  |false|
+|loadBalancerNamespaceBundleMaxTopics|  Maximum number of topics in a bundle; exceeding it triggers a bundle split  |1000|
+|loadBalancerNamespaceBundleMaxSessions|  Maximum number of sessions (producers + consumers) in a bundle; exceeding it triggers a bundle split  |1000|
+|loadBalancerNamespaceBundleMaxMsgRate| Maximum message rate (in + out) in a bundle; exceeding it triggers a bundle split  |1000|
+|loadBalancerNamespaceBundleMaxBandwidthMbytes| Maximum bandwidth (in + out) in a bundle; exceeding it triggers a bundle split  |100|
+|loadBalancerNamespaceMaximumBundles| Maximum number of bundles in a namespace  |128|
+|replicationMetricsEnabled| Enable replication metrics  |true|
+|replicationConnectionsPerBroker| Max number of connections to open for each broker in a remote cluster. More connections host-to-host lead to better throughput over high-latency links.  |16|
+|replicationProducerQueueSize|  Replicator producer queue size  |1000|
+|replicatorPrefix|  Replicator prefix used for the replicator producer name and cursor name |pulsar.repl|
+|replicationTlsEnabled| Enable TLS when talking with other clusters to replicate messages |false|
+|defaultRetentionTimeInMinutes| Default message retention time  ||
+|defaultRetentionSizeInMB|  Default retention size  |0|
+|keepAliveIntervalSeconds|  How often to check whether the connections are still alive  |30|
+|brokerServicePurgeInactiveFrequencyInSeconds|  How often broker checks for inactive topics to be deleted (topics with no subscriptions and no one connected) |60|
+|loadManagerClassName|  Name of load manager to use |org.apache.pulsar.broker.loadbalance.impl.SimpleLoadManagerImpl|
+|managedLedgerOffloadDriver|  Driver to use to offload old data to long term storage (Possible values: S3)  ||
+|managedLedgerOffloadMaxThreads|  Maximum number of thread pool threads for ledger offloading |2|
+|s3ManagedLedgerOffloadRegion|  For Amazon S3 ledger offload, AWS region  ||
+|s3ManagedLedgerOffloadBucket|  For Amazon S3 ledger offload, Bucket to place offloaded ledger into ||
+|s3ManagedLedgerOffloadServiceEndpoint| For Amazon S3 ledger offload, Alternative endpoint to connect to (useful for testing) ||
+|s3ManagedLedgerOffloadMaxBlockSizeInBytes| For Amazon S3 ledger offload, Max block size in bytes. (64MB by default, 5MB minimum) |67108864|
+|s3ManagedLedgerOffloadReadBufferSizeInBytes| For Amazon S3 ledger offload, Read buffer size in bytes (1MB by default)  |1048576|
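+
+As an example of how these settings fit together, the deduplication parameters described above could be set in `conf/broker.conf` like this (all values except `brokerDeduplicationEnabled` are the defaults from the table):
+
+```properties
+brokerDeduplicationEnabled=true
+brokerDeduplicationMaxNumberOfProducers=10000
+brokerDeduplicationEntriesInterval=1000
+brokerDeduplicationProducerInactivityTimeoutMinutes=360
+```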
+
+
+
+
+## Client
+
+The [`pulsar-client`](reference-cli-tools.md#pulsar-client) CLI tool can be used to publish messages to Pulsar and consume messages from Pulsar topics. This tool can be used in lieu of a client library.
+
+|Name|Description|Default|
+|---|---|---|
+|webServiceUrl| The web URL for the cluster.  |http://localhost:8080/|
+|brokerServiceUrl|  The Pulsar protocol URL for the cluster.  |pulsar://localhost:6650/|
+|authPlugin|  The authentication plugin.  ||
+|authParams|  The authentication parameters for the cluster, as a comma-separated string. ||
+|useTls|  Whether or not TLS authentication will be enforced in the cluster.  |false|
+|tlsAllowInsecureConnection|||    
+|tlsTrustCertsFilePath|||
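+
+A minimal `conf/client.conf` combining the parameters above (the commented TLS lines are a sketch; the plugin class name and file paths are assumptions):
+
+```properties
+webServiceUrl=http://localhost:8080/
+brokerServiceUrl=pulsar://localhost:6650/
+# Uncomment to enable TLS transport and certificate-based authentication:
+# useTls=true
+# authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationTls
+# authParams=tlsCertFile:/path/to/client-cert.pem,tlsKeyFile:/path/to/client-key.pem
+```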
+
+
+## Service discovery
+
+|Name|Description|Default|
+|---|---|---|
+|zookeeperServers|  Zookeeper quorum connection string (comma-separated)  ||
+|globalZookeeperServers|  Global zookeeper quorum connection string (comma-separated) ||
+|zookeeperSessionTimeoutMs| ZooKeeper session timeout |30000|
+|servicePort| Port to use to serve binary-proto requests  |6650|
+|servicePortTls|  Port to use to serve binary-proto TLS requests  |6651|
+|webServicePort|  Port that the discovery service listens on |8080|
+|webServicePortTls| Port to use to serve HTTPS requests |8443|
+|bindOnLocalhost| Control whether to bind directly on localhost rather than on normal hostname  |false|
+|authenticationEnabled| Enable authentication |false|
+|authenticationProviders| Authentication provider name list (a comma-separated list of class names) ||
+|authorizationEnabled|  Enforce authorization |false|
+|superUserRoles|  Role names that are treated as “super-user”, meaning they will be able to do all admin operations and publish/consume from all topics (comma-separated) ||
+|tlsEnabled|  Enable TLS  |false|
+|tlsCertificateFilePath|  Path for the TLS certificate file ||
+|tlsKeyFilePath|  Path for the TLS private key file ||
+
+
+
+## Log4j
+
+
+|Name|Default|
+|---|---|
+|pulsar.root.logger|  WARN,CONSOLE|
+|pulsar.log.dir|  logs|
+|pulsar.log.file| pulsar.log|
+|log4j.rootLogger|  ${pulsar.root.logger}|
+|log4j.appender.CONSOLE|  org.apache.log4j.ConsoleAppender|
+|log4j.appender.CONSOLE.Threshold|  DEBUG|
+|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout|
+|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ISO8601} - %-5p - [%t:%C{1}@%L] - %m%n|
+|log4j.appender.ROLLINGFILE|  org.apache.log4j.DailyRollingFileAppender|
+|log4j.appender.ROLLINGFILE.Threshold|  DEBUG|
+|log4j.appender.ROLLINGFILE.File| ${pulsar.log.dir}/${pulsar.log.file}|
+|log4j.appender.ROLLINGFILE.layout| org.apache.log4j.PatternLayout|
+|log4j.appender.ROLLINGFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n|
+|log4j.appender.TRACEFILE|  org.apache.log4j.FileAppender|
+|log4j.appender.TRACEFILE.Threshold|  TRACE|
+|log4j.appender.TRACEFILE.File| pulsar-trace.log|
+|log4j.appender.TRACEFILE.layout| org.apache.log4j.PatternLayout|
+|log4j.appender.TRACEFILE.layout.ConversionPattern| %d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n|
+
+
+## Log4j shell
+
+|Name|Default|
+|---|---|
+|bookkeeper.root.logger|  ERROR,CONSOLE|
+|log4j.rootLogger|  ${bookkeeper.root.logger}|
+|log4j.appender.CONSOLE|  org.apache.log4j.ConsoleAppender|
+|log4j.appender.CONSOLE.Threshold|  DEBUG|
+|log4j.appender.CONSOLE.layout| org.apache.log4j.PatternLayout|
+|log4j.appender.CONSOLE.layout.ConversionPattern| %d{ABSOLUTE} %-5p %m%n|
+|log4j.logger.org.apache.zookeeper| ERROR|
+|log4j.logger.org.apache.bookkeeper|  ERROR|
+|log4j.logger.org.apache.bookkeeper.bookie.BookieShell| INFO|
+
+
+## Standalone
+
+|Name|Description|Default|
+|---|---|---|
+|zookeeperServers|  The quorum connection string for local ZooKeeper  ||
+|globalZookeeperServers|  The quorum connection string for global ZooKeeper ||
+|brokerServicePort| The port on which the standalone broker listens for connections |6650|
+|webServicePort|  The port used by the standalone broker for HTTP requests  |8080|
+|bindAddress| The hostname or IP address on which the standalone service binds  |0.0.0.0|
+|advertisedAddress| The hostname or IP address that the standalone service advertises to the outside world. If not set, the value of `InetAddress.getLocalHost().getHostName()` is used.  ||
+|clusterName| The name of the cluster that this broker belongs to. |standalone|
+|zooKeeperSessionTimeoutMillis| The ZooKeeper session timeout, in milliseconds. |30000|
+|brokerShutdownTimeoutMs| The time to wait for graceful broker shutdown. After this time elapses, the process will be killed. |60000|
+|backlogQuotaCheckEnabled|  Enable the backlog quota check, which enforces a specified action when the quota is reached.  |true|
+|backlogQuotaCheckIntervalInSeconds|  How often to check for topics that have reached the backlog quota.  |60|
+|backlogQuotaDefaultLimitGB|  The default per-topic backlog quota limit.  |10|
+|brokerDeleteInactiveTopicsEnabled| Enable the deletion of inactive topics. |true|
+|brokerDeleteInactiveTopicsFrequencySeconds|  How often to check for inactive topics, in seconds. |60|
+|messageExpiryCheckIntervalInMinutes| How often to proactively check for and purge expired messages. |5|
+|activeConsumerFailoverDelayTimeMillis| How long to delay rewinding cursor and dispatching messages when active consumer is changed.  |1000|
+|clientLibraryVersionCheckEnabled|  Enable checks for minimum allowed client library version. |false|
+|clientLibraryVersionCheckAllowUnversioned| Allow client libraries with no version information  |true|
+|statusFilePath|  The path for the file used to determine the rotation status for the broker when responding to service discovery health checks |/usr/local/apache/htdocs|
+|maxUnackedMessagesPerConsumer| The maximum number of unacknowledged messages allowed to be received by consumers on a shared subscription. The broker will stop sending messages to a consumer once this limit is reached or until the consumer begins acknowledging messages. A value of 0 disables the unacked message limit check and thus allows consumers to receive messages without any restrictions. |50000|
+|maxUnackedMessagesPerSubscription| The same as above, except per subscription rather than per consumer.  |200000|
+|authenticationEnabled| Enable authentication for the broker. |false|
+|authenticationProviders| A comma-separated list of class names for authentication providers. |false|
+|authorizationEnabled|  Enforce authorization in brokers. |false|
+|superUserRoles|  Role names that are treated as “superusers.” Superusers are authorized to perform all admin tasks. ||  
+|brokerClientAuthenticationPlugin|  The authentication settings of the broker itself. Used when the broker connects to other brokers either in the same cluster or from other clusters. ||
+|brokerClientAuthenticationParameters|  The parameters that go along with the plugin specified using brokerClientAuthenticationPlugin.  ||
+|athenzDomainNames| Supported Athenz authentication provider domain names as a comma-separated list.  ||
+|bookkeeperClientAuthenticationPlugin|  Authentication plugin to be used when connecting to bookies (BookKeeper servers). ||
+|bookkeeperClientAuthenticationParametersName|  BookKeeper authentication plugin implementation parameters and values.  ||
+|bookkeeperClientAuthenticationParameters|  Parameters associated with the bookkeeperClientAuthenticationParametersName ||
+|bookkeeperClientTimeoutInSeconds|  Timeout for BookKeeper add and read operations. |30|
+|bookkeeperClientSpeculativeReadTimeoutInMillis|  Speculative reads are initiated if a read request doesn’t complete within a certain time. A value of 0 disables speculative reads.  |0|
+|bookkeeperClientHealthCheckEnabled|  Enable bookie health checks.  |true|
+|bookkeeperClientHealthCheckIntervalSeconds|  The time interval, in seconds, at which health checks are performed. New ledgers are not created during health checks.  |60|
+|bookkeeperClientHealthCheckErrorThresholdPerInterval|  Error threshold for health checks.  |5|
+|bookkeeperClientHealthCheckQuarantineTimeInSeconds|  If bookies have more than the allowed number of failures within the time interval specified by bookkeeperClientHealthCheckIntervalSeconds, they are quarantined for this many seconds. |1800|
+|bookkeeperClientRackawarePolicyEnabled|    |true|
+|bookkeeperClientIsolationGroups|||   
+|managedLedgerDefaultEnsembleSize|    |1|
+|managedLedgerDefaultWriteQuorum|   |1|
+|managedLedgerDefaultAckQuorum|   |1|
+|managedLedgerCacheSizeMB|    |1024|
+|managedLedgerCacheEvictionWatermark|   |0.9|
+|managedLedgerDefaultMarkDeleteRateLimit|   |0.1|
+|managedLedgerMaxEntriesPerLedger|    |50000|
+|managedLedgerMinLedgerRolloverTimeMinutes|   |10|
+|managedLedgerMaxLedgerRolloverTimeMinutes|   |240|
+|managedLedgerCursorMaxEntriesPerLedger|    |50000|
+|managedLedgerCursorRolloverTimeInSeconds|    |14400|
+|autoSkipNonRecoverableData|    |false|
+|loadBalancerEnabled|   |false|
+|loadBalancerPlacementStrategy|   |weightedRandomSelection|
+|loadBalancerReportUpdateThresholdPercentage|   |10|
+|loadBalancerReportUpdateMaxIntervalMinutes|    |15|
+|loadBalancerHostUsageCheckIntervalMinutes|  |1|
+|loadBalancerSheddingIntervalMinutes|   |30|
+|loadBalancerSheddingGracePeriodMinutes|    |30|
+|loadBalancerBrokerMaxTopics|   |50000|
+|loadBalancerBrokerUnderloadedThresholdPercentage|    |1|
+|loadBalancerBrokerOverloadedThresholdPercentage|   |85|
+|loadBalancerResourceQuotaUpdateIntervalMinutes|    |15|
+|loadBalancerBrokerComfortLoadLevelPercentage|    |65|
+|loadBalancerAutoBundleSplitEnabled|    |false|
+|loadBalancerNamespaceBundleMaxTopics|    |1000|
+|loadBalancerNamespaceBundleMaxSessions|    |1000|
+|loadBalancerNamespaceBundleMaxMsgRate|   |1000|
+|loadBalancerNamespaceBundleMaxBandwidthMbytes|   |100|
+|loadBalancerNamespaceMaximumBundles|   |128|
+|replicationMetricsEnabled|   |true|
+|replicationConnectionsPerBroker|   |16|
+|replicationProducerQueueSize|    |1000|
+|defaultRetentionTimeInMinutes|   |0|
+|defaultRetentionSizeInMB|    |0|
+|keepAliveIntervalSeconds|    |30|
+|brokerServicePurgeInactiveFrequencyInSeconds|    |60|
+
+
+
+
+
+## WebSocket
+
+|Name|Description|Default|
+|---|---|---|
+|globalZookeeperServers    |||
+|zooKeeperSessionTimeoutMillis|   |30000|
+|serviceUrl|||
+|serviceUrlTls|||
+|brokerServiceUrl||| 
+|brokerServiceUrlTls|||
+|webServicePort||8080|
+|webServicePortTls||8443|
+|bindAddress||0.0.0.0|
+|clusterName |||
+|authenticationEnabled||false|
+|authenticationProviders|||   
+|authorizationEnabled||false|
+|superUserRoles |||
+|brokerClientAuthenticationPlugin|||
+|brokerClientAuthenticationParameters||| 
+|tlsEnabled||false|
+|tlsAllowInsecureConnection||false|
+|tlsCertificateFilePath|||
+|tlsKeyFilePath |||
+|tlsTrustCertsFilePath|||
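+
+A sketch of a `conf/websocket.conf` built from the parameters above (the cluster name and ZooKeeper address are placeholders):
+
+```properties
+globalZookeeperServers=zk1.example.com:2181
+clusterName=us-west
+webServicePort=8080
+authenticationEnabled=false
+```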
+
+
+## Pulsar proxy 
+
+The [Pulsar proxy](concepts-architecture-overview.md#pulsar-proxy) can be configured in the `conf/proxy.conf` file.
+
+
+|Name|Description|Default|
+|---|---|---|
+|zookeeperServers|  The ZooKeeper quorum connection string (as a comma-separated list)  ||
+|configurationStoreServers| Configuration store connection string (as a comma-separated list) ||
+|zookeeperSessionTimeoutMs| ZooKeeper session timeout (in milliseconds) |30000|
+|servicePort| The port to use to serve binary Protobuf requests |6650|
+|servicePortTls|  The port to use to serve binary Protobuf TLS requests  |6651|
+|statusFilePath|  Path for the file used to determine the rotation status for the proxy instance when responding to service discovery health checks ||
+|authenticationEnabled| Whether authentication is enabled for the Pulsar proxy  |false|
+|authenticationProviders| Authentication provider name list (a comma-separated list of class names) ||
+|authorizationEnabled|  Whether authorization is enforced by the Pulsar proxy |false|
+|authorizationProvider| Authorization provider as a fully qualified class name  |org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider|
+|brokerClientAuthenticationPlugin|  The authentication plugin used by the Pulsar proxy to authenticate with Pulsar brokers  ||
+|brokerClientAuthenticationParameters|  The authentication parameters used by the Pulsar proxy to authenticate with Pulsar brokers  ||
+|brokerClientTrustCertsFilePath|  The path to trusted certificates used by the Pulsar proxy to authenticate with Pulsar brokers ||
+|superUserRoles|  Role names that are treated as “super-users,” meaning that they will be able to perform all admin operations ||
+|forwardAuthorizationCredentials| Whether client authorization credentials are forwarded to the broker for re-authorization. Authentication must be enabled via authenticationEnabled=true for this to take effect.  |false|
+|maxConcurrentInboundConnections| Max concurrent inbound connections. The proxy will reject requests beyond that. |10000|
+|maxConcurrentLookupRequests| Max concurrent outbound connections. The proxy will error out requests beyond that. |50000|
+|tlsEnabledInProxy| Whether TLS is enabled for the proxy  |false|
+|tlsEnabledWithBroker|  Whether TLS is enabled when communicating with Pulsar brokers |false|
+|tlsCertificateFilePath|  Path for the TLS certificate file ||
+|tlsKeyFilePath|  Path for the TLS private key file ||
+|tlsTrustCertsFilePath| Path for the trusted TLS certificate pem file ||
+|tlsHostnameVerificationEnabled|  Whether the hostname is validated when the proxy creates a TLS connection with brokers  |false|
+|tlsRequireTrustedClientCertOnConnect|  Whether client certificates are required for TLS. Connections are rejected if the client certificate isn’t trusted. |false|
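+
+Putting a few of these together, a hypothetical `conf/proxy.conf` for a proxy fronting a cluster might look like this (hostnames are placeholders; the connection limits are the defaults from the table):
+
+```properties
+zookeeperServers=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
+configurationStoreServers=zk1.example.com:2184
+servicePort=6650
+maxConcurrentInboundConnections=10000
+maxConcurrentLookupRequests=50000
+```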
+
+
+## ZooKeeper
+
+ZooKeeper handles a broad range of essential configuration- and coordination-related tasks for Pulsar. The default configuration file for ZooKeeper is in the `conf/zookeeper.conf` file in your Pulsar installation. The following parameters are available:
+
+
+|Name|Description|Default|
+|---|---|---|
+|tickTime|  The tick is the basic unit of time in ZooKeeper, measured in milliseconds and used to regulate things like heartbeats and timeouts. tickTime is the length of a single tick.  |2000|
+|initLimit| The maximum time, in ticks, that the leader ZooKeeper server allows follower ZooKeeper servers to successfully connect and sync. The tick time is set in milliseconds using the tickTime parameter. |10|
+|syncLimit| The maximum time, in ticks, that a follower ZooKeeper server is allowed to sync with other ZooKeeper servers. The tick time is set in milliseconds using the tickTime parameter.  |5|
+|dataDir| The location where ZooKeeper will store in-memory database snapshots as well as the transaction log of updates to the database. |data/zookeeper|
+|clientPort|  The port on which the ZooKeeper server will listen for connections. |2181|
+|autopurge.snapRetainCount| In ZooKeeper, auto purge determines how many recent snapshots of the database stored in dataDir to retain within the time interval specified by autopurge.purgeInterval (while deleting the rest).  |3|
+|autopurge.purgeInterval| The time interval, in hours, by which the ZooKeeper database purge task is triggered. Setting to a non-zero number will enable auto purge; setting to 0 will disable. Read this guide before enabling auto purge. |1|
+|maxClientCnxns|  The maximum number of client connections. Increase this if you need to handle more ZooKeeper clients. |60|
+
+
+
+
+In addition to the parameters in the table above, configuring ZooKeeper for Pulsar involves adding
+a `server.N` line to the `conf/zookeeper.conf` file for each node in the ZooKeeper cluster, where `N` is the number of the ZooKeeper node. Here's an example for a three-node ZooKeeper cluster:
+
+```properties
+server.1=zk1.us-west.example.com:2888:3888
+server.2=zk2.us-west.example.com:2888:3888
+server.3=zk3.us-west.example.com:2888:3888
+```
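+
+Each node must also be able to identify itself. Alongside the `server.N` lines above, write the node's number `N` into a `myid` file under the configured `dataDir` (the path below assumes the default `data/zookeeper`):
+
+```shell
+$ mkdir -p data/zookeeper
+$ echo 1 > data/zookeeper/myid   # use 2 and 3 on the other two nodes
+```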
+
+> We strongly recommend consulting the [ZooKeeper Administrator's Guide](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html) for a more thorough and comprehensive introduction to ZooKeeper configuration.
diff --git a/site2/website/versioned_docs/version-2.2.0/reference-pulsar-admin.md b/site2/website/versioned_docs/version-2.2.0/reference-pulsar-admin.md
new file mode 100644
index 0000000000..3f837bb418
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/reference-pulsar-admin.md
@@ -0,0 +1,1799 @@
+---
+id: version-2.2.0-pulsar-admin
+title: Pulsar admin CLI
+sidebar_label: Pulsar Admin CLI
+original_id: pulsar-admin
+---
+
+The `pulsar-admin` tool enables you to manage Pulsar installations, including clusters, brokers, namespaces, tenants, and more.
+
+Usage
+```bash
+$ pulsar-admin command
+```
+
+Commands
+* `broker-stats`
+* `brokers`
+* `clusters`
+* `functions`
+* `namespaces`
+* `ns-isolation-policy`
+* `sink`
+* `source`
+* `topics`
+* `tenants`
+* `resource-quotas`
+* `schemas`
+
+## `broker-stats`
+
+Operations to collect broker statistics
+
+```bash
+$ pulsar-admin broker-stats subcommand
+```
+
+Subcommands
+* `allocator-stats`
+* `destinations`
+* `mbeans`
+* `monitoring-metrics`
+* `topics`
+
+
+### `allocator-stats`
+
+Dump allocator stats
+
+Usage
+```bash
+$ pulsar-admin broker-stats allocator-stats allocator-name
+```
+
+### `destinations`
+
+Dump topic stats
+
+Usage
+```bash
+$ pulsar-admin broker-stats destinations options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-i`, `--indent`|Indent JSON output|false|
+
+### `mbeans`
+
+Dump Mbean stats
+
+Usage
+```bash
+$ pulsar-admin broker-stats mbeans options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-i`, `--indent`|Indent JSON output|false|
+
+
+### `monitoring-metrics`
+
+Dump metrics for monitoring
+
+Usage
+```bash
+$ pulsar-admin broker-stats monitoring-metrics options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-i`, `--indent`|Indent JSON output|false|
+
+
+### `topics`
+
+Dump topic stats
+
+Usage
+```bash
+$ pulsar-admin broker-stats topics options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-i`, `--indent`|Indent JSON output|false|
+
+
+## `brokers`
+
+Operations about brokers
+
+```bash
+$ pulsar-admin brokers subcommand
+```
+
+Subcommands
+* `list`
+* `namespaces`
+* `update-dynamic-config`
+* `list-dynamic-config`
+* `get-all-dynamic-config`
+* `get-internal-config`
+
+### `list`
+List active brokers of the cluster
+
+Usage
+```bash
+$ pulsar-admin brokers list cluster-name
+```
+
+### `namespaces`
+List namespaces owned by the broker
+
+Usage
+```bash
+$ pulsar-admin brokers namespaces cluster-name options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--url`|The URL for the broker||
+
+
+### `update-dynamic-config`
+Update a broker's dynamic service configuration
+
+Usage
+```bash
+$ pulsar-admin brokers update-dynamic-config options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--config`|Service configuration parameter name||
+|`--value`|Value for the configuration parameter specified using the `--config` flag||
+
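+Example (the parameter name and value are illustrative; any name returned by `list-dynamic-config` can be used)
+```bash
+$ pulsar-admin brokers update-dynamic-config \
+--config brokerShutdownTimeoutMs \
+--value 100
+```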
+
+### `list-dynamic-config`
+Get the list of dynamic configuration names that can be updated
+
+Usage
+```bash
+$ pulsar-admin brokers list-dynamic-config
+```
+
+### `get-all-dynamic-config`
+Get all overridden dynamic-configuration values
+
+Usage
+```bash
+$ pulsar-admin brokers get-all-dynamic-config
+```
+
+### `get-internal-config`
+Get internal configuration information
+
+Usage
+```bash
+$ pulsar-admin brokers get-internal-config
+```
+
+
+## `clusters`
+Operations about clusters
+
+Usage
+```bash
+$ pulsar-admin clusters subcommand
+```
+
+Subcommands
+* `get`
+* `create`
+* `update`
+* `delete`
+* `list`
+* `update-peer-clusters`
+
+
+### `get`
+Get the configuration data for the specified cluster
+
+Usage
+```bash
+$ pulsar-admin clusters get cluster-name
+```
+
+### `create`
+Provisions a new cluster. This operation requires Pulsar super-user privileges.
+
+Usage
+```bash
+$ pulsar-admin clusters create cluster-name options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--broker-url`|The URL for the broker service.||
+|`--broker-url-secure`|The broker service URL for a secure connection||
+|`--url`|The web service URL for the cluster||
+|`--url-secure`|The web service URL for a secure connection||
+
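+Example (the cluster name and URLs are placeholders)
+```bash
+$ pulsar-admin clusters create us-west \
+--url http://pulsar.us-west.example.com:8080 \
+--broker-url pulsar://pulsar.us-west.example.com:6650
+```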
+
+### `update`
+Update the configuration for a cluster
+
+Usage
+```bash
+$ pulsar-admin clusters update cluster-name options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--broker-url`|The URL for the broker service.||
+|`--broker-url-secure`|The broker service URL for a secure connection||
+|`--url`|The web service URL for the cluster||
+|`--url-secure`|The web service URL for a secure connection||
+
+
+### `delete`
+Deletes an existing cluster
+
+Usage
+```bash
+$ pulsar-admin clusters delete cluster-name
+```
+
+### `list`
+List the existing clusters
+
+Usage
+```bash
+$ pulsar-admin clusters list
+```
+
+### `update-peer-clusters`
+Update peer cluster names
+
+Usage
+```bash
+$ pulsar-admin clusters update-peer-clusters peer-cluster-names
+```
+
+## `functions`
+
+A command-line interface for Pulsar Functions
+
+Usage
+```bash
+$ pulsar-admin functions subcommand
+```
+
+Subcommands
+* `localrun`
+* `create`
+* `delete`
+* `update`
+* `get`
+* `restart`
+* `stop`
+* `getstatus`
+* `list`
+* `querystate`
+* `trigger`
+
+
+### `localrun`
+Run a Pulsar Function locally
+
+
+Usage
+```bash
+$ pulsar-admin functions localrun options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--cpu`|The CPU to allocate to each function instance (in number of cores)||
+|`--ram`|The RAM to allocate to each function instance (in bytes)||
+|`--disk`|The disk space to allocate to each function instance (in bytes)||
+|`--auto-ack`|Let the functions framework manage acking||
+|`--subs-name`|The Pulsar subscription name to use for the input-topic consumer, if a specific name is desired||
+|`--broker-service-url `|The URL of the Pulsar broker||
+|`--classname`|The name of the function’s class||
+|`--custom-serde-inputs`|A map of the input topic to SerDe name||
+|`--custom-schema-inputs`|A map of the input topic to Schema class name||
+|`--client-auth-params`|Client Authentication Params||
+|`--function-config-file`|The path of the YAML config file used to configure the function||
+|`--hostname-verification-enabled`|Enable Hostname verification||
+|`--instance-id-offset`|Instance ids will be assigned starting from this offset||
+|`--inputs`|The input topics for the function (as a comma-separated list if more than one topic is desired)||
+|`--log-topic`|The topic to which logs from this function are published||
+|`--jar`|A path to the JAR file for the function (if the function is written in Java)||
+|`--name`|The name of the function||
+|`--namespace`|The function’s namespace||
+|`--output`|The name of the topic to which the function publishes its output (if any)||
+|`--output-serde-classname`|The SerDe class used for the function’s output||
+|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1|
+|`--processing-guarantees`|The processing guarantees applied to the function. Can be one of: ATLEAST_ONCE, ATMOST_ONCE, or EFFECTIVELY_ONCE|ATLEAST_ONCE|
+|`--py`|The path of the Python file containing the function’s processing logic (if the function is written in Python)||
+|`--schema-type`|Schema Type to be used for storing output messages||
+|`--sliding-interval-count`|Number of messages after which the window ends||
+|`--sliding-interval-duration-ms`|The time duration after which the window slides||
+|`--state-storage-service-url`|The service URL for the function’s state storage (if the function uses a storage system different from the Apache BookKeeper cluster used by Pulsar)||
+|`--subscription-type`|The subscription type used by the function when consuming messages on the input topic(s). Can be either SHARED or EXCLUSIVE|SHARED|
+|`--tenant`|The function’s tenant||
+|`--topics-pattern`|The topic pattern to consume from list of topics under a namespace that match the pattern||
+|`--user-config`|A user-supplied config value, set as a key/value pair. You can set multiple user config values.||
+|`--window-length-count`|The number of messages per window.||
+|`--window-length-duration-ms`|The time duration of the window in milliseconds.||
+
+
+### `create`
+Creates a new Pulsar Function on the target infrastructure
+
+Usage
+```
+$ pulsar-admin functions create options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--cpu`|The CPU to allocate to each function instance (in number of cores)||
+|`--ram`|The RAM to allocate to each function instance (in bytes)||
+|`--disk`|The disk space to allocate to each function instance (in bytes)||
+|`--auto-ack`|Let the functions framework manage acking||
+|`--subs-name`|The Pulsar subscription name to use for the input-topic consumer, if a specific name is desired||
+|`--classname`|The name of the function’s class||
+|`--custom-serde-inputs`|A map of the input topic to SerDe name||
+|`--custom-schema-inputs`|A map of the input topic to Schema class name||
+|`--function-config-file`|The path of the YAML config file used to configure the function||
+|`--inputs`|The input topics for the function (as a comma-separated list if more than one topic is desired)||
+|`--log-topic`|The topic to which logs from this function are published||
+|`--jar`|A path to the JAR file for the function (if the function is written in Java)||
+|`--name`|The name of the function||
+|`--namespace`|The function’s namespace||
+|`--output`|The name of the topic to which the function publishes its output (if any)||
+|`--output-serde-classname`|The SerDe class used for the function’s output||
+|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1|
+|`--processing-guarantees`|The processing guarantees applied to the function. Can be one of: ATLEAST_ONCE, ATMOST_ONCE, or EFFECTIVELY_ONCE|ATLEAST_ONCE|
+|`--py`|The path of the Python file containing the function’s processing logic (if the function is written in Python)||
+|`--schema-type`|Schema Type to be used for storing output messages||
+|`--sliding-interval-count`|Number of messages after which the window ends||
+|`--sliding-interval-duration-ms`|The time duration after which the window slides||
+|`--subscription-type`|The subscription type used by the function when consuming messages on the input topic(s). Can be either SHARED or EXCLUSIVE|SHARED|
+|`--tenant`|The function’s tenant||
+|`--topics-pattern`|The topic pattern to consume from list of topics under a namespace that match the pattern||
+|`--user-config`|A user-supplied config value, set as a key/value pair. You can set multiple user config values.||
+|`--window-length-count`|The number of messages per window.||
+|`--window-length-duration-ms`|The time duration of the window in milliseconds.||
+
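+Example (the JAR path, class name, and topic names are placeholders)
+```bash
+$ pulsar-admin functions create \
+--jar my-functions.jar \
+--classname org.example.MyFunction \
+--inputs persistent://public/default/input-topic \
+--output persistent://public/default/output-topic \
+--tenant public \
+--namespace default \
+--name my-function
+```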
+
+### `delete`
+Deletes an existing Pulsar Function
+
+Usage
+```bash
+$ pulsar-admin functions delete options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--name`|The name of the function to delete||
+|`--namespace`|The namespace of the function to delete||
+|`--tenant`|The tenant of the function to delete||
+
+
+### `update`
+Updates an existing Pulsar Function
+
+Usage
+```bash
+$ pulsar-admin functions update options
+```
+
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--cpu`|The CPU to allocate to each function instance (in number of cores)||
+|`--ram`|The RAM to allocate to each function instance (in bytes)||
+|`--disk`|The disk space to allocate to each function instance (in bytes)||
+|`--auto-ack`|Let the functions framework manage acking||
+|`--subs-name`|The Pulsar subscription name to use for the input-topic consumer, if a specific name is desired||
+|`--classname`|The name of the function’s class||
+|`--custom-serde-inputs`|A map of the input topic to SerDe name||
+|`--custom-schema-inputs`|A map of the input topic to Schema class name||
+|`--function-config-file`|The path of the YAML config file used to configure the function||
+|`--inputs`|The input topics for the function (as a comma-separated list if more than one topic is desired)||
+|`--log-topic`|The topic to which logs from this function are published||
+|`--jar`|A path to the JAR file for the function (if the function is written in Java)||
+|`--name`|The name of the function||
+|`--namespace`|The function’s namespace||
+|`--output`|The name of the topic to which the function publishes its output (if any)||
+|`--output-serde-classname`|The SerDe class used for the function’s output||
+|`--parallelism`|The function’s parallelism factor, i.e. the number of instances of the function to run|1|
+|`--processing-guarantees`|The processing guarantees applied to the function. Can be one of: ATLEAST_ONCE, ATMOST_ONCE, or EFFECTIVELY_ONCE|ATLEAST_ONCE|
+|`--py`|The path of the Python file containing the function’s processing logic (if the function is written in Python)||
+|`--schema-type`|Schema Type to be used for storing output messages||
+|`--sliding-interval-count`|Number of messages after which the window ends||
+|`--sliding-interval-duration-ms`|The time duration after which the window slides||
+|`--subscription-type`|The subscription type used by the function when consuming messages on the input topic(s). Can be either SHARED or EXCLUSIVE|SHARED|
+|`--tenant`|The function’s tenant||
+|`--topics-pattern`|The topic pattern to consume from list of topics under a namespace that match the pattern||
+|`--user-config`|A user-supplied config value, set as a key/value pair. You can set multiple user config values.||
+|`--window-length-count`|The number of messages per window.||
+|`--window-length-duration-ms`|The time duration of the window in milliseconds.||
+
+
+### `get`
+Fetch information about an existing Pulsar Function
+
+Usage
+```bash
+$ pulsar-admin functions get options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--name`|The name of the function||
+|`--namespace`|The namespace of the function||
+|`--tenant`|The tenant of the function||
+
+
+### `restart`
+Restarts either all instances or one particular instance of a function
+
+Usage
+```bash
+$ pulsar-admin functions restart options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--name`|The name of the function||
+|`--namespace`|The namespace of the function||
+|`--tenant`|The tenant of the function||
+|`--instance-id`|The function instanceId; restart all instances if instance-id is not provided||
+
+
+### `stop`
+Temporarily stops a function instance. (If the worker restarts, it reassigns and starts the function again.)
+
+Usage
+```bash
+$ pulsar-admin functions stop options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--name`|The name of the function||
+|`--namespace`|The namespace of the function||
+|`--tenant`|The tenant of the function||
+|`--instance-id`|The function instanceId; stop all instances if instance-id is not provided||
+
+
+### `getstatus`
+Get the status of an existing Pulsar Function
+
+Usage
+```bash
+$ pulsar-admin functions getstatus options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--name`|The name of the function||
+|`--namespace`|The namespace of the function||
+|`--tenant`|The tenant of the function||
+|`--instance-id`|The function instanceId; get status of all instances if instance-id is not provided||
+
+### `list`
+List all Pulsar Functions for a specific tenant and namespace
+
+Usage
+```bash
+$ pulsar-admin functions list options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--namespace`|The namespace of the function||
+|`--tenant`|The tenant of the function||
+
+
+### `querystate`
+Retrieve the current state of a Pulsar Function by key
+
+Usage
+```bash
+$ pulsar-admin functions querystate options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-k`, `--key`|The key for the state you want to fetch||
+|`--name`|The name of the function whose state you want to query||
+|`--namespace`|The namespace of the function whose state you want to query||
+|`--tenant`|The tenant of the function whose state you want to query||
+|`-u`, `--storage-service-url`|The service URL for the function’s state storage (if the function uses a storage system different from the Apache BookKeeper cluster used by Pulsar)||
+|`-w`, `--watch`|If set, watching for state changes is enabled|false|
+
+
+### `trigger`
+Triggers the specified Pulsar Function with a supplied value or file data
+
+Usage
+```bash
+$ pulsar-admin functions trigger options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--name`|The name of the Pulsar Function to trigger||
+|`--namespace`|The namespace of the Pulsar Function to trigger||
+|`--tenant`|The tenant of the Pulsar Function to trigger||
+|`--trigger-file`|The path to the file containing the data with which the Pulsar Function is to be triggered||
+|`--trigger-value`|The value with which the Pulsar Function is to be triggered||
+
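+Example (the function identifiers and trigger value are placeholders)
+```bash
+$ pulsar-admin functions trigger \
+--tenant public \
+--namespace default \
+--name my-function \
+--trigger-value "hello"
+```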
+
+## `namespaces`
+
+Operations for managing namespaces
+
+
+```bash
+$ pulsar-admin namespaces subcommand
+```
+
+Subcommands
+* `list`
+* `list-cluster`
+* `destinations`
+* `policies`
+* `create`
+* `delete`
+* `set-deduplication`
+* `permissions`
+* `grant-permission`
+* `revoke-permission`
+* `set-clusters`
+* `get-clusters`
+* `get-backlog-quotas`
+* `set-backlog-quota`
+* `remove-backlog-quota`
+* `get-persistence`
+* `set-persistence`
+* `get-message-ttl`
+* `set-message-ttl`
+* `get-retention`
+* `set-retention`
+* `unload`
+* `clear-backlog`
+* `unsubscribe`
+* `get-compaction-threshold`
+* `set-compaction-threshold`
+* `get-offload-threshold`
+* `set-offload-threshold`
+
+
+### `list`
+Get the namespaces for a tenant
+
+Usage
+```bash
+$ pulsar-admin namespaces list tenant-name
+```
+
+### `list-cluster`
+Get the namespaces for a tenant in the cluster
+
+Usage
+```bash
+$ pulsar-admin namespaces list-cluster tenant/cluster
+```
+
+### `destinations`
+Get the destinations for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces destinations tenant/cluster/namespace
+```
+
+### `policies`
+Get the policies of a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces policies tenant/cluster/namespace
+```
+
+### `create`
+Create a new namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces create tenant/cluster/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-b`, `--bundles`|The number of bundles to activate|0|
+
+
+### `delete`
+Deletes a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces delete tenant/cluster/namespace
+```
+
+### `set-deduplication`
+Enable or disable message deduplication on a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-deduplication tenant/cluster/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--enable`, `-e`|Enable message deduplication on the specified namespace|false|
+|`--disable`, `-d`|Disable message deduplication on the specified namespace|false|
+
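+Example (the namespace name is a placeholder)
+```bash
+$ pulsar-admin namespaces set-deduplication my-prop/my-cluster/my-ns --enable
+```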
+
+### `permissions`
+Get the permissions on a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces permissions tenant/cluster/namespace
+```
+
+### `grant-permission`
+Grant permissions on a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces grant-permission tenant/cluster/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--actions`|Actions to be granted (`produce` or `consume`)||
+|`--role`|The client role to which to grant the permissions||
+
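+Example (the role name is a placeholder)
+```bash
+$ pulsar-admin namespaces grant-permission my-prop/my-cluster/my-ns \
+--actions produce,consume \
+--role my-client-role
+```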
+
+### `revoke-permission`
+Revoke permissions on a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces revoke-permission tenant/cluster/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--role`|The client role from which to revoke the permissions||
+
+
+### `set-clusters`
+Set replication clusters for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-clusters tenant/cluster/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-c`, `--clusters`|Replication clusters ID list (comma-separated values)||
+
+
+### `get-clusters`
+Get replication clusters for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces get-clusters tenant/cluster/namespace
+```
+
+### `get-backlog-quotas`
+Get the backlog quota policies for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces get-backlog-quotas tenant/cluster/namespace
+```
+
+### `set-backlog-quota`
+Set a backlog quota for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-backlog-quota tenant/cluster/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-l`, `--limit`|The backlog size limit (for example `10M` or `16G`)||
+|`-p`, `--policy`|The retention policy to enforce when the limit is reached. The valid options are: `producer_request_hold`, `producer_exception` or `consumer_backlog_eviction`||
+
+Example
+```bash
+$ pulsar-admin namespaces set-backlog-quota my-prop/my-cluster/my-ns \
+--limit 2G \
+--policy producer_request_hold
+```
+
+### `remove-backlog-quota`
+Remove a backlog quota policy from a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces remove-backlog-quota tenant/cluster/namespace
+```
+
+### `get-persistence`
+Get the persistence policies for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces get-persistence tenant/cluster/namespace
+```
+
+### `set-persistence`
+Set the persistence policies for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-persistence tenant/cluster/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-a`, `--bookkeeper-ack-quorom`|The number of acks (guaranteed copies) to wait for each entry|0|
+|`-e`, `--bookkeeper-ensemble`|The number of bookies to use for a topic|0|
+|`-w`, `--bookkeeper-write-quorum`|How many writes to make of each entry|0|
+|`-r`, `--ml-mark-delete-max-rate`|Throttling rate of mark-delete operation (0 means no throttle)||
+
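+Example (a common 3/2/2 configuration; the namespace name is a placeholder, and the flag spellings follow the table above)
+```bash
+$ pulsar-admin namespaces set-persistence my-prop/my-cluster/my-ns \
+--bookkeeper-ensemble 3 \
+--bookkeeper-write-quorum 2 \
+--bookkeeper-ack-quorom 2 \
+--ml-mark-delete-max-rate 0
+```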
+
+### `get-message-ttl`
+Get the message TTL for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces get-message-ttl tenant/cluster/namespace
+```
+
+### `set-message-ttl`
+Set the message TTL for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-message-ttl options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-ttl`, `--messageTTL`|Message TTL in seconds|0|
+
+
+### `get-retention`
+Get the retention policy for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces get-retention tenant/cluster/namespace
+```
+
+### `set-retention`
+Set the retention policy for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-retention tenant/cluster/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-s`, `--size`|The retention size limits (for example 10M, 16G or 3T). 0 means no retention and -1 means infinite size retention||
+|`-t`, `--time`|The retention time in minutes, hours, days, or weeks. Examples: 100m, 13h, 2d, 5w. 0 means no retention and -1 means infinite time retention||
+
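+Example (the namespace name and limits are illustrative)
+```bash
+$ pulsar-admin namespaces set-retention my-prop/my-cluster/my-ns \
+--size 10G \
+--time 3d
+```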
+
+### `unload`
+Unload a namespace or namespace bundle from the current serving broker.
+
+Usage
+```bash
+$ pulsar-admin namespaces unload tenant/cluster/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-b`, `--bundle`|||
+
+
+### `clear-backlog`
+Clear the backlog for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces clear-backlog tenant/cluster/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-b`, `--bundle`|||
+|`-f`, `--force`|Whether to force a clear backlog without prompt|false|
+|`-s`, `--sub`|The subscription name||
+
+
+### `unsubscribe`
+Unsubscribe the given subscription on all destinations on a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces unsubscribe tenant/cluster/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-b`, `--bundle`|||
+|`-s`, `--sub`|The subscription name||
+
+
+### `get-compaction-threshold`
+Get compactionThreshold for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces get-compaction-threshold tenant/namespace
+```
+
+### `set-compaction-threshold`
+Set compactionThreshold for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-compaction-threshold tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-t`, `--threshold`|Maximum number of bytes in a topic backlog before compaction is triggered (eg: 10M, 16G, 3T). 0 disables automatic compaction|0|
+
+
+### `get-offload-threshold`
+Get offloadThreshold for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces get-offload-threshold tenant/namespace
+```
+
+### `set-offload-threshold`
+Set offloadThreshold for a namespace
+
+Usage
+```bash
+$ pulsar-admin namespaces set-offload-threshold tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-s`, `--size`|Maximum number of bytes stored in the pulsar cluster for a topic before data will start being automatically offloaded to long-term storage (eg: 10M, 16G, 3T, 100). Negative values disable automatic offload. 0 triggers offloading as soon as possible.|-1|
+
+
+
+## `ns-isolation-policy`
+Operations for managing namespace isolation policies.
+
+Usage
+```bash
+$ pulsar-admin ns-isolation-policy subcommand
+```
+
+Subcommands
+* `set`
+* `get`
+* `list`
+* `delete`
+
+### `set`
+Create/update a namespace isolation policy for a cluster. This operation requires Pulsar superuser privileges.
+
+Usage
+```bash
+$ pulsar-admin ns-isolation-policy set cluster-name policy-name options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`--auto-failover-policy-params`|Comma-separated name=value auto failover policy parameters|[]|
+|`--auto-failover-policy-type`|Auto failover policy type name. Currently available options: min_available.|[]|
+|`--namespaces`|Comma-separated namespaces regex list|[]|
+|`--primary`|Comma-separated primary broker regex list|[]|
+|`--secondary`|Comma-separated secondary broker regex list|[]|
+
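+Example (the cluster, policy name, namespace regex, and broker regexes are placeholders; the failover parameters shown assume the `min_available` policy type)
+```bash
+$ pulsar-admin ns-isolation-policy set my-cluster my-policy \
+--namespaces 'my-tenant/my-cluster/.*' \
+--primary 'broker1.*' \
+--secondary 'broker2.*' \
+--auto-failover-policy-type min_available \
+--auto-failover-policy-params min_limit=1,usage_threshold=80
+```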
+
+### `get`
+Get the namespace isolation policy of a cluster. This operation requires Pulsar superuser privileges.
+
+Usage
+```bash
+$ pulsar-admin ns-isolation-policy get cluster-name policy-name
+```
+
+### `list`
+List all namespace isolation policies of a cluster. This operation requires Pulsar superuser privileges.
+
+Usage
+```bash
+$ pulsar-admin ns-isolation-policy list cluster-name
+```
+
+### `delete`
+Delete namespace isolation policy of a cluster. This operation requires superuser privileges.
+
+Usage
+```bash
+$ pulsar-admin ns-isolation-policy delete cluster-name policy-name
+```
+
+
+## `sink`
+
+An interface for managing Pulsar IO sinks (egress data from Pulsar)
+
+Usage
+```bash
+$ pulsar-admin sink subcommand
+```
+
+Subcommands
+* `create`
+* `update`
+* `delete`
+* `localrun`
+* `available-sinks`
+
+
+### `create`
+Submit a Pulsar IO sink connector to run in a Pulsar cluster
+
+Usage
+```bash
+$ pulsar-admin sink create options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`--classname`|The sink’s Java class name||
+|`--cpu`|The CPU (in cores) that needs to be allocated per sink instance (applicable only to the Docker runtime)||
+|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)||
+|`--custom-schema-inputs`|The map of input topics to Schema types or class names (as a JSON string)||
+|`--disk`|The disk (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime)||
+|`--inputs`|The sink’s input topic(s) (multiple topics can be specified as a comma-separated list)||
+|`--archive`|Path to the archive file for the sink||
+|`--name`|The sink’s name||
+|`--namespace`|The sink’s namespace||
+|`--parallelism`|The sink’s parallelism factor (i.e. the number of sink instances to run)||
+|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the sink. Available values: ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE||
+|`--ram`|The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime)||
+|`--sink-config`|Sink config key/values||
+|`--sink-config-file`|The path to a YAML config file specifying the sink’s configuration||
+|`--sink-type`|The built-in sink’s connector provider||
+|`--topics-pattern`|TopicsPattern to consume from list of topics under a namespace that match the pattern.||
+|`--tenant`|The sink’s tenant||
+|`--auto-ack`|Let the functions framework manage acking||
+|`--timeout-ms`|The message timeout in milliseconds||
+
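+Example (the archive path, config file, and topic name are placeholders)
+```bash
+$ pulsar-admin sink create \
+--archive connectors/my-sink-connector.nar \
+--inputs persistent://public/default/input-topic \
+--tenant public \
+--namespace default \
+--name my-sink \
+--sink-config-file my-sink-config.yaml
+```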
+
+### `update`
+Update a Pulsar IO sink connector that is running in a Pulsar cluster
+
+Usage
+```bash
+$ pulsar-admin sink update options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`--classname`|The sink’s Java class name||
+|`--cpu`|The CPU (in cores) that needs to be allocated per sink instance (applicable only to the Docker runtime)||
+|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)||
+|`--custom-schema-inputs`|The map of input topics to Schema types or class names (as a JSON string)||
+|`--disk`|The disk (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime)||
+|`--inputs`|The sink’s input topic(s) (multiple topics can be specified as a comma-separated list)||
+|`--archive`|Path to the archive file for the sink||
+|`--name`|The sink’s name||
+|`--namespace`|The sink’s namespace||
+|`--parallelism`|The sink’s parallelism factor (i.e. the number of sink instances to run)||
+|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the sink. Available values: ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE||
+|`--ram`|The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime)||
+|`--sink-config`|Sink config key/values||
+|`--sink-config-file`|The path to a YAML config file specifying the sink’s configuration||
+|`--sink-type`|The built-in sink’s connector provider||
+|`--topics-pattern`|TopicsPattern to consume from list of topics under a namespace that match the pattern.||
+|`--tenant`|The sink’s tenant||
+
+
+### `delete`
+Stops a Pulsar IO sink
+
+Usage
+```bash
+$ pulsar-admin sink delete options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--name`|The name of the sink to delete||
+|`--namespace`|The namespace of the sink to delete||
+|`--tenant`|The tenant of the sink to delete||
+
+
+### `localrun`
+Run the Pulsar sink locally (rather than in the Pulsar cluster)
+
+Usage
+```bash
+$ pulsar-admin sink localrun options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`--broker-service-url`|The URL for the Pulsar broker||
+|`--classname`|The sink’s Java class name||
+|`--cpu`|The CPU (in cores) that needs to be allocated per sink instance (applicable only to the Docker runtime)||
+|`--custom-serde-inputs`|The map of input topics to SerDe class names (as a JSON string)||
+|`--custom-schema-inputs`|The map of input topics to Schema types or class names (as a JSON string)||
+|`--disk`|The disk (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime)||
+|`--inputs`|The sink’s input topic(s) (multiple topics can be specified as a comma-separated list)||
+|`--archive`|Path to the archive file for the sink||
+|`--name`|The sink’s name||
+|`--namespace`|The sink’s namespace||
+|`--parallelism`|The sink’s parallelism factor (i.e. the number of sink instances to run)||
+|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the sink. Available values: ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE||
+|`--ram`|The RAM (in bytes) that needs to be allocated per sink instance (applicable only to the Docker runtime)||
+|`--sink-config`|Sink config key/values||
+|`--sink-config-file`|The path to a YAML config file specifying the sink’s configuration||
+|`--sink-type`|The built-in sink's connector provider||
+|`--topics-pattern`|The topic pattern used to consume from the list of topics under a namespace that match the pattern||
+|`--tenant`|The sink’s tenant||
+|`--auto-ack`|Let the functions framework manage acking||
+|`--timeout-ms`|The message timeout in milliseconds||
+
+
+### `available-sinks`
+Get a list of all built-in sink connectors
+
+Usage
+```bash
+$ pulsar-admin sink available-sinks
+```
+
+
+## `source`
+An interface for managing Pulsar IO sources (ingress data into Pulsar)
+
+Usage
+```bash
+$ pulsar-admin source subcommand
+```
+
+Subcommands
+* `create`
+* `update`
+* `delete`
+* `localrun`
+* `available-sources`
+
+
+### `create`
+Submit a Pulsar IO source connector to run in a Pulsar cluster
+
+Usage
+```bash
+$ pulsar-admin source create options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`--classname`|The source’s Java class name||
+|`--cpu`|The CPU (in cores) that needs to be allocated per source instance (applicable only to the Docker runtime)||
+|`--deserialization-classname`|The SerDe classname for the source||
+|`--destination-topic-name`|The Pulsar topic to which data is sent||
+|`--disk`|The disk (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime)||
+|`--archive`|The path to the NAR archive for the Source||
+|`--name`|The source’s name||
+|`--namespace`|The source’s namespace||
+|`--parallelism`|The source’s parallelism factor (i.e. the number of source instances to run).||
+|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the source. Available values: ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE||
+|`--ram`|The RAM (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime)||
+|`--schema-type`|The schema type (either a built-in schema like 'avro', 'json', etc., or a custom Schema class name to be used to encode messages emitted from the source)||
+|`--source-type`|The built-in source's connector provider||
+|`--source-config`|Source config key/values||
+|`--source-config-file`|The path to a YAML config file specifying the source’s configuration||
+|`--tenant`|The source’s tenant||
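+
+As an illustrative sketch (the tenant, namespace, connector, and topic names here are hypothetical, not prescribed by the CLI), submitting a built-in source with a YAML config file might look like:
+
+```bash
+$ pulsar-admin source create \
+  --tenant public \
+  --namespace default \
+  --name my-kafka-source \
+  --source-type kafka \
+  --destination-topic-name persistent://public/default/ingest \
+  --source-config-file kafka-source-config.yaml
+```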
+
+
+### `update`
+Update an already submitted Pulsar IO source connector
+
+Usage
+```bash
+$ pulsar-admin source update options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`--classname`|The source’s Java class name||
+|`--cpu`|The CPU (in cores) that needs to be allocated per source instance (applicable only to the Docker runtime)||
+|`--deserialization-classname`|The SerDe classname for the source||
+|`--destination-topic-name`|The Pulsar topic to which data is sent||
+|`--disk`|The disk (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime)||
+|`--archive`|The path to the NAR archive for the Source||
+|`--name`|The source’s name||
+|`--namespace`|The source’s namespace||
+|`--parallelism`|The source’s parallelism factor (i.e. the number of source instances to run).||
+|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the source. Available values: ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE||
+|`--ram`|The RAM (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime)||
+|`--schema-type`|The schema type (either a built-in schema like 'avro', 'json', etc., or a custom Schema class name to be used to encode messages emitted from the source)||
+|`--source-type`|The built-in source's connector provider||
+|`--source-config`|Source config key/values||
+|`--source-config-file`|The path to a YAML config file specifying the source’s configuration||
+|`--tenant`|The source’s tenant||
+
+
+### `delete`
+Stop a Pulsar IO source
+
+Usage
+```bash
+$ pulsar-admin source delete options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--name`|The name of the function to delete||
+|`--namespace`|The namespace of the function to delete||
+|`--tenant`|The tenant of the function to delete||
+
+
+### `localrun`
+Run the Pulsar source locally (rather than in the Pulsar cluster)
+
+Usage
+```bash
+$ pulsar-admin source localrun options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`--classname`|The source’s Java class name||
+|`--cpu`|The CPU (in cores) that needs to be allocated per source instance (applicable only to the Docker runtime)||
+|`--deserialization-classname`|The SerDe classname for the source||
+|`--destination-topic-name`|The Pulsar topic to which data is sent||
+|`--disk`|The disk (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime)||
+|`--archive`|The path to the NAR archive for the Source||
+|`--name`|The source’s name||
+|`--namespace`|The source’s namespace||
+|`--parallelism`|The source’s parallelism factor (i.e. the number of source instances to run).||
+|`--processing-guarantees`|The processing guarantees (aka delivery semantics) applied to the source. Available values: ATLEAST_ONCE, ATMOST_ONCE, EFFECTIVELY_ONCE||
+|`--ram`|The RAM (in bytes) that needs to be allocated per source instance (applicable only to the Docker runtime)||
+|`--schema-type`|The schema type (either a built-in schema like 'avro', 'json', etc., or a custom Schema class name to be used to encode messages emitted from the source)||
+|`--source-type`|The built-in source's connector provider||
+|`--source-config`|Source config key/values||
+|`--source-config-file`|The path to a YAML config file specifying the source’s configuration||
+|`--tenant`|The source’s tenant||
+
+
+### `available-sources`
+Get a list of all built-in source connectors
+
+Usage
+```bash
+$ pulsar-admin source available-sources
+```
+
+
+## `topics`
+Operations for managing Pulsar topics (both persistent and non-persistent)
+
+Usage
+```bash
+$ pulsar-admin topics subcommand
+```
+
+Subcommands
+* `compact`
+* `compaction-status`
+* `offload`
+* `offload-status`
+* `create-partitioned-topic`
+* `delete-partitioned-topic`
+* `get-partitioned-topic-metadata`
+* `update-partitioned-topic`
+* `list`
+* `list-in-bundle`
+* `terminate`
+* `permissions`
+* `grant-permission`
+* `revoke-permission`
+* `lookup`
+* `bundle-range`
+* `delete`
+* `unload`
+* `subscriptions`
+* `unsubscribe`
+* `stats`
+* `stats-internal`
+* `info-internal`
+* `partitioned-stats`
+* `skip`
+* `skip-all`
+* `expire-messages`
+* `expire-messages-all-subscriptions`
+* `peek-messages`
+* `reset-cursor`
+
+
+### `compact`
+Run compaction on the specified topic (persistent topics only)
+
+Usage
+```
+$ pulsar-admin topics compact persistent://tenant/namespace/topic
+```
+
+### `compaction-status`
+Check the status of a topic compaction (persistent topics only)
+
+Usage
+```bash
+$ pulsar-admin topics compaction-status persistent://tenant/namespace/topic
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-w`, `--wait-complete`|Wait for compaction to complete|false|
+
+
+### `offload`
+Trigger offload of data from a topic to long-term storage (e.g. Amazon S3)
+
+Usage
+```bash
+$ pulsar-admin topics offload persistent://tenant/namespace/topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-s`, `--size-threshold`|The maximum amount of data to keep in BookKeeper for the specific topic||
+
+
+### `offload-status`
+Check the status of data offloading from a topic to long-term storage
+
+Usage
+```bash
+$ pulsar-admin topics offload-status persistent://tenant/namespace/topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-w`, `--wait-complete`|Wait for offloading to complete|false|
+
+
+### `create-partitioned-topic`
+Create a partitioned topic. A partitioned topic must be created before producers can publish to it.
+
+Usage
+```bash
+$ pulsar-admin topics create-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-p`, `--partitions`|The number of partitions for the topic|0|
+
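+For example, creating a persistent topic with 4 partitions (the tenant, namespace, and topic names below are illustrative) might look like:
+
+```bash
+$ pulsar-admin topics create-partitioned-topic \
+  persistent://public/default/my-topic \
+  --partitions 4
+```
+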
+
+### `delete-partitioned-topic`
+Delete a partitioned topic. This will also delete all the partitions of the topic if they exist.
+
+Usage
+```bash
+$ pulsar-admin topics delete-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic
+```
+
+### `get-partitioned-topic-metadata`
+Get the partitioned topic metadata. If the topic is not created or is a non-partitioned topic, this will return an empty topic with zero partitions.
+
+Usage
+```bash
+$ pulsar-admin topics get-partitioned-topic-metadata {persistent|non-persistent}://tenant/namespace/topic
+```
+
+### `update-partitioned-topic`
+Update an existing non-global partitioned topic. The new number of partitions must be greater than the existing number of partitions.
+
+Usage
+```bash
+$ pulsar-admin topics update-partitioned-topic {persistent|non-persistent}://tenant/namespace/topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-p`, `--partitions`|The number of partitions for the topic|0|
+
+### `list`
+Get the list of topics under a namespace
+
+Usage
+```
+$ pulsar-admin topics list tenant/namespace
+```
+
+### `list-in-bundle`
+Get a list of non-persistent topics present under a namespace bundle
+
+Usage
+```
+$ pulsar-admin topics list-in-bundle tenant/namespace options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-b`, `--bundle`|The bundle range||
+
+
+### `terminate`
+Terminate a topic (disallow further messages from being published on the topic)
+
+Usage
+```bash
+$ pulsar-admin topics terminate {persistent|non-persistent}://tenant/namespace/topic
+```
+
+### `permissions`
+Get the permissions on a topic, i.e. the effective permissions for the destination. These permissions are the union of the permissions set at the namespace level and any specific permissions set on the topic.
+
+Usage
+```bash
+$ pulsar-admin topics permissions topic
+```
+
+### `grant-permission`
+Grant a new permission to a client role on a single topic
+
+Usage
+```bash
+$ pulsar-admin topics grant-permission {persistent|non-persistent}://tenant/namespace/topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--actions`|Actions to be granted (`produce` or `consume`)||
+|`--role`|The client role to which to grant the permissions||
+
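+As an illustrative sketch (the topic and role names below are hypothetical), granting both actions to a client role might look like:
+
+```bash
+$ pulsar-admin topics grant-permission \
+  persistent://public/default/my-topic \
+  --role my-app-role \
+  --actions produce,consume
+```
+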
+
+### `revoke-permission`
+Revoke permissions to a client role on a single topic. If the permission was not set at the topic level, but rather at the namespace level, this operation will return an error (HTTP status code 412).
+
+Usage
+```bash
+$ pulsar-admin topics revoke-permission topic
+```
+
+### `lookup`
+Look up a topic from the current serving broker
+
+Usage
+```bash
+$ pulsar-admin topics lookup topic
+```
+
+### `bundle-range`
+Get the namespace bundle which contains the given topic
+
+Usage
+```bash
+$ pulsar-admin topics bundle-range topic
+```
+
+### `delete`
+Delete a topic. The topic cannot be deleted if there are any active subscriptions or producers connected to the topic.
+
+Usage
+```bash
+$ pulsar-admin topics delete topic
+```
+
+### `unload`
+Unload a topic
+
+Usage
+```bash
+$ pulsar-admin topics unload topic
+```
+
+### `subscriptions`
+Get the list of subscriptions on the topic
+
+Usage
+```bash
+$ pulsar-admin topics subscriptions topic
+```
+
+### `unsubscribe`
+Delete a durable subscriber from a topic
+
+Usage
+```bash
+$ pulsar-admin topics unsubscribe topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-s`, `--subscription`|The subscription to delete||
+
+
+### `stats`
+Get the stats for the topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
+
+Usage
+```bash
+$ pulsar-admin topics stats topic
+```
+
+### `stats-internal`
+Get the internal stats for the topic
+
+Usage
+```bash
+$ pulsar-admin topics stats-internal topic
+```
+
+### `info-internal`
+Get the internal metadata info for the topic
+
+Usage
+```bash
+$ pulsar-admin topics info-internal topic
+```
+
+### `partitioned-stats`
+Get the stats for the partitioned topic and its connected producers and consumers. All rates are computed over a 1-minute window and are relative to the last completed 1-minute period.
+
+Usage
+```bash
+$ pulsar-admin topics partitioned-stats topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`--per-partition`|Get per-partition stats|false|
+
+
+### `skip`
+Skip some messages for the subscription
+
+Usage
+```bash
+$ pulsar-admin topics skip topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-n`, `--count`|The number of messages to skip|0|
+|`-s`, `--subscription`|The subscription on which to skip messages||
+
+
+### `skip-all`
+Skip all the messages for the subscription
+
+Usage
+```bash
+$ pulsar-admin topics skip-all topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-s`, `--subscription`|The subscription to clear||
+
+
+### `expire-messages`
+Expire messages that are older than the given expiry time (in seconds) for the subscription.
+
+Usage
+```bash
+$ pulsar-admin topics expire-messages topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-t`, `--expireTime`|Expire messages older than the time (in seconds)|0|
+|`-s`, `--subscription`|The subscription to skip messages on||
+
+
+### `expire-messages-all-subscriptions`
+Expire messages older than the given expiry time (in seconds) for all subscriptions
+
+Usage
+```bash
+$ pulsar-admin topics expire-messages-all-subscriptions topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-t`, `--expireTime`|Expire messages older than the time (in seconds)|0|
+
+
+### `peek-messages`
+Peek some messages for the subscription.
+
+Usage
+```bash
+$ pulsar-admin topics peek-messages topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-n`, `--count`|The number of messages|0|
+|`-s`, `--subscription`|Subscription to get messages from||
+
+
+### `reset-cursor`
+Reset position for subscription to closest to timestamp
+
+Usage
+```bash
+$ pulsar-admin topics reset-cursor topic options
+```
+
+Options
+|Flag|Description|Default|
+|---|---|---|
+|`-s`, `--subscription`|Subscription to reset position on||
+|`-t`, `--time`|The time, in minutes, to reset back to (or minutes, hours, days, weeks, etc.). Examples: `100m`, `3h`, `2d`, `5w`.||
+
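+For example, rewinding a subscription by two hours (topic and subscription names are illustrative) might look like:
+
+```bash
+$ pulsar-admin topics reset-cursor \
+  persistent://public/default/my-topic \
+  --subscription my-subscription \
+  --time 2h
+```
+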
+
+
+## `tenants`
+Operations for managing tenants
+
+Usage
+```bash
+$ pulsar-admin tenants subcommand
+```
+
+Subcommands
+* `list`
+* `get`
+* `create`
+* `update`
+* `delete`
+
+### `list`
+List the existing tenants
+
+Usage
+```bash
+$ pulsar-admin tenants list
+```
+
+### `get`
+Gets the configuration of a tenant
+
+Usage
+```bash
+$ pulsar-admin tenants get tenant-name
+```
+
+### `create`
+Creates a new tenant
+
+Usage
+```bash
+$ pulsar-admin tenants create tenant-name options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-r`, `--admin-roles`|Comma-separated admin roles||
+|`-c`, `--allowed-clusters`|Comma-separated allowed clusters||
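+
+For example, creating a tenant restricted to one admin role and one cluster (the names below are illustrative) might look like:
+
+```bash
+$ pulsar-admin tenants create my-tenant \
+  --admin-roles admin-role \
+  --allowed-clusters us-west
+```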
+
+### `update`
+Updates a tenant
+
+Usage
+```bash
+$ pulsar-admin tenants update tenant-name options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-r`, `--admin-roles`|Comma-separated admin roles||
+|`-c`, `--allowed-clusters`|Comma-separated allowed clusters||
+
+
+### `delete`
+Deletes an existing tenant
+
+Usage
+```bash
+$ pulsar-admin tenants delete tenant-name
+```
+
+
+## `resource-quotas`
+Operations for managing resource quotas
+
+Usage
+```bash
+$ pulsar-admin resource-quotas subcommand
+```
+
+Subcommands
+* `get`
+* `set`
+* `reset-namespace-bundle-quota`
+
+
+### `get`
+Get the resource quota for a specified namespace bundle, or default quota if no namespace/bundle is specified.
+
+Usage
+```bash
+$ pulsar-admin resource-quotas get options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.||
+|`-n`, `--namespace`|The namespace||
+
+
+### `set`
+Set the resource quota for the specified namespace bundle, or default quota if no namespace/bundle is specified.
+
+Usage
+```bash
+$ pulsar-admin resource-quotas set options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-bi`, `--bandwidthIn`|The expected inbound bandwidth (in bytes/second)|0|
+|`-bo`, `--bandwidthOut`|The expected outbound bandwidth (in bytes/second)|0|
+|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.||
+|`-d`, `--dynamic`|Whether the quota can be dynamically re-calculated|false|
+|`-mem`, `--memory`|Expected memory usage (in megabytes)|0|
+|`-mi`, `--msgRateIn`|Expected incoming messages per second|0|
+|`-mo`, `--msgRateOut`|Expected outgoing messages per second|0|
+|`-n`, `--namespace`|The namespace as tenant/namespace, for example my-tenant/my-ns. Must be specified together with -b/--bundle.||
+
+
+### `reset-namespace-bundle-quota`
+Reset the specified namespace bundle's resource quota to a default value.
+
+Usage
+```bash
+$ pulsar-admin resource-quotas reset-namespace-bundle-quota options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`-b`, `--bundle`|A bundle of the form {start-boundary}_{end_boundary}. This must be specified together with -n/--namespace.||
+|`-n`, `--namespace`|The namespace||
+
+
+
+## `schemas`
+Operations related to Schemas associated with Pulsar topics.
+
+Usage
+```
+$ pulsar-admin schemas subcommand
+```
+
+Subcommands
+* `upload`
+* `delete`
+* `get`
+
+
+### `upload`
+Upload the schema definition for a topic
+
+Usage
+```bash
+$ pulsar-admin schemas upload persistent://tenant/namespace/topic options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`--filename`|The path to the schema definition file. An example schema file is available under the `conf` directory.||
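+
+As an illustrative sketch, uploading a simple string schema might look like the following. The topic name and the schema file contents here are assumptions; check the example schema file shipped under the `conf` directory for the exact format expected by your release.
+
+```bash
+$ cat > string-schema.json <<'EOF'
+{
+  "type": "STRING",
+  "schema": "",
+  "properties": {}
+}
+EOF
+$ pulsar-admin schemas upload persistent://public/default/my-topic \
+  --filename string-schema.json
+```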
+
+
+### `delete`
+Delete the schema definition associated with a topic
+
+Usage
+```bash
+$ pulsar-admin schemas delete persistent://tenant/namespace/topic
+```
+
+
+### `get`
+Retrieve the schema definition associated with a topic (at a given version, if a version is supplied).
+
+Usage
+```bash
+$ pulsar-admin schemas get persistent://tenant/namespace/topic options
+```
+
+Options
+|Flag|Description|Default|
+|----|---|---|
+|`--version`|The version of the schema definition to retrieve for a topic||
+
+
diff --git a/site2/website/versioned_docs/version-2.2.0/security-extending.md b/site2/website/versioned_docs/version-2.2.0/security-extending.md
new file mode 100644
index 0000000000..db63306c9d
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/security-extending.md
@@ -0,0 +1,207 @@
+---
+id: version-2.2.0-security-extending
+title: Extending Authentication and Authorization in Pulsar
+sidebar_label: Extending
+original_id: security-extending
+---
+
+Pulsar provides a way to use custom authentication and authorization mechanisms.
+
+## Authentication
+
+Pulsar supports mutual TLS and Athenz authentication plugins, which can be used as described
+in [Security](security-overview.md).
+
+It is possible to use a custom authentication mechanism by providing the implementation in the
+form of two plugins: one for the client library and one for the Pulsar broker to validate
+the credentials.
+
+### Client authentication plugin
+
+For the client library, you need to implement `org.apache.pulsar.client.api.Authentication`. An instance of this class can then be passed
+when creating a Pulsar client:
+
+```java
+PulsarClient client = PulsarClient.builder()
+    .serviceUrl("pulsar://localhost:6650")
+    .authentication(new MyAuthentication())
+    .build();
+```
+
+For reference, there are 2 interfaces to implement on the client side:
+ * `Authentication` -> http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/Authentication.html
+ * `AuthenticationDataProvider` -> http://pulsar.apache.org/api/client/org/apache/pulsar/client/api/AuthenticationDataProvider.html
+
+
+The `Authentication` implementation in turn needs to provide the client credentials in the form of `org.apache.pulsar.client.api.AuthenticationDataProvider`. This leaves
+the possibility of returning different kinds of authentication tokens for different
+types of connections, or of passing a certificate chain to use for TLS.
+
+
+Examples for client authentication providers can be found at:
+
+ * Mutual TLS Auth -- https://github.com/apache/pulsar/tree/master/pulsar-client/src/main/java/org/apache/pulsar/client/impl/auth
+ * Athenz -- https://github.com/apache/pulsar/tree/master/pulsar-client-auth-athenz/src/main/java/org/apache/pulsar/client/impl/auth
+
+### Broker authentication plugin
+
+On the broker side, the corresponding plugin is needed to validate the credentials
+passed by the client. The broker can support multiple authentication providers
+at the same time.
+
+In `conf/broker.conf` it's possible to specify a list of valid providers:
+
+```properties
+# Authentication provider name list, a comma-separated list of class names
+authenticationProviders=
+```
+
+There is a single interface to implement, `org.apache.pulsar.broker.authentication.AuthenticationProvider`:
+
+```java
+/**
+ * Provider of authentication mechanism
+ */
+public interface AuthenticationProvider extends Closeable {
+
+    /**
+     * Perform initialization for the authentication provider
+     *
+     * @param config
+     *            broker config object
+     * @throws IOException
+     *             if the initialization fails
+     */
+    void initialize(ServiceConfiguration config) throws IOException;
+
+    /**
+     * @return the authentication method name supported by this provider
+     */
+    String getAuthMethodName();
+
+    /**
+     * Validate the authentication for the given credentials with the specified authentication data
+     *
+     * @param authData
+     *            provider specific authentication data
+     * @return the "role" string for the authenticated connection, if the authentication was successful
+     * @throws AuthenticationException
+     *             if the credentials are not valid
+     */
+    String authenticate(AuthenticationDataSource authData) throws AuthenticationException;
+
+}
+```
+
+Examples of broker authentication plugins:
+
+ * Mutual TLS -- https://github.com/apache/pulsar/blob/master/pulsar-broker-common/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderTls.java
+ * Athenz -- https://github.com/apache/pulsar/blob/master/pulsar-broker-auth-athenz/src/main/java/org/apache/pulsar/broker/authentication/AuthenticationProviderAthenz.java
+
+## Authorization
+
+Authorization is the operation that checks whether a particular "role" or "principal" is
+allowed to perform a certain operation.
+
+By default, Pulsar provides an embedded authorization provider, though it's possible to
+configure a different one through a plugin.
+
+To provide a custom authorization provider, you need to implement the
+`org.apache.pulsar.broker.authorization.AuthorizationProvider` interface, have this class in the
+Pulsar broker classpath and configure it in `conf/broker.conf`:
+
+```properties
+# Authorization provider fully qualified class-name
+authorizationProvider=org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider
+```
+
+```java
+/**
+ * Provider of authorization mechanism
+ */
+public interface AuthorizationProvider extends Closeable {
+
+    /**
+     * Perform initialization for the authorization provider
+     *
+     * @param conf
+     *            broker config object
+     * @param configCache
+     *            pulsar zk configuration cache service
+     * @throws IOException
+     *             if the initialization fails
+     */
+    void initialize(ServiceConfiguration conf, ConfigurationCacheService configCache) throws IOException;
+
+    /**
+     * Check if the specified role has permission to send messages to the specified fully qualified topic name.
+     *
+     * @param topicName
+     *            the fully qualified topic name associated with the topic.
+     * @param role
+     *            the app id used to send messages to the topic.
+     */
+    CompletableFuture<Boolean> canProduceAsync(TopicName topicName, String role,
+            AuthenticationDataSource authenticationData);
+
+    /**
+     * Check if the specified role has permission to receive messages from the specified fully qualified topic name.
+     *
+     * @param topicName
+     *            the fully qualified topic name associated with the topic.
+     * @param role
+     *            the app id used to receive messages from the topic.
+     * @param subscription
+     *            the subscription name defined by the client
+     */
+    CompletableFuture<Boolean> canConsumeAsync(TopicName topicName, String role,
+            AuthenticationDataSource authenticationData, String subscription);
+
+    /**
+     * Check whether the specified role can perform a lookup for the specified topic.
+     *
+     * For that the caller needs to have producer or consumer permission.
+     *
+     * @param topicName
+     * @param role
+     * @return
+     * @throws Exception
+     */
+    CompletableFuture<Boolean> canLookupAsync(TopicName topicName, String role,
+            AuthenticationDataSource authenticationData);
+
+    /**
+     *
+     * Grant authorization-action permission on a namespace to the given client
+     *
+     * @param namespace
+     * @param actions
+     * @param role
+     * @param authDataJson
+     *            additional authdata in json format
+     * @return CompletableFuture
+     * @completesWith <br/>
+     *                IllegalArgumentException when namespace not found<br/>
+     *                IllegalStateException when failed to grant permission
+     */
+    CompletableFuture<Void> grantPermissionAsync(NamespaceName namespace, Set<AuthAction> actions, String role,
+            String authDataJson);
+
+    /**
+     * Grant authorization-action permission on a topic to the given client
+     *
+     * @param topicName
+     * @param role
+     * @param authDataJson
+     *            additional authdata in json format
+     * @return CompletableFuture
+     * @completesWith <br/>
+     *                IllegalArgumentException when namespace not found<br/>
+     *                IllegalStateException when failed to grant permission
+     */
+    CompletableFuture<Void> grantPermissionAsync(TopicName topicName, Set<AuthAction> actions, String role,
+            String authDataJson);
+
+}
+
+```
diff --git a/site2/website/versioned_docs/version-2.2.0/security-tls-transport.md b/site2/website/versioned_docs/version-2.2.0/security-tls-transport.md
new file mode 100644
index 0000000000..10a9537dfb
--- /dev/null
+++ b/site2/website/versioned_docs/version-2.2.0/security-tls-transport.md
@@ -0,0 +1,211 @@
+---
+id: version-2.2.0-security-tls-transport
+title: Transport Encryption using TLS
+sidebar_label: Transport Encryption using TLS
+original_id: security-tls-transport
+---
+
+## TLS Overview
+
+By default, Apache Pulsar clients communicate with the Apache Pulsar service in plain text, which means that all data is sent in the clear. TLS can be used to encrypt this traffic so that it cannot be snooped by a man-in-the-middle attacker.
+
+TLS can be configured for both encryption and authentication. You may configure just TLS transport encryption, which is covered in this guide. TLS authentication is covered [elsewhere](security-tls-authentication.md). Alternatively, you can use [another authentication mechanism](security-athenz.md) on top of TLS transport encryption.
+
+> Note that enabling TLS may have a performance impact due to encryption overhead.
+
+## TLS concepts
+
+TLS is a form of [public key cryptography](https://en.wikipedia.org/wiki/Public-key_cryptography). Encryption is performed using key pairs consisting of a public key and a private key. Messages are encrypted with the public key and can be decrypted with the private key.
+
+To use TLS transport encryption, you need two kinds of key pairs, **server key pairs** and a **certificate authority**.
+
+A third kind of key pair, **client key pairs**, are used for [client authentication](security-tls-authentication.md).
+
+The **certificate authority** private key should be stored in a very secure location (a fully encrypted, disconnected, air gapped computer). The certificate authority public key, the **trust cert**, can be freely shared.
+
+For both client and server key pairs, the administrator first generates a private key and a certificate request. Then the certificate authority private key is used to sign the certificate request, generating a certificate. This certificate is the public key for the server/client key pair.
+
+For TLS transport encryption, the clients can use the **trust cert** to verify that the server they are talking to has a key pair that was signed by the certificate authority. A man-in-the-middle attacker would not have access to the certificate authority, so they couldn't create a server with such a key pair.
+
+For TLS authentication, the server uses the **trust cert** to verify that the client has a key pair that was signed by the certificate authority. The Common Name of the **client cert** is then used as the client's role token (see [Overview](security-overview.md)).
+
+## Creating TLS Certificates
+
+Creating TLS certificates for Pulsar involves creating a [certificate authority](#certificate-authority) (CA), [server certificate](#server-certificate), and [client certificate](#client-certificate).
+
+The following is an abridged guide to setting up a certificate authority; plenty of more detailed resources are available on the internet. We recommend [this guide](https://jamielinux.com/docs/openssl-certificate-authority/index.html).
+
+### Certificate authority
+
+The first step is to create the certificate for the CA. The CA will be used to sign both the broker and client certificates, in order to ensure that each party will trust the others. The CA should be stored in a very secure location (ideally completely disconnected from networks, air gapped, and fully encrypted).
+
+Create a directory for your CA, and place [this openssl configuration file](https://github.com/apache/pulsar/tree/master/site2/website/static/examples/openssl.cnf) in the directory. You may want to modify the default answers for company name and department in the configuration file. Export the location of the CA directory to the `CA_HOME` environment variable. The configuration file uses this environment variable to find the rest of the files and directories needed for the CA.
+
+```bash
+$ mkdir my-ca
+$ cd my-ca
+$ wget https://github.com/apache/pulsar/tree/master/site2/website/static/examples/openssl.cnf
+$ export CA_HOME=$(pwd)
+```
+
+Create the necessary directories, keys and certs.
+
+```bash
+$ mkdir certs crl newcerts private
+$ chmod 700 private/
+$ touch index.txt
+$ echo 1000 > serial
+$ openssl genrsa -aes256 -out private/ca.key.pem 4096
+$ chmod 400 private/ca.key.pem
+$ openssl req -config openssl.cnf -key private/ca.key.pem \
+      -new -x509 -days 7300 -sha256 -extensions v3_ca \
+      -out certs/ca.cert.pem
+$ chmod 444 certs/ca.cert.pem
+```
+
+After you answer the prompts, the CA-related files will be stored in the `./my-ca` directory. Within that directory:
+
+* `certs/ca.cert.pem` is the public certificate. It is meant to be distributed to all parties involved.
+* `private/ca.key.pem` is the private key. It is only needed when signing a new certificate for the broker or clients, and it must be safely guarded.
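You can inspect any certificate with `openssl x509`. The sketch below generates a throwaway self-signed cert so it is self-contained; in practice you would point it at `certs/ca.cert.pem`:

```shell
dir=$(mktemp -d)
# Throwaway self-signed cert (stand-in for certs/ca.cert.pem)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$dir/ca.key.pem" \
    -subj "/CN=demo-ca" -days 1 -out "$dir/ca.cert.pem"
# Show the subject, issuer, and validity window of the certificate
openssl x509 -noout -subject -issuer -dates -in "$dir/ca.cert.pem"
```

This is a quick way to confirm who issued a certificate and when it expires before distributing it.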
+
+### Server certificate
+
+Once a CA certificate has been created, you can create certificate requests and sign them with the CA.
+
+The following commands will ask you a few questions and then create the certificates. When asked for the common name, you should match the hostname of the broker. You could also use a wildcard to match a group of broker hostnames, for example `*.broker.usw.example.com`. This ensures that the same certificate can be reused on multiple machines.
+
+> #### Tips
+> 
+> Sometimes it is not possible or makes no sense to match the hostname,
+> such as when the brokers are created with random hostnames, or you
+> plan to connect to the hosts via their IP. In this case, the client
+> should be configured to disable TLS hostname verification. For more
+> details, see [the host verification section in client configuration](#hostname-verification).
+
+First generate the key.
+```bash
+$ openssl genrsa -out broker.key.pem 2048
+```
+
+The broker expects the key to be in [PKCS 8](https://en.wikipedia.org/wiki/PKCS_8) format, so convert it.
+
+```bash
+$ openssl pkcs8 -topk8 -inform PEM -outform PEM \
+      -in broker.key.pem -out broker.key-pk8.pem -nocrypt
+```
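A quick way to confirm the conversion worked is to look at the PEM header: a PKCS 8 key begins with `BEGIN PRIVATE KEY` rather than `BEGIN RSA PRIVATE KEY`. A self-contained sketch (generating a throwaway key rather than reusing `broker.key.pem`):

```shell
dir=$(mktemp -d)
# Generate a throwaway RSA key and convert it to PKCS 8, as above
openssl genrsa -out "$dir/broker.key.pem" 2048
openssl pkcs8 -topk8 -inform PEM -outform PEM \
    -in "$dir/broker.key.pem" -out "$dir/broker.key-pk8.pem" -nocrypt
# The PKCS 8 header omits the "RSA" algorithm marker
head -1 "$dir/broker.key-pk8.pem"   # -----BEGIN PRIVATE KEY-----
```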
+
+Generate the certificate request...
+
+```bash
+$ openssl req -config openssl.cnf \
+      -key broker.key.pem -new -sha256 -out broker.csr.pem
+```
+
+... and sign it with the certificate authority.
+```bash
+$ openssl ca -config openssl.cnf -extensions server_cert \
+      -days 1000 -notext -md sha256 \
+      -in broker.csr.pem -out broker.cert.pem
+```
+
+At this point, you have a cert, `broker.cert.pem`, and a key, `broker.key-pk8.pem`, which can be used along with `ca.cert.pem` to configure TLS transport encryption for your broker and proxy nodes.
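The whole flow above can be condensed into a self-contained sketch, using a temp directory and `-subj` flags in place of the interactive prompts driven by `openssl.cnf` (file names mirror the guide; the self-signed CA is a stand-in for the full CA setup):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# Stand-in for the CA created in the "Certificate authority" section
openssl genrsa -out ca.key.pem 2048
openssl req -x509 -new -key ca.key.pem -sha256 -days 365 \
    -subj "/CN=demo-ca" -out ca.cert.pem

# Broker key, converted to the PKCS 8 format the broker expects
openssl genrsa -out broker.key.pem 2048
openssl pkcs8 -topk8 -inform PEM -outform PEM \
    -in broker.key.pem -out broker.key-pk8.pem -nocrypt

# CSR, then a CA-signed broker certificate
openssl req -new -key broker.key.pem -sha256 \
    -subj "/CN=broker.example.com" -out broker.csr.pem
openssl x509 -req -in broker.csr.pem -CA ca.cert.pem -CAkey ca.key.pem \
    -CAcreateserial -days 365 -sha256 -out broker.cert.pem

# Sanity check: the broker cert must chain back to the CA
openssl verify -CAfile ca.cert.pem broker.cert.pem   # prints "broker.cert.pem: OK"
```

The final `openssl verify` line is also a useful check against the real `certs/ca.cert.pem` and `broker.cert.pem` before deploying them.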
+
+## Broker Configuration
+
+To configure a Pulsar [broker](reference-terminology.md#broker) to use TLS transport encryption, you'll need to make some changes to `broker.conf`, which is located in the `conf` directory of your [Pulsar installation](getting-started-standalone.md).
+
+Add these values to the configuration file (substituting the appropriate certificate paths where necessary):
+
+```properties
+tlsEnabled=true
+tlsCertificateFilePath=/path/to/broker.cert.pem
+tlsKeyFilePath=/path/to/broker.key-pk8.pem
+tlsTrustCertsFilePath=/path/to/ca.cert.pem
+```
+
+> A full list of parameters available in the `conf/broker.conf` file,
+> as well as the default values for those parameters, can be found in [Broker Configuration](reference-configuration.md#broker) 
+
+## Proxy Configuration
+
+Proxies need to configure TLS in two directions: for clients connecting to the proxy, and for the proxy itself to connect to brokers.
+
+```properties
+# For clients connecting to the proxy
+tlsEnabledInProxy=true
+tlsCertificateFilePath=/path/to/broker.cert.pem
+tlsKeyFilePath=/path/to/broker.key-pk8.pem
+tlsTrustCertsFilePath=/path/to/ca.cert.pem
+
+# For the proxy to connect to brokers
+tlsEnabledWithBroker=true
+brokerClientTrustCertsFilePath=/path/to/ca.cert.pem
+```
+
+## Client configuration
+
+When TLS transport encryption is enabled, you need to configure the client to use `https://` and port 8443 for the web service URL, and `pulsar+ssl://` and port 6651 for the broker service URL.
+
+As the server certificate you generated above doesn't belong to any of the default trust chains, you also need to either specify the path to the **trust cert** (recommended), or tell the client to allow untrusted server certs.
+
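The trust-cert mechanics can be demonstrated with plain `openssl`. This is a sketch under assumptions: a throwaway self-signed cert, `openssl s_server` standing in for a broker's TLS listener, and an arbitrary local port 16651:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
# Throwaway self-signed server cert (stand-in for the broker cert)
openssl req -x509 -newkey rsa:2048 -nodes -keyout srv.key.pem \
    -subj "/CN=localhost" -days 1 -out srv.cert.pem
# TLS listener standing in for a broker's pulsar+ssl endpoint
openssl s_server -accept 16651 -cert srv.cert.pem -key srv.key.pem -quiet &
srv_pid=$!
sleep 1
# The client trusts only the cert passed via -CAfile, analogous to
# tlsTrustCertsFilePath; verification should succeed ("0 (ok)")
out=$(echo | openssl s_client -connect localhost:16651 -CAfile srv.cert.pem \
    2>/dev/null | grep "Verify return code")
echo "$out"
kill "$srv_pid"
```

Passing a `-CAfile` that does not match the server's chain would instead yield a verification error, which is exactly what a Pulsar client with a wrong `tlsTrustCertsFilePath` reports.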
+#### Hostname verification
+
+Hostname verification is a TLS security feature whereby a client can refuse to connect to a server if the "CommonName" does not match the hostname to which it is connecting. By default, Pulsar clients disable hostname verification, as it requires that each broker has a DNS record and a unique cert.
+
+Moreover, as the administrator has full control of the certificate authority, a man-in-the-middle attack is unlikely. The related `allowInsecureConnection` setting allows the client to connect to servers whose cert has not been signed by an approved CA. It is disabled by default in the client, and should always be disabled in production environments. As long as `allowInsecureConnection` is disabled, a man-in-the-middle attack would require the attacker to have access to the CA.
+
+One scenario where you may want to enable hostname verification is where you have multiple proxy nodes behind a VIP, and the VIP has a DNS record, for example, pulsar.mycompany.com. In this case, you can generate a TLS cert with pulsar.mycompany.com as the "CommonName," and then enable hostname verification on the client.
+
+The examples below show hostname verification being explicitly disabled for the Java client, though you can omit this since the client disables it by default. The C++ and Python clients do not allow this to be configured at the moment.
+
+### CLI tools
+
+[Command-line tools](reference-cli-tools.md) like [`pulsar-admin`](reference-cli-tools#pulsar-admin), [`pulsar-perf`](reference-cli-tools#pulsar-perf), and [`pulsar-client`](reference-cli-tools#pulsar-client) use the `conf/client.conf` config file in a Pulsar installation.
+
+You'll need to add the following parameters to that file to use TLS transport with Pulsar's CLI tools:
+
+```properties
+webServiceUrl=https://broker.example.com:8443/
+brokerServiceUrl=pulsar+ssl://broker.example.com:6651/
+useTls=true
+tlsAllowInsecureConnection=false
+tlsTrustCertsFilePath=/path/to/ca.cert.pem
+tlsEnableHostnameVerification=false
+```
+
+### Java client
+
+```java
+import org.apache.pulsar.client.api.PulsarClient;
+
+PulsarClient client = PulsarClient.builder()
+    .serviceUrl("pulsar+ssl://broker.example.com:6651/")
+    .enableTls(true)
+    .tlsTrustCertsFilePath("/path/to/ca.cert.pem")
+    .enableTlsHostnameVerification(false) // false by default, in any case
+    .allowTlsInsecureConnection(false) // false by default, in any case
+    .build();
+```
+
+### Python client
+
+```python
+from pulsar import Client
+
+client = Client("pulsar+ssl://broker.example.com:6651/",
+                tls_trust_certs_file_path="/path/to/ca.cert.pem",
+                tls_allow_insecure_connection=False)  # defaults to False from v2.2.0 onwards
+```
+
+### C++ client
+
+```c++
+#include <pulsar/Client.h>
+
+pulsar::ClientConfiguration config;
+config.setUseTls(true);
+config.setTlsTrustCertsFilePath("/path/to/ca.cert.pem");
+config.setTlsAllowInsecureConnection(false); // defaults to false from v2.2.0 onwards
+
+pulsar::Client client("pulsar+ssl://broker.example.com:6651/", config);
+```
diff --git a/site2/website/versions.json b/site2/website/versions.json
index bdeecc7eb8..e05d087c05 100644
--- a/site2/website/versions.json
+++ b/site2/website/versions.json
@@ -1,4 +1,5 @@
 [
+  "2.2.0",
   "2.1.1-incubating",
   "2.1.0-incubating"
 ]


 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services
