distributedlog-commits mailing list archives

From si...@apache.org
Subject [49/64] [partial] incubator-distributedlog git commit: delete the content from old site
Date Tue, 13 Sep 2016 09:00:28 GMT
http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/1bd00e9a/_sources/implementation/storage.txt
----------------------------------------------------------------------
diff --git a/_sources/implementation/storage.txt b/_sources/implementation/storage.txt
deleted file mode 100644
index ed2bba1..0000000
--- a/_sources/implementation/storage.txt
+++ /dev/null
@@ -1,313 +0,0 @@
-Storage
-=======
-
-This section describes some implementation details of the storage layer.
-
-Ensemble Placement Policy
--------------------------
-
-`EnsemblePlacementPolicy` encapsulates the algorithm that the bookkeeper client uses to select a number of bookies from the
-cluster as an ensemble for storing data. The algorithm is typically based on the data input as well as the network
-topology properties.
-
-By default, BookKeeper offers a `RackawareEnsemblePlacementPolicy` for placing the data across racks within a
-datacenter, and a `RegionAwareEnsemblePlacementPolicy` for placing the data across multiple datacenters.
-
-How does EnsemblePlacementPolicy work?
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The interface of `EnsemblePlacementPolicy` is described below.
-
-::
-
-    public interface EnsemblePlacementPolicy {
-
-        /**
-         * Initialize the policy.
-         *
-         * @param conf client configuration
-         * @param optionalDnsResolver dns resolver
-         * @param hashedWheelTimer timer
-         * @param featureProvider feature provider
-         * @param statsLogger stats logger
-         * @param alertStatsLogger stats logger for alerts
-         */
-        public EnsemblePlacementPolicy initialize(ClientConfiguration conf,
-                                                  Optional<DNSToSwitchMapping> optionalDnsResolver,
-                                                  HashedWheelTimer hashedWheelTimer,
-                                                  FeatureProvider featureProvider,
-                                                  StatsLogger statsLogger,
-                                                  AlertStatsLogger alertStatsLogger);
-
-        /**
-         * Uninitialize the policy
-         */
-        public void uninitalize();
-
-        /**
-         * A consistent view of the cluster (what bookies are available as writable, what bookies are available as
-         * readonly) is updated when any changes happen in the cluster.
-         *
-         * @param writableBookies
-         *          All the bookies in the cluster available for write/read.
-         * @param readOnlyBookies
-         *          All the bookies in the cluster available for readonly.
-         * @return the dead bookies during this cluster change.
-         */
-        public Set<BookieSocketAddress> onClusterChanged(Set<BookieSocketAddress> writableBookies,
-                                                         Set<BookieSocketAddress> readOnlyBookies);
-
-        /**
-         * Choose <i>numBookies</i> bookies for ensemble. If the count is more than the number of available
-         * nodes, {@link BKNotEnoughBookiesException} is thrown.
-         *
-         * @param ensembleSize
-         *          Ensemble Size
-         * @param writeQuorumSize
-         *          Write Quorum Size
-         * @param excludeBookies
-         *          Bookies that should not be considered as targets.
-         * @return list of bookies chosen as targets.
-         * @throws BKNotEnoughBookiesException if not enough bookies available.
-         */
-        public ArrayList<BookieSocketAddress> newEnsemble(int ensembleSize, int writeQuorumSize, int ackQuorumSize,
-                                                          Set<BookieSocketAddress> excludeBookies) throws BKNotEnoughBookiesException;
-
-        /**
-         * Choose a new bookie to replace <i>bookieToReplace</i>. If no bookie available in the cluster,
-         * {@link BKNotEnoughBookiesException} is thrown.
-         *
-         * @param bookieToReplace
-         *          bookie to replace
-         * @param excludeBookies
-         *          bookies that should not be considered as candidate.
-         * @return the bookie chosen as target.
-         * @throws BKNotEnoughBookiesException
-         */
-        public BookieSocketAddress replaceBookie(int ensembleSize, int writeQuorumSize, int ackQuorumSize,
-                                                 Collection<BookieSocketAddress> currentEnsemble, BookieSocketAddress bookieToReplace,
-                                                 Set<BookieSocketAddress> excludeBookies) throws BKNotEnoughBookiesException;
-
-        /**
-         * Reorder the read sequence of a given write quorum <i>writeSet</i>.
-         *
-         * @param ensemble
-         *          Ensemble to read entries.
-         * @param writeSet
-         *          Write quorum to read entries.
-         * @param bookieFailureHistory
-         *          Observed failures on the bookies
-         * @return read sequence of bookies
-         */
-        public List<Integer> reorderReadSequence(ArrayList<BookieSocketAddress> ensemble,
-                                                 List<Integer> writeSet, Map<BookieSocketAddress, Long> bookieFailureHistory);
-
-
-        /**
-         * Reorder the read last add confirmed sequence of a given write quorum <i>writeSet</i>.
-         *
-         * @param ensemble
-         *          Ensemble to read entries.
-         * @param writeSet
-         *          Write quorum to read entries.
-         * @param bookieFailureHistory
-         *          Observed failures on the bookies
-         * @return read sequence of bookies
-         */
-        public List<Integer> reorderReadLACSequence(ArrayList<BookieSocketAddress> ensemble,
-                                                List<Integer> writeSet, Map<BookieSocketAddress, Long> bookieFailureHistory);
-    }
-
-The methods in this interface cover three parts: 1) initialization and uninitialization; 2) how to choose bookies to
-place data; and 3) how to choose bookies to do speculative reads.
-
-Initialization and uninitialization
-___________________________________
-
-The ensemble placement policy is constructed via JVM reflection while the bookkeeper client is being constructed. After the
-`EnsemblePlacementPolicy` is constructed, the bookkeeper client calls `#initialize` to initialize the placement policy.
-
-The `#initialize` method takes a few resources from bookkeeper for instantiating itself. These resources include:
-
-1. `ClientConfiguration` : The client configuration used for constructing the bookkeeper client. The implementation of the placement policy could obtain its settings from this configuration.
-2. `DNSToSwitchMapping`: The DNS resolver for the ensemble policy to build the network topology of the bookies cluster. It is optional.
-3. `HashedWheelTimer`: A hashed wheel timer that could be used for timing-related work. For example, a stabilized network topology could use it to delay topology changes, reducing the impact of flapping bookie registrations caused by zookeeper session expiry.
-4. `FeatureProvider`: A feature provider that the policy could use for enabling or disabling its offered features. For example, a region-aware placement policy could offer features to disable placing data to a specific region at runtime.
-5. `StatsLogger`: A stats logger for exposing stats.
-6. `AlertStatsLogger`: An alert stats logger for exposing critical stats that need to be alerted on.
-
-The ensemble placement policy is a single instance per bookkeeper client. The instance is `#uninitialize`'d when
-the bookkeeper client is closed. The implementation of a placement policy is responsible for releasing all the
-resources allocated during `#initialize`.
-
-How to choose bookies to place
-______________________________
-
-The bookkeeper client discovers the list of bookies from zookeeper via `BookieWatcher` - whenever there are bookie changes,
-the ensemble placement policy is notified with the new list of bookies via `onClusterChanged(writableBookies, readOnlyBookies)`.
-The implementation of the ensemble placement policy reacts to those changes by building a new network topology. Subsequent
-operations like `newEnsemble` or `replaceBookie` then operate on the new network topology.
-
-newEnsemble(ensembleSize, writeQuorumSize, ackQuorumSize, excludeBookies)
-    Choose `ensembleSize` bookies for an ensemble. If the count is larger than the number of available nodes,
-    `BKNotEnoughBookiesException` is thrown.
-
-replaceBookie(ensembleSize, writeQuorumSize, ackQuorumSize, currentEnsemble, bookieToReplace, excludeBookies)
-    Choose a new bookie to replace `bookieToReplace`. If no bookie is available in the cluster,
-    `BKNotEnoughBookiesException` is thrown.
-
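As a rough illustration of `newEnsemble`, the sketch below (hypothetical code, not the real `RackawareEnsemblePlacementPolicy`) round-robins across racks so the chosen ensemble spreads over as many racks as possible, and fails once candidates run out, mirroring `BKNotEnoughBookiesException`:

```java
import java.util.*;

// Hypothetical sketch of rack-spreading ensemble selection. Candidates are
// supplied per rack; we round-robin across racks until ensembleSize bookies
// are chosen, or fail when every rack is exhausted.
class SimpleEnsembleChooser {

    static List<String> newEnsemble(Map<String, List<String>> bookiesByRack,
                                    int ensembleSize,
                                    Set<String> excludeBookies) {
        // copy the per-rack candidates, dropping excluded bookies
        List<Deque<String>> racks = new ArrayList<>();
        for (List<String> members : bookiesByRack.values()) {
            Deque<String> rack = new ArrayDeque<>();
            for (String b : members) {
                if (!excludeBookies.contains(b)) {
                    rack.add(b);
                }
            }
            if (!rack.isEmpty()) {
                racks.add(rack);
            }
        }
        List<String> ensemble = new ArrayList<>();
        int idx = 0;        // which rack to look at next (round-robin)
        int emptyScans = 0; // consecutive racks found empty
        while (ensemble.size() < ensembleSize) {
            if (racks.isEmpty() || emptyScans == racks.size()) {
                // the real policy throws BKNotEnoughBookiesException here
                throw new IllegalStateException("not enough bookies");
            }
            Deque<String> rack = racks.get(idx % racks.size());
            if (rack.isEmpty()) {
                emptyScans++;
            } else {
                ensemble.add(rack.poll());
                emptyScans = 0;
            }
            idx++;
        }
        return ensemble;
    }
}
```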
-
-Both `RackAware` and `RegionAware` placement policies are `TopologyAware` policies. They build a `NetworkTopology` in
-response to bookie changes, use it for ensemble placement, and ensure rack/region coverage for write quorums - a write
-quorum should be covered by at least two racks or regions.
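The coverage constraint can be illustrated with a small check. This is a sketch under the assumption that the write quorum for entry `i` is the cyclic window of `writeQuorumSize` bookies starting at position `i` of the ensemble:

```java
import java.util.*;

// Sketch of the "write quorum covers at least two racks" property: rackOf
// maps each bookie to its rack, and every cyclic window of writeQuorumSize
// bookies over the ensemble must span at least two distinct racks.
class QuorumCoverage {

    static boolean everyQuorumCoversTwoRacks(List<String> ensemble,
                                             Map<String, String> rackOf,
                                             int writeQuorumSize) {
        for (int i = 0; i < ensemble.size(); i++) {
            Set<String> racks = new HashSet<>();
            for (int j = 0; j < writeQuorumSize; j++) {
                String bookie = ensemble.get((i + j) % ensemble.size());
                racks.add(rackOf.get(bookie));
            }
            if (racks.size() < 2) {
                return false;  // this write quorum sits entirely in one rack
            }
        }
        return true;
    }
}
```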
-
-Network Topology
-^^^^^^^^^^^^^^^^
-
-The network topology presents a cluster of bookies in a hierarchical tree structure. For example, a bookie cluster
-may consist of many data centers (aka regions) filled with racks of machines. In this tree structure, leaves
-represent bookies and inner nodes represent the switches/routers that manage traffic in/out of regions or racks.
-
-For example, suppose there are 3 bookies in region `A`: `bk1`, `bk2` and `bk3`, with network locations
-`/region-a/rack-1/bk1`, `/region-a/rack-1/bk2` and `/region-a/rack-2/bk3`. The network topology will look like below:
-
-::
-
-              root
-               |
-           region-a
-             /  \
-        rack-1  rack-2
-         /  \       \
-       bk1  bk2     bk3
-
-As another example, suppose there are 4 bookies spanning two regions `A` and `B`: `bk1`, `bk2`, `bk3` and `bk4`, with
-network locations `/region-a/rack-1/bk1`, `/region-a/rack-1/bk2`, `/region-b/rack-2/bk3` and `/region-b/rack-2/bk4`.
-The network topology will look like below:
-
-::
-
-                    root
-                    /  \
-             region-a  region-b
-                |         |
-              rack-1    rack-2
-               / \       / \
-             bk1  bk2  bk3  bk4
-
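The trees above can be sketched in code. The following illustration (not BookKeeper's actual `NetworkTopology` class) flattens the tree into a map from each inner node's path to its children:

```java
import java.util.*;

// Sketch of how network locations such as "/region-b/rack-2/bk3" induce the
// tree shown above. The tree is stored flattened as "node path -> child
// paths", with "/" as the root.
class TopologySketch {

    static Map<String, Set<String>> build(List<String> leafPaths) {
        Map<String, Set<String>> children = new TreeMap<>();
        for (String path : leafPaths) {
            String[] parts = path.substring(1).split("/"); // region, rack, bookie
            String parent = "/";
            for (String part : parts) {
                String node = parent.equals("/") ? "/" + part : parent + "/" + part;
                children.computeIfAbsent(parent, k -> new TreeSet<>()).add(node);
                parent = node;
            }
        }
        return children;
    }
}
```
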
-The network location of each bookie is resolved by a `DNSResolver` (whose interface is shown below). The `DNSResolver`
-resolves a list of DNS names or IP addresses into a list of network locations. The network location that is returned
-must be a network path of the form `/region/rack`, where `/` is the root and `region` is the id of the data center
-in which `rack` is located. The network topology of the bookie cluster determines the number of
-components in the network path.
-
-::
-
-    /**
-     * An interface that must be implemented to allow pluggable
-     * DNS-name/IP-address to RackID resolvers.
-     *
-     */
-    @Beta
-    public interface DNSToSwitchMapping {
-        /**
-         * Resolves a list of DNS-names/IP-addresses and returns back a list of
-         * switch information (network paths). One-to-one correspondence must be
-         * maintained between the elements in the lists.
-         * Consider an element in the argument list - x.y.com. The switch information
-         * that is returned must be a network path of the form /foo/rack,
-         * where / is the root, and 'foo' is the switch where 'rack' is connected.
-         * Note the hostname/ip-address is not part of the returned path.
-         * The network topology of the cluster would determine the number of
-         * components in the network path.
-         * <p/>
-         *
-         * If a name cannot be resolved to a rack, the implementation
-         * should return {@link NetworkTopology#DEFAULT_RACK}. This
-         * is what the bundled implementations do, though it is not a formal requirement
-         *
-         * @param names the list of hosts to resolve (can be empty)
-         * @return list of resolved network paths.
-         * If <i>names</i> is empty, the returned list is also empty
-         */
-        public List<String> resolve(List<String> names);
-
-        /**
-         * Reload all of the cached mappings.
-         *
-         * If there is a cache, this method will clear it, so that future accesses
-         * will get a chance to see the new data.
-         */
-        public void reloadCachedMappings();
-    }
-
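A minimal, table-driven resolver in the spirit of this interface might look as follows (a hypothetical sketch; `StaticDnsResolver` is not a BookKeeper class). Unknown hosts fall back to a default rack, as the javadoc recommends:

```java
import java.util.*;

// Hypothetical table-driven resolver: hosts are looked up in a static table,
// unknown hosts fall back to a default rack, and resolved entries are cached
// until reloadCachedMappings() is called.
class StaticDnsResolver {

    static final String DEFAULT_RACK = "/default-region/default-rack";

    private final Map<String, String> table;                  // host -> network path
    private final Map<String, String> cache = new HashMap<>();

    StaticDnsResolver(Map<String, String> table) {
        this.table = table;
    }

    List<String> resolve(List<String> names) {
        List<String> paths = new ArrayList<>(names.size());
        for (String name : names) {
            // keep one-to-one correspondence with the input list
            paths.add(cache.computeIfAbsent(name,
                    n -> table.getOrDefault(n, DEFAULT_RACK)));
        }
        return paths;
    }

    void reloadCachedMappings() {
        cache.clear();
    }
}
```
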
-By default, the network topology responds to bookie changes immediately. That means if a bookie's znode appears in or
-disappears from zookeeper, the network topology adds or removes the bookie immediately. This introduces
-instability when a bookie's zookeeper registration flaps. To address this, there is a `StabilizeNetworkTopology`,
-which delays removing bookies from the network topology when they disappear from zookeeper. It could be enabled by setting
-the following option.
-
-::
-
-    # enable the stabilized network topology by setting this to a positive number of seconds.
-    bkc.networkTopologyStabilizePeriodSeconds=10
-
-
-RackAware and RegionAware
-^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The `RackAware` placement policy chooses bookies from different racks in the built network topology. It
-guarantees that a write quorum covers at least two racks.
-
-The `RegionAware` placement policy is a hierarchical placement policy: it chooses equal-sized sets of bookies from regions,
-and within each region it uses the `RackAware` placement policy to choose bookies from racks. For example, if there are 3
-regions (`region-a`, `region-b` and `region-c`) and an application wants to allocate a 15-bookie ensemble, the policy first
-figures out that there are 3 regions and that it should allocate 5 bookies from each region. Then, for each region, it uses
-the `RackAware` placement policy to choose the 5 bookies.
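The per-region arithmetic can be sketched as follows (the real policy's exact remainder handling may differ; this only illustrates the even split):

```java
import java.util.Arrays;

// Sketch of the even split described above: ensembleSize bookies divided as
// evenly as possible across the available regions.
class RegionSplit {

    static int[] split(int ensembleSize, int numRegions) {
        int[] counts = new int[numRegions];
        int base = ensembleSize / numRegions;      // every region gets at least this
        int remainder = ensembleSize % numRegions; // first `remainder` regions get one extra
        for (int i = 0; i < numRegions; i++) {
            counts[i] = base + (i < remainder ? 1 : 0);
        }
        return counts;
    }
}
```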
-
-How to choose bookies to do speculative reads?
-______________________________________________
-
-`reorderReadSequence` and `reorderReadLACSequence` are two methods exposed by the placement policy to help the client
-determine a better read sequence according to the network topology and the bookie failure history.
-
-In the `RackAware` placement policy, reads are tried in the following sequence:
-
-- bookies that are writable and didn't experience failures before
-- bookies that are writable but experienced failures before
-- bookies that are readonly
-- bookies that have already disappeared from the network topology
-
-In the `RegionAware` placement policy, reads are tried in a sequence similar to the `RackAware` placement policy.
-There is a slight difference in how writable bookies are tried: after trying every 2 bookies from the local region, it
-tries a bookie from a remote region. Hence it achieves low latency even when there are network issues within the local region.
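The category ordering above can be sketched by ranking bookies with a stable sort (hypothetical code; the real policy derives each bookie's category from the topology and failure history rather than taking it as input):

```java
import java.util.*;

// Sketch of the rack-aware read reordering: bookies are ranked by category
// (writable without failures, writable with failures, readonly, gone from
// the topology) and tried in that order.
class ReadOrderSketch {

    enum State { WRITABLE, WRITABLE_FAILED, READONLY, GONE }

    static List<String> reorder(List<String> writeSet, Map<String, State> stateOf) {
        List<String> ordered = new ArrayList<>(writeSet);
        // stable sort: bookies in the same category keep their write-set order
        ordered.sort(Comparator.comparingInt((String b) -> stateOf.get(b).ordinal()));
        return ordered;
    }
}
```
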
-
-How to enable different EnsemblePlacementPolicy?
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Users could configure different ensemble placement policies by setting the following options in the distributedlog
-configuration files.
-
-::
-
-    # enable rack-aware ensemble placement policy
-    bkc.ensemblePlacementPolicy=org.apache.bookkeeper.client.RackawareEnsemblePlacementPolicy
-    # enable region-aware ensemble placement policy
-    bkc.ensemblePlacementPolicy=org.apache.bookkeeper.client.RegionAwareEnsemblePlacementPolicy
-
-The network topology of bookies built by either `RackawareEnsemblePlacementPolicy` or `RegionAwareEnsemblePlacementPolicy`
-is constructed via a `DNSResolver`. The default `DNSResolver` is a script-based resolver: it reads the configuration
-parameters, executes the configured script, handles errors and resolves domain names to network locations. The script
-is configured via the following setting in the distributedlog configuration.
-
-::
-
-    bkc.networkTopologyScriptFileName=/path/to/dns/resolver/script
-
-Alternatively, the `DNSResolver` could be configured via the following setting and loaded via reflection. `DNSResolverForRacks`
-is a good example to check out when customizing your dns resolver for your own network environment.
-
-::
-
-    bkEnsemblePlacementDnsResolverClass=com.twitter.distributedlog.net.DNSResolverForRacks
-

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/1bd00e9a/_sources/implementation/writeproxy.txt
----------------------------------------------------------------------
diff --git a/_sources/implementation/writeproxy.txt b/_sources/implementation/writeproxy.txt
deleted file mode 100644
index e69de29..0000000

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/1bd00e9a/_sources/index.txt
----------------------------------------------------------------------
diff --git a/_sources/index.txt b/_sources/index.txt
deleted file mode 100644
index 72b9c69..0000000
--- a/_sources/index.txt
+++ /dev/null
@@ -1,23 +0,0 @@
-.. markdowninclude:: ../README.md
-
-Documentation
-=============
-
-.. toctree::
-   :maxdepth: 2
-
-   download
-   basics/main
-   api/main
-   configuration/main
-   considerations/main
-   architecture/main
-   design/main
-   globalreplicatedlog/main
-   implementation/main
-   operations/main
-   performance/main
-   references/main
-   tutorials/main
-   developer/main
-   faq

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/1bd00e9a/_sources/operations/bookkeeper.txt
----------------------------------------------------------------------
diff --git a/_sources/operations/bookkeeper.txt b/_sources/operations/bookkeeper.txt
deleted file mode 100644
index 5a35ba9..0000000
--- a/_sources/operations/bookkeeper.txt
+++ /dev/null
@@ -1,193 +0,0 @@
-BookKeeper
-==========
-
-For reliable BookKeeper service, you should deploy BookKeeper in a cluster.
-
-Run from bookkeeper source
---------------------------
-
-The version of BookKeeper that DistributedLog depends on is not the official opensource version.
-It is Twitter's production version `4.3.4-TWTTR`, which is available at `https://github.com/twitter/bookkeeper`.
-We are working actively with the BookKeeper community to merge all of Twitter's changes back into the community.
-
-The major changes in Twitter's bookkeeper include:
-
-- BOOKKEEPER-670_: Long poll reads and LastAddConfirmed piggyback, to reduce tailing read latency.
-- BOOKKEEPER-759_: Delay ensemble change if it doesn't break the ack quorum constraint, to reduce write latency on bookie failures.
-- BOOKKEEPER-757_: Ledger recovery improvements, to reduce the latency of ledger recovery.
-- Misc improvements to bookie recovery and bookie storage.
-
-.. _BOOKKEEPER-670: https://issues.apache.org/jira/browse/BOOKKEEPER-670
-.. _BOOKKEEPER-759: https://issues.apache.org/jira/browse/BOOKKEEPER-759
-.. _BOOKKEEPER-757: https://issues.apache.org/jira/browse/BOOKKEEPER-757
-
-To build bookkeeper, run:
-
-1. First check out the bookkeeper source code from twitter's branch:
-
-.. code-block:: bash
-
-    $ git clone https://github.com/twitter/bookkeeper.git bookkeeper   
-
-
-2. Build the bookkeeper package:
-
-.. code-block:: bash
-
-    $ cd bookkeeper 
-    $ mvn clean package assembly:single -DskipTests
-
-However, since `bookkeeper-server` is one of the dependencies of `distributedlog-service`,
-you could simply run bookkeeper using the same set of scripts provided in `distributedlog-service`.
-The following sections describe how to run bookkeeper using the scripts provided in
-`distributedlog-service`.
-
-Run from distributedlog source
-------------------------------
-
-Build
-+++++
-
-First of all, build DistributedLog:
-
-.. code-block:: bash
-
-    $ mvn clean install -DskipTests
-
-
-Configuration
-+++++++++++++
-
-The configuration file `bookie.conf` under `distributedlog-service/conf` is a template of the production
-configuration for running a bookie node. Most of the configuration settings are good for production usage.
-You might need to adjust the following settings according to your environment and hardware platform.
-
-Port
-^^^^
-
-By default, the bookie server listens on service port `3181`. You can change the port
-to whatever you like by modifying the following setting.
-
-::
-
-    bookiePort=3181
-
-
-Disks
-^^^^^
-
-You need to configure the following settings according to the disk layout of your hardware. It is recommended
-to put `journalDirectory` on a separate disk from the others for performance. It is okay to set
-`indexDirectories` to be the same as `ledgerDirectories`. However, it is recommended to put `indexDirectories`
-on an SSD drive for better performance.
-
-::
-    
-    # Directory Bookkeeper outputs its write ahead log
-    journalDirectory=/tmp/data/bk/journal
-
-    # Directory Bookkeeper outputs ledger snapshots
-    ledgerDirectories=/tmp/data/bk/ledgers
-
-    # Directory in which index files will be stored.
-    indexDirectories=/tmp/data/bk/ledgers
-
-
-To better understand how bookie nodes work, please check bookkeeper_ website for more details.
-
-ZooKeeper
-^^^^^^^^^
-
-You need to configure the following settings to point the bookie to the zookeeper server that it uses.
-Make sure `zkLedgersRootPath` exists before starting the bookies.
-
-::
-   
-    # Root zookeeper path to store ledger metadata
-    # This parameter is used by zookeeper-based ledger manager as a root znode to
-    # store all ledgers.
-    zkLedgersRootPath=/messaging/bookkeeper/ledgers
-    # A list of one or more servers on which zookeeper is running.
-    zkServers=localhost:2181
-
-
-Stats Provider
-^^^^^^^^^^^^^^
-
-Bookies use a `StatsProvider` to expose their metrics. The `StatsProvider` is a pluggable library for
-adapting to various stats collecting systems. Please check :doc:`monitoring` for more details.
-
-::
-    
-    # stats provider - use the `codahale` metrics library
-    statsProviderClass=org.apache.bookkeeper.stats.CodahaleMetricsServletProvider
-
-    ### Following settings are stats provider related settings
-
-    # Exporting codahale stats in http port `9001`
-    codahaleStatsHttpPort=9001
-
-
-Index Settings
-^^^^^^^^^^^^^^
-
-- `pageSize`: size of an index page in the ledger cache, in bytes. If there are a large number
-  of ledgers and each ledger has few entries, a smaller index page improves memory usage.
-- `pageLimit`: the maximum number of index pages in the ledger cache. If the number of index pages
-  reaches the limit, the bookie server starts to swap some ledgers from memory to disk.
-  Increase this value when swapping becomes more frequent, but make sure `pageLimit*pageSize`
-  is not more than the JVM max memory limit.
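-
-For a concrete sense of the bound, take hypothetical values (chosen only to
-illustrate the arithmetic, not as recommendations): an 8192-byte page size with
-a 131072-page limit caps the ledger cache at 8192 * 131072 bytes = 1GB, which
-must fit within the JVM max heap.
-
-::
-
-    pageSize=8192
-    pageLimit=131072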
-
-
-Journal Settings
-^^^^^^^^^^^^^^^^
-
-- `journalMaxGroupWaitMSec`: the maximum wait time for group commit. It is valid only when
-  `journalFlushWhenQueueEmpty` is false.
-- `journalFlushWhenQueueEmpty`: flag indicating whether to flush/sync the journal. If it is `true`,
-  the bookie server syncs the journal when there are no other writes in the journal queue.
-- `journalBufferedWritesThreshold`: the maximum buffered writes for group commit, in bytes.
-  It is valid only when `journalFlushWhenQueueEmpty` is false.
-- `journalBufferedEntriesThreshold`: the maximum buffered writes for group commit, in entries.
-  It is valid only when `journalFlushWhenQueueEmpty` is false.
-
-Setting `journalFlushWhenQueueEmpty` to `true` produces low latency when the traffic is low.
-However, the latency varies a lot when the traffic increases. So it is recommended to set
-`journalMaxGroupWaitMSec`, `journalBufferedEntriesThreshold` and `journalBufferedWritesThreshold`
-to reduce the number of fsyncs made to the journal disk and achieve sustained low latency.
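-
-For example, a configuration along these lines (values purely illustrative, not
-recommendations) bounds how long and how much a group commit may accumulate
-before the journal disk is synced:
-
-::
-
-    journalFlushWhenQueueEmpty=false
-    journalMaxGroupWaitMSec=2
-    journalBufferedWritesThreshold=65536
-    journalBufferedEntriesThreshold=180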
-
-Thread Settings
-^^^^^^^^^^^^^^^
-
-It is recommended to configure the following settings to align with the CPU cores of the hardware.
-
-::
-    
-    numAddWorkerThreads=4
-    numJournalCallbackThreads=4
-    numReadWorkerThreads=4
-    numLongPollWorkerThreads=4
-
-Run 
-+++
-
-As `bookkeeper-server` is shipped as part of `distributedlog-service`, you could use the `dlog-daemon.sh`
-script to start a `bookie` as a daemon process.
-
-Start the bookie:
-
-.. code-block:: bash
-
-    $ ./distributedlog-service/bin/dlog-daemon.sh start bookie --conf /path/to/bookie/conf
-
-
-Stop the bookie:
-
-.. code-block:: bash
-
-    $ ./distributedlog-service/bin/dlog-daemon.sh stop bookie
-
-
-Please check bookkeeper_ website for more details.
-
-.. _bookkeeper: http://bookkeeper.apache.org/

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/1bd00e9a/_sources/operations/deployment.txt
----------------------------------------------------------------------
diff --git a/_sources/operations/deployment.txt b/_sources/operations/deployment.txt
deleted file mode 100644
index 461ac95..0000000
--- a/_sources/operations/deployment.txt
+++ /dev/null
@@ -1,534 +0,0 @@
-Cluster Setup & Deployment
-==========================
-
-This section describes how to run DistributedLog in `distributed` mode.
-To run a cluster with DistributedLog, you need a Zookeeper cluster and a Bookkeeper cluster.
-
-Build
------
-
-To build DistributedLog, run:
-
-.. code-block:: bash
-
-   mvn clean install -DskipTests
-
-
-Or run `./scripts/snapshot` to build the release packages from the current source. The release
-packages contain the binaries for running `distributedlog-service`, `distributedlog-benchmark`
-and `distributedlog-tutorials`.
-
-NOTE: the following instructions are run from the distributedlog source tree after running `mvn clean install`,
-and assume `DL_HOME` is the directory of the distributedlog source.
-
-Zookeeper
----------
-
-(If you already have a zookeeper cluster running, you could skip this section.)
-
-We could use `dlog-daemon.sh` and `zookeeper.conf.template` to demonstrate running a 1-node
-zookeeper ensemble locally.
-
-Create a `zookeeper.conf` from the `zookeeper.conf.template`.
-
-.. code-block:: bash
-
-    $ cp distributedlog-service/conf/zookeeper.conf.template distributedlog-service/conf/zookeeper.conf
-
-Configure the settings in `zookeeper.conf`. By default, it will use `/tmp/data/zookeeper` for storing
-the zookeeper data. Let's create the data directories for zookeeper.
-
-.. code-block:: bash
-
-    $ mkdir -p /tmp/data/zookeeper/txlog
-
-Once the data directory is created, we need to assign `myid` for this zookeeper node.
-
-.. code-block:: bash
-
-    $ echo "1" > /tmp/data/zookeeper/myid
-
-Start the zookeeper daemon using `dlog-daemon.sh`.
-
-.. code-block:: bash
-
-    $ ./distributedlog-service/bin/dlog-daemon.sh start zookeeper ${DL_HOME}/distributedlog-service/conf/zookeeper.conf
-
-You could verify the zookeeper setup using `zkshell`.
-
-.. code-block:: bash
-
-    // ./distributedlog-service/bin/dlog zkshell ${zkservers}
-    $ ./distributedlog-service/bin/dlog zkshell localhost:2181
-    Connecting to localhost:2181
-    Welcome to ZooKeeper!
-    JLine support is enabled
-
-    WATCHER::
-
-    WatchedEvent state:SyncConnected type:None path:null
-    [zk: localhost:2181(CONNECTED) 0] ls /
-    [zookeeper]
-    [zk: localhost:2181(CONNECTED) 1]
-
-Please refer to :doc:`zookeeper` for more details on setting up a zookeeper cluster.
-
-Bookkeeper
-----------
-
-(If you already have a bookkeeper cluster running, you could skip this section.)
-
-We could use `dlog-daemon.sh` and `bookie.conf.template` to demonstrate running a 3-node
-bookkeeper cluster locally.
-
-Create a `bookie.conf` from `bookie.conf.template`. Since we are going to run a 3-node
-bookkeeper cluster locally, let's make three copies of `bookie.conf.template`.
-
-.. code-block:: bash
-
-    $ cp distributedlog-service/conf/bookie.conf.template distributedlog-service/conf/bookie-1.conf
-    $ cp distributedlog-service/conf/bookie.conf.template distributedlog-service/conf/bookie-2.conf
-    $ cp distributedlog-service/conf/bookie.conf.template distributedlog-service/conf/bookie-3.conf
-
-Configure the settings in the bookie configuration files.
-
-First of all, choose the zookeeper cluster that the bookies will use and set `zkServers` in
-the configuration files.
-
-::
-    
-    zkServers=localhost:2181
-
-Choose the zookeeper path to store bookkeeper metadata and set `zkLedgersRootPath` in the configuration
-files. Let's use `/messaging/bookkeeper/ledgers` in these instructions.
-
-::
-
-    zkLedgersRootPath=/messaging/bookkeeper/ledgers
-
-
-Format bookkeeper metadata
-++++++++++++++++++++++++++
-
-(NOTE: only format the bookkeeper metadata when setting up the bookkeeper cluster for the first time.)
-
-The bookkeeper shell doesn't automatically create the `zkLedgersRootPath` when running `metaformat`.
-So use `zkshell` to create the `zkLedgersRootPath`.
-
-::
-
-    $ ./distributedlog-service/bin/dlog zkshell localhost:2181
-    Connecting to localhost:2181
-    Welcome to ZooKeeper!
-    JLine support is enabled
-
-    WATCHER::
-
-    WatchedEvent state:SyncConnected type:None path:null
-    [zk: localhost:2181(CONNECTED) 0] create /messaging ''
-    Created /messaging
-    [zk: localhost:2181(CONNECTED) 1] create /messaging/bookkeeper ''
-    Created /messaging/bookkeeper
-    [zk: localhost:2181(CONNECTED) 2] create /messaging/bookkeeper/ledgers ''
-    Created /messaging/bookkeeper/ledgers
-    [zk: localhost:2181(CONNECTED) 3]
-
-
-Once the `zkLedgersRootPath` is created, run `metaformat` to format the bookkeeper metadata.
-
-::
-    
-    $ BOOKIE_CONF=${DL_HOME}/distributedlog-service/conf/bookie-1.conf ./distributedlog-service/bin/dlog bkshell metaformat
-    Are you sure to format bookkeeper metadata ? (Y or N) Y
-
-Add Bookies
-+++++++++++
-
-Once the bookkeeper metadata is formatted, it is ready to add bookie nodes to the cluster.
-
-Configure Ports
-^^^^^^^^^^^^^^^
-
-Configure the ports used by the bookies.
-
-bookie-1:
-
-::
-   
-    # Port that the bookie server listens on
-    bookiePort=3181
-    # Exporting codahale stats
-    codahaleStatsHttpPort=9001
-
-bookie-2:
-
-::
-   
-    # Port that the bookie server listens on
-    bookiePort=3182
-    # Exporting codahale stats
-    codahaleStatsHttpPort=9002
-
-bookie-3:
-
-::
-   
-    # Port that the bookie server listens on
-    bookiePort=3183
-    # Exporting codahale stats
-    codahaleStatsHttpPort=9003
-
-Configure Disk Layout
-^^^^^^^^^^^^^^^^^^^^^
-
-Configure the disk directories used by a bookie server by setting the following options.
-
-::
-    
-    # Directory Bookkeeper outputs its write ahead log
-    journalDirectory=/tmp/data/bk/journal
-    # Directory Bookkeeper outputs ledger snapshots
-    ledgerDirectories=/tmp/data/bk/ledgers
-    # Directory in which index files will be stored.
-    indexDirectories=/tmp/data/bk/ledgers
-
-As we are configuring a 3-node bookkeeper cluster, we modify the following settings as below:
-
-bookie-1:
-
-::
-    
-    # Directory Bookkeeper outputs its write ahead log
-    journalDirectory=/tmp/data/bk-1/journal
-    # Directory Bookkeeper outputs ledger snapshots
-    ledgerDirectories=/tmp/data/bk-1/ledgers
-    # Directory in which index files will be stored.
-    indexDirectories=/tmp/data/bk-1/ledgers
-
-bookie-2:
-
-::
-    
-    # Directory Bookkeeper outputs its write ahead log
-    journalDirectory=/tmp/data/bk-2/journal
-    # Directory Bookkeeper outputs ledger snapshots
-    ledgerDirectories=/tmp/data/bk-2/ledgers
-    # Directory in which index files will be stored.
-    indexDirectories=/tmp/data/bk-2/ledgers
-
-bookie-3:
-
-::
-    
-    # Directory Bookkeeper outputs its write ahead log
-    journalDirectory=/tmp/data/bk-3/journal
-    # Directory Bookkeeper outputs ledger snapshots
-    ledgerDirectories=/tmp/data/bk-3/ledgers
-    # Directory in which index files will be stored.
-    indexDirectories=/tmp/data/bk-3/ledgers
-
-Format bookie
-^^^^^^^^^^^^^
-
-Once the disk directories are configured correctly in the configuration file, use
-`bkshell bookieformat` to format the bookie.
-
-::
-    
-    BOOKIE_CONF=${DL_HOME}/distributedlog-service/conf/bookie-1.conf ./distributedlog-service/bin/dlog bkshell bookieformat
-    BOOKIE_CONF=${DL_HOME}/distributedlog-service/conf/bookie-2.conf ./distributedlog-service/bin/dlog bkshell bookieformat
-    BOOKIE_CONF=${DL_HOME}/distributedlog-service/conf/bookie-3.conf ./distributedlog-service/bin/dlog bkshell bookieformat
-
-
-Start bookie
-^^^^^^^^^^^^
-
-Start the bookie using `dlog-daemon.sh`.
-
-::
-    
-    SERVICE_PORT=3181 ./distributedlog-service/bin/dlog-daemon.sh start bookie --conf ${DL_HOME}/distributedlog-service/conf/bookie-1.conf
-    SERVICE_PORT=3182 ./distributedlog-service/bin/dlog-daemon.sh start bookie --conf ${DL_HOME}/distributedlog-service/conf/bookie-2.conf
-    SERVICE_PORT=3183 ./distributedlog-service/bin/dlog-daemon.sh start bookie --conf ${DL_HOME}/distributedlog-service/conf/bookie-3.conf
-    
-Verify that the bookie is set up correctly. You could simply check whether the bookie shows up
-under the zookeeper `zkLedgersRootPath`/available znode.
-
-::
-    
-    $ ./distributedlog-service/bin/dlog zkshell localhost:2181
-    Connecting to localhost:2181
-    Welcome to ZooKeeper!
-    JLine support is enabled
-
-    WATCHER::
-
-    WatchedEvent state:SyncConnected type:None path:null
-    [zk: localhost:2181(CONNECTED) 0] ls /messaging/bookkeeper/ledgers/available
-    [127.0.0.1:3181, 127.0.0.1:3182, 127.0.0.1:3183, readonly]
-    [zk: localhost:2181(CONNECTED) 1]
-
-
-Or check if the bookie is exposing the stats at port `codahaleStatsHttpPort`.
-
-::
-    
-    // ping the service
-    $ curl localhost:9001/ping
-    pong
-    // checking the stats
-    $ curl localhost:9001/metrics?pretty=true
-
-Stop bookie
-^^^^^^^^^^^
-
-Stop the bookie using `dlog-daemon.sh`.
-
-::
-    
-    $ ./distributedlog-service/bin/dlog-daemon.sh stop bookie
-    // Example:
-    $ SERVICE_PORT=3181 ./distributedlog-service/bin/dlog-daemon.sh stop bookie
-    doing stop bookie ...
-    stopping bookie
-    Shutdown is in progress... Please wait...
-    Shutdown completed.
-
-Turn bookie to readonly
-^^^^^^^^^^^^^^^^^^^^^^^
-
-Start the bookie in `readonly` mode.
-
-::
-    
-    $ SERVICE_PORT=3181 ./distributedlog-service/bin/dlog-daemon.sh start bookie --conf ${DL_HOME}/distributedlog-service/conf/bookie-1.conf --readonly
-
-Verify that the bookie is running in `readonly` mode.
-
-::
-    
-    $ ./distributedlog-service/bin/dlog zkshell localhost:2181
-    Connecting to localhost:2181
-    Welcome to ZooKeeper!
-    JLine support is enabled
-
-    WATCHER::
-
-    WatchedEvent state:SyncConnected type:None path:null
-    [zk: localhost:2181(CONNECTED) 0] ls /messaging/bookkeeper/ledgers/available
-    [127.0.0.1:3182, 127.0.0.1:3183, readonly]
-    [zk: localhost:2181(CONNECTED) 1] ls /messaging/bookkeeper/ledgers/available/readonly
-    [127.0.0.1:3181]
-    [zk: localhost:2181(CONNECTED) 2]
-
-Please refer to :doc:`bookkeeper` for more details on setting up a bookkeeper cluster.
-
-Create Namespace
-----------------
-
-After setting up a zookeeper cluster and a bookkeeper cluster, you could provision DL namespaces
-for applications to use.
-
-Provisioning a DistributedLog namespace is accomplished via the `bind` command available in the `dlog` tool.
-
-A namespace is bound by writing bookkeeper environment settings (e.g. the ledger path, `bkLedgersZkPath`,
-or the set of zookeeper servers used by bookkeeper, `bkZkServers`) as metadata under the zookeeper path of
-the namespace DL URI. The DL library resolves the DL URI to determine which bookkeeper cluster it
-should read from and write to.
-
-The namespace binding has the following features:
-
-- `Inheritance`: suppose `distributedlog://<zkservers>/messaging/distributedlog` is bound to bookkeeper
-  cluster `X`. All the streams created under `distributedlog://<zkservers>/messaging/distributedlog`
-  will write to bookkeeper cluster `X`.
-- `Override`: suppose `distributedlog://<zkservers>/messaging/distributedlog` is bound to bookkeeper
-  cluster `X`, and you want streams under `distributedlog://<zkservers>/messaging/distributedlog/S` to write
-  to bookkeeper cluster `Y`. You could just bind `distributedlog://<zkservers>/messaging/distributedlog/S`
-  to bookkeeper cluster `Y`. This binding only affects streams under
-  `distributedlog://<zkservers>/messaging/distributedlog/S`.
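
The inheritance and override rules above amount to longest-prefix matching on the namespace path. Below is a minimal sketch of that resolution logic, not the actual DL implementation (which reads binding metadata from zookeeper); the `bindings` dictionary and cluster names are hypothetical:

```python
def resolve_binding(bindings, namespace_path):
    """Pick the bookkeeper cluster bound to the longest matching path prefix."""
    best = None
    for path in bindings:
        if namespace_path == path or namespace_path.startswith(path + "/"):
            # keep the most specific (longest) matching binding
            if best is None or len(path) > len(best):
                best = path
    return bindings[best] if best is not None else None

# Hypothetical bindings: the root namespace is bound to cluster "X",
# while the sub-namespace /S is overridden to cluster "Y".
bindings = {
    "/messaging/distributedlog": "X",
    "/messaging/distributedlog/S": "Y",
}
```

With these bindings, streams under `/messaging/distributedlog/S` resolve to cluster `Y`, while sibling streams inherit cluster `X` from the root binding.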
-
-Create a namespace binding using the `dlog` tool. For example, the following creates a namespace
-`distributedlog://127.0.0.1:2181/messaging/distributedlog/mynamespace` pointing to the
-bookkeeper cluster we just created above.
-
-::
-    
-    $ distributedlog-service/bin/dlog admin bind \
-        -dlzr 127.0.0.1:2181 \
-        -dlzw 127.0.0.1:2181 \
-        -s 127.0.0.1:2181 \
-        -bkzr 127.0.0.1:2181 \
-        -l /messaging/bookkeeper/ledgers \
-        -i false \
-        -r true \
-        -c \
-        distributedlog://127.0.0.1:2181/messaging/distributedlog/mynamespace
-
-    No bookkeeper is bound to distributedlog://127.0.0.1:2181/messaging/distributedlog/mynamespace
-    Created binding on distributedlog://127.0.0.1:2181/messaging/distributedlog/mynamespace.
-
-
-- Configure the zookeeper cluster used for storing DistributedLog metadata: `-dlzr` and `-dlzw`.
-  Ideally `-dlzr` and `-dlzw` would be the same zookeeper servers as in the distributedlog namespace uri.
-  However, to scale zookeeper reads, zookeeper observers are sometimes added under a different
-  domain name than the participants. In such cases, configuring `-dlzr` and `-dlzw` with different
-  zookeeper domain names helps isolate zookeeper write and read traffic.
-- Configure the zookeeper cluster used by bookkeeper for storing its metadata: `-bkzr` and `-s`.
-  Similar to `-dlzr` and `-dlzw`, you could configure the namespace to use different zookeeper
-  domain names for readers and writers to access bookkeeper metadata.
-- Configure the bookkeeper ledgers path: `-l`.
-- Configure the zookeeper path to store DistributedLog metadata. It is implicitly included as part
-  of namespace URI.
-
-Write Proxy
------------
-
-A write proxy cluster consists of multiple write proxies. Write proxies don't store any state locally,
-so they are mostly stateless and you can run as many of them as you need.
-
-Configuration
-+++++++++++++
-
-Unlike bookkeeper, DistributedLog tries not to put any environment related settings
-in configuration files. Environment related settings are stored and configured via the `namespace binding`.
-The configuration file should contain only non-environment related settings.
-
-There is a `write_proxy.conf` template file available under `distributedlog-service` module.
-
-Run write proxy
-+++++++++++++++
-
-A write proxy could be started using `dlog-daemon.sh` script under `distributedlog-service`.
-
-::
-    
-    WP_SHARD_ID=${WP_SHARD_ID} WP_SERVICE_PORT=${WP_SERVICE_PORT} WP_STATS_PORT=${WP_STATS_PORT} ./distributedlog-service/bin/dlog-daemon.sh start writeproxy
-
-- `WP_SHARD_ID`: A non-negative integer. You don't need to guarantee the uniqueness of the shard id, as it is just an
-  indicator to the client for routing requests. If you are running the write proxy using a cluster scheduler
-  like `aurora`, you could easily obtain a shard id and use it to configure `WP_SHARD_ID`.
-- `WP_SERVICE_PORT`: The port that the write proxy listens on.
-- `WP_STATS_PORT`: The port on which the write proxy exposes stats over an http endpoint.
-
-Please check `distributedlog-service/conf/dlogenv.sh` for more environment variables on configuring write proxy.
-
-- `WP_CONF_FILE`: The path to the write proxy configuration file.
-- `WP_NAMESPACE`: The distributedlog namespace that the write proxy is serving for.
-
-For example, we start 3 write proxies locally and point them to the namespace created above.
-
-::
-    
-    $ WP_SHARD_ID=1 WP_SERVICE_PORT=4181 WP_STATS_PORT=20001 ./distributedlog-service/bin/dlog-daemon.sh start writeproxy
-    $ WP_SHARD_ID=2 WP_SERVICE_PORT=4182 WP_STATS_PORT=20002 ./distributedlog-service/bin/dlog-daemon.sh start writeproxy
-    $ WP_SHARD_ID=3 WP_SERVICE_PORT=4183 WP_STATS_PORT=20003 ./distributedlog-service/bin/dlog-daemon.sh start writeproxy
-
-The write proxy will announce itself to the zookeeper path `.write_proxy` under the dl namespace path.
-
-We could verify that the write proxy is running correctly by checking the zookeeper path or checking its stats port.
-
-::
-
-    $ ./distributedlog-service/bin/dlog zkshell localhost:2181
-    Connecting to localhost:2181
-    Welcome to ZooKeeper!
-    JLine support is enabled
-
-    WATCHER::
-
-    WatchedEvent state:SyncConnected type:None path:null
-    [zk: localhost:2181(CONNECTED) 0] ls /messaging/distributedlog/mynamespace/.write_proxy
-    [member_0000000000, member_0000000001, member_0000000002]
-
-
-::
-    
-    $ curl localhost:20001/ping
-    pong
-
-
-Add and Remove Write Proxies
-++++++++++++++++++++++++++++
-
-Removing a write proxy is straightforward: just kill the process.
-
-::
-    
-    WP_SHARD_ID=1 WP_SERVICE_PORT=4181 WP_STATS_PORT=10001 ./distributedlog-service/bin/dlog-daemon.sh stop writeproxy
-
-
-Adding a new write proxy is just adding a new host and starting the write proxy
-process as described above.
-
-Write Proxy Naming
-++++++++++++++++++
-
-The `dlog-daemon.sh` script starts the write proxy by announcing it to the `.write_proxy` path under
-the dl namespace. So you could use `zk!<zkservers>!/<namespace_path>/.write_proxy` as the finagle name
-to access the write proxy cluster. It is `zk!127.0.0.1:2181!/messaging/distributedlog/mynamespace/.write_proxy`
-in the above example.
-
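The finagle name is simply the zookeeper server list and the namespace's `.write_proxy` path glued together. A tiny, hypothetical helper that assembles it (for illustration only; the helper name is not part of any DL API):

```python
def write_proxy_finagle_name(zk_servers, namespace_path):
    # finagle zk name format: zk!<zkservers>!<path>
    return "zk!{}!{}/.write_proxy".format(zk_servers, namespace_path)
```

For the example namespace above, this yields `zk!127.0.0.1:2181!/messaging/distributedlog/mynamespace/.write_proxy`.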
-Verify the setup
-++++++++++++++++
-
-You could verify the write proxy cluster by running the tutorials against the cluster you just set up.
-
-Create 11 streams (`stream-0` through `stream-10`).
-
-::
-    
-    $ ./distributedlog-service/bin/dlog tool create -u distributedlog://127.0.0.1:2181/messaging/distributedlog/mynamespace -r stream- -e 0-10
-    You are going to create streams : [stream-0, stream-1, stream-2, stream-3, stream-4, stream-5, stream-6, stream-7, stream-8, stream-9, stream-10] (Y or N) Y
-
-
-Tail read from these streams.
-
-::
-    
-    $ ./distributedlog-tutorials/distributedlog-basic/bin/runner run com.twitter.distributedlog.basic.MultiReader distributedlog://127.0.0.1:2181/messaging/distributedlog/mynamespace stream-0,stream-1,stream-2,stream-3,stream-4,stream-5,stream-6,stream-7,stream-8,stream-9,stream-10
-
-
-Run a record generator over some streams.
-
-::
-    
-    $ ./distributedlog-tutorials/distributedlog-basic/bin/runner run com.twitter.distributedlog.basic.RecordGenerator 'zk!127.0.0.1:2181!/messaging/distributedlog/mynamespace/.write_proxy' stream-0 100
-    $ ./distributedlog-tutorials/distributedlog-basic/bin/runner run com.twitter.distributedlog.basic.RecordGenerator 'zk!127.0.0.1:2181!/messaging/distributedlog/mynamespace/.write_proxy' stream-1 100
-
-
-Check the terminal running `MultiReader`. You will see output similar to the following:
-
-::
-    
-    """
-    Received record DLSN{logSegmentSequenceNo=1, entryId=21044, slotId=0} from stream stream-0
-    """
-    record-1464085079105
-    """
-    Received record DLSN{logSegmentSequenceNo=1, entryId=21046, slotId=0} from stream stream-0
-    """
-    record-1464085079113
-    """
-    Received record DLSN{logSegmentSequenceNo=1, entryId=9636, slotId=0} from stream stream-1
-    """
-    record-1464085079110
-    """
-    Received record DLSN{logSegmentSequenceNo=1, entryId=21048, slotId=0} from stream stream-0
-    """
-    record-1464085079125
-    """
-    Received record DLSN{logSegmentSequenceNo=1, entryId=9638, slotId=0} from stream stream-1
-    """
-    record-1464085079121
-    """
-    Received record DLSN{logSegmentSequenceNo=1, entryId=21050, slotId=0} from stream stream-0
-    """
-    record-1464085079133
-    """
-    Received record DLSN{logSegmentSequenceNo=1, entryId=9640, slotId=0} from stream stream-1
-    """
-    record-1464085079130
-    """
-
-
-
-Please refer to the :doc:`performance` for more details on tuning performance.

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/1bd00e9a/_sources/operations/hardware.txt
----------------------------------------------------------------------
diff --git a/_sources/operations/hardware.txt b/_sources/operations/hardware.txt
deleted file mode 100644
index b36b1c8..0000000
--- a/_sources/operations/hardware.txt
+++ /dev/null
@@ -1,120 +0,0 @@
-Hardware
-========
-
-Figure 1 describes the data flow of DistributedLog. Write traffic comes to the `Write Proxy`
-and the data is replicated `RF` (replication factor) ways to `BookKeeper`. BookKeeper
-stores the replicated data and keeps it for a given retention period. The data is
-read by the `Read Proxy` and fanned out to readers.
-
-In such a layered architecture, each layer has its own responsibilities and different resource
-requirements. This makes the capacity and cost model much clearer, and users can scale
-different layers independently.
-
-.. figure:: ../images/costmodel.png
-   :align: center
-
-   Figure 1. DistributedLog Cost Model
-
-Metrics
-~~~~~~~
-
-There are different metrics measuring the capability of a service instance in each layer
-(e.g. a `write proxy` node, a `bookie` storage node or a `read proxy` node). These metrics
-include `rps` (requests per second), `bps` (bits per second), the `number of streams` that an instance
-can support, and latency requirements. `bps` is the simplest and most useful factor for measuring the
-capability of the current distributedlog architecture.
-
-Write Proxy
-~~~~~~~~~~~
-
-Write Proxy (WP) is a stateless serving service that writes and replicates fan-in traffic into BookKeeper.
-The capability of a write proxy instance is purely dominated by the *OUTBOUND* network bandwidth,
-which is reflected as incoming `Write Throughput` and `Replication Factor`.
-
-Calculating the capacity of Write Proxy (number of instances of write proxies) is pretty straightforward.
-The formula is listed as below.
-
-::
-
-    Number of Write Proxies = (Write Throughput) * (Replication Factor) / (Write Proxy Outbound Bandwidth)
-
-As it is bandwidth bound, we'd recommend using machines that have high network bandwidth (e.g. a 10Gb NIC).
-
-The cost estimation is also straightforward.
-
-::
-
-    Bandwidth TCO ($/day/MB) = (Write Proxy TCO) / (Write Proxy Outbound Bandwidth)
-    Cost of write proxies = (Write Throughput) * (Replication Factor) * (Bandwidth TCO)
-
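To make the arithmetic concrete, here is a small sketch of the sizing formula above. All input numbers (throughput, replication factor, usable per-proxy bandwidth) are hypothetical:

```python
import math

def num_write_proxies(write_throughput_mbps, replication_factor,
                      wp_outbound_mbps):
    # outbound traffic per the formula: incoming writes amplified by replication
    return math.ceil(
        write_throughput_mbps * replication_factor / wp_outbound_mbps)

# e.g. 300 MBps of incoming writes, 3x replication, and roughly
# 350 MBps of usable outbound bandwidth per proxy (hypothetical numbers)
# => 900 MBps outbound, which needs 3 proxies
```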
-CPUs
-^^^^
-
-DistributedLog is not CPU bound. You can run an instance with 8 or 12 cores just fine.
-
-Memory
-^^^^^^
-
-There's a fair bit of caching. Consider running with at least 8GB of memory.
-
-Disks
-^^^^^
-
-This is a stateless process, so disk performance is not relevant.
-
-Network
-^^^^^^^
-
-Depending on your throughput, you might be better off running this with a 10Gb NIC. In that scenario, you can easily achieve 350MBps of writes.
-
-
-BookKeeper
-~~~~~~~~~~
-
-BookKeeper is the log segment store, which is a stateful service. There are two factors to measure the
-capability of a bookie instance: `bandwidth` and `storage`. The bandwidth is mostly dominated by the
-outbound traffic from the write proxies, which is `(Write Throughput) * (Replication Factor)`. The storage is
-mostly dominated by that traffic and the `Retention Period`.
-
-Calculating the capacity of BookKeeper (number of instances of bookies) is a bit more complicated than Write
-Proxy. The total number of instances is the maximum number of the instances of bookies calculated using
-`bandwidth` and `storage`.
-
-::
-
-    Number of bookies based on bandwidth = (Write Throughput) * (Replication Factor) / (Bookie Inbound Bandwidth)
-    Number of bookies based on storage = (Write Throughput) * (Replication Factor) * (Retention Period) / (Bookie disk space)
-    Number of bookies = maximum((number of bookies based on bandwidth), (number of bookies based on storage))
-
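A worked sketch of the sizing above, taking the maximum of the bandwidth-bound and storage-bound counts. All input numbers are hypothetical, and the storage term assumes throughput is converted to a daily ingest volume so it can be multiplied by the retention period in days:

```python
import math

def num_bookies(write_mbps, replication_factor, bookie_inbound_mbps,
                retention_days, bookie_disk_gb):
    # bandwidth-bound count: replicated write traffic over per-bookie inbound
    by_bandwidth = math.ceil(
        write_mbps * replication_factor / bookie_inbound_mbps)
    # storage-bound count: GB ingested per day, replicated, kept for retention
    daily_gb = write_mbps * 86400 / 1024.0
    by_storage = math.ceil(
        daily_gb * replication_factor * retention_days / bookie_disk_gb)
    return max(by_bandwidth, by_storage)

# e.g. 100 MBps writes, 3x replication, 100 MBps inbound per bookie,
# 3-day retention, 10 TB of ledger disk per bookie (hypothetical numbers):
# bandwidth needs 3 bookies but storage needs 8, so storage dominates.
```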
-We should consider both bandwidth and storage when choosing the hardware for bookies. There are several rules to follow:
-
-- A bookie should have multiple disks.
-- The disks used as journal disks should have a combined I/O bandwidth similar to the bookie's *INBOUND* network bandwidth. For example, if you plan to use a journal disk whose I/O bandwidth is around 100MBps, a 1Gb NIC is a better choice than a 10Gb NIC.
-- The disks used as ledger disks should be large enough to hold the data if the retention period is long.
-
-The cost estimation is straightforward based on the number of bookies estimated above.
-
-::
-
-    Cost of bookies = (Number of bookies) * (Bookie TCO)
-
-Read Proxy
-~~~~~~~~~~
-
-Similar to the Write Proxy, the Read Proxy is also dominated by *OUTBOUND* bandwidth, which is reflected as the incoming `Write Throughput` and the `Fanout Factor`.
-
-Calculating the capacity of Read Proxy (number of instances of read proxies) is also pretty straightforward.
-The formula is listed as below.
-
-::
-
-    Number of Read Proxies = (Write Throughput) * (Fanout Factor) / (Read Proxy Outbound Bandwidth)
-
-As it is bandwidth bound, we'd recommend using machines that have high network bandwidth (e.g. a 10Gb NIC).
-
-The cost estimation is also straightforward.
-
-::
-
-    Bandwidth TCO ($/day/MB) = (Read Proxy TCO) / (Read Proxy Outbound Bandwidth)
-    Cost of read proxies = (Write Throughput) * (Fanout Factor) * (Bandwidth TCO)
-

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/1bd00e9a/_sources/operations/main.txt
----------------------------------------------------------------------
diff --git a/_sources/operations/main.txt b/_sources/operations/main.txt
deleted file mode 100644
index 6eb2a96..0000000
--- a/_sources/operations/main.txt
+++ /dev/null
@@ -1,13 +0,0 @@
-Deployment & Administration
-===========================
-
-.. toctree::
-   :maxdepth: 1
-
-   deployment
-   operations
-   performance
-   hardware
-   monitoring
-   zookeeper
-   bookkeeper

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/1bd00e9a/_sources/operations/monitoring.txt
----------------------------------------------------------------------
diff --git a/_sources/operations/monitoring.txt b/_sources/operations/monitoring.txt
deleted file mode 100644
index d01caf9..0000000
--- a/_sources/operations/monitoring.txt
+++ /dev/null
@@ -1,378 +0,0 @@
-Monitoring
-==========
-
-DistributedLog uses the stats library provided by Apache BookKeeper for reporting metrics in
-both the server and the client. It can be configured to report stats through a pluggable stats
-provider, so it can integrate with your monitoring system.
-
-Stats Provider
-~~~~~~~~~~~~~~
-
-`StatsProvider` is a provider that provides different kinds of stats logger for different scopes.
-The provider is also responsible for reporting its managed metrics.
-
-::
-
-    // Create the stats provider
-    StatsProvider statsProvider = ...;
-    // Start the stats provider
-    statsProvider.start(conf);
-    // Stop the stats provider
-    statsProvider.stop();
-
-Stats Logger
-____________
-
-A scoped `StatsLogger` is a stats logger that records three kinds of statistics
-(counters, gauges and op stats) under a given `scope`.
-
-A `StatsLogger` can be created either by obtaining it from a stats provider with
-a scope name:
-
-::
-
-    StatsProvider statsProvider = ...;
-    StatsLogger statsLogger = statsProvider.scope("test-scope");
-
-Or by obtaining it from another stats logger with a sub-scope name:
-
-::
-
-    StatsLogger rootStatsLogger = ...;
-    StatsLogger subStatsLogger = rootStatsLogger.scope("sub-scope");
-
-All the metrics in a stats provider are managed in a hierarchy of scopes.
-
-::
-
-    // all stats recorded by `rootStatsLogger` are under 'root'
-    StatsLogger rootStatsLogger = statsProvider.scope("root");
-    // all stats recorded by `subStatsLogger1` are under 'root/scope1'
-    StatsLogger subStatsLogger1 = rootStatsLogger.scope("scope1");
-    // all stats recorded by `subStatsLogger2` are under 'root/scope2'
-    StatsLogger subStatsLogger2 = rootStatsLogger.scope("scope2");
-
-Counters
-++++++++
-
-A `Counter` is a cumulative metric that represents a single numerical value. A **counter**
-is typically used to count requests served, tasks completed, errors occurred, etc. Counters
-should not be used to expose current counts of items whose number can also go down, e.g.
-the number of currently running tasks. Use `Gauges` for this use case.
-
-To change a counter, use:
-
-::
-    
-    StatsLogger statsLogger = ...;
-    Counter births = statsLogger.getCounter("births");
-    // increment the counter
-    births.inc();
-    // decrement the counter
-    births.dec();
-    // change the counter by delta
-    births.add(-10);
-    // reset the counter
-    births.reset();
-
-Gauges
-++++++
-
-A `Gauge` is a metric that represents a single numerical value that can arbitrarily go up and down.
-
-Gauges are typically used for measured values like temperatures or current memory usage, but also
-"counts" that can go up and down, like the number of running tasks.
-
-To define a gauge, add the following code somewhere in your initialization:
-
-::
-
-    final AtomicLong numPendingRequests = new AtomicLong(0L);
-    StatsLogger statsLogger = ...;
-    statsLogger.registerGauge(
-        "num_pending_requests",
-        new Gauge<Number>() {
-            @Override
-            public Number getDefaultValue() {
-                return 0;
-            }
-            @Override
-            public Number getSample() {
-                return numPendingRequests.get();
-            }
-        });
-
-The gauge must always return a numerical value when sampling.
-
-Metrics (OpStats)
-+++++++++++++++++
-
-An `OpStats` is a set of metrics that represents the statistics of an `operation`. Those metrics
-include the `success` or `failure` of the operations and their latency distribution (also known as a `Histogram`).
-It is usually used for timing.
-
-::
-
-    StatsLogger statsLogger = ...;
-    OpStatsLogger writeStats = statsLogger.getOpStatsLogger("writes");
-    long writeLatency = ...;
-
-    // register success op
-    writeStats.registerSuccessfulEvent(writeLatency);
-
-    // register failure op
-    writeStats.registerFailedEvent(writeLatency);
-
-Available Stats Providers
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-All the available stats providers are listed as below:
-
-* Twitter Science Stats (deprecated)
-* Twitter Ostrich Stats (deprecated)
-* Twitter Finagle Stats
-* Codahale Stats
-
-Twitter Science Stats
-_____________________
-
-Use the following dependency to enable the Twitter science stats provider.
-
-::
-
-   <dependency>
-     <groupId>org.apache.bookkeeper.stats</groupId>
-     <artifactId>twitter-science-provider</artifactId>
-     <version>${bookkeeper.version}</version>
-   </dependency>
-
-Construct the stats provider for clients.
-
-::
-
-    StatsProvider statsProvider = new TwitterStatsProvider();
-    DistributedLogConfiguration conf = ...;
-
-    // starts the stats provider (optional)
-    statsProvider.start(conf);
-
-    // all the dl related stats are exposed under "dlog"
-    StatsLogger statsLogger = statsProvider.getStatsLogger("dlog");
-    DistributedLogNamespace namespace = DistributedLogNamespaceBuilder.newBuilder()
-        .uri(...)
-        .conf(conf)
-        .statsLogger(statsLogger)
-        .build();
-
-    ...
-
-    // stop the stats provider (optional)
-    statsProvider.stop();
-
-
-Expose the stats collected by the stats provider by configuring the following settings:
-
-::
-
-    // enable exporting the stats
-    statsExport=true
-    // exporting the stats at port 8080
-    statsHttpPort=8080
-
-
-If exporting stats is enabled, all the stats are exported by the http endpoint.
-You could curl the http endpoint to check the stats.
-
-::
-
-    curl -s <host>:8080/vars
-
-
-Check ScienceStats_ for more details.
-
-.. _ScienceStats: https://github.com/twitter/commons/tree/master/src/java/com/twitter/common/stats
-
-Twitter Ostrich Stats
-_____________________
-
-Use the following dependency to enable the Twitter ostrich stats provider.
-
-::
-
-   <dependency>
-     <groupId>org.apache.bookkeeper.stats</groupId>
-     <artifactId>twitter-ostrich-provider</artifactId>
-     <version>${bookkeeper.version}</version>
-   </dependency>
-
-Construct the stats provider for clients.
-
-::
-
-    StatsProvider statsProvider = new TwitterOstrichProvider();
-    DistributedLogConfiguration conf = ...;
-
-    // starts the stats provider (optional)
-    statsProvider.start(conf);
-
-    // all the dl related stats are exposed under "dlog"
-    StatsLogger statsLogger = statsProvider.getStatsLogger("dlog");
-    DistributedLogNamespace namespace = DistributedLogNamespaceBuilder.newBuilder()
-        .uri(...)
-        .conf(conf)
-        .statsLogger(statsLogger)
-        .build();
-
-    ...
-
-    // stop the stats provider (optional)
-    statsProvider.stop();
-
-
-Expose the stats collected by the stats provider by configuring the following settings:
-
-::
-
-    // enable exporting the stats
-    statsExport=true
-    // exporting the stats at port 8080
-    statsHttpPort=8080
-
-
-If exporting stats is enabled, all the stats are exported by the http endpoint.
-You could curl the http endpoint to check the stats.
-
-::
-
-    curl -s <host>:8080/stats.txt
-
-
-Check Ostrich_ for more details.
-
-.. _Ostrich: https://github.com/twitter/ostrich
-
-Twitter Finagle Metrics
-_______________________
-
-Use the following dependency to bridge a finagle stats receiver to bookkeeper's stats provider.
-All the stats exposed by the stats provider will be collected by the finagle stats receiver and exposed
-by Twitter's admin service.
-
-::
-
-   <dependency>
-     <groupId>org.apache.bookkeeper.stats</groupId>
-     <artifactId>twitter-finagle-provider</artifactId>
-     <version>${bookkeeper.version}</version>
-   </dependency>
-
-Construct the stats provider for clients.
-
-::
-
-    StatsReceiver statsReceiver = ...; // finagle stats receiver
-    StatsProvider statsProvider = new FinagleStatsProvider(statsReceiver);
-    DistributedLogConfiguration conf = ...;
-
-    // the stats provider does nothing on start.
-    statsProvider.start(conf);
-
-    // all the dl related stats are exposed under "dlog"
-    StatsLogger statsLogger = statsProvider.getStatsLogger("dlog");
-    DistributedLogNamespace namespace = DistributedLogNamespaceBuilder.newBuilder()
-        .uri(...)
-        .conf(conf)
-        .statsLogger(statsLogger)
-        .build();
-
-    ...
-
-    // the stats provider does nothing on stop.
-    statsProvider.stop();
-
-
-Check the `finagle metrics library`__ for more details on how to expose the stats.
-
-.. _TwitterServer: https://twitter.github.io/twitter-server/Migration.html
-
-__ TwitterServer_
-
-Codahale Metrics
-________________
-
-Use the following dependency to enable the Codahale stats provider.
-
-::
-
-   <dependency>
-     <groupId>org.apache.bookkeeper.stats</groupId>
-     <artifactId>codahale-metrics-provider</artifactId>
-     <version>${bookkeeper.version}</version>
-   </dependency>
-
-Construct the stats provider for clients.
-
-::
-
-    StatsProvider statsProvider = new CodahaleMetricsProvider();
-    DistributedLogConfiguration conf = ...;
-
-    // starts the stats provider (optional)
-    statsProvider.start(conf);
-
-    // all the dl related stats are exposed under "dlog"
-    StatsLogger statsLogger = statsProvider.getStatsLogger("dlog");
-    DistributedLogNamespace namespace = DistributedLogNamespaceBuilder.newBuilder()
-        .uri(...)
-        .conf(conf)
-        .statsLogger(statsLogger)
-        .build();
-
-    ...
-
-    // stop the stats provider (optional)
-    statsProvider.stop();
-
-
-Expose the stats collected by the stats provider in different ways by configuring the following settings.
-Check Codehale_ on how to configure the report endpoints.
-
-::
-
-    // How frequently to report the stats (in seconds)
-    codahaleStatsOutputFrequencySeconds=...
-    // The prefix string of codahale stats
-    codahaleStatsPrefix=...
-
-    //
-    // Report Endpoints
-    //
-
-    // expose the stats to Graphite
-    codahaleStatsGraphiteEndpoint=...
-    // expose the stats to CSV files
-    codahaleStatsCSVEndpoint=...
-    // expose the stats to Slf4j logging
-    codahaleStatsSlf4jEndpoint=...
-    // expose the stats to JMX endpoint
-    codahaleStatsJmxEndpoint=...
-
-
-Check Codehale_ for more details.
-
-.. _Codehale: https://dropwizard.github.io/metrics/3.1.0/
-
-Enable Stats Provider on Bookie Servers
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The stats provider used by *Bookie Servers* is configured by setting the following option.
-
-::
-
-    // class of stats provider
-    statsProviderClass="org.apache.bookkeeper.stats.CodahaleMetricsProvider"
-
-Metrics
-~~~~~~~
-
-Check the :doc:`../references/metrics` reference page for the metrics exposed by DistributedLog.

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/1bd00e9a/_sources/operations/operations.txt
----------------------------------------------------------------------
diff --git a/_sources/operations/operations.txt b/_sources/operations/operations.txt
deleted file mode 100644
index 6a8061e..0000000
--- a/_sources/operations/operations.txt
+++ /dev/null
@@ -1,204 +0,0 @@
-DistributedLog Operations
-=========================
-
-Feature Provider
-~~~~~~~~~~~~~~~~
-
-DistributedLog uses a `feature-provider` library provided by Apache BookKeeper for managing features
-dynamically at runtime. It is a feature-flag_ system used to proportionally control what features
-are enabled in the system. In other words, it is a way of altering behavior in a system without
-restarting it. It can be used during all stages of development, but its most visible use case is in
-production. For instance, during a production release, you can enable or disable individual features and
-control the data flow through the system, thereby minimizing the risk of system failure in real time.
-
-.. _feature-flag: https://en.wikipedia.org/wiki/Feature_toggle
-
-This `feature-provider` interface is pluggable and easy to integrate with any configuration management
-system.
-
-API
-___
-
-`FeatureProvider` is a provider that manages features under different scopes. The provider is responsible
-for loading features dynamically at runtime. A `Feature` is a numeric flag that controls what percentage
-of this feature is available to the system - the number is called its `availability`.
-
-::
-
-    Feature.name() => returns the name of this feature
-    Feature.availability() => returns the availability of this feature
-    Feature.isAvailable() => returns true if its availability is larger than 0; otherwise false
-
-
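One common way a percentage-valued flag like `availability` is used is to gate each request deterministically into an enabled or disabled bucket. The sketch below illustrates these semantics only; the hashing scheme is an assumption, not the actual library implementation:

```python
def is_enabled_for(availability, request_id):
    """Gate a request on a feature whose availability is between 0 and 100."""
    if availability <= 0:
        return False   # feature fully disabled
    if availability >= 100:
        return True    # feature fully enabled
    # bucket the request into [0, 100) and compare against the availability
    return (hash(request_id) % 100) < availability
```

With `availability = 0` the feature is off for everything, with `100` it is on for everything, and intermediate values enable it for roughly that percentage of requests.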
-It is easy to obtain a feature from the provider by just providing a feature name.
-
-::
-
-    FeatureProvider provider = ...;
-    Feature feature = provider.getFeature("feature1"); // returns the feature named 'feature1'
-
-    
-The `FeatureProvider` is scopable, which allows creating features in a hierarchical way. For example, if a system
-is comprised of two subsystems, one being *cache* and the other being *storage*, the features belonging to
-the different subsystems can be created under different scopes.
-
-::
-
-    FeatureProvider provider = ...;
-    FeatureProvider cacheFeatureProvider = provider.scope("cache");
-    FeatureProvider storageFeatureProvider = provider.scope("storage");
-    Feature writeThroughFeature = cacheFeatureProvider.getFeature("write_through");
-    Feature duralWriteFeature = storageFeatureProvider.getFeature("dural_write");
-
-    // so the available features under `provider` are: (assume scopes are separated by '.')
-    // - 'cache.write_through'
-    // - 'storage.dural_write'
-
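The scoping behavior can be sketched with a small in-memory provider. This is a hypothetical illustration under the assumption stated above that scopes are joined with `'.'`; it is not the actual DistributedLog implementation.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A minimal in-memory sketch of a scoped feature provider (hypothetical
// class): scope names are joined with '.' to form fully qualified
// feature names such as 'cache.write_through'.
final class InMemoryFeatureProvider {
    private final String scope; // "" for the root provider
    private final Map<String, Integer> availabilities;

    InMemoryFeatureProvider() { this("", new ConcurrentHashMap<>()); }

    private InMemoryFeatureProvider(String scope, Map<String, Integer> availabilities) {
        this.scope = scope;
        this.availabilities = availabilities;
    }

    // Returns a child provider whose features live under `scope.name`.
    InMemoryFeatureProvider scope(String name) {
        String child = scope.isEmpty() ? name : scope + "." + name;
        return new InMemoryFeatureProvider(child, availabilities);
    }

    // The fully qualified name a feature is registered under.
    String featureName(String name) {
        return scope.isEmpty() ? name : scope + "." + name;
    }

    void setAvailability(String name, int value) {
        availabilities.put(featureName(name), value);
    }

    int availability(String name) {
        return availabilities.getOrDefault(featureName(name), 0);
    }
}
```

Because all scoped providers share the same underlying map, a feature set through `scope("cache")` is visible to any other provider that resolves the same fully qualified name.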
-
-A feature provider can be passed to `DistributedLogNamespaceBuilder` when building the namespace,
-so that it is used for controlling the features exposed under the resulting `DistributedLogNamespace`.
-
-::
-
-    FeatureProvider rootProvider = ...;
-    FeatureProvider dlFeatureProvider = rootProvider.scope("dlog");
-    DistributedLogNamespace namespace = DistributedLogNamespaceBuilder.newBuilder()
-        .uri(uri)
-        .conf(conf)
-        .featureProvider(dlFeatureProvider)
-        .build();
-
-
-On the distributedlog write proxy server, the feature provider is loaded via reflection. You can specify
-the feature provider class name as below; otherwise `DefaultFeatureProvider` is used, which disables
-all features by default.
-
-::
-
-    featureProviderClass=com.twitter.distributedlog.feature.DynamicConfigurationFeatureProvider
-
-
-
-Configuration Based Feature Provider
-____________________________________
-
-Besides `DefaultFeatureProvider`, distributedlog also provides a file-based feature provider that loads
-features from properties files.
-
-All the features and their availabilities are configured in properties file format. For example,
-
-::
-
-    cache.write_through=100
-    storage.dural_write=0
-
-
-You can enable the file-based feature provider by setting `featureProviderClass` in the distributedlog
-configuration file to `com.twitter.distributedlog.feature.DynamicConfigurationFeatureProvider`. The
-feature provider loads features from two files: a base config file configured by
-`fileFeatureProviderBaseConfigPath`, and an overlay config file configured by
-`fileFeatureProviderOverlayConfigPath`. The current implementation treats the two files identically,
-except that settings in the `overlay` config override those in the `base` config. It is recommended
-to use the base config file for the default availability values of your system and to dynamically
-adjust the availability values in the overlay config file.
-
-::
-
-    featureProviderClass=com.twitter.distributedlog.feature.DynamicConfigurationFeatureProvider
-    fileFeatureProviderBaseConfigPath=/path/to/base/config
-    fileFeatureProviderOverlayConfigPath=/path/to/overlay/config
-    # how frequently the config files are reloaded
-    dynamicConfigReloadIntervalSec=60
-
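The overlay-over-base merge described above can be sketched with plain `java.util.Properties`. This is an illustration of the stated precedence rule only, not the actual `DynamicConfigurationFeatureProvider` code.

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// A sketch of the base/overlay merge behavior (hypothetical helper):
// both files are plain properties, and overlay values win.
final class OverlayMerge {
    static Properties merge(String baseProps, String overlayProps) {
        try {
            Properties base = new Properties();
            base.load(new StringReader(baseProps));
            Properties overlay = new Properties();
            overlay.load(new StringReader(overlayProps));

            Properties merged = new Properties();
            merged.putAll(base);    // defaults from the base config
            merged.putAll(overlay); // overlay settings override base settings
            return merged;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String base = "cache.write_through=100\nstorage.dural_write=0\n";
        String overlay = "storage.dural_write=50\n";
        Properties merged = merge(base, overlay);
        System.out.println(merged.getProperty("cache.write_through")); // 100 (from base)
        System.out.println(merged.getProperty("storage.dural_write")); // 50 (overridden)
    }
}
```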
-
-Available Features
-__________________
-
-Check the :doc:`../references/features` reference page for the features exposed by DistributedLog.
-
-`dlog`
-~~~~~~
-
-A CLI is provided for inspecting DistributedLog streams and metadata.
-
-.. code:: bash
-
-   dlog
-   JMX enabled by default
-   Usage: dlog <command>
-   where command is one of:
-       local               Run distributedlog sandbox
-       example             Run distributedlog example
-       tool                Run distributedlog tool
-       proxy_tool          Run distributedlog proxy tool to interact with proxies
-       balancer            Run distributedlog balancer
-       admin               Run distributedlog admin tool
-       help                This help message
-
-   or command is the full name of a class with a defined main() method.
-
-   Environment variables:
-       DLOG_LOG_CONF        Log4j configuration file (default $HOME/src/distributedlog/distributedlog-service/conf/log4j.properties)
-       DLOG_EXTRA_OPTS      Extra options to be passed to the jvm
-       DLOG_EXTRA_CLASSPATH Add extra paths to the dlog classpath
-
-These variables can also be set in conf/dlogenv.sh
-
-Create a stream
-_______________
-
-To create a stream:
-
-.. code:: bash
-
-   dlog tool create -u <DL URI> -r <STREAM PREFIX> -e <STREAM EXPRESSION>
-
-
-List the streams
-________________
-
-To list all the streams under a given DistributedLog namespace:
-
-.. code:: bash
-
-   dlog tool list -u <DL URI>
-
-Show stream's information
-_________________________
-
-To view the metadata associated with a stream:
-
-.. code:: bash
-
-   dlog tool show -u <DL URI> -s <STREAM NAME>
-
-
-Dump a stream
-_____________
-
-To dump the records inside a stream:
-
-.. code:: bash
-
-   dlog tool dump -u <DL URI> -s <STREAM NAME> -o <START TXN ID> -l <NUM RECORDS>
-
-Delete a stream
-_______________
-
-To delete a stream, run:
-
-.. code:: bash
-
-   dlog tool delete -u <DL URI> -s <STREAM NAME>
-
-
-Truncate a stream
-_________________
-
-Truncate the streams under a given DistributedLog namespace. You can specify a filter to match the streams that you want to truncate.
-
-Note the difference between the ``truncate`` and ``delete`` commands: ``truncate`` purges the data without removing the streams, while ``delete`` removes the streams themselves. You can pass the ``-delete`` flag to the ``truncate`` command to also delete the streams.
-
-.. code:: bash
-
-   dlog tool truncate -u <DL URI>

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/1bd00e9a/_sources/operations/performance.txt
----------------------------------------------------------------------
diff --git a/_sources/operations/performance.txt b/_sources/operations/performance.txt
deleted file mode 100644
index caac8ad..0000000
--- a/_sources/operations/performance.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-Performance Tuning
-==================
-
-(describe how to tune performance, critical settings)

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/1bd00e9a/_sources/operations/zookeeper.txt
----------------------------------------------------------------------
diff --git a/_sources/operations/zookeeper.txt b/_sources/operations/zookeeper.txt
deleted file mode 100644
index a0d65a5..0000000
--- a/_sources/operations/zookeeper.txt
+++ /dev/null
@@ -1,88 +0,0 @@
-ZooKeeper
-=========
-
-To run a DistributedLog ensemble, you'll need a set of ZooKeeper
-nodes. There are no constraints on the number of ZooKeeper nodes you
-need. One node is enough to run your cluster, but for reliability
-purposes, you should run at least 3 nodes.
-
-Version
--------
-
-DistributedLog leverages zookeeper `multi` operations for metadata updates,
-so the minimum required zookeeper version is 3.4.*. We recommend running the
-stable zookeeper release `3.4.8`.
-
-Run ZooKeeper from distributedlog source
-----------------------------------------
-
-Since `zookeeper` is one of the dependencies of `distributedlog-service`, you can simply
-run `zookeeper` servers using the same set of scripts provided in `distributedlog-service`.
-The following sections describe how to run zookeeper using those scripts.
-
-Build
-+++++
-
-First of all, build DistributedLog:
-
-.. code-block:: bash
-
-    $ mvn clean install -DskipTests
-
-Configuration
-+++++++++++++
-
-The configuration file `zookeeper.conf.template` under `distributedlog-service/conf` is a template of a
-production configuration for running a zookeeper node. Most of the configuration settings are suitable for
-production usage. You might need to adjust the following settings according to your environment and
-hardware platform.
-
-Ensemble
-^^^^^^^^
-
-You need to configure the zookeeper servers that form the ensemble as below:
-
-::
-    
-    server.1=127.0.0.1:2710:3710:participant;0.0.0.0:2181
-
-
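For a multi-node ensemble, each server gets its own `server.N` line in the same format; the host names below are placeholders for your environment, not defaults shipped with distributedlog. Note that standard zookeeper also requires each node to have a `myid` file under its `dataDir` containing that node's server number.

```
server.1=zk1.example.com:2888:3888:participant;0.0.0.0:2181
server.2=zk2.example.com:2888:3888:participant;0.0.0.0:2181
server.3=zk3.example.com:2888:3888:participant;0.0.0.0:2181
```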
-Please check the zookeeper_ website for more configuration options.
-
-Disks
-^^^^^
-
-You need to configure the following settings according to the disk layout of your hardware.
-For performance, it is recommended to put `dataLogDir` on a disk separate from the others.
-
-::
-    
-    # the directory where the snapshot is stored.
-    dataDir=/tmp/data/zookeeper
-    
-    # the directory where the transaction logs are written
-    dataLogDir=/tmp/data/zookeeper/txlog
-
-
-Run
-+++
-
-As `zookeeper` is shipped as part of `distributedlog-service`, you can use the `dlog-daemon.sh`
-script to start `zookeeper` as a daemon.
-
-Start the zookeeper:
-
-.. code-block:: bash
-
-    $ ./distributedlog-service/bin/dlog-daemon.sh start zookeeper /path/to/zookeeper.conf
-
-Stop the zookeeper:
-
-.. code-block:: bash
-
-    $ ./distributedlog-service/bin/dlog-daemon.sh stop zookeeper
-
-Please check zookeeper_ website for more details.
-
-.. _zookeeper: http://zookeeper.apache.org/

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/1bd00e9a/_sources/performance/main.txt
----------------------------------------------------------------------
diff --git a/_sources/performance/main.txt b/_sources/performance/main.txt
deleted file mode 100644
index 59820c2..0000000
--- a/_sources/performance/main.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-Performance
-===========
-
-(performance results and benchmark)

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/1bd00e9a/_sources/references/configuration.txt
----------------------------------------------------------------------
diff --git a/_sources/references/configuration.txt b/_sources/references/configuration.txt
deleted file mode 100644
index 53f684d..0000000
--- a/_sources/references/configuration.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Configuration Settings
-======================

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/1bd00e9a/_sources/references/features.txt
----------------------------------------------------------------------
diff --git a/_sources/references/features.txt b/_sources/references/features.txt
deleted file mode 100644
index 45c92d8..0000000
--- a/_sources/references/features.txt
+++ /dev/null
@@ -1,30 +0,0 @@
-Features
-========
-
-BookKeeper Features
--------------------
-
-*<scope>* is the scope value of the FeatureProvider passed to the BookKeeperClient builder. In the DistributedLog write proxy, the *<scope>* is 'bkc'.
-
-- *<scope>.repp_disable_durability_enforcement*: Feature to disable durability enforcement on the region aware data placement policy. It applies to global replicated logs only. If the availability value is larger than zero, the region aware data placement policy will *NOT* enforce region-wise durability. For example, suppose a *Log* is writing to regions A, B and C with write quorum size *15* and ack quorum size *9*. If the availability value of this feature is zero, the *9*
-  acknowledgements must come from bookies in at least two regions. If the availability value is larger than zero, the enforcement is *disabled* and a write can be acknowledged after receiving *9* acknowledgements from any regions. By default the availability is zero. Turn this feature on to tolerate multi-region failures.
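The durability rule described in this bullet can be sketched as a small predicate. This is an illustration of the stated rule only (hypothetical helper, not BookKeeper's placement policy code): with enforcement on, an entry is durable once the ack quorum is met *and* the acknowledgements span at least two regions.

```java
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// A sketch of the region-wise durability check (hypothetical class):
// ackRegions holds the region of each bookie acknowledgement received so far.
final class RegionDurabilityCheck {
    static boolean isDurable(List<String> ackRegions, int ackQuorum,
                             boolean enforceRegionDurability) {
        if (ackRegions.size() < ackQuorum) {
            return false; // not enough acknowledgements yet
        }
        if (!enforceRegionDurability) {
            return true;  // feature availability > 0: any quorum of acks suffices
        }
        Set<String> distinct = new TreeSet<>(ackRegions);
        return distinct.size() >= 2; // acks must span at least two regions
    }
}
```

With ack quorum 9, nine acks all from region A are not durable under enforcement, but eight from A plus one from B are.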
-
-- *<scope>.disable_ensemble_change*: Feature to disable ensemble change on DistributedLog writers. If the availability value of this feature is larger than zero, ensemble changes on writers are disabled. It can be used for tolerating a zookeeper outage.
-
-- *<scope>.<region>.disallow_bookie_placement*: Feature to disallow choosing a bookie replacement from a given *region* during ensemble changes. It applies to global replicated logs. If the availability value is larger than zero, the writer (write proxy) will stop choosing bookies from *<region>* during ensemble changes. It is useful for blacking out a region dynamically.
-
-DistributedLog Features
------------------------
-
-*<scope>* is the scope value of the FeatureProvider passed to the DistributedLogNamespace builder. In the DistributedLog write proxy, the *<scope>* is 'dl'.
-
-- *<scope>.disable_logsegment_rolling*: Feature to disable log segment rolling. If the availability value is larger than zero, the writer (write proxy) will stop rolling to new log segments and keep writing to the current log segments. It is a useful feature for tolerating a zookeeper outage.
-
-- *<scope>.disable_write_limit*: Feature to disable write limiting. If the availability value is larger than zero, the writer (write proxy) will disable write limiting. It is used to control write limiting dynamically.
-
-Write Proxy Features
---------------------
-
-- *region_stop_accept_new_stream*: Feature to disable accepting new streams in the current region. It applies to global replicated logs only. If the availability value is larger than zero, the write proxies will stop accepting new streams and throw a RegionAvailable exception to clients, so clients know this region has stopped accepting new streams and are forced to send requests to other regions. It is used for ownership failover between regions.
-- *service_rate_limit_disabled*: Feature to disable service rate limiting. If the availability value is larger than zero, the write proxies will disable rate limiting.
-- *service_checksum_disabled*: Feature to disable service request checksum validation. If the availability value is larger than zero, the write proxies will disable request checksum validation.

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/1bd00e9a/_sources/references/main.txt
----------------------------------------------------------------------
diff --git a/_sources/references/main.txt b/_sources/references/main.txt
deleted file mode 100644
index 5b65d87..0000000
--- a/_sources/references/main.txt
+++ /dev/null
@@ -1,11 +0,0 @@
-References
-===========
-
-This page keeps references on the configuration settings, metrics and features exposed in DistributedLog.
-
-.. toctree::
-   :maxdepth: 2
-
-   configuration
-   metrics
-   features

