helix-commits mailing list archives

From ka...@apache.org
Subject [03/31] Redesign documentation for 0.6.2, 0.7.0, and trunk
Date Thu, 02 Jan 2014 00:14:03 GMT
http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/trunk/src/site/markdown/Concepts.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/Concepts.md b/site-releases/trunk/src/site/markdown/Concepts.md
deleted file mode 100644
index fa5d0ba..0000000
--- a/site-releases/trunk/src/site/markdown/Concepts.md
+++ /dev/null
@@ -1,275 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Concepts</title>
-</head>
-
-Concepts
-----------------------------
-
-Helix is based on the idea that a given task has the following attributes associated with it:
-
-* _Location of the task_. For example, it runs on node N1
-* _State_. For example, it is running, stopped, etc.
-
-In Helix terminology, a task is referred to as a _resource_.
-
-### IdealState
-
-IdealState simply allows one to map tasks to locations and states. A standard way of expressing this in Helix is:
-
-```
-  "TASK_NAME" : {
-    "LOCATION" : "STATE"
-  }
-
-```
-Consider a simple case where you want to launch a task 'myTask' on node 'N1'. The IdealState for this can be expressed as follows:
-
-```
-{
-  "id" : "MyTask",
-  "mapFields" : {
-    "myTask" : {
-      "N1" : "ONLINE",
-    }
-  }
-}
-```
-### Partition
-
-If this task gets too big to fit on one box, you might want to divide it into subtasks. Each subtask is referred to as a _partition_ in Helix. Let's say you want to divide the task into 3 subtasks/partitions; the IdealState can then be changed as shown below.
-
-'myTask_0', 'myTask_1', and 'myTask_2' are logical names representing the partitions of myTask. They run on N1, N2, and N3, respectively.
-
-```
-{
-  "id" : "myTask",
-  "simpleFields" : {
-    "NUM_PARTITIONS" : "3",
-  }
- "mapFields" : {
-    "myTask_0" : {
-      "N1" : "ONLINE",
-    },
-    "myTask_1" : {
-      "N2" : "ONLINE",
-    },
-    "myTask_2" : {
-      "N3" : "ONLINE",
-    }
-  }
-}
-```
-
-### Replica
-
-Partitioning allows one to split the data/task into multiple subparts. But let's say the request rate for each partition increases. The common solution is to have multiple copies for each partition. Helix refers to a copy of a partition as a _replica_.  Adding a replica also increases the availability of the system during failures. One can see this methodology employed often in search systems. The index is divided into shards, and each shard has multiple copies.
-
-Let's say you want to add one additional replica for each partition. The IdealState can simply be changed as shown below.
-
-To increase the availability of the system, it's better to place the replicas of a given partition on different nodes.
-
-```
-{
-  "id" : "myIndex",
-  "simpleFields" : {
-    "NUM_PARTITIONS" : "3",
-    "REPLICAS" : "2",
-  },
- "mapFields" : {
-    "myIndex_0" : {
-      "N1" : "ONLINE",
-      "N2" : "ONLINE"
-    },
-    "myIndex_1" : {
-      "N2" : "ONLINE",
-      "N3" : "ONLINE"
-    },
-    "myIndex_2" : {
-      "N3" : "ONLINE",
-      "N1" : "ONLINE"
-    }
-  }
-}
-```
-
-### State 
-
-Now let's take a slightly more complicated scenario where a task represents a database.  Unlike an index, which is generally read-only, a database supports both reads and writes. Keeping the data consistent among the replicas is crucial in distributed data stores. One commonly applied technique is to assign one replica as the MASTER and the remaining replicas as SLAVEs. All writes go to the MASTER and are then replicated to the SLAVE replicas.
-
-Helix allows one to assign different states to each replica. Let's say you have two MySQL instances, N1 and N2, where one will serve as MASTER and the other as SLAVE. The IdealState can be changed to:
-
-```
-{
-  "id" : "myDB",
-  "simpleFields" : {
-    "NUM_PARTITIONS" : "1",
-    "REPLICAS" : "2",
-  },
-  "mapFields" : {
-    "myDB" : {
-      "N1" : "MASTER",
-      "N2" : "SLAVE",
-    }
-  }
-}
-
-```
-
-
-### State Machine and Transitions
-
-IdealState allows one to exactly specify the desired state of the cluster. Given an IdealState, Helix takes up the responsibility of ensuring that the cluster reaches the IdealState.  The Helix _controller_ reads the IdealState and then commands each Participant to take appropriate actions to move from one state to another until it matches the IdealState.  These actions are referred to as _transitions_ in Helix.
-
-The next logical question is:  how does the _controller_ compute the transitions required to get to IdealState?  This is where the finite state machine concept comes in. Helix allows applications to plug in a finite state machine.  A state machine consists of the following:
-
-* State: Describes the role of a replica
-* Transition: An action that allows a replica to move from one state to another, thus changing its role.
-
-Here is an example of a MasterSlave state machine:
-
-```
-          OFFLINE  | SLAVE  |  MASTER  
-         _____________________________
-        |          |        |         |
-OFFLINE |   N/A    | SLAVE  | SLAVE   |
-        |__________|________|_________|
-        |          |        |         |
-SLAVE   |  OFFLINE |   N/A  | MASTER  |
-        |__________|________|_________|
-        |          |        |         |
-MASTER  | SLAVE    | SLAVE  |   N/A   |
-        |__________|________|_________|
-
-```
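
For readers who prefer code to tables, here is a hedged sketch of how such a state machine could be declared with Helix's `StateModelDefinition.Builder` (the same builder used in the Tutorial on this site; the state priorities below are illustrative assumptions, not the canonical MasterSlave definition):

```
// Sketch: declare a MASTER/SLAVE/OFFLINE state model matching the table above
// (org.apache.helix.model.StateModelDefinition).
StateModelDefinition.Builder builder = new StateModelDefinition.Builder("MasterSlave");

// states, with MASTER given the highest priority
builder.addState("MASTER", 1);
builder.addState("SLAVE", 2);
builder.addState("OFFLINE");
builder.initialState("OFFLINE");

// legal transitions between states
builder.addTransition("OFFLINE", "SLAVE");
builder.addTransition("SLAVE", "MASTER");
builder.addTransition("MASTER", "SLAVE");
builder.addTransition("SLAVE", "OFFLINE");

// constraint: at most one MASTER per partition
builder.upperBound("MASTER", 1);

StateModelDefinition masterSlaveModel = builder.build();
```
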
-
-Helix allows each resource to be associated with one state machine. This means you can have one resource as an index and another as a database in the same cluster. One can associate each resource with a state machine as follows:
-
-```
-{
-  "id" : "myDB",
-  "simpleFields" : {
-    "NUM_PARTITIONS" : "1",
-    "REPLICAS" : "2",
-    "STATE_MODEL_DEF_REF" : "MasterSlave",
-  },
-  "mapFields" : {
-    "myDB" : {
-      "N1" : "MASTER",
-      "N2" : "SLAVE",
-    }
-  }
-}
-
-```
-
-### Current State
-
-The CurrentState of a resource simply represents its actual state at a Participant. In the example below:
-
-* INSTANCE_NAME: Unique name representing the process
-* SESSION_ID: ID that is automatically assigned every time a process joins the cluster
-
-```
-{
-  "id":"MyResource"
-  ,"simpleFields":{
-    ,"SESSION_ID":"13d0e34675e0002"
-    ,"INSTANCE_NAME":"node1"
-    ,"STATE_MODEL_DEF":"MasterSlave"
-  }
-  ,"mapFields":{
-    "MyResource_0":{
-      "CURRENT_STATE":"SLAVE"
-    }
-    ,"MyResource_1":{
-      "CURRENT_STATE":"MASTER"
-    }
-    ,"MyResource_2":{
-      "CURRENT_STATE":"MASTER"
-    }
-  }
-}
-```
-Each node in the cluster has its own CurrentState.
-
-### External View
-
-In order to communicate with the Participants, external clients need to know the current state of each of the Participants. These external clients are referred to as Spectators. To make the life of a Spectator simple, Helix provides an ExternalView, which is an aggregated view of the current state across all nodes. The ExternalView has a format similar to the IdealState.
-
-```
-{
-  "id":"MyResource",
-  "mapFields":{
-    "MyResource_0":{
-      "N1":"SLAVE",
-      "N2":"MASTER",
-      "N3":"OFFLINE"
-    },
-    "MyResource_1":{
-      "N1":"MASTER",
-      "N2":"SLAVE",
-      "N3":"ERROR"
-    },
-    "MyResource_2":{
-      "N1":"MASTER",
-      "N2":"SLAVE",
-      "N3":"SLAVE"
-    }
-  }
-}
-```
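
A Spectator does not usually parse this JSON by hand. As a hedged sketch (the class names below come from the standard Helix spectator API; the cluster, instance, and ZooKeeper addresses are placeholders), a client could keep a routing table in sync with the ExternalView like this:

```
// Sketch: connect as a SPECTATOR and route requests using the ExternalView.
HelixManager manager = HelixManagerFactory.getZKHelixManager(
    "MYCLUSTER", "mySpectator", InstanceType.SPECTATOR, "localhost:2199");
manager.connect();

// RoutingTableProvider listens for ExternalView changes and caches the mapping
RoutingTableProvider routingTable = new RoutingTableProvider();
manager.addExternalViewChangeListener(routingTable);

// find the instance(s) currently serving partition MyResource_1 as MASTER
List<InstanceConfig> masters =
    routingTable.getInstances("MyResource", "MyResource_1", "MASTER");
```
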
-
-### Rebalancer
-
-The core component of Helix is the Controller which runs the Rebalancer algorithm on every cluster event. Cluster events can be one of the following:
-
-* Nodes start/stop and soft/hard failures
-* New nodes are added/removed
-* Ideal state changes
-
-There are a few more, such as configuration changes.  The key takeaway: there are many ways to trigger the rebalancer.
-
-When the rebalancer runs, it simply does the following:
-
-* Compares the IdealState and current state
-* Computes the transitions required to reach the IdealState
-* Issues the transitions to each Participant
-
-The above steps happen for every change in the system. Once the current state matches the IdealState, the system is considered stable, which implies 'IdealState = CurrentState = ExternalView'.
-
-### Dynamic IdealState
-
-One of the things that makes Helix powerful is that IdealState can be changed dynamically. This means one can listen to cluster events like node failures and dynamically change the ideal state. Helix will then take care of triggering the respective transitions in the system.
-
-Helix comes with a few algorithms to automatically compute the IdealState based on the constraints. For example, if you have a resource of 3 partitions and 2 replicas, Helix can compute the IdealState from the set of currently active nodes. See the [tutorial](./tutorial_rebalance.html) to learn more about the execution modes of Helix, such as FULL_AUTO, SEMI_AUTO, and CUSTOMIZED.
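
As a hedged illustration of this pattern (the listener wiring is an assumption; `admin.rebalance` is the same call used in the Quickstart and Tutorial, and `manager` is assumed to be an already-connected HelixManager):

```
// Sketch: when the set of live instances changes, ask Helix to recompute
// the IdealState for the resource (here: myDB with a replication factor of 3).
final HelixAdmin admin = new ZKHelixAdmin("localhost:2199");

// 'manager' is an already-connected HelixManager (see the Tutorial)
manager.addLiveInstanceChangeListener(new LiveInstanceChangeListener() {
  @Override
  public void onLiveInstanceChange(List<LiveInstance> liveInstances,
                                   NotificationContext changeContext) {
    admin.rebalance("MYCLUSTER", "myDB", 3);
  }
});
```
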
-
-
-
-
-
-
-
-
-
-
-
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/trunk/src/site/markdown/Quickstart.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/Quickstart.md b/site-releases/trunk/src/site/markdown/Quickstart.md
index 353024f..ec4752b 100644
--- a/site-releases/trunk/src/site/markdown/Quickstart.md
+++ b/site-releases/trunk/src/site/markdown/Quickstart.md
@@ -21,18 +21,26 @@ under the License.
   <title>Quickstart</title>
 </head>
 
+Quickstart
+---------
+
 Get Helix
 ---------
 
-First, let's get Helix, either build it, or download.
+First, let's get Helix. Either build it, or download it.
 
 ### Build
 
-    git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
-    cd incubator-helix
-    ./build
-    cd helix-core/target/helix-core-pkg/bin //This folder contains all the scripts used in following sections
-    chmod +x *
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+mvn install package -DskipTests
+# This folder contains quickstart.sh and start-helix-participant.sh
+cd helix-examples/target/helix-examples-pkg/bin
+chmod +x *
+# This folder contains helix-admin.sh, start-standalone-zookeeper.sh, and run-helix-controller.sh
+cd ../../../../helix-core/target/helix-core-pkg/bin
+```
 
 Overview
 --------
@@ -45,12 +45,12 @@ Let's Do It
 
 Helix provides command line interfaces to set up the cluster and view the cluster state. The best way to understand how Helix views a cluster is to build a cluster.
 
-#### First, get to the tools directory
+### Get to the Tools Directory
 
-If you built the code
+If you built the code:
 
 ```
-cd incubator-helix/helix-core/target/helix-core-pkg/bin
+cd helix/incubator-helix/helix-examples/target/helix-examples-pkg/bin
 ```
 
 If you downloaded the release package, extract it.
@@ -69,66 +77,72 @@ You can observe the components working together in this demo, which does the fol
 * Kill the third node (Helix takes care of failover)
 * Show the cluster state.  Note that the two surviving nodes take over mastership of the partitions from the failed node
 
-##### Run the demo
+### Run the Demo
 
 ```
-cd incubator-helix/helix-core/target/helix-core-pkg/bin
+cd helix/incubator-helix/helix-examples/target/helix-examples-pkg/bin
 ./quickstart.sh
 ```
 
-##### 2 nodes are set up and the partitions rebalanced
+#### The Initial Setup
+
+2 nodes are set up and the partitions are rebalanced.
 
 The cluster state is as follows:
 
 ```
 CLUSTER STATE: After starting 2 nodes
-	                     localhost_12000	localhost_12001	
-	       MyResource_0	M			S		
-	       MyResource_1	S			M		
-	       MyResource_2	M			S		
-	       MyResource_3	M			S		
-	       MyResource_4	S			M  
-	       MyResource_5	S			M  
+                localhost_12000    localhost_12001
+MyResource_0           M                  S
+MyResource_1           S                  M
+MyResource_2           M                  S
+MyResource_3           M                  S
+MyResource_4           S                  M
+MyResource_5           S                  M
 ```
 
 Note there is one master and one slave per partition.
 
-##### A third node is added and the cluster rebalanced
+#### Add a Node
+
+A third node is added and the cluster is rebalanced.
 
 The cluster state changes to:
 
 ```
 CLUSTER STATE: After adding a third node
-                 	       localhost_12000	    localhost_12001	localhost_12002	
-	       MyResource_0	    S			  M		      S		
-	       MyResource_1	    S			  S		      M	 
-	       MyResource_2	    M			  S	              S  
-	       MyResource_3	    S			  S                   M  
-	       MyResource_4	    M			  S	              S  
-	       MyResource_5	    S			  M                   S  
+               localhost_12000    localhost_12001    localhost_12002
+MyResource_0          S                  M                  S
+MyResource_1          S                  S                  M
+MyResource_2          M                  S                  S
+MyResource_3          S                  S                  M
+MyResource_4          M                  S                  S
+MyResource_5          S                  M                  S
 ```
 
 Note there is one master and _two_ slaves per partition.  This is expected because there are three nodes.
 
-##### Finally, a node is killed to simulate a failure
+#### Kill a Node
+
+Finally, a node is killed to simulate a failure.
 
 Helix makes sure each partition has a master.  The cluster state changes to:
 
 ```
 CLUSTER STATE: After the 3rd node stops/crashes
-                	       localhost_12000	  localhost_12001	localhost_12002	
-	       MyResource_0	    S			M		      -		
-	       MyResource_1	    S			M		      -	 
-	       MyResource_2	    M			S	              -  
-	       MyResource_3	    M			S                     -  
-	       MyResource_4	    M			S	              -  
-	       MyResource_5	    S			M                     -  
+               localhost_12000    localhost_12001    localhost_12002
+MyResource_0          S                  M                  -
+MyResource_1          S                  M                  -
+MyResource_2          M                  S                  -
+MyResource_3          M                  S                  -
+MyResource_4          M                  S                  -
+MyResource_5          S                  M                  -
 ```
 
 
 Long Version
 ------------
-Now you can run the same steps by hand.  In the detailed version, we'll do the following:
+Now you can run the same steps by hand.  In this detailed version, we'll do the following:
 
 * Define a cluster
 * Add two nodes to the cluster
@@ -137,20 +151,22 @@ Now you can run the same steps by hand.  In the detailed version, we'll do the
 * Expand the cluster: add a few nodes and rebalance the partitions
 * Failover: stop a node and verify the mastership transfer
 
-### Install and Start Zookeeper
+### Install and Start ZooKeeper
 
 Zookeeper can be started in standalone mode or replicated mode.
 
-More info is available at 
+More information is available at
 
 * http://zookeeper.apache.org/doc/r3.3.3/zookeeperStarted.html
 * http://zookeeper.apache.org/doc/trunk/zookeeperAdmin.html#sc_zkMulitServerSetup
 
 In this example, let's start zookeeper in local mode.
 
-##### start zookeeper locally on port 2199
+#### Start ZooKeeper Locally on Port 2199
 
-    ./start-standalone-zookeeper.sh 2199 &
+```
+./start-standalone-zookeeper.sh 2199 &
+```
 
 ### Define the Cluster
 
@@ -160,62 +176,74 @@ zookeeper_address is of the format host:port e.g localhost:2199 for standalone o
 
 Next, we'll set up a cluster called MYCLUSTER with these attributes:
 
-* 3 instances running on localhost at ports 12913,12914,12915 
-* One database named myDB with 6 partitions 
+* 3 instances running on localhost at ports 12913,12914,12915
+* One database named myDB with 6 partitions
 * Each partition will have 3 replicas with 1 master, 2 slaves
-* zookeeper running locally at localhost:2199
+* ZooKeeper running locally at localhost:2199
 
-##### Create the cluster MYCLUSTER
-    ## helix-admin.sh --zkSvr <zk_address> --addCluster <clustername> 
-    ./helix-admin.sh --zkSvr localhost:2199 --addCluster MYCLUSTER 
+#### Create the Cluster MYCLUSTER
 
-##### Add nodes to the cluster
+```
+# ./helix-admin.sh --zkSvr <zk_address> --addCluster <clustername>
+./helix-admin.sh --zkSvr localhost:2199 --addCluster MYCLUSTER
+```
+
+### Add Nodes to the Cluster
 
 In this case, we'll add three nodes: localhost:12913, localhost:12914, localhost:12915
 
-    ## helix-admin.sh --zkSvr <zk_address>  --addNode <clustername> <host:port>
-    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12913
-    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12914
-    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12915
+```
+# helix-admin.sh --zkSvr <zk_address>  --addNode <clustername> <host:port>
+./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12913
+./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12914
+./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12915
+```
 
-#### Define the resource and partitioning
+### Define the Resource and Partitioning
 
-In this example, the resource is a database, partitioned 6 ways.  (In a production system, it's common to over-partition for better load balancing.  Helix has been used in production to manage hundreds of databases each with 10s or 100s of partitions running on 10s of physical nodes.)
+In this example, the resource is a database, partitioned 6 ways. Note that in a production system, it's common to over-partition for better load balancing.  Helix has been used in production to manage hundreds of databases, each with 10s or 100s of partitions running on 10s of physical nodes.
 
-##### Create a database with 6 partitions using the MasterSlave state model. 
+#### Create a Database with 6 Partitions using the MasterSlave State Model
 
 Helix ensures there will be exactly one master for each partition.
 
-    ## helix-admin.sh --zkSvr <zk_address> --addResource <clustername> <resourceName> <numPartitions> <StateModelName>
-    ./helix-admin.sh --zkSvr localhost:2199 --addResource MYCLUSTER myDB 6 MasterSlave
-   
-##### Now we can let Helix assign partitions to nodes. 
+```
+# helix-admin.sh --zkSvr <zk_address> --addResource <clustername> <resourceName> <numPartitions> <StateModelName>
+./helix-admin.sh --zkSvr localhost:2199 --addResource MYCLUSTER myDB 6 MasterSlave
+```
 
-This command will distribute the partitions amongst all the nodes in the cluster. In this example, each partition has 3 replicas.
+#### Let Helix Assign Partitions to Nodes
 
-    ## helix-admin.sh --zkSvr <zk_address> --rebalance <clustername> <resourceName> <replication factor>
-    ./helix-admin.sh --zkSvr localhost:2199 --rebalance MYCLUSTER myDB 3
+This command will distribute the partitions amongst all the nodes in the cluster. In this example, each partition has 3 replicas.
 
-Now the cluster is defined in Zookeeper.  The nodes (localhost:12913, localhost:12914, localhost:12915) and resource (myDB, with 6 partitions using the MasterSlave model).  And the _ideal state_ has been calculated, assuming a replication factor of 3.
+```
+# helix-admin.sh --zkSvr <zk_address> --rebalance <clustername> <resourceName> <replication factor>
+./helix-admin.sh --zkSvr localhost:2199 --rebalance MYCLUSTER myDB 3
+```
 
-##### Start the Helix Controller
+Now the cluster is defined in ZooKeeper.  The nodes (localhost:12913, localhost:12914, localhost:12915) and resource (myDB, with 6 partitions using the MasterSlave model) are all properly configured.  And the _IdealState_ has been calculated, assuming a replication factor of 3.
 
-Now that the cluster is defined in Zookeeper, the Helix controller can manage the cluster.
+### Start the Helix Controller
 
-    ## Start the cluster manager, which will manage MYCLUSTER
-    ./run-helix-controller.sh --zkSvr localhost:2199 --cluster MYCLUSTER 2>&1 > /tmp/controller.log &
+Now that the cluster is defined in ZooKeeper, the Helix controller can manage the cluster.
 
-##### Start up the cluster to be managed
+```
+# Start the cluster manager, which will manage MYCLUSTER
+./run-helix-controller.sh --zkSvr localhost:2199 --cluster MYCLUSTER 2>&1 > /tmp/controller.log &
+```
 
-We've started up Zookeeper, defined the cluster, the resources, the partitioning, and started up the Helix controller.  Next, we'll start up the nodes of the system to be managed.  Each node is a Participant, which is an instance of the system component to be managed.  Helix assigns work to Participants, keeps track of their roles and health, and takes action when a node fails.
+### Start up the Cluster to be Managed
 
-    # start up each instance.  These are mock implementations that are actively managed by Helix
-    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12913 --stateModelType MasterSlave 2>&1 > /tmp/participant_12913.log 
-    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12914 --stateModelType MasterSlave 2>&1 > /tmp/participant_12914.log
-    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12915 --stateModelType MasterSlave 2>&1 > /tmp/participant_12915.log
+We've started up ZooKeeper, defined the cluster, the resources, the partitioning, and started up the Helix controller.  Next, we'll start up the nodes of the system to be managed.  Each node is a Participant, which is an instance of the system component to be managed.  Helix assigns work to Participants, keeps track of their roles and health, and takes action when a node fails.
 
+```
+# start up each instance.  These are mock implementations that are actively managed by Helix
+./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12913 --stateModelType MasterSlave 2>&1 > /tmp/participant_12913.log
+./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12914 --stateModelType MasterSlave 2>&1 > /tmp/participant_12914.log
+./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12915 --stateModelType MasterSlave 2>&1 > /tmp/participant_12915.log
+```
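
Under the hood, each of these scripts does roughly the following in Java (a hedged sketch modeled on the lock manager recipe later in this commit; `MasterSlaveStateModelFactory` stands in for whatever `StateModelFactory` implementation the mock participant registers):

```
// Sketch: join the cluster as a PARTICIPANT and register a state model factory
// so Helix can drive the MasterSlave transitions for this node.
HelixManager manager = HelixManagerFactory.getZKHelixManager(
    "MYCLUSTER", "localhost_12913", InstanceType.PARTICIPANT, "localhost:2199");
manager.getStateMachineEngine()
       .registerStateModelFactory("MasterSlave", new MasterSlaveStateModelFactory());
manager.connect();
Thread.currentThread().join();
```
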
 
-#### Inspect the Cluster
+### Inspect the Cluster
 
 Now, let's see the Helix view of our cluster.  We'll work our way down as follows:
 
@@ -228,17 +256,17 @@ Clusters -> MYCLUSTER -> instances -> instance detail
 A single Helix controller can manage multiple clusters, though so far, we've only defined one cluster.  Let's see:
 
 ```
-## List existing clusters
-./helix-admin.sh --zkSvr localhost:2199 --listClusters        
+# List existing clusters
+./helix-admin.sh --zkSvr localhost:2199 --listClusters
 
 Existing clusters:
 MYCLUSTER
 ```
-                                       
-Now, let's see the Helix view of MYCLUSTER
+
+Now, let's see the Helix view of MYCLUSTER:
 
 ```
-## helix-admin.sh --zkSvr <zk_address> --listClusterInfo <clusterName> 
+# helix-admin.sh --zkSvr <zk_address> --listClusterInfo <clusterName>
 ./helix-admin.sh --zkSvr localhost:2199 --listClusterInfo MYCLUSTER
 
 Existing resources in cluster MYCLUSTER:
@@ -249,11 +277,10 @@ localhost_12914
 localhost_12913
 ```
 
-
-Let's look at the details of an instance
+Let's look at the details of an instance:
 
 ```
-## ./helix-admin.sh --zkSvr <zk_address> --listInstanceInfo <clusterName> <InstanceName>    
+# ./helix-admin.sh --zkSvr <zk_address> --listInstanceInfo <clusterName> <InstanceName>
 ./helix-admin.sh --zkSvr localhost:2199 --listInstanceInfo MYCLUSTER localhost_12913
 
 InstanceConfig: {
@@ -270,11 +297,11 @@ InstanceConfig: {
 }
 ```
 
-    
-##### Query info of a resource
+
+#### Query Information about a Resource
 
 ```
-## helix-admin.sh --zkSvr <zk_address> --listResourceInfo <clusterName> <resourceName>
+# helix-admin.sh --zkSvr <zk_address> --listResourceInfo <clusterName> <resourceName>
 ./helix-admin.sh --zkSvr localhost:2199 --listResourceInfo MYCLUSTER myDB
 
 IdealState for myDB:
@@ -321,6 +348,7 @@ IdealState for myDB:
     "myDB_5" : [ "localhost_12914", "localhost_12915", "localhost_12913" ]
   },
   "simpleFields" : {
+    "IDEAL_STATE_MODE" : "AUTO",
     "REBALANCE_MODE" : "SEMI_AUTO",
     "NUM_PARTITIONS" : "6",
     "REPLICAS" : "3",
@@ -374,30 +402,38 @@ ExternalView for myDB:
 
 Now, let's look at one of the partitions:
 
-    ## helix-admin.sh --zkSvr <zk_address> --listPartitionInfo <clusterName> <resource> <partition> 
-    ./helix-admin.sh --zkSvr localhost:2199 --listPartitionInfo MYCLUSTER myDB myDB_0
+```
+# helix-admin.sh --zkSvr <zk_address> --listResourceInfo <clusterName> <partition>
+./helix-admin.sh --zkSvr localhost:2199 --listResourceInfo MYCLUSTER myDB_0
+```
 
-#### Expand the Cluster
+### Expand the Cluster
 
 Next, we'll show how Helix does the work that you'd otherwise have to build into your system.  When you add capacity to your cluster, you want the work to be evenly distributed.  In this example, we started with 3 nodes, with 6 partitions.  The partitions were evenly balanced, 2 masters and 4 slaves per node. Let's add 3 more nodes: localhost:12916, localhost:12917, localhost:12918
 
-    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12916
-    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12917
-    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12918
+```
+./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12916
+./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12917
+./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12918
+```
 
 And start up these instances:
 
-    # start up each instance.  These are mock implementations that are actively managed by Helix
-    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12916 --stateModelType MasterSlave 2>&1 > /tmp/participant_12916.log
-    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12917 --stateModelType MasterSlave 2>&1 > /tmp/participant_12917.log
-    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12918 --stateModelType MasterSlave 2>&1 > /tmp/participant_12918.log
+```
+# start up each instance.  These are mock implementations that are actively managed by Helix
+./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12916 --stateModelType MasterSlave 2>&1 > /tmp/participant_12916.log
+./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12917 --stateModelType MasterSlave 2>&1 > /tmp/participant_12917.log
+./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12918 --stateModelType MasterSlave 2>&1 > /tmp/participant_12918.log
+```
 
 
 And now, let Helix do the work for you.  To shift the work, simply rebalance.  After the rebalance, each node will have one master and two slaves.
 
-    ./helix-admin.sh --zkSvr localhost:2199 --rebalance MYCLUSTER myDB 3
+```
+./helix-admin.sh --zkSvr localhost:2199 --rebalance MYCLUSTER myDB 3
+```
 
-#### View the cluster
+### View the Cluster
 
 OK, let's see how it looks:
 
@@ -449,6 +485,7 @@ IdealState for myDB:
     "myDB_5" : [ "localhost_12914", "localhost_12915", "localhost_12913" ]
   },
   "simpleFields" : {
+    "IDEAL_STATE_MODE" : "AUTO",
     "REBALANCE_MODE" : "SEMI_AUTO",
     "NUM_PARTITIONS" : "6",
     "REPLICAS" : "3",
@@ -502,7 +539,7 @@ ExternalView for myDB:
 
 Mission accomplished.  The partitions are nicely balanced.
 
-#### How about Failover?
+### How about Failover?
 
 Building a fault-tolerant system isn't trivial, but with Helix, it's easy.  Helix detects a failed instance and triggers mastership transfer automatically.
 
@@ -558,6 +595,7 @@ IdealState for myDB:
     "myDB_5" : [ "localhost_12914", "localhost_12915", "localhost_12913" ]
   },
   "simpleFields" : {
+    "IDEAL_STATE_MODE" : "AUTO",
     "REBALANCE_MODE" : "SEMI_AUTO",
     "NUM_PARTITIONS" : "6",
     "REPLICAS" : "3",
@@ -607,15 +645,17 @@ ExternalView for myDB:
 
 As we've seen in this Quickstart, Helix takes care of partitioning, load balancing, elasticity, failure detection and recovery.
 
-##### ZooInspector
+### ZooInspector
 
 You can view all of the underlying data by going directly to ZooKeeper.  Use the ZooInspector tool that comes with ZooKeeper to browse the data. This is a Java applet (make sure you have X Windows).
 
 To start ZooInspector, run the following command from <zk_install_directory>/contrib/ZooInspector:
-      
-    java -cp zookeeper-3.3.3-ZooInspector.jar:lib/jtoaster-1.0.4.jar:../../lib/log4j-1.2.15.jar:../../zookeeper-3.3.3.jar org.apache.zookeeper.inspector.ZooInspector
 
-#### Next
+```
+java -cp zookeeper-3.3.3-ZooInspector.jar:lib/jtoaster-1.0.4.jar:../../lib/log4j-1.2.15.jar:../../zookeeper-3.3.3.jar org.apache.zookeeper.inspector.ZooInspector
+```
+
+### Next
 
 Now that you understand the idea of Helix, read the [tutorial](./Tutorial.html) to learn how to choose the right state model and constraints for your system, and how to implement it.  In many cases, the built-in features meet your requirements.  And best of all, Helix is a customizable framework, so you can plug in your own behavior, while retaining the automation provided by Helix.
 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/trunk/src/site/markdown/Tutorial.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/Tutorial.md b/site-releases/trunk/src/site/markdown/Tutorial.md
index ee5a393..397af38 100644
--- a/site-releases/trunk/src/site/markdown/Tutorial.md
+++ b/site-releases/trunk/src/site/markdown/Tutorial.md
@@ -30,7 +30,7 @@ Convention: we first cover the _basic_ approach, which is the easiest to impleme
 
 ### Prerequisites
 
-1. Read [Concepts/Terminology](./Concepts.html) and [Architecture](./Architecture.html)
+1. Read [Concepts/Terminology](../../Concepts.html) and [Architecture](../../Architecture.html)
 2. Read the [Quickstart guide](./Quickstart.html) to learn how Helix models and manages a cluster
 3. Install Helix source.  See: [Quickstart](./Quickstart.html) for the steps.
 
@@ -54,17 +54,17 @@ Convention: we first cover the _basic_ approach, which is the easiest to impleme
 
 First, we need to set up the system.  Let's walk through the steps in building a distributed system using Helix. We will show how to do this using both the Java admin interface and the [cluster accessor](./tutorial_accessors.html) interface. You can choose either interface depending on which most closely matches your needs.
 
-### Start Zookeeper
+#### Start ZooKeeper
 
-This starts a zookeeper in standalone mode. For production deployment, see [Apache Zookeeper](http://zookeeper.apache.org) for instructions.
+This starts ZooKeeper in standalone mode. For production deployment, see [Apache ZooKeeper](http://zookeeper.apache.org) for instructions.
 
 ```
-    ./start-standalone-zookeeper.sh 2199 &
+./start-standalone-zookeeper.sh 2199 &
 ```
 
-### Create a cluster
+#### Create a Cluster
 
-Creating a cluster will define the cluster in appropriate znodes on zookeeper.   
+Creating a cluster will define the cluster in appropriate ZNodes on ZooKeeper.
 
 Using the Java accessor API:
 
@@ -99,13 +99,13 @@ OR
 Using the command-line interface:
 
 ```
-    ./helix-admin.sh --zkSvr localhost:2199 --addCluster helix-demo 
+./helix-admin.sh --zkSvr localhost:2199 --addCluster helix-demo
 ```
 
 
-### Configure the nodes of the cluster
+#### Configure the Nodes of the Cluster
 
-First we'll add new nodes to the cluster, then configure the nodes in the cluster. Each node in the cluster must be uniquely identifiable.
+First, we'll add new nodes to the cluster, then configure them. Each node in the cluster must be uniquely identifiable.
 The most commonly used convention is hostname_port.
 
 ```
@@ -149,27 +149,27 @@ for (int i = 0; i < NUM_NODES; i++)
 }
 ```
 
-### Configure the resource
+#### Configure the Resource
 
 A _resource_ represents the actual task performed by the nodes. It can be a database, index, topic, queue or any other processing entity.
 A _resource_ can be divided into many sub-parts known as _partitions_.
 
 
-#### Define the _state model_ and _constraints_
+##### Define the State Model and Constraints
 
-For scalability and fault tolerance, each partition can have one or more replicas. 
-The _state model_ allows one to declare the system behavior by first enumerating the various STATES, and the TRANSITIONS between them.
+For scalability and fault tolerance, each partition can have one or more replicas.
+The __state model__ allows one to declare the system behavior by first enumerating the various STATES and the TRANSITIONS between them.
 A simple model is ONLINE-OFFLINE, where ONLINE means the task is active and OFFLINE means it's not active.
-You can also specify how many replicas must be in each state, these are known as _constraints_.
+You can also specify how many replicas must be in each state; these are known as __constraints__.
 For example, in a search system, one might need more than one node serving the same index to handle the load.
 
-The allowed states: 
+The allowed states:
 
 * MASTER
 * SLAVE
 * OFFLINE
 
-The allowed transitions: 
+The allowed transitions:
 
 * OFFLINE to SLAVE
 * SLAVE to OFFLINE
@@ -206,7 +206,7 @@ builder.addTransition(MASTER, SLAVE);
 builder.upperBound(MASTER, 1);
 
 // dynamic constraint: R means it should be derived based on the replication factor for the cluster
-// this allows a different replication factor for each resource without 
+// this allows a different replication factor for each resource without
 // having to define a new state model
 //
 builder.dynamicUpperBound(SLAVE, "R");
@@ -225,10 +225,10 @@ OR
 admin.addStateModelDef(CLUSTER_NAME, STATE_MODEL_NAME, stateModelDefinition);
 ```
 
-#### Assigning partitions to nodes
+##### Assigning Partitions to Nodes
 
-The final goal of Helix is to ensure that the constraints on the state model are satisfied. 
-Helix does this by assigning a STATE to a partition (such as MASTER, SLAVE), and placing it on a particular node.
+The final goal of Helix is to ensure that the constraints on the state model are satisfied.
+Helix does this by assigning a __state__ to a partition (such as MASTER, SLAVE), and placing it on a particular node.
 
 There are 3 assignment modes Helix can operate in:
 
@@ -245,7 +245,7 @@ int NUM_PARTITIONS = 6;
 int NUM_REPLICAS = 2;
 ResourceId resourceId = ResourceId.from("MyDB");
 
-SemiAutoRebalancerContext context = new SemiAutoRebalancerContext.Builder(resourceId)
+SemiAutoRebalancerConfig config = new SemiAutoRebalancerConfig.Builder(resourceId)
   .replicaCount(NUM_REPLICAS).addPartitions(NUM_PARTITIONS)
   .stateModelDefId(stateModelDefinition.getStateModelDefId())
   .addPreferenceList(partition1Id, preferenceList) // preferred locations of each partition
@@ -253,10 +253,16 @@ SemiAutoRebalancerContext context = new SemiAutoRebalancerContext.Builder(resour
   .build();
 
 // or add all preference lists at once if desired (map of PartitionId to List of ParticipantId)
-context.setPreferenceLists(preferenceLists);
+config.setPreferenceLists(preferenceLists);
 
 // or generate a default set of preference lists given the set of all participants
-context.generateDefaultConfiguration(stateModelDefinition, participantIdSet);
+config.generateDefaultConfiguration(stateModelDefinition, participantIdSet);
+
+// add the resource to the cluster
+ResourceConfig resourceConfig = new ResourceConfig.Builder(resourceId)
+  .rebalancerConfig(config)
+  .build();
+clusterAccessor.addResourceToCluster(resourceConfig);
 ```
 
 OR
@@ -278,7 +284,7 @@ idealState.setPreferenceList(partitionId, preferenceList); // preferred location
 idealState.getRecord().setListFields(preferenceLists);
 admin.setResourceIdealState(CLUSTER_NAME, RESOURCE_NAME, idealState);
 
-// or generate a default set of preference lists 
+// or generate a default set of preference lists
 admin.rebalance(CLUSTER_NAME, RESOURCE_NAME, NUM_REPLICAS);
 ```
 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/trunk/src/site/markdown/UseCases.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/UseCases.md b/site-releases/trunk/src/site/markdown/UseCases.md
deleted file mode 100644
index 001b012..0000000
--- a/site-releases/trunk/src/site/markdown/UseCases.md
+++ /dev/null
@@ -1,113 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Use Cases</title>
-</head>
-
-
-# Use cases at LinkedIn
-
-At LinkedIn, the Helix framework is used to manage 3 distributed data systems which are quite different from each other.
-
-* Espresso
-* Databus
-* Search As A Service
-
-## Espresso
-
-Espresso is a distributed, timeline-consistent, scalable document store that supports local secondary indexing and local transactions.
-Espresso databases are horizontally partitioned into a number of partitions, with each partition having a certain number of replicas 
-distributed across the storage nodes.
-Espresso designates one replica of each partition as master and the rest as slaves; only one master may exist for each partition at any time.
-Espresso enforces timeline consistency where only the master of a partition can accept writes to its records, and all slaves receive and 
-apply the same writes through a replication stream. 
-For load balancing, both master and slave partitions are assigned evenly across all storage nodes. 
-For fault tolerance, it adds the constraint that no two replicas of the same partition may be located on the same node.
-
-### State model
-Espresso follows a Master-Slave state model. A replica can be in Offline, Slave, or Master state.
-The state machine table describes the next state, given the current state and the final (target) state:
-
-```
-          OFFLINE  | SLAVE  |  MASTER  
-         _____________________________
-        |          |        |         |
-OFFLINE |   N/A    | SLAVE  | SLAVE   |
-        |__________|________|_________|
-        |          |        |         |
-SLAVE   |  OFFLINE |   N/A  | MASTER  |
-        |__________|________|_________|
-        |          |        |         |
-MASTER  | SLAVE    | SLAVE  |   N/A   |
-        |__________|________|_________|
-
-```
-
-### Constraints
-* Max number of replicas in Master state: 1
-* Execution mode AUTO, i.e., on node failure no new replicas will be created; only the state of the remaining replicas will be changed.
-* The number of mastered partitions on each node must be approximately the same.
-* The above constraint must be satisfied when a node fails or a new node is added.
-* When new nodes are added, the number of partitions moved must be minimized.
-* When new nodes are added, the max number of OFFLINE-SLAVE transitions that can happen concurrently on a new node is X.
-
-## Databus
-
-Databus is a change data capture (CDC) system that provides a common pipeline for transporting events 
-from LinkedIn primary databases to caches within various applications.
-Databus deploys a cluster of relays that pull the change log from multiple databases and 
-let consumers subscribe to the change log stream. Each Databus relay connects to one or more database servers and 
-hosts a certain subset of databases (and partitions) from those database servers. 
-
-For a large partitioned database (e.g. Espresso), the change log is consumed by a bank of consumers. 
-Each databus partition is assigned to a consumer such that partitions are evenly distributed across consumers and each partition is
-assigned to exactly one consumer at a time. The set of consumers may grow over time, and consumers may leave the group due to planned or unplanned 
-outages. In these cases, partitions must be reassigned, while maintaining balance and the single consumer-per-partition invariant.
-
-### State model
-Databus consumers follow a simple Offline-Online state model.
-The state machine table describes the next state, given the current state and the final (target) state:
-
-<pre><code>
-          OFFLINE  | ONLINE |   
-         ___________________|
-        |          |        |
-OFFLINE |   N/A    | ONLINE |
-        |__________|________|
-        |          |        |
-ONLINE  |  OFFLINE |   N/A  |
-        |__________|________|
-
-
-</code></pre>
-
-
-## Search As A Service
-
-LinkedIn's Search-as-a-service lets internal customers define custom indexes on a chosen dataset
-and then makes those indexes searchable via a service API. The index service runs on a cluster of machines. 
-The index is broken into partitions and each partition has a configured number of replicas.
-Each cluster server runs an instance of the Sensei system (an online index store) and hosts index partitions. 
-Each new indexing service gets assigned to a set of servers, and the partition replicas must be evenly distributed across those servers.
-
-### State model
-![Helix Design](images/bootstrap_statemodel.gif) 
-
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/trunk/src/site/markdown/index.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/index.md b/site-releases/trunk/src/site/markdown/index.md
index 2eae374..923ddbc 100644
--- a/site-releases/trunk/src/site/markdown/index.md
+++ b/site-releases/trunk/src/site/markdown/index.md
@@ -18,21 +18,14 @@ under the License.
 -->
 
 <head>
-  <title>Home</title>
+  <title>Helix Trunk Documentation</title>
 </head>
 
-Navigating the Documentation
-----------------------------
+### Get Helix
 
-### Conceptual Understanding
+[Building](./Building.html)
 
-[Concepts / Terminology](./Concepts.html)
-
-[Architecture](./Architecture.html)
-
-### Hands-on Helix
-
-[Getting Helix](./Building.html)
+### Hands-On
 
 [Quickstart](./Quickstart.html)
 
@@ -50,7 +43,7 @@ Navigating the Documentation
 
 [Service discovery](./recipes/service_discovery.html)
 
-[Distributed Task DAG Execution](./recipes/task_dag_execution.html)
+[Distributed task DAG execution](./recipes/task_dag_execution.html)
 
 [User-Defined Rebalancer Example](./recipes/user_def_rebalancer.html)
 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/trunk/src/site/markdown/recipes/lock_manager.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/recipes/lock_manager.md b/site-releases/trunk/src/site/markdown/recipes/lock_manager.md
index 252ace7..124e7bd 100644
--- a/site-releases/trunk/src/site/markdown/recipes/lock_manager.md
+++ b/site-releases/trunk/src/site/markdown/recipes/lock_manager.md
@@ -16,21 +16,21 @@ KIND, either express or implied.  See the License for the
 specific language governing permissions and limitations
 under the License.
 -->
-Distributed lock manager
+Distributed Lock Manager
 ------------------------
-Distributed locks are used to synchronize accesses shared resources. Most applications use Zookeeper to model the distributed locks. 
+Distributed locks are used to synchronize access to shared resources. Most applications today use ZooKeeper to model distributed locks.
 
-The simplest way to model a lock using zookeeper is (See Zookeeper leader recipe for an exact and more advanced solution)
+The simplest way to model a lock using ZooKeeper is as follows (see the ZooKeeper leader recipe for an exact and more advanced solution):
 
-* Each process tries to create an emphemeral node.
-* If can successfully create it then, it acquires the lock
-* Else it will watch on the znode and try to acquire the lock again if the current lock holder disappears 
+* Each process tries to create an ephemeral node
+* If the node is successfully created, the process acquires the lock
+* Otherwise, it will watch the ZNode and try to acquire the lock again if the current lock holder disappears
 
-This is good enough if there is only one lock. But in practice, an application will need many such locks. Distributing and managing the locks among difference process becomes challenging. Extending such a solution to many locks will result in
+This is good enough if there is only one lock. But in practice, an application will need many such locks. Distributing and managing the locks among different processes becomes challenging. Extending such a solution to many locks will result in:
 
-* Uneven distribution of locks among nodes, the node that starts first will acquire all the lock. Nodes that start later will be idle.
-* When a node fails, how the locks will be distributed among remaining nodes is not predicable. 
-* When new nodes are added the current nodes dont relinquish the locks so that new nodes can acquire some locks
+* Uneven distribution of locks among nodes; the node that starts first will acquire all the locks. Nodes that start later will be idle.
+* When a node fails, how the locks will be distributed among the remaining nodes is not predictable.
+* When new nodes are added, the current nodes don't relinquish the locks so that new nodes can acquire some locks.
 
 In other words, we want a system that satisfies the following requirements:
 
@@ -38,15 +38,15 @@ In other words we want a system to satisfy the following requirements.
 * If a node fails, the locks that were acquired by that node should be evenly distributed among other nodes
 * If nodes are added, locks must be evenly re-distributed among nodes.
 
-Helix provides a simple and elegant solution to this problem. Simply specify the number of locks and Helix will ensure that above constraints are satisfied. 
+Helix provides a simple and elegant solution to this problem. Simply specify the number of locks and Helix will ensure that the above constraints are satisfied.
 
-To quickly see this working run the lock-manager-demo script where 12 locks are evenly distributed among three nodes, and when a node fails, the locks get re-distributed among remaining two nodes. Note that Helix does not re-shuffle the locks completely, instead it simply distributes the locks relinquished by dead node among 2 remaining nodes evenly.
+To quickly see this working, run the `lock-manager-demo` script: 12 locks are evenly distributed among three nodes, and when a node fails, the locks get re-distributed among the remaining two nodes. Note that Helix does not re-shuffle the locks completely; instead, it simply distributes the locks relinquished by the dead node evenly among the 2 remaining nodes.
 
 ----------------------------------------------------------------------------------------
 
-#### Short version
- This version starts multiple threads with in same process to simulate a multi node deployment. Try the long version to get a better idea of how it works.
- 
+### Short Version
+This version starts multiple threads within the same process to simulate a multi-node deployment. Try the long version to get a better idea of how it works.
+
 ```
 git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
 cd incubator-helix
@@ -56,10 +56,10 @@ chmod +x *
 ./lock-manager-demo
 ```
 
-##### Output
+#### Output
 
 ```
-./lock-manager-demo 
+./lock-manager-demo
 STARTING localhost_12000
 STARTING localhost_12002
 STARTING localhost_12001
@@ -117,83 +117,74 @@ lock-group_9    localhost_12001
 
 ----------------------------------------------------------------------------------------
 
-#### Long version
+### Long version
 This provides more details on how to set up the cluster and where to plug in application code.
 
-##### start zookeeper
+#### Start ZooKeeper
 
 ```
 ./start-standalone-zookeeper 2199
 ```
 
-##### Create a cluster
+#### Create a Cluster
 
 ```
 ./helix-admin --zkSvr localhost:2199 --addCluster lock-manager-demo
 ```
 
-##### Create a lock group
+#### Create a Lock Group
 
-Create a lock group and specify the number of locks in the lock group. 
+Create a lock group and specify the number of locks in the lock group.
 
 ```
-./helix-admin --zkSvr localhost:2199  --addResource lock-manager-demo lock-group 6 OnlineOffline FULL_AUTO
+./helix-admin --zkSvr localhost:2199  --addResource lock-manager-demo lock-group 6 OnlineOffline AUTO_REBALANCE
 ```
 
-##### Start the nodes
+#### Start the Nodes
 
-Create a Lock class that handles the callbacks. 
+Create a Lock class that handles the callbacks.
 
 ```
-
-public class Lock extends StateModel
-{
+public class Lock extends StateModel {
   private String lockName;
 
-  public Lock(String lockName)
-  {
+  public Lock(String lockName) {
     this.lockName = lockName;
   }
 
-  public void lock(Message m, NotificationContext context)
-  {
+  public void lock(Message m, NotificationContext context) {
     System.out.println(" acquired lock:"+ lockName );
   }
 
-  public void release(Message m, NotificationContext context)
-  {
+  public void release(Message m, NotificationContext context) {
     System.out.println(" releasing lock:"+ lockName );
   }
 
 }
-
 ```
 
-LockFactory that creates the lock
- 
+and a LockFactory that creates Locks
+
 ```
-public class LockFactory extends StateModelFactory<Lock>{
-    
-    /* Instantiates the lock handler, one per lockName*/
-    public Lock create(String lockName)
-    {
+public class LockFactory extends StateModelFactory<Lock> {
+    /* Instantiates the lock handler, one per lockName */
+    public Lock create(String lockName) {
         return new Lock(lockName);
-    }   
+    }
 }
 ```
 
-At node start up, simply join the cluster and helix will invoke the appropriate callbacks on Lock instance. One can start any number of nodes and Helix detects that a new node has joined the cluster and re-distributes the locks automatically.
+At node startup, simply join the cluster and Helix will invoke the appropriate callbacks on the appropriate Lock instance. One can start any number of nodes; Helix detects that a new node has joined the cluster and re-distributes the locks automatically.
 
 ```
-public class LockProcess{
-
-  public static void main(String args){
+public class LockProcess {
+  public static void main(String[] args) throws Exception {
     String zkAddress= "localhost:2199";
     String clusterName = "lock-manager-demo";
     //Give a unique id to each process, most commonly used format hostname_port
     String instanceName ="localhost_12000";
     ZKHelixAdmin helixAdmin = new ZKHelixAdmin(zkAddress);
-    //configure the instance and provide some metadata 
+    //configure the instance and provide some metadata
     InstanceConfig config = new InstanceConfig(instanceName);
     config.setHostName("localhost");
     config.setPort("12000");
@@ -207,47 +198,38 @@ public class LockProcess{
     manager.getStateMachineEngine().registerStateModelFactory("OnlineOffline", modelFactory);
     manager.connect();
     Thread.currentThread().join();
-    }
-
+  }
 }
 ```
 
-##### Start the controller
+#### Start the Controller
 
-Controller can be started either as a separate process or can be embedded within each node process
+The controller can be started either as a separate process or embedded within each node process.
 
-###### Separate process
-This is recommended when number of nodes in the cluster >100. For fault tolerance, you can run multiple controllers on different boxes.
+##### Separate Process
+This is recommended when the number of nodes in the cluster exceeds 100. For fault tolerance, you can run multiple controllers on different boxes.
 
 ```
 ./run-helix-controller --zkSvr localhost:2199 --cluster lock-manager-demo 2>&1 > /tmp/controller.log &
 ```
 
-###### Embedded within the node process
+##### Embedded Within the Node Process
 This is recommended when the number of nodes in the cluster is less than 100. To start a controller from each process, simply add the following lines to LockProcess:
 
 ```
-public class LockProcess{
-
-  public static void main(String args){
+public class LockProcess {
+  public static void main(String[] args) throws Exception {
     String zkAddress= "localhost:2199";
     String clusterName = "lock-manager-demo";
-    .
-    .
+    // .
+    // .
     manager.connect();
     HelixManager controller;
-    controller = HelixControllerMain.startHelixController(zkAddress, 
+    controller = HelixControllerMain.startHelixController(zkAddress,
                                                           clusterName,
-                                                          "controller", 
+                                                          "controller",
                                                           HelixControllerMain.STANDALONE);
     Thread.currentThread.join();
   }
 }
 ```
-
-----------------------------------------------------------------------------------------
-
-
-
-
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/trunk/src/site/markdown/recipes/rabbitmq_consumer_group.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/recipes/rabbitmq_consumer_group.md b/site-releases/trunk/src/site/markdown/recipes/rabbitmq_consumer_group.md
index 9edc2cb..7a65542 100644
--- a/site-releases/trunk/src/site/markdown/recipes/rabbitmq_consumer_group.md
+++ b/site-releases/trunk/src/site/markdown/recipes/rabbitmq_consumer_group.md
@@ -19,36 +19,35 @@ under the License.
 
 
 RabbitMQ Consumer Group
-=======================
+-----------------------
 
-[RabbitMQ](http://www.rabbitmq.com/) is a well known Open source software the provides robust messaging for applications.
+[RabbitMQ](http://www.rabbitmq.com/) is well-known open source software that provides robust messaging for applications.
 
-One of the commonly implemented recipes using this software is a work queue.  http://www.rabbitmq.com/tutorials/tutorial-four-java.html describes the use case where
+One of the commonly implemented recipes using this software is a work queue.  [http://www.rabbitmq.com/tutorials/tutorial-four-java.html](http://www.rabbitmq.com/tutorials/tutorial-four-java.html) describes the use case where
 
-* A producer sends a message with a routing key. 
-* The message is routed to the queue whose binding key exactly matches the routing key of the message.	
+* A producer sends a message with a routing key
+* The message is routed to the queue whose binding key exactly matches the routing key of the message
 * There are multiple consumers and each consumer is interested in processing only a subset of the messages by binding to the interested keys
 
 The example provided [here](http://www.rabbitmq.com/tutorials/tutorial-four-java.html) describes how multiple consumers can be started to process all the messages.
 
-While this works, in production systems one needs the following 
+While this works, in production systems one needs the following:
 
-* Ability to handle failures: when a consumers fails another consumer must be started or the other consumers must start processing these messages that should have been processed by the failed consumer.
-* When the existing consumers cannot keep up with the task generation rate, new consumers will be added. The tasks must be redistributed among all the consumers. 
+* Ability to handle failures: when a consumer fails, another consumer must be started or the other consumers must start processing these messages that should have been processed by the failed consumer
+* When the existing consumers cannot keep up with the task generation rate, new consumers will be added. The tasks must be redistributed among all the consumers
 
 In this recipe, we demonstrate handling of consumer failures and new consumer additions using Helix.
 
-Mapping this usecase to Helix is pretty easy as the binding key/routing key is equivalent to a partition. 
+Mapping this usecase to Helix is pretty easy as the binding key/routing key is equivalent to a partition.
 
-Let's take an example. Lets say the queue has 6 partitions, and we have 2 consumers to process all the queues. 
-What we want is all 6 queues to be evenly divided among 2 consumers. 
+Let's take an example. Let's say the topic has 6 queues (partitions), and we have 2 consumers to process all the queues.
+What we want is all 6 queues to be evenly divided among 2 consumers.
 Eventually, when the system scales, we add a third consumer to keep up. This will make each consumer process tasks from 2 queues.
-Now let's say that a consumer failed which reduces the number of active consumers to 2. This means each consumer must process 3 queues.
+Now let's say that a consumer failed, reducing the number of active consumers to 2. This means each consumer must process 3 queues.
 
-We showcase how such a dynamic App can be developed using Helix. Even though we use rabbitmq as the pub/sub system one can extend this solution to other pub/sub systems.
+We showcase how such a dynamic application can be developed using Helix. Even though we use RabbitMQ as the pub/sub system, one can extend this solution to other pub/sub systems.
 
-Try it
-======
+### Try It
 
 ```
 git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
@@ -62,63 +61,60 @@ chmod +x $HELIX_PKG_ROOT/bin/*
 chmod +x $HELIX_RABBITMQ_ROOT/bin/*
 ```
 
-
-Install Rabbit MQ
-----------------
+#### Install RabbitMQ
 
 Setting up RabbitMQ on a local box is straightforward. You can find the instructions here: [http://www.rabbitmq.com/download.html](http://www.rabbitmq.com/download.html)
 
-Start ZK
---------
-Start zookeeper at port 2199
+#### Start ZK
+
+Start ZooKeeper at port 2199
 
 ```
 $HELIX_PKG_ROOT/bin/start-standalone-zookeeper 2199
 ```
 
-Setup the consumer group cluster
---------------------------------
-This will setup the cluster by creating a "rabbitmq-consumer-group" cluster and adds a "topic" with "6" queues. 
+#### Setup the Consumer Group Cluster
+
+This will set up the cluster by creating a "rabbitmq-consumer-group" cluster and adding a "topic" with "6" queues.
 
 ```
-$HELIX_RABBITMQ_ROOT/bin/setup-cluster.sh localhost:2199 
+$HELIX_RABBITMQ_ROOT/bin/setup-cluster.sh localhost:2199
 ```
 
-Add consumers
--------------
-Start 2 consumers in 2 different terminals. Each consumer is given a unique id.
+#### Add Consumers
+
+Start 2 consumers in 2 different terminals. Each consumer is given a unique ID.
 
 ```
 # start-consumer.sh zookeeperAddress (e.g. localhost:2181) consumerId rabbitmqServer (e.g. localhost)
-$HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 0 localhost 
-$HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 1 localhost 
+$HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 0 localhost
+$HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 1 localhost
 
 ```
 
-Start HelixController
---------------------
+#### Start the Helix Controller
+
 Now start a Helix controller that starts managing the "rabbitmq-consumer-group" cluster.
 
 ```
 $HELIX_RABBITMQ_ROOT/bin/start-cluster-manager.sh localhost:2199
 ```
 
-Send messages to the Topic
---------------------------
+#### Send Messages to the Topic
 
-Start sending messages to the topic. This script randomly selects a routing key (1-6) and sends the message to topic. 
+Start sending messages to the topic. This script randomly selects a routing key (1-6) and sends the message to the topic.
 Based on the key, messages get routed to the appropriate queue.
 
 ```
 $HELIX_RABBITMQ_ROOT/bin/send-message.sh localhost 20
 ```
 
-After running this, you should see all 20 messages being processed by 2 consumers. 
+After running this, you should see all 20 messages being processed by 2 consumers.
 
-Add another consumer
---------------------
-Once a new consumer is started, helix detects it. In order to balance the load between 3 consumers, it deallocates 1 partition from the existing consumers and allocates it to the new consumer. We see that
+#### Add Another Consumer
+
+Once a new consumer is started, Helix detects it. In order to balance the load among 3 consumers, it deallocates 1 partition from each of the existing consumers and allocates them to the new consumer. We see that
 each consumer is now processing only 2 queues.
 Helix makes sure that old nodes are asked to stop consuming before the new consumer is asked to start consuming for a given partition. But the transitions for each partition can happen in parallel.
 
@@ -126,7 +122,7 @@ Helix makes sure that old nodes are asked to stop consuming before the new consu
 $HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 2 localhost
 ```
 
-Send messages again to the topic.
+Send messages again to the topic:
 
 ```
 $HELIX_RABBITMQ_ROOT/bin/send-message.sh localhost 100
@@ -134,94 +130,83 @@ $HELIX_RABBITMQ_ROOT/bin/send-message.sh localhost 100
 
 You should see that messages are now received by all 3 consumers.
 
-Stop a consumer
----------------
+#### Stop a Consumer
+
 In any terminal, press CTRL-C and notice that Helix detects the consumer failure and distributes the 2 partitions that were processed by the failed consumer to the remaining 2 active consumers.
 
 
-How does it work
-================
+### How Does This Work?
 
-Find the entire code [here](https://git-wip-us.apache.org/repos/asf?p=incubator-helix.git;a=tree;f=recipes/rabbitmq-consumer-group/src/main/java/org/apache/helix/recipes/rabbitmq). 
- 
-Cluster setup
--------------
-This step creates znode on zookeeper for the cluster and adds the state model. We use online offline state model since there is no need for other states. The consumer is either processing a queue or it is not.
+Find the entire code [here](https://git-wip-us.apache.org/repos/asf?p=incubator-helix.git;a=tree;f=recipes/rabbitmq-consumer-group/src/main/java/org/apache/helix/recipes/rabbitmq).
 
-It creates a resource called "rabbitmq-consumer-group" with 6 partitions. The execution mode is set to FULL_AUTO. This means that the Helix controls the assignment of partition to consumers and automatically distributes the partitions evenly among the active consumers. When a consumer is added or removed, it ensures that a minimum number of partitions are shuffled.
+#### Cluster Setup
 
-```
-      zkclient = new ZkClient(zkAddr, ZkClient.DEFAULT_SESSION_TIMEOUT,
-          ZkClient.DEFAULT_CONNECTION_TIMEOUT, new ZNRecordSerializer());
-      ZKHelixAdmin admin = new ZKHelixAdmin(zkclient);
-      
-      // add cluster
-      admin.addCluster(clusterName, true);
+This step creates a ZNode on ZooKeeper for the cluster and adds the state model. We use the OnlineOffline state model since there is no need for other states: the consumer is either processing a queue or it is not.
 
-      // add state model definition
-      StateModelConfigGenerator generator = new StateModelConfigGenerator();
-      admin.addStateModelDef(clusterName, "OnlineOffline",
-          new StateModelDefinition(generator.generateConfigForOnlineOffline()));
+It creates a resource called "rabbitmq-consumer-group" with 6 partitions. The execution mode is set to AUTO_REBALANCE. This means that Helix controls the assignment of partitions to consumers and automatically distributes the partitions evenly among the active consumers. When a consumer is added or removed, it ensures that a minimal number of partitions are shuffled.
 
-      // add resource "topic" which has 6 partitions
-      String resourceName = "rabbitmq-consumer-group";
-      admin.addResource(clusterName, resourceName, 6, "OnlineOffline", "FULL_AUTO");
 ```
+ZkClient zkclient = new ZkClient(zkAddr, ZkClient.DEFAULT_SESSION_TIMEOUT,
+    ZkClient.DEFAULT_CONNECTION_TIMEOUT, new ZNRecordSerializer());
+ZKHelixAdmin admin = new ZKHelixAdmin(zkclient);
+
+// add cluster
+admin.addCluster(clusterName, true);
 
-Starting the consumers
-----------------------
-The only thing consumers need to know is the zkaddress, cluster name and consumer id. It does not need to know anything else.
+// add state model definition
+StateModelConfigGenerator generator = new StateModelConfigGenerator();
+admin.addStateModelDef(clusterName, "OnlineOffline",
+    new StateModelDefinition(generator.generateConfigForOnlineOffline()));
 
+// add resource "topic" which has 6 partitions
+String resourceName = "rabbitmq-consumer-group";
+admin.addResource(clusterName, resourceName, 6, "OnlineOffline", "AUTO_REBALANCE");
 ```
-   _manager =
-          HelixManagerFactory.getZKHelixManager(_clusterName,
-                                                _consumerId,
-                                                InstanceType.PARTICIPANT,
-                                                _zkAddr);
 
-      StateMachineEngine stateMach = _manager.getStateMachineEngine();
-      ConsumerStateModelFactory modelFactory =
-          new ConsumerStateModelFactory(_consumerId, _mqServer);
-      stateMach.registerStateModelFactory("OnlineOffline", modelFactory);
+#### Starting the Consumers
 
-      _manager.connect();
+The only things a consumer needs to know are the ZooKeeper address, the cluster name, and its consumer ID. It does not need to know anything else.
 
 ```
+_manager = HelixManagerFactory.getZKHelixManager(_clusterName,
+                                                 _consumerId,
+                                                 InstanceType.PARTICIPANT,
+                                                 _zkAddr);
 
-Once the consumer has registered the statemodel and the controller is started, the consumer starts getting callbacks (onBecomeOnlineFromOffline) for the partition it needs to host. All it needs to do as part of the callback is to start consuming messages from the appropriate queue. Similarly, when the controller deallocates a partitions from a consumer, it fires onBecomeOfflineFromOnline for the same partition. 
-As a part of this transition, the consumer will stop consuming from a that queue.
+StateMachineEngine stateMach = _manager.getStateMachineEngine();
+ConsumerStateModelFactory modelFactory =
+    new ConsumerStateModelFactory(_consumerId, _mqServer);
+stateMach.registerStateModelFactory("OnlineOffline", modelFactory);
 
+_manager.connect();
 ```
- @Transition(to = "ONLINE", from = "OFFLINE")
-  public void onBecomeOnlineFromOffline(Message message, NotificationContext context)
-  {
-    LOG.debug(_consumerId + " becomes ONLINE from OFFLINE for " + _partition);
-
-    if (_thread == null)
-    {
-      LOG.debug("Starting ConsumerThread for " + _partition + "...");
-      _thread = new ConsumerThread(_partition, _mqServer, _consumerId);
-      _thread.start();
-      LOG.debug("Starting ConsumerThread for " + _partition + " done");
-
-    }
-  }
-
-  @Transition(to = "OFFLINE", from = "ONLINE")
-  public void onBecomeOfflineFromOnline(Message message, NotificationContext context)
-      throws InterruptedException
-  {
-    LOG.debug(_consumerId + " becomes OFFLINE from ONLINE for " + _partition);
 
-    if (_thread != null)
-    {
-      LOG.debug("Stopping " + _consumerId + " for " + _partition + "...");
+Once the consumer has registered the state model and the controller is started, the consumer starts getting callbacks (onBecomeOnlineFromOffline) for the partitions it needs to host. All it needs to do as part of the callback is to start consuming messages from the appropriate queue. Similarly, when the controller deallocates a partition from a consumer, it fires onBecomeOfflineFromOnline for the same partition.
+As a part of this transition, the consumer will stop consuming from that queue.
 
-      _thread.interrupt();
-      _thread.join(2000);
-      _thread = null;
-      LOG.debug("Stopping " +  _consumerId + " for " + _partition + " done");
+```
+@Transition(to = "ONLINE", from = "OFFLINE")
+public void onBecomeOnlineFromOffline(Message message, NotificationContext context) {
+  LOG.debug(_consumerId + " becomes ONLINE from OFFLINE for " + _partition);
+  if (_thread == null) {
+    LOG.debug("Starting ConsumerThread for " + _partition + "...");
+    _thread = new ConsumerThread(_partition, _mqServer, _consumerId);
+    _thread.start();
+    LOG.debug("Starting ConsumerThread for " + _partition + " done");
 
-    }
   }
-```
\ No newline at end of file
+}
+
+@Transition(to = "OFFLINE", from = "ONLINE")
+public void onBecomeOfflineFromOnline(Message message, NotificationContext context)
+    throws InterruptedException {
+  LOG.debug(_consumerId + " becomes OFFLINE from ONLINE for " + _partition);
+  if (_thread != null) {
+    LOG.debug("Stopping " + _consumerId + " for " + _partition + "...");
+    _thread.interrupt();
+    _thread.join(2000);
+    _thread = null;
+    LOG.debug("Stopping " +  _consumerId + " for " + _partition + " done");
+  }
+}
+```

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/trunk/src/site/markdown/recipes/rsync_replicated_file_store.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/recipes/rsync_replicated_file_store.md b/site-releases/trunk/src/site/markdown/recipes/rsync_replicated_file_store.md
index f8a74a0..24cca63 100644
--- a/site-releases/trunk/src/site/markdown/recipes/rsync_replicated_file_store.md
+++ b/site-releases/trunk/src/site/markdown/recipes/rsync_replicated_file_store.md
@@ -17,25 +17,25 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Near real time rsync replicated file system
-===========================================
+Near-Realtime Rsync Replicated File System
+------------------------------------------
 
-Quickdemo
----------
+### Quick Demo
 
 * This demo starts 3 instances with IDs ```localhost_12001, localhost_12002, localhost_12003```
 * Each instance stores its files under ```/tmp/<id>/filestore```
-* ``` localhost_12001 ``` is designated as the master and ``` localhost_12002 and localhost_12003``` are the slaves.
-* Files written to master are replicated to the slaves automatically. In this demo, a.txt and b.txt are written to ```/tmp/localhost_12001/filestore``` and it gets replicated to other folders.
-* When the master is stopped, ```localhost_12002``` is promoted to master. 
+* ```localhost_12001``` is designated as the master, and ```localhost_12002``` and ```localhost_12003``` are the slaves
+* Files written to the master are replicated to the slaves automatically. In this demo, a.txt and b.txt are written to ```/tmp/localhost_12001/filestore``` and they get replicated to other folders.
+* When the master is stopped, ```localhost_12002``` is promoted to master.
 * The other slave ```localhost_12003``` stops replicating from ```localhost_12001``` and starts replicating from new master ```localhost_12002```
 * Files written to new master ```localhost_12002``` are replicated to ```localhost_12003```
-* In the end state of this quick demo, ```localhost_12002``` is the master and ```localhost_12003``` is the slave. Manually create files under ```/tmp/localhost_12002/filestore``` and see that appears in ```/tmp/localhost_12003/filestore```
-* Ignore the interrupted exceptions on the console :-).
+* In the end state of this quick demo, ```localhost_12002``` is the master and ```localhost_12003``` is the slave. Manually create files under ```/tmp/localhost_12002/filestore``` and see them appear in ```/tmp/localhost_12003/filestore```
+* Ignore the interrupted exceptions on the console :-)
 
 
 ```
 git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
 cd recipes/rsync-replicated-file-system/
 mvn clean install package -DskipTests
 cd target/rsync-replicated-file-system-pkg/bin
@@ -44,103 +44,99 @@ chmod +x *
 
 ```
 
-Overview
---------
+### Overview
 
-There are many applications that require storage for storing large number of relatively small data files. Examples include media stores to store small videos, images, mail attachments etc. Each of these objects is typically kilobytes, often no larger than a few megabytes. An additional distinguishing feature of these usecases is also that files are typically only added or deleted, rarely updated. When there are updates, they are rare and do not have any concurrency requirements.
+There are many applications that require storage for a large number of relatively small data files. Examples include media stores for small videos, images, mail attachments, etc. Each of these objects is typically kilobytes in size, often no larger than a few megabytes. An additional distinguishing feature of these use cases is that files are typically only added or deleted, rarely updated. When there are updates, they do not have any concurrency requirements.
+
+These are much simpler requirements than what general purpose distributed file systems have to satisfy; those include concurrent access to files, random access for reads and updates, POSIX compliance, and others. To satisfy those requirements, general DFSs end up being quite complex and expensive to build and maintain.
 
-These are much simpler requirements than what general purpose distributed file system have to satisfy including concurrent access to files, random access for reads and updates, posix compliance etc. To satisfy those requirements, general DFSs are also pretty complex that are expensive to build and maintain.
- 
 A different kind of distributed file system is HDFS, which is inspired by Google's GFS. It is one of the most widely used distributed file systems and forms the main data storage platform for Hadoop. HDFS is primarily aimed at processing very large data sets and distributes files across a cluster of commodity servers by splitting files into fixed-size chunks. HDFS is not particularly well suited for storing a very large number of relatively tiny files.
 
 ### File Store
 
 It's possible to build a vastly simpler system for the class of applications that have the simpler requirements we pointed out:
 
-* Large number of files but each file is relatively small.
-* Access is limited to create, delete and get entire files.
-* No updates to files that are already created (or it's feasible to delete the old file and create a new one).
- 
+* Large number of files but each file is relatively small
+* Access is limited to create, delete and get entire files
+* No updates to files that are already created (or it's feasible to delete the old file and create a new one)
+
 
 We call this system a Partitioned File Store (PFS) to distinguish it from other distributed file systems. This system needs to provide the following features:
 
 * CRD access to a large number of small files
-* Scalability: Files should be distributed across a large number of commodity servers based on the storage requirement.
-* Fault-tolerance: Each file should be replicated on multiple servers so that individual server failures do not reduce availability.
-* Elasticity: It should be possible to add capacity to the cluster easily.
- 
+* Scalability: Files should be distributed across a large number of commodity servers based on the storage requirement
+* Fault-tolerance: Each file should be replicated on multiple servers so that individual server failures do not reduce availability
+* Elasticity: It should be possible to add capacity to the cluster easily
 
-Apache Helix is a generic cluster management framework that makes it very easy to provide the scalability, fault-tolerance and elasticity features. 
-Rsync can be easily used as a replication channel between servers so that each file gets replicated on multiple servers.
 
-Design
-------
+Apache Helix is a generic cluster management framework that makes it very easy to provide scalability, fault-tolerance and elasticity features.
+rsync can be easily used as a replication channel between servers so that each file gets replicated on multiple servers.
 
-High level 
+### Design
 
-* Partition the file system based on the file name. 
-* At any time a single writer can write, we call this a master.
-* For redundancy, we need to have additional replicas called slave. Slaves can optionally serve reads.
-* Slave replicates data from the master.
-* When a master fails, slave gets promoted to master.
+#### High Level
 
-### Transaction log
+* Partition the file system based on the file name
+* At any time, a single writer can write; we call this the master
+* For redundancy, we need additional replicas called slaves. Slaves can optionally serve reads
+* Slave replicates data from the master
+* When a master fails, a slave gets promoted to master
 
-Every write on the master will result in creation/deletion of one or more files. In order to maintain timeline consistency slaves need to apply the changes in the same order. 
-To facilitate this, the master logs each transaction in a file and each transaction is associated with an 64 bit id in which the 32 LSB represents a sequence number and MSB represents the generation number.
-Sequence gets incremented on every transaction and and generation is increment when a new master is elected. 
+#### Transaction Log
 
-### Replication
+Every write on the master results in the creation or deletion of one or more files. In order to maintain timeline consistency, slaves need to apply the changes in the same order.
+To facilitate this, the master logs each transaction in a file, and each transaction is associated with a 64-bit ID in which the 32 LSBs represent a sequence number and the 32 MSBs represent the generation number.
+The sequence number gets incremented on every transaction, and the generation is incremented when a new master is elected.
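+
+As a rough illustration (not code from this recipe), such an ID could be packed and unpacked with plain bit operations; the class and method names below are made up for this sketch:
+
+```
+// Hypothetical sketch: the generation lives in the high 32 bits and the
+// sequence number in the low 32 bits of a single 64-bit transaction ID.
+public class TxnId {
+  public static long pack(int generation, int sequence) {
+    return ((long) generation << 32) | (sequence & 0xFFFFFFFFL);
+  }
+
+  public static int generation(long txnId) {
+    return (int) (txnId >>> 32);
+  }
+
+  public static int sequence(long txnId) {
+    return (int) (txnId & 0xFFFFFFFFL);
+  }
+}
+```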
 
-Replication is required to slave to keep up with the changes on the master. Every time the slave applies a change it checkpoints the last applied transaction id. 
-During restarts, this allows the slave to pull changes from the last checkpointed id. Similar to master, the slave logs each transaction to the transaction logs but instead of generating new transaction id, it uses the same id generated by the master.
+#### Replication
 
+Replication is required for slaves to keep up with changes on the master. Every time the slave applies a change, it checkpoints the last applied transaction ID.
+During restarts, this allows the slave to pull changes from the last checkpointed ID. Similar to the master, the slave logs each transaction to its transaction log, but instead of generating a new transaction ID, it uses the same ID generated by the master.
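+
+A minimal sketch of the checkpointing idea (the file name, layout, and class below are assumptions, not the recipe's actual code):
+
+```
+import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+
+// Hypothetical sketch: store the last applied transaction ID under
+// check_point_dir so the slave can resume from that point after a restart.
+public class Checkpoint {
+  private final Path file;
+
+  public Checkpoint(String checkPointDir) {
+    this.file = Paths.get(checkPointDir, "last_txn_id");
+  }
+
+  public void save(long txnId) throws IOException {
+    Files.write(file, Long.toString(txnId).getBytes(StandardCharsets.UTF_8));
+  }
+
+  public long load() throws IOException {
+    if (!Files.exists(file)) {
+      return 0L; // no checkpoint yet; start from the beginning
+    }
+    return Long.parseLong(new String(Files.readAllBytes(file), StandardCharsets.UTF_8).trim());
+  }
+}
+```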
 
-### Fail over
 
-When a master fails, a new slave will be promoted to master. If the prev master node is reachable, then the new master will flush all the 
-changes from previous master before taking up mastership. The new master will record the end transaction id of the current generation and then starts new generation 
-with sequence starting from 1. After this the master will begin accepting writes. 
+#### Failover
 
+When a master fails, one of the slaves will be promoted to master. If the previous master node is reachable, then the new master will flush all the
+changes from the previous master before taking up mastership. The new master will record the end transaction ID of the current generation and then start a new generation
+with the sequence starting from 1. After this, the master will begin accepting writes.
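+
+A small sketch of the generation bump described above (illustrative only; the class and method names are made up):
+
+```
+// Hypothetical sketch: the new master reads the generation from the high
+// 32 bits of the last applied 64-bit transaction ID, increments it, and
+// issues the first transaction of the new generation with sequence 1.
+public class Failover {
+  public static long firstTxnIdOfNewGeneration(long lastAppliedTxnId) {
+    long oldGeneration = lastAppliedTxnId >>> 32;
+    return ((oldGeneration + 1) << 32) | 1L;
+  }
+}
+```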
 
 ![Partitioned File Store](../images/PFS-Generic.png)
 
 
 
-Rsync based solution
--------------------
+### Rsync-based Solution
 
 ![Rsync based File Store](../images/RSYNC_BASED_PFS.png)
 
 
-This application demonstrate a file store that uses rsync as the replication mechanism. One can envision a similar system where instead of using rsync, 
+This application demonstrates a file store that uses rsync as the replication mechanism. One can envision a similar system where, instead of using rsync, one
 can implement a custom solution to notify the slave of the changes and also provide an API to pull the changed files.
-#### Concept
-* file_store_dir: Root directory for the actual data files 
-* change_log_dir: The transaction logs are generated under this folder.
-* check_point_dir: The slave stores the check points ( last processed transaction) here.
+
+#### Concepts
+* file_store_dir: Root directory for the actual data files
+* change_log_dir: The transaction logs are generated under this folder
+* check_point_dir: The slave stores its checkpoints (last processed transaction) here
 
 #### Master
-* File server: This component support file uploads and downloads and writes the files to ```file_store_dir```. This is not included in this application. Idea is that most applications have different ways of implementing this component and has some business logic associated with it. It is not hard to come up with such a component if needed.
-* File store watcher: This component watches the ```file_store_dir``` directory on the local file system for any changes and notifies the registered listeners of the changes.
-* Change Log Generator: This registers as a listener of File System Watcher and on each notification logs the changes into a file under ```change_log_dir```. 
+* File server: This component supports file uploads and downloads and writes the files to ```file_store_dir```. This is not included in this application. The idea is that most applications have different ways of implementing this component and have some associated business logic. It is not hard to come up with such a component if needed.
+* File store watcher: This component watches the ```file_store_dir``` directory on the local file system for any changes and notifies the registered listeners of the changes (a minimal watcher sketch follows this list)
+* Change log generator: This registers as a listener of the file store watcher and on each notification logs the changes into a file under ```change_log_dir```
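+
+The following is a minimal watcher sketch using java.nio.file.WatchService; the Listener interface and class name are assumptions for illustration, not the recipe's code:
+
+```
+import java.io.IOException;
+import java.nio.file.*;
+
+// Hypothetical sketch of a file store watcher, not part of the recipe code.
+public class FileStoreWatcher {
+  public interface Listener {
+    void onChange(WatchEvent.Kind<?> kind, Path path);
+  }
+
+  // Watches fileStoreDir for creates/deletes and notifies the listener.
+  public static void watch(Path fileStoreDir, Listener listener)
+      throws IOException, InterruptedException {
+    WatchService watcher = fileStoreDir.getFileSystem().newWatchService();
+    fileStoreDir.register(watcher,
+        StandardWatchEventKinds.ENTRY_CREATE,
+        StandardWatchEventKinds.ENTRY_DELETE);
+    while (true) {
+      WatchKey key = watcher.take(); // blocks until events arrive
+      for (WatchEvent<?> event : key.pollEvents()) {
+        if (event.kind() == StandardWatchEventKinds.OVERFLOW) {
+          continue;
+        }
+        listener.onChange(event.kind(), fileStoreDir.resolve((Path) event.context()));
+      }
+      if (!key.reset()) {
+        break; // directory is no longer accessible
+      }
+    }
+  }
+}
+```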
 
-####Slave
-* File server: This component on the slave will only support reads.
-* Cluster state observer: Slave observes the cluster state and is able to know who is the current master. 
+#### Slave
+* File server: This component on the slave will only support reads
+* Cluster state observer: The slave observes the cluster state and is able to know which node is the current master
 * Replicator: This has three subcomponents
     - Periodic rsync of change log: This is a background process that periodically rsyncs the ```change_log_dir``` of the master to its local directory
     - Change Log Watcher: This watches the ```change_log_dir``` for changes and notifies the registered listeners of the change
-    - On demand rsync invoker: This is registered as a listener to change log watcher and on every change invokes rsync to sync only the changed file.
-
+    - On demand rsync invoker: This is registered as a listener to the change log watcher and, on every change, invokes rsync to sync only the changed file (see the invocation sketch after this list)
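+
+A minimal sketch of such an on-demand invocation; the class name and the host/path parameters are assumptions for illustration, not the recipe's code:
+
+```
+import java.io.IOException;
+
+// Hypothetical sketch: sync a single changed file from the master to a local
+// directory by shelling out to rsync.
+public class RsyncInvoker {
+  public static void syncFile(String masterHost, String remotePath, String localDir)
+      throws IOException, InterruptedException {
+    Process p = new ProcessBuilder("rsync", "-avz", masterHost + ":" + remotePath, localDir)
+        .inheritIO()
+        .start();
+    int exitCode = p.waitFor();
+    if (exitCode != 0) {
+      throw new IOException("rsync failed with exit code " + exitCode);
+    }
+  }
+}
+```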
 
 #### Coordination
 
 The coordination between nodes is done by Helix. Helix does the partition management and assigns each partition to multiple nodes based on the replication factor. It elects one of the nodes as master and designates the others as slaves.
-It provides notifications to each node in the form of state transitions ( Offline to Slave, Slave to Master). It also provides notification when there is change is cluster state. 
-This allows the slave to stop replicating from current master and start replicating from new master. 
+It provides notifications to each node in the form of state transitions (Offline to Slave, Slave to Master). It also provides notifications when there is a change in cluster state.
+This allows the slave to stop replicating from the current master and start replicating from the new master.
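+
+The following is a minimal sketch (not the recipe's actual classes) of how a node might receive these transitions via a Helix state model; the class name and method bodies are assumptions:
+
+```
+import org.apache.helix.NotificationContext;
+import org.apache.helix.model.Message;
+import org.apache.helix.participant.statemachine.StateModel;
+import org.apache.helix.participant.statemachine.Transition;
+
+// Illustrative sketch of a MasterSlave state model for the file store node.
+public class FileStoreStateModel extends StateModel {
+  @Transition(to = "SLAVE", from = "OFFLINE")
+  public void onBecomeSlaveFromOffline(Message message, NotificationContext context) {
+    // Start replicating this partition's change log from the current master.
+  }
+
+  @Transition(to = "MASTER", from = "SLAVE")
+  public void onBecomeMasterFromSlave(Message message, NotificationContext context) {
+    // Flush pending changes from the previous master, then start accepting writes.
+  }
+
+  @Transition(to = "OFFLINE", from = "SLAVE")
+  public void onBecomeOfflineFromSlave(Message message, NotificationContext context) {
+    // Stop replication and stop serving reads for this partition.
+  }
+}
+```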
 
-In this application, we have only one partition but its very easy to extend it to support multiple partitions. By partitioning the file store, one can add new nodes and Helix will automatically 
+In this application, we have only one partition, but it's very easy to extend it to support multiple partitions. By partitioning the file store, one can add new nodes and Helix will automatically
 re-distribute partitions among the nodes. To summarize, Helix provides partition management, fault tolerance and facilitates automated cluster expansion.
 
 

