From: kishoreg@apache.org
To: commits@helix.incubator.apache.org
Subject: [2/2] git commit: Adding Helix logo and adding documentation
Message-Id: <20121217190250.8052A81DE4A@tyr.zones.apache.org>
Date: Mon, 17 Dec 2012 19:02:50 +0000 (UTC)

Adding Helix logo and adding documentation

Project: http://git-wip-us.apache.org/repos/asf/incubator-helix/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-helix/commit/de62c690
Tree: http://git-wip-us.apache.org/repos/asf/incubator-helix/tree/de62c690
Diff: http://git-wip-us.apache.org/repos/asf/incubator-helix/diff/de62c690

Branch: refs/heads/master
Commit: de62c69073a09ac35369629fc03af2f747c98d69
Parents: 7b6790d
Author: Kishore Gopalakrishna
Authored: Mon Dec 17 11:00:49 2012 -0800
Committer: Kishore Gopalakrishna
Committed: Mon Dec 17 11:00:49 2012 -0800

----------------------------------------------------------------------
 recipes/distributed-lock-manager/README.md |   98 +++++++++++++++++++----
 src/site/markdown/index.md                 |    1 +
 src/site/markdown/recipes/lock_manager.md  |   98 +++++++++++++++++++----
 src/site/site.xml                          |   11 ++-
 4 files changed, 170 insertions(+), 38 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/de62c690/recipes/distributed-lock-manager/README.md
----------------------------------------------------------------------
diff --git a/recipes/distributed-lock-manager/README.md b/recipes/distributed-lock-manager/README.md
index 5ef55d9..0304fed 100644
--- a/recipes/distributed-lock-manager/README.md
+++ b/recipes/distributed-lock-manager/README.md
@@ -42,7 +42,9 @@ Helix provides a simple and elegant solution to this problem.
Simply specify the
To quickly see this working, run the lock-manager-demo script, where 12 locks are evenly distributed among three nodes; when a node fails, the locks get re-distributed among the remaining two nodes. Note that Helix does not re-shuffle the locks completely; instead, it simply distributes the locks relinquished by the dead node evenly among the two remaining nodes.

-#### Quick version
+----------------------------------------------------------------------------------------
+
+#### Short version
This version starts multiple threads within the same process to simulate a multi-node deployment. Try the long version to get a better idea of how it works.

```
@@ -52,36 +54,97 @@ mvn clean install package -DskipTests
cd recipes/distributed-lock-manager/target/distributed-lock-manager-pkg/bin
chmod +x *
./lock-manager-demo
+```

-```
+##### Output
+
+```
+./lock-manager-demo
+STARTING localhost_12000
+STARTING localhost_12002
+STARTING localhost_12001
+STARTED localhost_12000
+STARTED localhost_12002
+STARTED localhost_12001
+localhost_12001 acquired lock:lock-group_3
+localhost_12000 acquired lock:lock-group_8
+localhost_12001 acquired lock:lock-group_2
+localhost_12001 acquired lock:lock-group_4
+localhost_12002 acquired lock:lock-group_1
+localhost_12002 acquired lock:lock-group_10
+localhost_12000 acquired lock:lock-group_7
+localhost_12001 acquired lock:lock-group_5
+localhost_12002 acquired lock:lock-group_11
+localhost_12000 acquired lock:lock-group_6
+localhost_12002 acquired lock:lock-group_0
+localhost_12000 acquired lock:lock-group_9
+lockName        acquired By
+======================================
+lock-group_0    localhost_12002
+lock-group_1    localhost_12002
+lock-group_10   localhost_12002
+lock-group_11   localhost_12002
+lock-group_2    localhost_12001
+lock-group_3    localhost_12001
+lock-group_4    localhost_12001
+lock-group_5    localhost_12001
+lock-group_6    localhost_12000
+lock-group_7    localhost_12000
+lock-group_8    localhost_12000
+lock-group_9    localhost_12000
+Stopping localhost_12000
+localhost_12000Interrupted
+localhost_12001 acquired lock:lock-group_9
+localhost_12001 acquired lock:lock-group_8
+localhost_12002 acquired lock:lock-group_6
+localhost_12002 acquired lock:lock-group_7
+lockName        acquired By
+======================================
+lock-group_0    localhost_12002
+lock-group_1    localhost_12002
+lock-group_10   localhost_12002
+lock-group_11   localhost_12002
+lock-group_2    localhost_12001
+lock-group_3    localhost_12001
+lock-group_4    localhost_12001
+lock-group_5    localhost_12001
+lock-group_6    localhost_12002
+lock-group_7    localhost_12002
+lock-group_8    localhost_12001
+lock-group_9    localhost_12001
+
+```
+
+----------------------------------------------------------------------------------------

#### Long version
This provides more details on how to set up the cluster and where to plug in application code.

-#### start zookeeper
+##### Start ZooKeeper

```
./start-standalone-zookeeper 2199
```

-#### Create a cluster
+##### Create a cluster

```
./helix-admin --zkSvr localhost:2199 --addCluster lock-manager-demo
```

-#### Create a lock group
+##### Create a lock group

-Create a lock group and specify the number of locks in the lock group. You can change add new locks dynamically later.
+Create a lock group and specify the number of locks in the lock group.

```
./helix-admin --zkSvr localhost:2199 --addResource lock-manager-demo lock-group 6 OnlineOffline AUTO_REBALANCE
```

-#### Start the nodes
+##### Start the nodes

Create a Lock class that handles the callbacks.
```
+
public class Lock extends StateModel
{
  private String lockName;
@@ -105,21 +168,23 @@ public class Lock extends StateModel
```

-LockFactory that creates the lock
+LockFactory that creates the lock
+
```
public class LockFactory extends StateModelFactory<Lock>{
-    /* Instantiates the lock handler, one per lockName*/
+    /* Instantiates the lock handler, one per lockName*/
    public Lock create(String lockName)
    {
      return new Lock(lockName);
    }
}
```

-Thats it, now when the node starts simply join the cluster and helix will invoke the appropriate call backs on Lock.
+
+At node start-up, simply join the cluster and Helix will invoke the appropriate callbacks on the Lock instance. One can start any number of nodes, and Helix detects that a new node has joined the cluster and re-distributes the locks automatically.

```
-public class MyClass{
+public class LockProcess{

  public static void main(String[] args){
    String zkAddress= "localhost:2199";
@@ -142,22 +207,22 @@ public class MyClass{
}
```

-#### Start the controller
+##### Start the controller
The controller can be started either as a separate process or embedded within each node process.

-##### Separate process
+###### Separate process
This is recommended when the number of nodes in the cluster is greater than 100. For fault tolerance, you can run multiple controllers on different boxes.

```
./run-helix-controller --zkSvr localhost:2199 --cluster mycluster 2>&1 > /tmp/controller.log &
```

-##### Embedded within the node process
+###### Embedded within the node process
This is recommended when the number of nodes in the cluster is less than 100. To start a controller from each process, simply add the following lines to LockProcess.

```
-public class MyClass{
+public class LockProcess{

  public static void main(String[] args){
    String zkAddress= "localhost:2199";
@@ -171,8 +236,7 @@ public class MyClass{
  }
}
```
-
-
+----------------------------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/de62c690/src/site/markdown/index.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/index.md b/src/site/markdown/index.md
index b024053..da41a5c 100644
--- a/src/site/markdown/index.md
+++ b/src/site/markdown/index.md
@@ -27,6 +27,7 @@ Pages
* [Javadocs](./apidocs/index.html)
* [UseCases](./UseCases.html)
* Recipes
+    - [Distributed lock manager](./recipes/lock_manager.html)
    - [Rabbit MQ consumer group](./recipes/rabbitmq_consumer_group.html)
    - [Rsync replicated file store](./recipes/rsync_replicated_file_store.html)


http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/de62c690/src/site/markdown/recipes/lock_manager.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/recipes/lock_manager.md b/src/site/markdown/recipes/lock_manager.md
index 5ef55d9..0304fed 100644
--- a/src/site/markdown/recipes/lock_manager.md
+++ b/src/site/markdown/recipes/lock_manager.md
@@ -42,7 +42,9 @@ Helix provides a simple and elegant solution to this problem.
Simply specify the
To quickly see this working, run the lock-manager-demo script, where 12 locks are evenly distributed among three nodes; when a node fails, the locks get re-distributed among the remaining two nodes. Note that Helix does not re-shuffle the locks completely; instead, it simply distributes the locks relinquished by the dead node evenly among the two remaining nodes.

-#### Quick version
+----------------------------------------------------------------------------------------
+
+#### Short version
This version starts multiple threads within the same process to simulate a multi-node deployment. Try the long version to get a better idea of how it works.

```
@@ -52,36 +54,97 @@ mvn clean install package -DskipTests
cd recipes/distributed-lock-manager/target/distributed-lock-manager-pkg/bin
chmod +x *
./lock-manager-demo
+```

-```
+##### Output
+
+```
+./lock-manager-demo
+STARTING localhost_12000
+STARTING localhost_12002
+STARTING localhost_12001
+STARTED localhost_12000
+STARTED localhost_12002
+STARTED localhost_12001
+localhost_12001 acquired lock:lock-group_3
+localhost_12000 acquired lock:lock-group_8
+localhost_12001 acquired lock:lock-group_2
+localhost_12001 acquired lock:lock-group_4
+localhost_12002 acquired lock:lock-group_1
+localhost_12002 acquired lock:lock-group_10
+localhost_12000 acquired lock:lock-group_7
+localhost_12001 acquired lock:lock-group_5
+localhost_12002 acquired lock:lock-group_11
+localhost_12000 acquired lock:lock-group_6
+localhost_12002 acquired lock:lock-group_0
+localhost_12000 acquired lock:lock-group_9
+lockName        acquired By
+======================================
+lock-group_0    localhost_12002
+lock-group_1    localhost_12002
+lock-group_10   localhost_12002
+lock-group_11   localhost_12002
+lock-group_2    localhost_12001
+lock-group_3    localhost_12001
+lock-group_4    localhost_12001
+lock-group_5    localhost_12001
+lock-group_6    localhost_12000
+lock-group_7    localhost_12000
+lock-group_8    localhost_12000
+lock-group_9    localhost_12000
+Stopping localhost_12000
+localhost_12000Interrupted
+localhost_12001 acquired lock:lock-group_9
+localhost_12001 acquired lock:lock-group_8
+localhost_12002 acquired lock:lock-group_6
+localhost_12002 acquired lock:lock-group_7
+lockName        acquired By
+======================================
+lock-group_0    localhost_12002
+lock-group_1    localhost_12002
+lock-group_10   localhost_12002
+lock-group_11   localhost_12002
+lock-group_2    localhost_12001
+lock-group_3    localhost_12001
+lock-group_4    localhost_12001
+lock-group_5    localhost_12001
+lock-group_6    localhost_12002
+lock-group_7    localhost_12002
+lock-group_8    localhost_12001
+lock-group_9    localhost_12001
+
+```
+
+----------------------------------------------------------------------------------------

#### Long version
This provides more details on how to set up the cluster and where to plug in application code.

-#### start zookeeper
+##### Start ZooKeeper

```
./start-standalone-zookeeper 2199
```

-#### Create a cluster
+##### Create a cluster

```
./helix-admin --zkSvr localhost:2199 --addCluster lock-manager-demo
```

-#### Create a lock group
+##### Create a lock group

-Create a lock group and specify the number of locks in the lock group. You can change add new locks dynamically later.
+Create a lock group and specify the number of locks in the lock group.

```
./helix-admin --zkSvr localhost:2199 --addResource lock-manager-demo lock-group 6 OnlineOffline AUTO_REBALANCE
```

-#### Start the nodes
+##### Start the nodes

Create a Lock class that handles the callbacks.
```
+
public class Lock extends StateModel
{
  private String lockName;
@@ -105,21 +168,23 @@ public class Lock extends StateModel
```

-LockFactory that creates the lock
+LockFactory that creates the lock
+
```
public class LockFactory extends StateModelFactory<Lock>{
-    /* Instantiates the lock handler, one per lockName*/
+    /* Instantiates the lock handler, one per lockName*/
    public Lock create(String lockName)
    {
      return new Lock(lockName);
    }
}
```

-Thats it, now when the node starts simply join the cluster and helix will invoke the appropriate call backs on Lock.
+
+At node start-up, simply join the cluster and Helix will invoke the appropriate callbacks on the Lock instance. One can start any number of nodes, and Helix detects that a new node has joined the cluster and re-distributes the locks automatically.

```
-public class MyClass{
+public class LockProcess{

  public static void main(String[] args){
    String zkAddress= "localhost:2199";
@@ -142,22 +207,22 @@ public class MyClass{
}
```

-#### Start the controller
+##### Start the controller
The controller can be started either as a separate process or embedded within each node process.

-##### Separate process
+###### Separate process
This is recommended when the number of nodes in the cluster is greater than 100. For fault tolerance, you can run multiple controllers on different boxes.

```
./run-helix-controller --zkSvr localhost:2199 --cluster mycluster 2>&1 > /tmp/controller.log &
```

-##### Embedded within the node process
+###### Embedded within the node process
This is recommended when the number of nodes in the cluster is less than 100. To start a controller from each process, simply add the following lines to LockProcess.

```
-public class MyClass{
+public class LockProcess{

  public static void main(String[] args){
    String zkAddress= "localhost:2199";
@@ -171,8 +236,7 @@ public class MyClass{
  }
}
```
-
-
+----------------------------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/de62c690/src/site/site.xml
----------------------------------------------------------------------
diff --git a/src/site/site.xml b/src/site/site.xml
index 3edaba3..12369b2 100644
--- a/src/site/site.xml
+++ b/src/site/site.xml
@@ -16,7 +16,10 @@ limitations under the License.
-->
-
+
+    images/helix-logo.jpg
+    http://helix.incubator.apache.org/
+
    http://incubator.apache.org/images/egg-logo.png
    http://incubator.apache.org/
@@ -73,10 +76,10 @@
-    true
-    false
+    false
+    true
-
\ No newline at end of file
+
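
----------------------------------------------------------------------

For readers who want to try the recipe outside the demo script, here is a minimal sketch of how the LockProcess participant described in the docs above might be assembled end to end. This is illustrative only and not part of the commit: it assumes the standard Helix participant bootstrap API (HelixManagerFactory.getZKHelixManager, StateMachineEngine.registerStateModelFactory, HelixManager.connect), reuses the Lock and LockFactory classes shown in the diff, and the instance name localhost_12000 is a placeholder mirroring the demo output.

```
import org.apache.helix.HelixManager;
import org.apache.helix.HelixManagerFactory;
import org.apache.helix.InstanceType;

/**
 * Sketch of a participant for the lock-manager recipe: it joins the cluster,
 * registers the OnlineOffline state model, and lets Helix assign the locks.
 */
public class LockProcess {
  public static void main(String[] args) throws Exception {
    String zkAddress = "localhost:2199";       // ZooKeeper started on port 2199 above
    String clusterName = "lock-manager-demo";  // cluster created with ./helix-admin --addCluster
    String instanceName = "localhost_12000";   // unique name for this node (placeholder)

    HelixManager manager = HelixManagerFactory.getZKHelixManager(
        clusterName, instanceName, InstanceType.PARTICIPANT, zkAddress);

    // LockFactory (from the diff) creates one Lock state model per lock/partition.
    manager.getStateMachineEngine().registerStateModelFactory("OnlineOffline", new LockFactory());
    manager.connect();

    // Keep the process alive; locks are held until the process disconnects or dies.
    Thread.currentThread().join();
  }
}
```

Stopping such a process (or calling manager.disconnect()) releases its locks, and the controller re-assigns them to the remaining nodes, which is the re-distribution visible in the demo output above.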