Return-Path: X-Original-To: apmail-helix-commits-archive@minotaur.apache.org Delivered-To: apmail-helix-commits-archive@minotaur.apache.org Received: from mail.apache.org (hermes.apache.org [140.211.11.3]) by minotaur.apache.org (Postfix) with SMTP id D486C10AAD for ; Thu, 27 Mar 2014 22:27:05 +0000 (UTC) Received: (qmail 92642 invoked by uid 500); 27 Mar 2014 22:27:03 -0000 Delivered-To: apmail-helix-commits-archive@helix.apache.org Received: (qmail 92586 invoked by uid 500); 27 Mar 2014 22:27:02 -0000 Mailing-List: contact commits-help@helix.apache.org; run by ezmlm Precedence: bulk List-Help: List-Unsubscribe: List-Post: List-Id: Reply-To: dev@helix.apache.org Delivered-To: mailing list commits@helix.apache.org Received: (qmail 92544 invoked by uid 99); 27 Mar 2014 22:27:01 -0000 Received: from athena.apache.org (HELO athena.apache.org) (140.211.11.136) by apache.org (qpsmtpd/0.29) with ESMTP; Thu, 27 Mar 2014 22:27:01 +0000 X-ASF-Spam-Status: No, hits=-2000.0 required=5.0 tests=ALL_TRUSTED X-Spam-Check-By: apache.org Received: from [140.211.11.4] (HELO eris.apache.org) (140.211.11.4) by apache.org (qpsmtpd/0.29) with ESMTP; Thu, 27 Mar 2014 22:26:56 +0000 Received: from eris.apache.org (localhost [127.0.0.1]) by eris.apache.org (Postfix) with ESMTP id 325C32388BFF; Thu, 27 Mar 2014 22:26:12 +0000 (UTC) Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Subject: svn commit: r1582513 [9/15] - in /helix/site-content: ./ 0.6.1-incubating-docs/ 0.6.1-incubating-docs/recipes/ 0.6.1-incubating-docs/releasenotes/ 0.6.2-incubating-docs/ 0.6.2-incubating-docs/recipes/ 0.6.2-incubating-docs/releasenotes/ 0.6.3-docs/ 0.6... 
Date: Thu, 27 Mar 2014 22:26:00 -0000 To: commits@helix.apache.org From: kanak@apache.org X-Mailer: svnmailer-1.0.9 Message-Id: <20140327222612.325C32388BFF@eris.apache.org> X-Virus-Checked: Checked by ClamAV on apache.org Added: helix/site-content/0.6.3-docs/team-list.html URL: http://svn.apache.org/viewvc/helix/site-content/0.6.3-docs/team-list.html?rev=1582513&view=auto ============================================================================== --- helix/site-content/0.6.3-docs/team-list.html (added) +++ helix/site-content/0.6.3-docs/team-list.html Thu Mar 27 22:25:55 2014 @@ -0,0 +1,442 @@ + + + + + + + + Apache Helix - Team list + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+ +
+
+ +
+
+
+
+
+ + +

A successful project requires many people to play many roles. Some members write code or documentation, while others are valuable as testers, submitting patches and suggestions.

+

The team comprises Members and Contributors. Members have direct access to the source of a project and actively evolve the code base. Contributors improve the project by submitting patches and suggestions to the Members. The number of Contributors to the project is unbounded. Get involved today. All contributions to the project are greatly appreciated.

+
+

Members

+ +

The following is a list of developers with commit privileges who have directly contributed to the project in one way or another.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ImageIdNameEmailRolesTime ZoneActual Time (GMT)
olamyOlivier Lamyolamy@apache.orgMentorAustralia/MelbourneAustralia/Melbourne
phuntPatrick Huntphunt@apache.orgMentor-8-8
mahadevMahadev Konarmahadev@apache.orgMentor-8-8
omalleyOwen O'Malleyomalley@apache.orgMentor-8-8
kishoregKishore Gopalakrishnakishoreg@apache.orgCommitter-8-8
zzhangZhen Zhangzzhang@apache.orgCommitter-8-8
sluShi Luslu@apache.orgCommitter-8-8
TBAAdam SilbersteinTBA@apache.orgCommitter-8-8
ksurlakerKapil Surlakerksurlaker@apache.orgCommitter-8-8
rmsBob Schulmanrms@apache.orgCommitter-8-8
swaroop-ajSwaroop Jagadishswaroop-aj@apache.orgCommitter-8-8
rahulaRahul Aggarwalrahula@apache.orgCommitter-8-8
chtyimTerence Yimchtyim@apache.orgCommitter-8-8
santipSantiago Perezsantip@apache.orgCommitter-8-8
vinayakbVinayak Borkarvinayakb@apache.orgCommitter-8-8
sdasShirshanka Dassdas@apache.orgCommitter-8-8
kanakKanak Biscuitwalakanak@apache.orgCommitter-8-8
+
+
+

Contributors

+ +

There are no contributors listed for this project. Please check back again later.

+ +
+
+
+
+
+
+ +
+ + + + +
+
+
+

Back to top

+ +

Reflow Maven skin by Andrius Velykis.

+ +
+
Apache Helix, Apache, the Apache feather logo, and the Apache Helix project logos are trademarks of The Apache Software Foundation. + All other marks mentioned may be trademarks or registered trademarks of their respective owners.
+ Privacy Policy +
+
+
+ + + + + + + + + + + + + + + + + + \ No newline at end of file Added: helix/site-content/0.6.3-docs/tutorial_admin.html URL: http://svn.apache.org/viewvc/helix/site-content/0.6.3-docs/tutorial_admin.html?rev=1582513&view=auto ============================================================================== --- helix/site-content/0.6.3-docs/tutorial_admin.html (added) +++ helix/site-content/0.6.3-docs/tutorial_admin.html Thu Mar 27 22:25:55 2014 @@ -0,0 +1,799 @@ + + + + + + + + Apache Helix - Tutorial - Admin Operations + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+ +
+
+ +
+
+
+
+ +

+
+ +

Helix provides a set of admin APIs for cluster management operations. They are supported via:

+
    +
  • Java API
  • +
  • Command Line Interface
  • +
  • REST Interface via helix-admin-webapp
  • +
+
+

Java API

+

See interface org.apache.helix.HelixAdmin

+
+
+

Command Line Interface

+

The command line tool comes with the helix-core package:

+

Get the command line tool:

+
+
git clone https://git-wip-us.apache.org/repos/asf/helix.git
+cd helix
+git checkout tags/helix-0.6.3
+./build
+cd helix-core/target/helix-core-pkg/bin
+chmod +x *.sh
+
+
+

Get help:

+
+
./helix-admin.sh --help
+
+
+

All other commands have this form:

+
+
./helix-admin.sh --zkSvr <ZookeeperServerAddress> <command> <parameters>
+
+
+
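For repeated administration, the general form can be wrapped in a small shell function. A minimal sketch (the wrapper name and the cluster/resource names are illustrative; it assumes helix-admin.sh is in the current directory and ZooKeeper runs on localhost:2181):

```shell
# Hypothetical wrapper around the general command form shown above.
ZK=localhost:2181
helix() { ./helix-admin.sh --zkSvr "$ZK" "$@"; }

# Typical bootstrap sequence (commented out; requires a running ZooKeeper):
# helix --addCluster MyCluster
# helix --addNode MyCluster localhost_12913
# helix --addResource MyCluster MyDB 8 MasterSlave
# helix --rebalance MyCluster MyDB 3
```

Each commented line maps directly onto a row of the command table below.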
+

Supported Commands

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Command Syntax Description
--activateCluster <clusterName controllerCluster true/false> Enable/disable a cluster in distributed controller mode
--addCluster <clusterName> Add a new cluster
--addIdealState <clusterName resourceName fileName.json> Add an ideal state to a cluster
--addInstanceTag <clusterName instanceName tag> Add a tag to an instance
--addNode <clusterName instanceId> Add an instance to a cluster
--addResource <clusterName resourceName partitionNumber stateModelName> Add a new resource to a cluster
--addResourceProperty <clusterName resourceName propertyName propertyValue> Add a resource property
--addStateModelDef <clusterName fileName.json> Add a State model definition to a cluster
--dropCluster <clusterName> Delete a cluster
--dropNode <clusterName instanceId> Remove a node from a cluster
--dropResource <clusterName resourceName> Remove an existing resource from a cluster
--enableCluster <clusterName true/false> Enable/disable a cluster
--enableInstance <clusterName instanceId true/false> Enable/disable an instance
--enablePartition <true/false clusterName nodeId resourceName partitionName> Enable/disable a partition
--getConfig <configScope configScopeArgs configKeys> Get user configs
--getConstraints <clusterName constraintType> Get constraints
--help print help information
--instanceGroupTag <instanceTag> Specify instance group tag, used with rebalance command
--listClusterInfo <clusterName> Show information of a cluster
--listClusters List all clusters
--listInstanceInfo <clusterName instanceId> Show information of an instance
--listInstances <clusterName> List all instances in a cluster
--listPartitionInfo <clusterName resourceName partitionName> Show information of a partition
--listResourceInfo <clusterName resourceName> Show information of a resource
--listResources <clusterName> List all resources in a cluster
--listStateModel <clusterName stateModelName> Show information of a state model
--listStateModels <clusterName> List all state models in a cluster
--maxPartitionsPerNode <maxPartitionsPerNode> Specify the max partitions per instance, used with addResourceGroup command
--rebalance <clusterName resourceName replicas> Rebalance a resource
--removeConfig <configScope configScopeArgs configKeys> Remove user configs
--removeConstraint <clusterName constraintType constraintId> Remove a constraint
--removeInstanceTag <clusterName instanceId tag> Remove a tag from an instance
--removeResourceProperty <clusterName resourceName propertyName> Remove a resource property
--resetInstance <clusterName instanceId> Reset all erroneous partitions on an instance
--resetPartition <clusterName instanceId resourceName partitionName> Reset an erroneous partition
--resetResource <clusterName resourceName> Reset all erroneous partitions of a resource
--setConfig <configScope configScopeArgs configKeyValueMap> Set user configs
--setConstraint <clusterName constraintType constraintId constraintKeyValueMap> Set a constraint
--swapInstance <clusterName oldInstance newInstance> Swap an old instance with a new instance
--zkSvr <ZookeeperServerAddress> Provide zookeeper address
+
+
+
+

REST Interface

+

The REST interface comes with the helix-admin-webapp package:

+
+
git clone https://git-wip-us.apache.org/repos/asf/helix.git
+cd helix
+git checkout tags/helix-0.6.3
+./build
+cd helix-admin-webapp/target/helix-admin-webapp-pkg/bin
+chmod +x *.sh
+./run-rest-admin.sh --zkSvr <zookeeperAddress> --port <port> // make sure ZooKeeper is running
+
+
+
+
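Every POST request below carries a form field named jsonParameters whose value is a JSON object naming the command. A small helper for assembling that body can keep the curl calls readable; this is a sketch, and everything except the jsonParameters convention is illustrative:

```shell
# Build the 'jsonParameters=<json>' body that the REST admin expects in POSTs.
json_params() {
  printf 'jsonParameters={"command":"%s"%s}' "$1" "$2"
}

body=$(json_params addCluster ',"clusterName":"MyCluster"')
echo "$body"
# Then, for example:
# curl -d "$body" -H "Content-Type: application/json" http://localhost:8100/clusters
```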

URL and support methods

+
    +
  • /clusters

    +
      +
    • List all clusters
    • +
    +
    +
    curl http://localhost:8100/clusters
    +
    +
    +
      +
    • Add a cluster
    • +
    +
    +
    curl -d 'jsonParameters={"command":"addCluster","clusterName":"MyCluster"}' -H "Content-Type: application/json" http://localhost:8100/clusters
    +
    +
  • +
  • /clusters/{clusterName}

    +
      +
    • List cluster information
    • +
    +
    +
    curl http://localhost:8100/clusters/MyCluster
    +
    +
    +
      +
    • Enable/disable a cluster in distributed controller mode
    • +
    +
    +
    curl -d 'jsonParameters={"command":"activateCluster","grandCluster":"MyControllerCluster","enabled":"true"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster
    +
    +
    +
      +
    • Remove a cluster
    • +
    +
    +
    curl -X DELETE http://localhost:8100/clusters/MyCluster
    +
    +
  • +
  • /clusters/{clusterName}/resourceGroups

    +
      +
    • List all resources in a cluster
    • +
    +
    +
    curl http://localhost:8100/clusters/MyCluster/resourceGroups
    +
    +
    +
      +
    • Add a resource to cluster
    • +
    +
    +
    curl -d 'jsonParameters={"command":"addResource","resourceGroupName":"MyDB","partitions":"8","stateModelDefRef":"MasterSlave" }' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups
    +
    +
  • +
  • /clusters/{clusterName}/resourceGroups/{resourceName}

    +
      +
    • List resource information
    • +
    +
    +
    curl http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
    +
    +
    +
      +
    • Drop a resource
    • +
    +
    +
    curl -X DELETE http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
    +
    +
    +
      +
    • Reset all erroneous partitions of a resource
    • +
    +
    +
    curl -d 'jsonParameters={"command":"resetResource"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
    +
    +
  • +
  • /clusters/{clusterName}/resourceGroups/{resourceName}/idealState

    +
      +
    • Rebalance a resource
    • +
    +
    +
    curl -d 'jsonParameters={"command":"rebalance","replicas":"3"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
    +
    +
    +
      +
    • Add an ideal state
    • +
    +
    +
    echo 'jsonParameters={
    +"command":"addIdealState"
    +   }&newIdealState={
    +  "id" : "MyDB",
    +  "simpleFields" : {
    +    "IDEAL_STATE_MODE" : "AUTO",
    +    "NUM_PARTITIONS" : "8",
    +    "REBALANCE_MODE" : "SEMI_AUTO",
    +    "REPLICAS" : "0",
    +    "STATE_MODEL_DEF_REF" : "MasterSlave",
    +    "STATE_MODEL_FACTORY_NAME" : "DEFAULT"
    +  },
    +  "listFields" : {
    +  },
    +  "mapFields" : {
    +    "MyDB_0" : {
    +      "localhost_1001" : "MASTER",
    +      "localhost_1002" : "SLAVE"
    +    }
    +  }
    +}' > newIdealState.json
    +curl -d @'./newIdealState.json' -H 'Content-Type: application/json' http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
    +
    +
    +
      +
    • Add resource property
    • +
    +
    +
    curl -d 'jsonParameters={"command":"addResourceProperty","REBALANCE_TIMER_PERIOD":"500"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
    +
    +
  • +
  • /clusters/{clusterName}/resourceGroups/{resourceName}/externalView

    +
      +
    • Show resource external view
    • +
    +
    +
    curl http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/externalView
    +
    +
  • +
  • /clusters/{clusterName}/instances

    +
      +
    • List all instances
    • +
    +
    +
    curl http://localhost:8100/clusters/MyCluster/instances
    +
    +
    +
      +
    • Add an instance
    • +
    +
    +
    curl -d 'jsonParameters={"command":"addInstance","instanceNames":"localhost_1001"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances
    +
    +
    +
      +
    • Swap an instance
    • +
    +
    +
    curl -d 'jsonParameters={"command":"swapInstance","oldInstance":"localhost_1001", "newInstance":"localhost_1002"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances
    +
    +
  • +
  • /clusters/{clusterName}/instances/{instanceName}

    +
      +
    • Show instance information
    • +
    +
    +
    curl http://localhost:8100/clusters/MyCluster/instances/localhost_1001
    +
    +
    +
      +
    • Enable/disable an instance
    • +
    +
    +
    curl -d 'jsonParameters={"command":"enableInstance","enabled":"false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
    +
    +
    +
      +
    • Drop an instance
    • +
    +
    +
    curl -X DELETE http://localhost:8100/clusters/MyCluster/instances/localhost_1001
    +
    +
    +
      +
    • Disable/enable partitions on an instance
    • +
    +
    +
    curl -d 'jsonParameters={"command":"enablePartition","resource": "MyDB","partition":"MyDB_0",  "enabled" : "false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
    +
    +
    +
      +
    • Reset an erroneous partition on an instance
    • +
    +
    +
    curl -d 'jsonParameters={"command":"resetPartition","resource": "MyDB","partition":"MyDB_0"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
    +
    +
    +
      +
    • Reset all erroneous partitions on an instance
    • +
    +
    +
    curl -d 'jsonParameters={"command":"resetInstance"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
    +
    +
  • +
  • /clusters/{clusterName}/configs

    +
      +
    • Get user cluster level config
    • +
    +
    +
    curl http://localhost:8100/clusters/MyCluster/configs/cluster
    +
    +
    +
      +
    • Set user cluster level config
    • +
    +
    +
    curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/cluster
    +
    +
    +
      +
    • Remove user cluster level config
    • +
    +
    +
    curl -d 'jsonParameters={"command":"removeConfig","configs":"key1,key2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/cluster
    +
    +
    +
      +
    • Get/set/remove user participant level config
    • +
    +
    +
    curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/participant/localhost_1001
    +
    +
    +
      +
    • Get/set/remove resource level config
    • +
    +
    +
    curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/resource/MyDB
    +
    +
  • +
  • /clusters/{clusterName}/controller

    +
      +
    • Show controller information
    • +
    +
    +
    curl http://localhost:8100/clusters/MyCluster/Controller
    +
    +
    +
      +
    • Enable/disable cluster
    • +
    +
    +
    curl -d 'jsonParameters={"command":"enableCluster","enabled":"false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/Controller
    +
    +
  • +
  • /zkPath/{path}

    +
      +
    • Get information for zookeeper path
    • +
    +
    +
    curl http://localhost:8100/zkPath/MyCluster
    +
    +
  • +
  • /clusters/{clusterName}/StateModelDefs

    +
      +
    • Show all state model definitions
    • +
    +
    +
    curl http://localhost:8100/clusters/MyCluster/StateModelDefs
    +
    +
    +
      +
    • Add a state model definition
    • +
    +
    +
    echo 'jsonParameters={
    +  "command":"addStateModelDef"
    +}&newStateModelDef={
    +  "id" : "OnlineOffline",
    +  "simpleFields" : {
    +    "INITIAL_STATE" : "OFFLINE"
    +  },
    +  "listFields" : {
    +    "STATE_PRIORITY_LIST" : [ "ONLINE", "OFFLINE", "DROPPED" ],
    +    "STATE_TRANSITION_PRIORITYLIST" : [ "OFFLINE-ONLINE", "ONLINE-OFFLINE", "OFFLINE-DROPPED" ]
    +  },
    +  "mapFields" : {
    +    "DROPPED.meta" : {
    +      "count" : "-1"
    +    },
    +    "OFFLINE.meta" : {
    +      "count" : "-1"
    +    },
    +    "OFFLINE.next" : {
    +      "DROPPED" : "DROPPED",
    +      "ONLINE" : "ONLINE"
    +    },
    +    "ONLINE.meta" : {
    +      "count" : "R"
    +    },
    +    "ONLINE.next" : {
    +      "DROPPED" : "OFFLINE",
    +      "OFFLINE" : "OFFLINE"
    +    }
    +  }
    +}' > newStateModelDef.json
    +curl -d @'./newStateModelDef.json' -H 'Content-Type: application/json' http://localhost:8100/clusters/MyCluster/StateModelDefs
    +
    +
  • +
  • /clusters/{clusterName}/StateModelDefs/{stateModelDefName}

    +
      +
    • Show a state model definition
    • +
    +
    +
    curl http://localhost:8100/clusters/MyCluster/StateModelDefs/OnlineOffline
    +
    +
  • +
  • /clusters/{clusterName}/constraints/{constraintType}

    +
      +
    • Show all constraints
    • +
    +
    +
    curl http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT
    +
    +
    +
      +
    • Set a constraint
    • +
    +
    +
    curl -d 'jsonParameters={"constraintAttributes":"RESOURCE=MyDB,CONSTRAINT_VALUE=1"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT/MyConstraint
    +
    +
    +
      +
    • Remove a constraint
    • +
    +
    +
    curl -X DELETE http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT/MyConstraint
    +
    +
  • +
+
+
+
+
+
+
+
+ +
+ + + + +
+
+
+

Back to top

+ +

Reflow Maven skin by Andrius Velykis.

+ +
+
Apache Helix, Apache, the Apache feather logo, and the Apache Helix project logos are trademarks of The Apache Software Foundation. + All other marks mentioned may be trademarks or registered trademarks of their respective owners.
+ Privacy Policy +
+
+
+ + + + + + + + + + + + + + + + + + \ No newline at end of file Added: helix/site-content/0.6.3-docs/tutorial_agent.html URL: http://svn.apache.org/viewvc/helix/site-content/0.6.3-docs/tutorial_agent.html?rev=1582513&view=auto ============================================================================== --- helix/site-content/0.6.3-docs/tutorial_agent.html (added) +++ helix/site-content/0.6.3-docs/tutorial_agent.html Thu Mar 27 22:25:55 2014 @@ -0,0 +1,386 @@ + + + + + + + + Apache Helix - Tutorial - Helix Agent + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+ +
+
+ +
+
+
+
+ +

+
+ +

Not every distributed system is written on the JVM, but many systems would benefit from the cluster management features that Helix provides. To make a non-JVM system work with Helix, you can use the Helix Agent module.

+
+

What is Helix Agent?

+

Helix is built on the following assumption: if your distributed resource is modeled by a finite state machine, then Helix can tell participants when they should transition between states. In the Java API, this means implementing transition callbacks. In the Helix agent API, this means providing commands that can run for each transition.

+

These commands could do anything behind the scenes; Helix only requires that they exit once the state transition is complete.

+
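In other words, the contract between Helix and a transition command is just the process exit code. A toy transition command in shell (script and argument names are invented for illustration) might look like:

```shell
#!/bin/sh
# Hypothetical transition command: transition.sh <FROM-TO>
# Helix treats exit 0 as "transition complete" and non-zero as failure.
transition() {
  case "$1" in
    OFFLINE-ONLINE) echo "bringing partition online" ;;  # real work goes here
    ONLINE-OFFLINE) echo "taking partition offline" ;;
    *) echo "unknown transition: $1" >&2; return 1 ;;
  esac
}
transition "${1:-OFFLINE-ONLINE}"
```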
+
+

Configuring Transition Commands

+

Here’s how to tell Helix which commands to run on state transitions:

+
+

Java

+

Using the Java API, first get a configuration scope (the Helix agent supports both cluster and resource scopes, picking resource first if it is available):

+
+
// Cluster scope
+HelixConfigScope scope =
+    new HelixConfigScopeBuilder(ConfigScopeProperty.CLUSTER).forCluster(clusterName).build();
+
+// Resource scope
+HelixConfigScope scope =
+    new HelixConfigScopeBuilder(ConfigScopeProperty.RESOURCE).forCluster(clusterName).forResource(resourceName).build();
+
+
+

Then, specify the command to run for each state transition:

+
+
// Get the configuration accessor
+ConfigAccessor configAccessor = new ConfigAccessor(_gZkClient);
+
+// Specify the script for OFFLINE --> ONLINE
+CommandConfig.Builder builder = new CommandConfig.Builder();
+CommandConfig cmdConfig =
+    builder.setTransition("OFFLINE", "ONLINE").setCommand("simpleHttpClient.py OFFLINE-ONLINE")
+        .setCommandWorkingDir(workingDir)
+        .setCommandTimeout("5000L") // optional: ms to wait before failing
+        .setPidFile(pidFile) // optional: for daemon-like systems that will write the process id to a file
+        .build();
+configAccessor.set(scope, cmdConfig.toKeyValueMap());
+
+// Specify the script for ONLINE --> OFFLINE
+builder = new CommandConfig.Builder();
+cmdConfig =
+    builder.setTransition("ONLINE", "OFFLINE").setCommand("simpleHttpClient.py ONLINE-OFFLINE")
+        .setCommandWorkingDir(workingDir)
+        .build();
+configAccessor.set(scope, cmdConfig.toKeyValueMap());
+
+// Specify NOP for OFFLINE --> DROPPED
+builder = new CommandConfig.Builder();
+cmdConfig =
+    builder.setTransition("OFFLINE", "DROPPED")
+        .setCommand(CommandAttribute.NOP.getName())
+        .build();
+configAccessor.set(scope, cmdConfig.toKeyValueMap());
+
+
+

In this example, we have a program called simpleHttpClient.py that we call for all transitions, only changing the arguments that are passed in. However, there is no requirement that each transition invoke the same program; this API allows running arbitrary commands in arbitrary directories with arbitrary arguments.

+

Notice that for the OFFLINE --> DROPPED transition, we do not run any command (specifically, we specify the NOP command). This just tells Helix that the system doesn’t care about when things are dropped, and it can consider the transition already done.

+
+
+

Command Line

+

It is also possible to configure everything directly from the command line. Here’s how that would look for cluster-wide configuration:

+
+
# Specify the script for OFFLINE --> ONLINE
+/helix-admin.sh --zkSvr localhost:2181 --setConfig CLUSTER clusterName OFFLINE-ONLINE.command="simpleHttpClient.py OFFLINE-ONLINE",OFFLINE-ONLINE.workingDir="/path/to/script",OFFLINE-ONLINE.command.pidfile="/path/to/pidfile"
+
+# Specify the script for ONLINE --> OFFLINE
+/helix-admin.sh --zkSvr localhost:2181 --setConfig CLUSTER clusterName ONLINE-OFFLINE.command="simpleHttpClient.py ONLINE-OFFLINE",ONLINE-OFFLINE.workingDir="/path/to/script",ONLINE-OFFLINE.command.pidfile="/path/to/pidfile"
+
+# Specify NOP for OFFLINE --> DROPPED
+/helix-admin.sh --zkSvr localhost:2181 --setConfig CLUSTER clusterName OFFLINE-DROPPED.command="nop"
+
+
+

Like in the Java configuration, it is also possible to specify a resource scope instead of a cluster scope:

+
+
# Specify the script for OFFLINE --> ONLINE
+/helix-admin.sh --zkSvr localhost:2181 --setConfig RESOURCE clusterName,resourceName OFFLINE-ONLINE.command="simpleHttpClient.py OFFLINE-ONLINE",OFFLINE-ONLINE.workingDir="/path/to/script",OFFLINE-ONLINE.command.pidfile="/path/to/pidfile"
+
+
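Note that --setConfig takes its key/value pairs as one comma-separated argument, so a stray space would split it into multiple shell words. A small helper (hypothetical, bash) that joins pairs safely:

```shell
# Join key=value pairs into the single comma-separated argument --setConfig expects.
join_config() { local IFS=,; printf '%s' "$*"; }

cfg=$(join_config 'OFFLINE-ONLINE.command=simpleHttpClient.py OFFLINE-ONLINE' \
                  'OFFLINE-ONLINE.workingDir=/path/to/script')
echo "$cfg"
# Then: ./helix-admin.sh --zkSvr localhost:2181 --setConfig CLUSTER clusterName "$cfg"
```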
+
+
+
+

Starting the Agent

+

There should be an agent running for every participant you have running. Ideally, its lifecycle should match that of the participant. Here, we have a simple long-running participant called simpleHttpServer.py. Its only purpose is to record state transitions.

+

Here are some ways that you can start the Helix agent:

+
+

Java

+
+
// Start your application process
+ExternalCommand serverCmd = ExternalCommand.start(workingDir + "/simpleHttpServer.py");
+
+// Start the agent
+Thread agentThread = new Thread() {
+  @Override
+  public void run() {
+    while(!isInterrupted()) {
+      try {
+        HelixAgentMain.main(new String[] {
+            "--zkSvr", zkAddr, "--cluster", clusterName, "--instanceName", instanceName,
+            "--stateModel", "OnlineOffline"
+        });
+      } catch (InterruptedException e) {
+        LOG.info("Agent thread interrupted", e);
+        interrupt();
+      } catch (Exception e) {
+        LOG.error("Exception start helix-agent", e);
+      }
+    }
+  }
+};
+agentThread.start();
+
+// Wait for the process to terminate (either intentionally or unintentionally)
+serverCmd.waitFor();
+
+// Kill the agent
+agentThread.interrupt();
+
+
+
+
+

Command Line

+
+
# Build Helix and start the agent
+mvn clean install -DskipTests
+chmod +x helix-agent/target/helix-agent-pkg/bin/*
+helix-agent/target/helix-agent-pkg/bin/start-helix-agent.sh --zkSvr zkAddr1,zkAddr2 --cluster clusterName --instanceName instanceName --stateModel OnlineOffline
+
+# Here, you can define your own logic to terminate this agent when your process terminates
+...
+
+
+
+
+
+

Example

+

Here is a basic system that uses the Helix agent package.

+
+
+

Notes

+

As you may have noticed from the examples, the participant program and the state transition program are two different programs. The former is a long-running process that is directly tied to the Helix agent. The latter is a process that only exists while a state transition is underway. Despite this, these two processes should be intertwined. The transition command will need to communicate to the participant to actually complete the state transition and the participant will need to communicate whether or not this was successful. The implementation of this protocol is the responsibility of the system.

+
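One way to implement such a hand-off, purely as an illustration, is a marker file shared by the transition command and the participant; real systems would more likely use an RPC, a signal, or an HTTP call. All file and state names here are invented:

```shell
# Toy protocol: the transition command records the target state; the
# participant reads it and acknowledges by writing the current state.
STATE_DIR=$(mktemp -d)

# Transition-command side: request the new state.
printf 'ONLINE\n' > "$STATE_DIR/desired_state"

# Participant side: apply the state and acknowledge.
read -r state < "$STATE_DIR/desired_state"
printf '%s\n' "$state" > "$STATE_DIR/current_state"

# Transition-command side: exit 0 once the ack matches (what Helix checks).
read -r ack < "$STATE_DIR/current_state"
[ "$ack" = "ONLINE" ] && echo "transition acknowledged"
rm -rf "$STATE_DIR"
```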
+
+
+
+
+
+ +
+ + + + +
+
+
+

Back to top

+ +

Reflow Maven skin by Andrius Velykis.

+ +
+
Apache Helix, Apache, the Apache feather logo, and the Apache Helix project logos are trademarks of The Apache Software Foundation. + All other marks mentioned may be trademarks or registered trademarks of their respective owners.
+ Privacy Policy +
+
+
+ + + + + + + + + + + + + + + + + + \ No newline at end of file Added: helix/site-content/0.6.3-docs/tutorial_controller.html URL: http://svn.apache.org/viewvc/helix/site-content/0.6.3-docs/tutorial_controller.html?rev=1582513&view=auto ============================================================================== --- helix/site-content/0.6.3-docs/tutorial_controller.html (added) +++ helix/site-content/0.6.3-docs/tutorial_controller.html Thu Mar 27 22:25:55 2014 @@ -0,0 +1,378 @@ + + + + + + + + Apache Helix - Tutorial - Controller + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+ +
+
+ +
+
+
+
+ +

+
+ +

Next, let's implement the controller. This is the brain of the cluster. Helix makes sure there is exactly one active controller running the cluster.

+
+

Start a Connection

+

The Helix manager requires the following parameters:

+
    +
  • clusterName: A logical name to represent the group of nodes
  • +
  • instanceName: A logical name of the process creating the manager instance. Generally this is host:port
  • +
  • instanceType: Type of the process. This can be one of the following types, in this case use CONTROLLER: +
      +
    • CONTROLLER: Process that controls the cluster, any number of controllers can be started but only one will be active at any given time
    • +
    • PARTICIPANT: Process that performs the actual task in the distributed system
    • +
    • SPECTATOR: Process that observes the changes in the cluster
    • +
    • ADMIN: To carry out system admin actions
    • +
  • +
  • zkConnectString: Connection string to ZooKeeper. This is of the form host1:port1,host2:port2,host3:port3
  • +
+
+
manager = HelixManagerFactory.getZKHelixManager(clusterName,
+                                                instanceName,
+                                                instanceType,
+                                                zkConnectString);
+
+
+
+
+

Controller Code

+

The Controller needs to know about all changes in the cluster. Helix takes care of this with the default implementation. If you need additional functionality, see GenericHelixController on how to configure the pipeline.

+
+
manager = HelixManagerFactory.getZKHelixManager(clusterName,
+                                                instanceName,
+                                                InstanceType.CONTROLLER,
+                                                zkConnectString);
+manager.connect();
+GenericHelixController controller = new GenericHelixController();
+manager.addConfigChangeListener(controller);
+manager.addLiveInstanceChangeListener(controller);
+manager.addIdealStateChangeListener(controller);
+manager.addExternalViewChangeListener(controller);
+manager.addControllerListener(controller);
+
+
+

The snippet above shows how the controller is started. You can also start the controller using the command-line interface.

+
+
cd helix/helix-core/target/helix-core-pkg/bin
+./run-helix-controller.sh --zkSvr <Zookeeper ServerAddress (Required)>  --cluster <Cluster name (Required)>
+
+
+
+
+

Controller Deployment Modes

+

Helix provides multiple options to deploy the controller.

+
+

STANDALONE

+

The Controller can be started as a separate process to manage a cluster. This is the recommended approach. However, since a single controller can be a single point of failure, multiple controller processes are required for reliability. Even if multiple controllers are running, only one will be actively managing the cluster at any time; this is decided by a leader-election process. If the leader fails, another controller will take over managing the cluster.

+

Even though we recommend this method of deployment, it has the drawback of having to manage an additional service for each cluster. See the Controller as a Service option.

+
+
+

EMBEDDED

+

If setting up a separate controller process is not viable, then it is possible to embed the controller as a library in each of the participants.

+
+
+

CONTROLLER AS A SERVICE

+

One of the cool features we added in Helix was to use a set of controllers to manage a large number of clusters.

+

For example if you have X clusters to be managed, instead of deploying X*3 (3 controllers for fault tolerance) controllers for each cluster, one can deploy just 3 controllers. Each controller can manage X/3 clusters. If any controller fails, the remaining two will manage X/2 clusters.

+

Next, let's implement the controller. This is the brain of the cluster. Helix makes sure there is exactly one active controller running the cluster.

+
+
+
+

Start the Helix agent


It requires the following parameters:

  • clusterName: A logical name to represent the group of nodes
  • instanceName: A logical name for the process creating the manager instance. Generally this is host:port.
  • instanceType: The type of the process. In this case, use CONTROLLER. The possible types are:
    • CONTROLLER: Process that controls the cluster. Any number of controllers can be started, but only one will be active at any given time.
    • PARTICIPANT: Process that performs the actual tasks in the distributed system.
    • SPECTATOR: Process that observes the changes in the cluster.
    • ADMIN: Process that carries out system admin operations.
  • zkConnectString: Connection string to ZooKeeper. This is of the form host1:port1,host2:port2,host3:port3.
manager = HelixManagerFactory.getZKHelixManager(clusterName,
                                                instanceName,
                                                instanceType,
                                                zkConnectString);

Controller Code

The Controller needs to know about all changes in the cluster. Helix takes care of this with the default implementation. If you need additional functionality, see GenericHelixController for details on how to configure the pipeline.
manager = HelixManagerFactory.getZKHelixManager(clusterName,
                                                instanceName,
                                                InstanceType.CONTROLLER,
                                                zkConnectString);
manager.connect();
GenericHelixController controller = new GenericHelixController();
manager.addConfigChangeListener(controller);
manager.addLiveInstanceChangeListener(controller);
manager.addIdealStateChangeListener(controller);
manager.addExternalViewChangeListener(controller);
manager.addControllerListener(controller);

The snippet above shows how the controller is started. You can also start the controller using the command-line interface.

cd helix/helix-core/target/helix-core-pkg/bin
./run-helix-controller.sh --zkSvr <ZooKeeper server address (required)> --cluster <cluster name (required)>

Controller Deployment Modes

Helix provides multiple options to deploy the controller.

STANDALONE

The Controller can be started as a separate process to manage a cluster. This is the recommended approach. However, since a single controller can be a single point of failure, multiple controller processes are required for reliability. Even if multiple controllers are running, only one will actively manage the cluster at any time, as decided by a leader-election process. If the leader fails, another controller takes over managing the cluster.

Even though we recommend this method of deployment, it has the drawback of having to manage an additional service for each cluster. See the Controller As a Service option below.


EMBEDDED

If setting up a separate controller process is not viable, then it is possible to embed the controller as a library in each of the participants.
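As a sketch, an embedded controller can be started in-process by each participant using the HelixControllerMain helper from helix-core; the cluster name and ZooKeeper address below are illustrative assumptions, and the participant's own setup (state model factory registration and connect) is omitted:

```java
import org.apache.helix.HelixManager;
import org.apache.helix.controller.HelixControllerMain;

public class EmbeddedControllerExample {
  public static void main(String[] args) throws Exception {
    // Assumptions for illustration: a local ZooKeeper and an existing cluster
    String zkConnectString = "localhost:2181";
    String clusterName = "MYCLUSTER";

    // ... participant setup (state model factory registration, connect) goes here ...

    // Embed a controller in the same JVM as the participant. Even though every
    // participant starts one, only a single embedded controller is elected
    // leader at any time; the rest stand by.
    HelixManager controller = HelixControllerMain.startHelixController(
        zkConnectString, clusterName, "embeddedController",
        HelixControllerMain.STANDALONE);

    // Disconnect the embedded controller on shutdown
    Runtime.getRuntime().addShutdownHook(new Thread(controller::disconnect));
  }
}
```

Verify the startHelixController signature against the Helix version you deploy.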


CONTROLLER AS A SERVICE

One of the cool features we added in Helix is the ability to use a single set of controllers to manage a large number of clusters.

For example, if you have X clusters to be managed, instead of deploying X*3 controllers (3 per cluster for fault tolerance), you can deploy just 3 controllers. Each controller then manages X/3 clusters. If any controller fails, the remaining two each manage X/2 clusters.
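The failover arithmetic above can be sketched with a simple round-robin assignment; the class and method names here are illustrative, not Helix APIs:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ClusterAssignment {
  // Round-robin assignment of numClusters clusters to the currently live controllers
  static Map<String, List<Integer>> assign(List<String> controllers, int numClusters) {
    Map<String, List<Integer>> assignment = new LinkedHashMap<>();
    for (String c : controllers) {
      assignment.put(c, new ArrayList<>());
    }
    for (int cluster = 0; cluster < numClusters; cluster++) {
      String owner = controllers.get(cluster % controllers.size());
      assignment.get(owner).add(cluster);
    }
    return assignment;
  }

  public static void main(String[] args) {
    int numClusters = 12; // X = 12 clusters
    List<String> controllers = new ArrayList<>(Arrays.asList("c1", "c2", "c3"));

    // With 3 controllers, each manages X/3 = 4 clusters
    Map<String, List<Integer>> before = assign(controllers, numClusters);
    System.out.println("c1 before failure: " + before.get("c1").size()); // prints 4

    // If c3 fails, the remaining two each manage X/2 = 6 clusters
    controllers.remove("c3");
    Map<String, List<Integer>> after = assign(controllers, numClusters);
    System.out.println("c1 after failure: " + after.get("c1").size()); // prints 6
  }
}
```

In the real feature, Helix itself rebalances cluster ownership among the surviving controllers; this sketch only illustrates the resulting load per controller.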
