incubator-cvs mailing list archives

From Apache Wiki <>
Subject [Incubator Wiki] Update of "HoyaProposal" by SteveLoughran
Date Fri, 17 Jan 2014 12:15:17 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Incubator Wiki" for change notification.

The "HoyaProposal" page has been changed by SteveLoughran:

hoya and twill

  Its goal is to be "an extensible tool for creating and managing distributed
  applications under YARN".
  == Proposal ==
@@ -107, +106 @@

  There are some longer term possibilities that could improve Hoya
   1. A web UI tracking and redirecting to the (changing) location of deployed services.
+  1. Provide a Java API to ease creation and manipulation of Hoya-deployed clusters by other applications.
   1. Adding metrics for management tools.
   1. Explore/research sophisticated placement and failure-tracking algorithms. It's precisely because this is a less mature product that we can experiment here.
   1. Explore load-driven cluster sizing.
@@ -151, +151 @@

  Apache Hadoop, it currently deploys HBase and Accumulo.
  For Hadoop, it, along with Samza, drives the work of supporting long-lived
+ services in YARN. While many of these relate to service longevity, there is also the
- services in YARN, work listed in the
- [[YARN-896|]] work. While many of
- the issues listed under YARN-896 relate to service longevity, there is also the
  challenge of having low-latency table lookups co-exist with CPU-and-IO
  intensive analytics workloads. This may drive future developments in Hadoop
  HDFS itself.
+ === Relationship with Twill ===
+ The goals of the two projects are different. Twill's goal
+ is to make YARN app developers' lives easier, while Hoya is a
+ tool to deploy existing distributed frameworks easily in a YARN
+ cluster and then perform basic management of them,
+ ''without making fundamental changes to the applications themselves''.
+ Things such as dynamic configuration patching so that
+ applications like HBase run easily in a YARN cluster, security,
+ failure handling and a model for reacting to
+ failures, and storing state about applications to
+ improve their restart behavior in a YARN cluster
+ would be in the purview of Hoya. Management frameworks could also use Hoya
+ as a tool to talk to YARN and start, stop, shrink or expand an
+ instance of an application (e.g. an HBase cluster).
+ What could be of mutual benefit would be to take some of Hoya's logic for
+ maintaining a long-lived YARN application and make it re-usable by other applications
+ -where clearly Twill would be the ideal destination.
+ Hoya has a clean model-view split server side, with all the server state stored independently
+ from the AM's entry point and RPC client and service interfaces. This was done
+ to isolate the model and aid mock testing of large clusters with simulated scale,
+ and hence increase confidence that Hoya can scale to work in large YARN clusters
+ and with larger application instances -without mandating that every developer
+ have unrestricted access to a 500-node cluster with the SSH credentials necessary
+ to trigger remote process failure and hence failure recovery.
+ This state is something that could perhaps be worked out into something
+ more re-usable, as all it really does is
+  1. Take a specification of role types, the number of instances of each, and some options (there's an unused anti-affinity flag)
+  1. Build a model of the cluster, including live containers, queued operations, and nodes in the YARN cluster and their history
+  1. Build up a list of actions: requests and releases aimed at keeping the model (and hence the deployed app) consistent with the specification.
+  1. Persist placement history to HDFS for best-effort reassignment requests on cluster restart
+  1. Respond to relayed events such as startup, AM restart, container assignment, container failure, and specification updates by generating a revised list of container requests/releases
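The reconcile step above (keeping the live container count of each role consistent with the specification) can be sketched as follows. This is illustrative Java only; the class and method names are assumptions, not Hoya's actual code:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch (not Hoya's actual classes) of the reconcile
 * step: compare a role's desired instance count with the containers
 * currently live, and emit container requests or releases.
 */
public class RoleReconciler {

    public enum Action { REQUEST, RELEASE }

    /** One container-level operation to hand to YARN. */
    public static final class Op {
        public final Action action;
        public Op(Action action) { this.action = action; }
    }

    /**
     * Keep the number of instances of a role constant: request
     * containers if too few are live, release the surplus if too many.
     */
    public static List<Op> reconcile(int desired, int live) {
        List<Op> ops = new ArrayList<>();
        for (int i = live; i < desired; i++) {
            ops.add(new Op(Action.REQUEST));  // scale up towards the spec
        }
        for (int i = desired; i < live; i++) {
            ops.add(new Op(Action.RELEASE));  // scale down towards the spec
        }
        return ops;
    }
}
```

A container failure simply lowers the live count in the model, so the next pass through the same loop emits a replacement request; recovery needs no separate code path.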
+ For any long-lived YARN service, the goal "keep the number of
+ instances of a role constant" is an underlying use case, from
+ Distributed Shell up. Hoya effectively just allows this to be driven
+ by a JSON document persisted to HDFS, updated via RPC, and delegates
+ to plugin providers the work of actually starting processes at the far
+ end.
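As a concrete illustration, such a persisted specification need carry little more than role names, instance counts and options. The field names below are invented for illustration and are not Hoya's actual schema:

```json
{
  "roles": {
    "master": { "instances": 1 },
    "worker": { "instances": 5, "anti-affinity": false }
  }
}
```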
+ Making this reusable would not only assist anyone else writing
+ long-lived services, but there are
+ some aspects of failure handling that we can improve in a way
+ that would benefit all apps, such as better weighted-moving average
+ failure tracking, sophisticated policies on reacting to it, and moving
+ beyond a simple works/doesn't work to a more nuanced greylist of
+ perceived server reliability.
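A weighted moving average of that kind could be as simple as an exponentially weighted per-node failure rate, so that old failures decay instead of blacklisting a node forever. A minimal sketch, with assumed names and an arbitrary smoothing factor:

```java
/**
 * Sketch of weighted-moving-average failure tracking: an
 * exponentially weighted failure rate per node. Names and the
 * smoothing factor are illustrative, not Hoya's actual code.
 */
public class NodeFailureTracker {
    private final double alpha;  // weight given to the newest observation
    private double failureRate;  // EWMA: 1.0 = always fails, 0.0 = always works

    public NodeFailureTracker(double alpha) { this.alpha = alpha; }

    /** Record the outcome of one container on this node. */
    public void record(boolean failed) {
        failureRate = alpha * (failed ? 1.0 : 0.0) + (1 - alpha) * failureRate;
    }

    /**
     * Greylist rather than blacklist: nodes are ranked by perceived
     * reliability instead of a binary works/doesn't-work flag.
     */
    public double reliability() {
        return 1.0 - failureRate;
    }
}
```

A placement policy could then prefer high-reliability nodes without ever permanently excluding one, which is the "more nuanced greylist" behaviour described above.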
+ Similarly, if we develop a management API that can be used by
+ cluster management tools to perform basic monitoring of long-lived
+ YARN applications, such an API could be implemented by other applications
+ -a feature that does again imply re-usable code.
  == Known Risks ==
@@ -183, +242 @@

   1. All source will be moved to Apache Infrastructure
   1. All outstanding issues in our in-house JIRA infrastructure will be replicated into the
Apache JIRA system.
+  1. We have a currently unused twitter handle {{{@hoyaproject}}} which would be passed to the PMC.
  == External Dependencies ==
