fluo-dev mailing list archives

From Josh Elser <josh.el...@gmail.com>
Subject Re: [DISCUSS] Separate deployment code from Fluo distribution tarball
Date Tue, 12 Jul 2016 18:05:58 GMT
+1 to that succinctness

Any chance I could persuade you to write some of this plan down 
somewhere (maybe on the website)? It would be nice to have a 
clean, well-defined architecture for Fluo. It would also help attract new 
contributors (making it easier to learn how Fluo works and how its 
modules fit together).

Christopher wrote:
> I like the idea of minimizing the core to increase packaging/deployment
> options, as well as the idea of bootstrapping a few possible deployment
> strategies.
>
> On Tue, Jul 12, 2016 at 1:04 PM Keith Turner<keith@deenlo.com>  wrote:
>
>> I am in favour of creating multiple separate projects for launching Fluo
>> workers and the oracle in different environments.  We should do this in such
>> a way that these projects only use Fluo Core public APIs.
>>
>> For the 1.0.0 release we can proceed with the baked-in Twill+YARN support.
>> For 1.1.0 we could work towards externalizing this launch functionality and
>> adding any new public APIs that are needed.  The 1.1.0 release could
>> deprecate the built-in YARN+Twill functionality.
>>
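
For illustration, here is a rough sketch of what one of those separate
launcher projects might look like.  FluoConfiguration is already public API
in 1.0.0; the oracle-starting call is left as a placeholder because it is
exactly the kind of new public API that 1.1.0 would need to add first.

import java.io.File;
import org.apache.fluo.api.config.FluoConfiguration;

// Hypothetical external launcher living outside the Fluo tarball.
public class ExternalOracleLauncher {
  public static void main(String[] args) throws Exception {
    // Load connection/application settings from a fluo.properties file
    FluoConfiguration config = new FluoConfiguration(new File(args[0]));

    // Placeholder for a public launch API that does not exist in 1.0.0,
    // e.g. something along the lines of FluoFactory.newOracle(config).start();
  }
}
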
>> On Tue, Jul 12, 2016 at 11:59 AM, Mike Walch<mwalch@apache.org>  wrote:
>>
>>> The Fluo distribution tarball currently contains code that allows users to
>>> start Fluo applications in YARN using Twill.  While deploying to YARN has
>>> worked well, Fluo should not be tied to a single cluster resource manager.
>>> For example, users could have problems deploying their Fluo application to
>>> YARN with our current setup as Twill brings in dependencies that can
>>> conflict with the user-provided dependencies in their Fluo application.
>>>
>>> In order to give users deployment options, we should remove any deployment
>>> code from our tarball.  We should start releasing a simpler tarball that
>>> contains only the necessary jars and commands to configure a Fluo
>>> application (in Zookeeper) and start the processes of a Fluo application
>>> locally.  It looks like Kafka Streams is already taking this approach.  Read
>>> the section 'Docker, Mesos, and Kubernetes, Oh My!' in the blog post below:
>>>
>>> http://www.confluent.io/blog/introducing-kafka-streams-stream-processing-made-simple
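
As a concrete illustration of the "configure a Fluo application (in
Zookeeper)" piece, here is a minimal sketch assuming the 1.0.0 public API
shape of FluoFactory, FluoAdmin, and FluoConfiguration; the properties file
path and application name are placeholder arguments.

import java.io.File;
import org.apache.fluo.api.client.FluoAdmin;
import org.apache.fluo.api.client.FluoFactory;
import org.apache.fluo.api.config.FluoConfiguration;

// Sketch of the kind of call the slimmed-down tarball's commands would wrap.
public class InitializeApplication {
  public static void main(String[] args) throws Exception {
    FluoConfiguration config = new FluoConfiguration(new File(args[0]));
    config.setApplicationName(args[1]);

    // Writes the application's configuration into Zookeeper
    FluoAdmin admin = FluoFactory.newAdmin(config);
    admin.initialize(new FluoAdmin.InitializationOptions()
        .setClearZookeeper(false).setClearTable(false));
  }
}
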
>>> If we also took this approach, the commands and structure of the tarball
>>> would need to follow SEMVER, as users and downstream projects would depend
>>> on them.  Users could then either script deployments using Chef, Ansible,
>>> Salt, etc. or use downstream projects created to make it easy to deploy Fluo
>>> applications to YARN, Mesos, Kubernetes, etc.  We could initially create and
>>> maintain some of these downstream projects.  However, whether they innovate
>>> and survive would depend on community interest.
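
A downstream YARN project, for example, might be little more than a thin
Apache Twill wrapper around the tarball's commands and Fluo's public APIs.
A very rough sketch follows, assuming Twill's YarnTwillRunnerService and
AbstractTwillRunnable; the class names and the oracle-starting details are
hypothetical.

import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.twill.api.AbstractTwillRunnable;
import org.apache.twill.api.TwillRunnerService;
import org.apache.twill.yarn.YarnTwillRunnerService;

// Hypothetical downstream project that owns the Twill/YARN dependencies,
// keeping them (and their transitive baggage) out of the Fluo tarball.
public class FluoYarnLauncher {

  // Runs inside a YARN container; would start an oracle or worker process
  // using only Fluo Core public APIs.
  static class OracleRunnable extends AbstractTwillRunnable {
    @Override
    public void run() {
      // start the oracle here (details omitted)
    }
  }

  public static void main(String[] args) {
    String zookeepers = args[0];  // ZooKeeper connect string used by Twill
    TwillRunnerService runner =
        new YarnTwillRunnerService(new YarnConfiguration(), zookeepers);
    runner.start();
    runner.prepare(new OracleRunnable()).start();
  }
}
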
>>>
>>> After we release 1.0.0, I would like to start working in this direction for
>>> Fluo.  I am interested to see if anyone has any views to share on this
>>> topic.
>>>
>
