cloudstack-dev mailing list archives

From Sebastien Goasguen <>
Subject Re: CloudStack Docker and Mesos Support
Date Wed, 17 Sep 2014 10:28:12 GMT

On Sep 16, 2014, at 7:20 PM, ilya musayev <> wrote:

> Hi all,
> Would you know where we stand with Mesos and Docker?

That's a big question.

Mesos is a resource allocator that multiple frameworks can use to run workloads of various
types. The interest is to mix workloads (big data, long-running services, parallel computing,
Docker) in order to maximize utilization of your resources.

For instance, Aurora (a Mesos framework) can execute long-running services within Docker containers.

The challenge with Docker is the coordination of multiple containers. Kubernetes, for example,
coordinates Docker containers to run HA applications.

What we see (IMHO) is things like Kubernetes being deployed in the cloud (GCE, Azure, Rackspace
are currently "supported" in Kubernetes). And at MesosCon, there was a small demo of running
Kubernetes as a Mesos framework.

So…bottom line for me is that I see Mesos and everything on top as a workload that can be
run in CloudStack. Similar thing with CoreOS. If a CloudStack cloud makes CoreOS templates
available, then users can start a CoreOS cluster and manage Docker directly or via Kubernetes
(because of course there is CoreOS "support" in Kubernetes).
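To sketch what that would look like for users (this is an illustrative example, not something CloudStack ships today): once a cloud offers a CoreOS template, a user could pass a cloud-config file as instance user data to bootstrap an etcd/fleet cluster, after which Docker containers can be scheduled across the nodes. A minimal cloud-config in the CoreOS format of the day might be:

```yaml
#cloud-config
coreos:
  etcd:
    # generate a fresh discovery token per cluster at https://discovery.etcd.io/new
    # (<token> is a placeholder here)
    discovery: https://discovery.etcd.io/<token>
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
```

The same file would be handed to each instance in the cluster; etcd uses the discovery URL to find its peers, and fleet then lets you schedule Docker-based units across the machines.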

Hence, there is nothing to do, except for CloudStack clouds to show that they can offer Mesos*
or Kubernetes* on demand.

However, if we were to re-architect CloudStack entirely, we could use Mesos as a base resource
allocator and write a VM framework. The framework would ask Mesos for "hypervisors" and,
once allocated, CloudStack would start them…etc. The issue would still be in the networking.
The advantage is that a user could run a Mesos cluster and mix workloads: CloudStack + Big
Data + docker….

Anything we can do to make CoreOS "cloud stackable" and create a cloudstack driver in Kubernetes
would be really nice.

> Thanks
> ilya
