cloudstack-dev mailing list archives

From Phong Nguyen <pngu...@apache.org>
Subject Re: LXC and SSVM/CPVM on the host
Date Thu, 24 Oct 2013 15:53:11 GMT
> So we need a KVM cluster to run the VMs? (Added the author of the feature)

As originally discussed and implemented, the decision was to use KVM
system VMs for LXC clusters rather than creating an LXC-specific system VM.
A zone with only LXC clusters will deploy a KVM system VM on a host running
an LXC agent. Behind the scenes, this is possible because both the KVM and
LXC agents use libvirt for provisioning, and because an LXC agent's setup
is almost identical to a KVM agent's and perfectly capable of running KVM VMs.
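
To illustrate the point about libvirt: KVM and LXC guests are just
different domain types to the same libvirtd, which is why one host can run
both. Below is a minimal sketch of the two domain definitions; the names
and memory sizes are made up for illustration and are not taken from the
CloudStack agent code:

```xml
<!-- Hypothetical LXC guest: the libvirt LXC driver runs an init
     process inside a container (os type "exe") -->
<domain type='lxc'>
  <name>example-container</name>
  <memory unit='KiB'>524288</memory>
  <os>
    <type>exe</type>
    <init>/sbin/init</init>
  </os>
</domain>

<!-- Hypothetical KVM guest (e.g. a system VM) managed by the same
     libvirtd, using full virtualization (os type "hvm") -->
<domain type='kvm'>
  <name>example-systemvm</name>
  <memory unit='KiB'>524288</memory>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
</domain>
```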

-Phong


On Thu, Oct 24, 2013 at 8:57 AM, Francois Gaudreault <
fgaudreault@cloudops.com> wrote:

> If this is the case, then you should remove the ability to create LXC
> zones or clarify the documentation about that.
>
> According to the wiki page:
>
> Each of the different hypervisors currently have their own System VMs.
> These system VM images are used to run a console proxy, secondary storage,
> and router VMs.
>
> We discussed the possibility of creating System VMs for LXC. There was
> concern with the complexity and potential issues involving iptables for the
> router inside an LXC container. As an intermediate solution we are going to
> use KVM System VMs inside the LXC Cluster.
>
> So we need a KVM cluster to run the VMs? (Added the author of the feature)
>
> Francois
>
> On 10/22/2013, 1:24 AM, Chiradeep Vittal wrote:
>
>> As far as I understand, in an LXC scenario, the system vms are expected to
>> run on real hypervisors.
>> You can always use the QuickCloud way to not use system vms at all.
>>
>> On 10/21/13 1:45 PM, "Francois Gaudreault" <fgaudreault@cloudops.com>
>> wrote:
>>
>>  Ok I think we have to look at this further. I'll stop hijacking other
>>> threads.
>>>
>>> I am trying to get the SSVM/CPVM to run on an LXC host. The SSVM/CPVM
>>> start and get IPs, but then CloudStack kills them for some reason. Yes, I
>>> use the 4.2 images:
>>>
>>> 2013-10-21 16:19:21,605 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentManager-Handler-9:null) SeqA 73--1: Processing Seq 73--1:  { Cmd ,
>>> MgmtId: -1, via: 73, Ver: v1, Flags: 111,
>>> [{"com.cloud.agent.api.ShutdownCommand":{"reason":"sig.kill","wait":0}}]
>>> }
>>> 2013-10-21 16:19:21,605 INFO  [agent.manager.AgentManagerImpl]
>>> (AgentManager-Handler-9:null) Host 73 has informed us that it is
>>> shutting down with reason sig.kill and detail null
>>> 2013-10-21 16:19:21,606 INFO  [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-11:null) Host 73 is disconnecting with event
>>> ShutdownRequested
>>> 2013-10-21 16:19:21,609 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-11:null) The next status of agent 73 is Disconnected,
>>> current status is Up
>>> 2013-10-21 16:19:21,609 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-11:null) Deregistering link for 73 with state Disconnected
>>> 2013-10-21 16:19:21,609 DEBUG [agent.manager.AgentManagerImpl]
>>> (AgentTaskPool-11:null) Remove Agent : 73
>>> 2013-10-21 16:19:21,609 DEBUG [agent.manager.ConnectedAgentAttache]
>>> (AgentTaskPool-11:null) Processing Disconnect.
>>>
>>> I transferred the host to KVM, and the same SSVM/CPVM images have now
>>> been running fine for the last 30 min (so I assume they work fine...).
>>> Something seems to be wrong with the LXC side :S
>>>
>>> Anyone wants to invest some time to troubleshoot? I'll open a ticket
>>> also.
>>>
>>> --
>>> Francois Gaudreault
>>> Architecte de Solution Cloud | Cloud Solutions Architect
>>> fgaudreault@cloudops.com
>>> 514-629-6775
>>> - - -
>>> CloudOps
>>> 420 rue Guy
>>> Montréal QC  H3J 1S6
>>> www.cloudops.com
>>> @CloudOps_
>>>
>>>
>>
>
> --
> Francois Gaudreault
> Architecte de Solution Cloud | Cloud Solutions Architect
> fgaudreault@cloudops.com
> 514-629-6775
> - - -
> CloudOps
> 420 rue Guy
> Montréal QC  H3J 1S6
> www.cloudops.com
> @CloudOps_
>
>
