cloudstack-dev mailing list archives

From "Ishimoto, Ryu" <...@midokura.com>
Subject Re: making VM startup more fine-grained
Date Thu, 26 Jul 2012 05:52:28 GMT
On Mon, Jun 4, 2012 at 3:02 PM, Chiradeep Vittal <Chiradeep.Vittal@citrix.com> wrote:

>
> Also note that in order to support hotplug and hot-detach of nics, we need
> commands like CreateNic and AttachNic.
>
>
This is a great point.  I feel that the right approach is to consider a
NIC to exist only within the VM's lifetime, and thus the APIs that the
cloud orchestrator needs to expose are:

- PlugNIC
- UnplugNIC

The hypervisor resources must implement these methods in a
hypervisor-specific way.  Depending on the hypervisor, this may include
creating a VIF, hot-attaching it to the VM, and plugging it into the
appropriate network.  These are only necessary when CloudStack needs to
support hot-attaching and hot-detaching VIFs.
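
To make this concrete, here's a rough sketch of what such an agent
command might look like in Java.  This is only an illustration: it
assumes the existing agent Command base class and NicTO transfer
object, but the PlugNicCommand name and its fields are hypothetical,
not existing CloudStack classes:

    // Hypothetical sketch of a hot-plug agent command.  The name
    // PlugNicCommand and its fields are illustrative only.
    public class PlugNicCommand extends Command {
        private final NicTO nic;      // the nic/network details to realize
        private final String vmName;  // the running VM to attach it to

        public PlugNicCommand(NicTO nic, String vmName) {
            this.nic = nic;
            this.vmName = vmName;
        }

        public NicTO getNic() { return nic; }
        public String getVmName() { return vmName; }

        @Override
        public boolean executeInSequence() {
            // Hot-plug touches a running VM, so this command should not
            // be reordered against other commands for the same VM.
            return true;
        }
    }

Each hypervisor resource would then handle PlugNIC/UnplugNIC in its own
hypervisor-specific way, as described above.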

On a related but different topic, VIF plugging also has to occur during
VM launch, and it has to be designed in a way that both Xen and
Libvirt/KVM can agree on.  If you look at the way libvirt generates the
VM definition, which is an XML configuration, it makes sense to perform
the plug operation in the same place as the XML generation.  This means
it's fine to keep 'startVM' at the orchestration level and let the
individual hypervisor resources implement their own VIF attachment
logic.  This VIF attachment logic should follow a driver model in which
vendors can supply their own logic, and I think this is essential for
SDN integration.  Each hypervisor should have its own VIF driver
interface, so there would be LibvirtVifDriver and XenVifDriver
interfaces.  Both define 'plug' and 'unplug' methods but may differ in
their signatures.  As one example, an Open vSwitch implementation of
the LibvirtVifDriver might use 'ethernet' mode instead of 'bridge'
mode: it would create a tap interface on the host, create a port on the
bridge, and attach the VIF to it before launching the VM.  For Xen,
you'd only need to make xapi calls, but the VIF driver gives vendors a
place to customize the parameters sent to the VIF.create call, such as
setting 'other-config' values.
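
As a strawman, the libvirt-side interface might look something like the
following.  The names here (LibvirtVifDriver, InterfaceDef, and so on)
are hypothetical and only meant to show the shape of the driver model:

    // Strawman VIF driver interface; all names are hypothetical.
    public interface LibvirtVifDriver {
        // Create the host-side device (e.g. a tap interface when using
        // 'ethernet' mode), plug it into the softswitch, and return the
        // interface definition to embed in the libvirt domain XML.
        InterfaceDef plug(NicTO nic, String guestOsType)
            throws InternalErrorException;

        // Detach the device from the softswitch and clean up any
        // host-side state that plug() created.
        void unplug(InterfaceDef iface);
    }

An Open vSwitch implementation would do its tap and port setup inside
plug(), while a default implementation could keep today's bridge
behavior.  A XenVifDriver would look similar but wrap xapi's VIF.create
instead, which is where vendors could inject their 'other-config'
values.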

Any feedback would be greatly appreciated.  I've only recently started
looking at the CloudStack architecture, so please correct me if I've
said something off-base.

Cheers!
Ryu



>
> The other alternative is to launch the vm in a stopped state. Obtain the
> vif uuid and then start it.
>
> From the latest docs:
> "CloudStack now supports creating a VM without starting it on the
> backend. You can determine whether the VM needs to be started as part of
> the VM deployment. A VM can now be deployed in two ways: create and
> start a VM (the default method); create a VM and leave it in the stopped
> state.
>
> A new request parameter, startVM, is introduced in the deployVm API to
> support the stopped VM feature. The possible values are:
>
> true - The VM starts as a part of the VM deployment.
> false - The VM is left in the stopped state at the end of the VM
> deployment.
>
> The default value is true"
>
>
> On 6/1/12 12:16 PM, "Alex Huang" <Alex.Huang@citrix.com> wrote:
>
> >Even in this plan, the resource is required to have knowledge of someone
> >wanting to know about the vif.  I think Chiradeep's proposal is trying
> >to avoid having the Resource itself changed.
> >
> >To the original proposal, I think breaking it down to that level makes
> >it very difficult to manage.  We can't dictate the APIs on the
> >hypervisors and to what level they actually support an api-by-api
> >construction of a virtual machine.  It works out well for XenServer,
> >but if a certain hypervisor supports only an XML-based virtual machine
> >description, then it won't work.  Therefore, it's best to send down a
> >machine description and let the resource do the translation.
> >
> >For the original problem, I don't think there's any way to get around
> >changing either the Resource or the hypervisor itself to implement that
> >feature.  I think the XenServer team actually mentioned that they're
> >willing to put in script callouts around a vif being brought up and
> >down; that might be one approach, but we'll have to investigate which
> >version it has been put into.
> >
> >--Alex
> >
> >> -----Original Message-----
> >> From: Kelven Yang [mailto:kelven.yang@citrix.com]
> >> Sent: Thursday, May 31, 2012 11:30 PM
> >> To: cloudstack-dev@incubator.apache.org
> >> Subject: RE: making VM startup more fine-grained
> >>
> >> Another way to state my point - don't let CloudStack orchestrators do
> >> micro-management. It is impossible to handle every case cleanly if we
> >> do micro-management at one level. Let these orchestrators behave like
> >> people managers:
> >>
> >>      Hey, this is the user's configuration (network config, CPU,
> >>      memory, disk, etc.),
> >>      This is what I have with my available facilities (physical
> >>      infrastructure),
> >>      We need to realize an execution plan (orchestration flow),
> >>      Chiradeep, I need you to work on the network (resource
> >>      realization),
> >>      Kelven, I need you to work on storage (resource realization),
> >>      Do whatever you need to, you have access to the lab (service
> >>      callbacks),
> >>      but please fulfill the plan (try to keep the high-level
> >>      orchestration flow intact).
> >>
> >> Kelven
> >>
> >>
> >> > -----Original Message-----
> >> > From: Kelven Yang [mailto:kelven.yang@citrix.com]
> >> > Sent: Thursday, May 31, 2012 11:07 PM
> >> > To: cloudstack-dev@incubator.apache.org
> >> > Subject: RE: making VM startup more fine-grained
> >> >
> >> > > Another way is to modify the specific hypervisor resource to do
> >> > > something just after creating the vifs but prior to starting the vm.
> >> >
> >> > I would go with this way. I'm proposing bi-directional communication
> >> > between the resource agent and the CloudStack kernel. Let the
> >> > CloudStack kernel manage only the meta database for network
> >> > configuration, virtual-to-physical mapping configuration, etc.;
> >> > this is information that is generic, stable, and independent of the
> >> > underlying resource realization technologies. Let resource
> >> > provisioning orchestrators manage and orchestrate the process at the
> >> > flow level, but leave the resource realization details to down-level
> >> > components. If a down-level component needs to access the
> >> > configuration data related to the operation, it calls back into the
> >> > service API provided by the CloudStack kernel.
> >> >
> >> > In this SDN example, the overall orchestration flow should not be
> >> > affected by its implementation details; changes can be scoped at the
> >> > resource level as long as the resource has access to the information
> >> > it needs from the common service API provided by the CloudStack
> >> > kernel.
> >> >
> >> > Kelven
> >> >
> >> >
> >> >
> >> >
> >> >
> >> > > -----Original Message-----
> >> > > From: Chiradeep Vittal [mailto:Chiradeep.Vittal@citrix.com]
> >> > > Sent: Thursday, May 31, 2012 10:17 PM
> >> > > To: CloudStack DeveloperList
> >> > > Subject: making VM startup more fine-grained
> >> > >
> >> > > I was helping someone with their integration of an SDN controller
> >> > > with CloudStack. The requirement was that the SDN controller needed
> >> > > the uuid of the virtual interface (vif) of the virtual machine so
> >> > > that it could plug it into the right softswitch, manage the vif,
> >> > > etc. This vif uuid is generated by the XenServer.
> >> > >
> >> > > My recommendation was to write a plugin (implement NetworkElement)
> >> > > that would get the vif uuid after the vm started by making a XAPI
> >> > > call (via the agent manager) and then call the SDN controller API
> >> > > with this value.
> >> > > The response:
> >> > > "Unfortunately, the mechanism you describe wouldn't be sufficient
> >> > > as
> >> > we
> >> > > would require the the VIF uuid before the VM boots, otherwise there
> >> > might
> >> > > be a race condition where sometimes VMs will boot up and lack
> >> > > network connectivity and therefore might not even receive their DHCP
> >> > > addresses and such.
> >> > > "
> >> > > Currently, when CloudStack starts a VM, all information regarding
> >> > > the VM (including nics and storage) is passed down in a single
> >> > > StartCommand to the hypervisor resource. The hypervisor resource
> >> > > (e.g., CitrixResourceBase or LibVirtComputingResource) takes
> >> > > appropriate actions to create the vifs, plug them into the vm, and
> >> > > start the vm.
> >> > >
> >> > > One way to solve the integration problem would be to split the
> >> > > StartCommand into multiple commands, e.g., CreateVif, CreateVolume,
> >> > > CreateVm, StartVm. This changes the agent API and affects all
> >> > > hypervisor resources.
> >> > > Another way is to modify the specific hypervisor resource to do
> >> > > something just after creating the vifs but prior to starting the vm.
> >> > > A third way is to split the agent api into 2 commands: CreateVm and
> >> > > StartVm.
> >> > >
> >> > > Thoughts?
> >> > > --
> >> > > Chiradeep
> >
>
>
