cloudstack-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CLOUDSTACK-8956) NSX/Nicira Plugin does not support NSX v4.2.1
Date Wed, 18 Nov 2015 14:27:11 GMT

    [ https://issues.apache.org/jira/browse/CLOUDSTACK-8956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15011074#comment-15011074
] 

ASF GitHub Bot commented on CLOUDSTACK-8956:
--------------------------------------------

Github user DaanHoogland commented on the pull request:

    https://github.com/apache/cloudstack/pull/935#issuecomment-157727789
  
    @nvazquez I tried to mv and copy the jar in the job but it doesn't work. Now trying to
see if your link yields a newer version than our job has


> NSX/Nicira Plugin does not support NSX v4.2.1
> ---------------------------------------------
>
>                 Key: CLOUDSTACK-8956
>                 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8956
>             Project: CloudStack
>          Issue Type: Bug
>      Security Level: Public(Anyone can view this level - this is the default.) 
>          Components: VMware
>    Affects Versions: 4.4.0, 4.5.0, 4.4.1, 4.4.2, 4.4.3, 4.5.1, 4.4.4
>         Environment: OS: RHEL 6.6
>            Reporter: Nicolas Vazquez
>             Fix For: 4.5.1, 4.6.0
>
>
> h3. Description of the problem:
> Prior to version 4.2, Nicira/VMware NSX used a variation of Open vSwitch (OVS) as the means of
integrating SDN into the hypervisor layer. The CloudStack NiciraNVP plugin was written to support
OVS as a bridge to NSX.
> In version 4.2, VMware introduced NSX vSwitch as a replacement for OVS on ESX hypervisors.
It is a fork of the distributed vSwitch leveraging a recent ESX feature called opaque
networks. Because of that change, the current version of the NiciraNVP plugin does not support
versions of NSX-MH above 4.2, specifically in vSphere environments. The proposed fix will detect
the NVP/NSX API version and use the proper support for ESX hypervisors.
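The version gate described above can be sketched as follows. This is a minimal illustration, not the actual plugin code; the class and method names are hypothetical:

```java
// Hypothetical sketch: deciding between the legacy OVS path and the new
// opaque-network path based on the NVP/NSX API version string (e.g. "4.2.1").
public class NsxVersionCheck {
    // Returns true when the NSX-MH API version is 4.2 or later, i.e. when
    // ESX hosts expose NSX vSwitch opaque networks instead of OVS.
    public static boolean usesOpaqueNetworks(String apiVersion) {
        String[] parts = apiVersion.split("\\.");
        int major = Integer.parseInt(parts[0]);
        int minor = parts.length > 1 ? Integer.parseInt(parts[1]) : 0;
        return major > 4 || (major == 4 && minor >= 2);
    }
}
```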
> How vSphere hypervisor operations change when a VM is deployed onto an NSX-managed network:
> * Current mode. A port group named after the UUID of the CloudStack VM NIC is created on a local
standard switch of the hypervisor where the VM is starting. The VM NIC is attached to that port group.
> * New mode. No additional port group is created on the host. No port group cleanup is needed
after the VM/NIC is destroyed. The VM is attached to the first port group having the following attributes:
> ** opaqueNetworkId string "br-int"
> ** opaqueNetworkType string "nsx.network"
> If a port group with these attributes is not found, the deployment should fail with an exception.
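A minimal sketch of that lookup, with a plain OpaqueNetwork class standing in for the vim25 opaque-network data object; the class and method names are illustrative assumptions, not the actual plugin code:

```java
import java.util.List;

// Hypothetical sketch of the "new mode" lookup: scan the host's opaque
// networks for one with id "br-int" and type "nsx.network"; fail the
// deployment if none is present.
public class OpaqueNetworkLookup {
    // Stand-in for the vSphere SDK's opaque-network data object.
    public static class OpaqueNetwork {
        public final String id;
        public final String type;
        public OpaqueNetwork(String id, String type) { this.id = id; this.type = type; }
    }

    public static OpaqueNetwork findIntegrationBridge(List<OpaqueNetwork> networks) {
        for (OpaqueNetwork net : networks) {
            if ("br-int".equals(net.id) && "nsx.network".equals(net.type)) {
                return net; // first match wins
            }
        }
        // No matching opaque network on the host: deployment must fail.
        throw new IllegalStateException("No opaque network br-int/nsx.network found on host");
    }
}
```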
> h3. VMware vSphere API version from 5.1 to 5.5:
> vSphere API version 5.5 introduced [OpaqueNetworks|https://www.vmware.com/support/developer/converter-sdk/conv55_apireference/vim.OpaqueNetwork.html].
> Its description says: 
> bq. This interface defines an opaque network, in the sense that the detail and configuration
of the network is unknown to vSphere and is managed by a management plane outside of vSphere.
However, the identifier and name of these networks is made available to vSphere so that host
and virtual machine virtual ethernet device can connect to them.
> In order to connect a VM's virtual ethernet device to the proper opaque network when
deploying a VM into an NSX-managed network, we first need to look for a particular opaque network
on the hosts. This opaque network's id has to be *"br-int"* and its type *"nsx.network"*.
> Also since vSphere API version 5.5, [HostNetworkInfo|https://www.vmware.com/support/developer/converter-sdk/conv55_apireference/vim.host.NetworkInfo.html#opaqueNetwork]
exposes the list of available opaque networks on each host.
> If the NSX API version >= 4.2, we look for an [OpaqueNetworkInfo|https://www.vmware.com/support/developer/converter-sdk/conv55_apireference/vim.host.OpaqueNetworkInfo.html]
which satisfies:
> * opaqueNetworkId = "br-int"
> * opaqueNetworkType = "nsx.network"
> If that opaque network is found, then we need to attach the VM's NIC through a backing which
supports it, so we use [VirtualEthernetCardOpaqueNetworkBackingInfo|https://www.vmware.com/support/developer/converter-sdk/conv55_apireference/vim.vm.device.VirtualEthernetCard.OpaqueNetworkBackingInfo.html]
setting:
> * opaqueNetworkId = "br-int"
> * opaqueNetworkType = "nsx.network"
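The attach step can be sketched as below. VirtualEthernetCardOpaqueNetworkBackingInfo is modeled here as a local stub mirroring the documented opaqueNetworkId/opaqueNetworkType properties, since the real class lives in the vSphere SDK (com.vmware.vim25); this is an illustration, not the plugin's actual code:

```java
// Hypothetical sketch of building the NIC backing that attaches a VM's
// virtual ethernet device to the NSX integration bridge.
public class NicBackingSketch {
    // Local stub mirroring the vim25 data object of the same name.
    public static class VirtualEthernetCardOpaqueNetworkBackingInfo {
        private String opaqueNetworkId;
        private String opaqueNetworkType;
        public void setOpaqueNetworkId(String id) { this.opaqueNetworkId = id; }
        public void setOpaqueNetworkType(String type) { this.opaqueNetworkType = type; }
        public String getOpaqueNetworkId() { return opaqueNetworkId; }
        public String getOpaqueNetworkType() { return opaqueNetworkType; }
    }

    // Build the backing with the attributes the issue description requires.
    public static VirtualEthernetCardOpaqueNetworkBackingInfo integrationBridgeBacking() {
        VirtualEthernetCardOpaqueNetworkBackingInfo backing =
                new VirtualEthernetCardOpaqueNetworkBackingInfo();
        backing.setOpaqueNetworkId("br-int");
        backing.setOpaqueNetworkType("nsx.network");
        return backing;
    }
}
```

In the real SDK this backing would then be assigned to the virtual ethernet card of the VM's NIC before reconfiguring the VM.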



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
