incubator-cloudstack-dev mailing list archives

From "Musayev, Ilya" <imusa...@webmd.net>
Subject RE: question on Distributed Virtual Switch support
Date Thu, 14 Mar 2013 19:03:02 GMT
One more reason to use DVS for everything:

If I have 16 hosts and I need to disjoin them from CS, I have to go to
each of the 16 hosts and remove the "cloud*" portgroups CS creates on
each local switch. I know this can be scripted :)

If it's in DVS, we just remove it in one place - and that's it.
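
FWIW, a rough sketch of how that cleanup could be scripted with pyVmomi
(the vCenter address/credentials and the 'cloud' prefix check are my
assumptions, not anything CS ships):

    # Sketch: remove CS-created "cloud*" portgroups from every host's
    # standard vSwitches. Portgroups with VMs still attached will fail
    # to delete, so evacuate them first.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab use only
    si = SmartConnect(host='vcenter.example.com', user='administrator',
                      pwd='secret', sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        net_sys = host.configManager.networkSystem
        for pg in net_sys.networkInfo.portgroup:
            if pg.spec.name.startswith('cloud'):
                print('Removing %s from %s' % (pg.spec.name, host.name))
                net_sys.RemovePortGroup(pgName=pg.spec.name)
    view.Destroy()
    Disconnect(si)

With a dvSwitch the same cleanup is a single Destroy_Task() on the
switch object, which is exactly the point above.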

> -----Original Message-----
> From: Sateesh Chodapuneedi [mailto:sateesh.chodapuneedi@citrix.com]
> Sent: Monday, March 11, 2013 6:13 AM
> To: cloudstack-dev@incubator.apache.org; Hari Kannan
> Cc: Musayev, Ilya
> Subject: RE: question on Distributed Virtual Switch support
> 
> > From: Musayev, Ilya [mailto:imusayev@webmd.net]
> > Sent: 09 March 2013 08:56
> > To: Hari Kannan; cloudstack-dev@incubator.apache.org
> > Cc: Sateesh Chodapuneedi; Koushik Das; Anantha Kasetty
> > Subject: RE: question on Distributed Virtual Switch support
> >
> > Hari
> >
> > I don't want to be a selfish person and make that call.
> >
> > There are VMware best practices; the example I've given below is
> > considered common best practice. Technically, you really don't need
> > to use DVS for the vSwitch0 management portgroup because it is
> > created by default when ESXi is installed. DVS is a template switch
> > configuration with port accountability (and other features).
> > Typically, you use DVS to avoid the manual configuration/management
> > of virtual switches within a cluster - it's usually done for guest
> > VM networks, and usually you don't want to mix guest VM traffic with
> > the management network. Hence the hypervisor management network
> > resides on a separate vSwitch0, which comes by default, carrying
> > only the host's management traffic.
> >
> > This is common best practice, but people can get very fancy with
> > configs and I don't want to speak for the rest of the community.
> >
> > There may be customers who only have 2 NICs on their servers, and in
> > that case - if they use DVS - they won't be able to use CS. Also,
> > for most proof-of-concept work with CS, people tend to use basic
> > gear with 2 NICs in a lab; they won't be able to test CS if they use
> > DVS for everything, including the management net.
> >
> > In my humble opinion, it's certainly a needed feature for 4.2, and
> > while Sateesh remembers how it's done (fresh in mind), it would
> > probably make sense to add this feature sooner rather than later.
> >
> > I will leave it for someone else to make a judgement call on urgency.
> 
> Hari/Musayev, thank you for the feedback.
> I will extend the dvSwitch support to management traffic as well as
> storage traffic and make it configurable so that the admin can make the
> final choice.
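> 
> (A hypothetical illustration of "configurable", just to make the idea
> concrete - treat the label syntax below as my assumption, not a
> committed design: the physical network's VMware traffic label could
> name the backing switch and its type, e.g.
> 
>     vSwitch0,,vmwaresvs    <- management on the standard switch
>     dvSwitch0,,vmwaredvs   <- management on the distributed switch
> 
> so the admin picks the switch type per traffic type at zone creation.)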
> 
> +1 for PVLAN support, I can try to make the required backend changes
> for the VMware resource.
> What are the API calls that would be used to orchestrate PVLAN networks?
> Is there an FS elaborating the support?
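> 
> (For reference, on the vSphere API side a PVLAN map is pushed to the
> dvSwitch via ReconfigureDvs_Task. A minimal pyVmomi sketch - 'dvs' and
> the VLAN IDs are placeholders, and this is my sketch rather than the
> planned CS code:
> 
>     from pyVmomi import vim
> 
>     # A primary entry (promiscuous, mapped to itself) plus an isolated
>     # secondary, e.g. primary VLAN 200 with isolated secondary 201.
>     entries = [
>         vim.dvs.VmwareDistributedVirtualSwitch.PvlanMapEntry(
>             primaryVlanId=200, secondaryVlanId=200,
>             pvlanType='promiscuous'),
>         vim.dvs.VmwareDistributedVirtualSwitch.PvlanMapEntry(
>             primaryVlanId=200, secondaryVlanId=201,
>             pvlanType='isolated'),
>     ]
>     specs = [vim.dvs.VmwareDistributedVirtualSwitch.PvlanConfigSpec(
>                  operation='add', pvlanEntry=e) for e in entries]
>     cfg = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
>         configVersion=dvs.config.configVersion,
>         pvlanConfigSpec=specs)
>     dvs.ReconfigureDvs_Task(spec=cfg)  # returns a vim.Task
> 
> The dvPortgroup for the network would then reference the secondary
> VLAN through a PvlanSpec in its port config.)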
> 
> Regards,
> Sateesh
> 
> >
> > Thanks
> > Ilya
> >
> > Hari Kannan <hari.kannan@citrix.com> wrote:
> > Hi Ilya,
> >
> > Thanks for the feedback - so, did I understand it right that your
> > point of view is that mgmt. network on DVS is not a super-critical need?
> >
> > Hari
> >
> > -----Original Message-----
> > From: Musayev, Ilya [mailto:imusayev@webmd.net]
> > Sent: Friday, March 8, 2013 5:01 PM
> > To: cloudstack-dev@incubator.apache.org
> > Cc: Sateesh Chodapuneedi; Koushik Das; Anantha Kasetty
> > Subject: RE: question on Distributed Virtual Switch support
> >
> > Hari
> >
> > I gave a second thought to your request about having support for the
> > management network on DVS.
> >
> > Here are the use cases:
> >
> > By default, the hypervisors are deployed with a local vSwitch0 and a
> > management network portgroup.
> >
> > In most cases, if you have more than 2 NICs - assume it's 6-8 - then
> > the network breakdown is usually something like:
> >
> > 2 NICs (bonded) for vSwitch0
> > 2 NICs (bonded) for vMotion
> > 2-4 NICs (bonded) for Guest VMs - usually this is where you insert DVS.
> > 2 NICs (bonded) for storage - either a local or DVS switch - if no SAN.
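> >
> > (Side note: a quick pyVmomi sketch to audit which physical NICs back
> > which switch on a host - 'host' is a vim.HostSystem you've already
> > looked up, so treat it as a placeholder:
> >
> >     info = host.configManager.networkSystem.networkInfo
> >     for vsw in info.vswitch:           # standard vSwitches
> >         print('vSwitch %s uplinks: %s' % (vsw.name, list(vsw.pnic)))
> >     for proxy in info.proxySwitch:     # DVS proxy switches
> >         print('DVS %s uplinks: %s' % (proxy.dvsName, list(proxy.pnic)))
> >
> > The pnic values are device keys ending in the vmnic names.)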
> >
> > If your hypervisor only has 2 NICs, technically this is bad design,
> > but even so, you have to bond the 2 interfaces and use DVS for
> > everything, from management to vMotion to guest VM communication.
> > These are usually lab environments (at least in my case).
> >
> > While this is an important feature request, it will help a smaller
> > subset of customers who only use 2 NICs for everything. Looking
> > forward, VMware may decide to DVS everything at some point, and we
> > need this ability anyway.
> >
> > Regards
> > Ilya
> >
> > "Musayev, Ilya" <imusayev@webmd.net> wrote:
> > +1. MGMT is also part of DVS in our and other environments.
> > > -----Original Message-----
> > > From: Chip Childers [mailto:chip.childers@sungard.com]
> > > Sent: Friday, March 08, 2013 2:25 PM
> > > To: cloudstack-dev@incubator.apache.org
> > > Cc: Sateesh Chodapuneedi; Koushik Das; Anantha Kasetty
> > > Subject: Re: question on Distributed Virtual Switch support
> > >
> > > On Fri, Mar 8, 2013 at 2:20 PM, Hari Kannan <hari.kannan@citrix.com> wrote:
> > > > Hi Sateesh,
> > > >
> > > > As we increase the cluster size, I wonder whether not having the
> > > > management network on DVS might be an issue. I would strongly
> > > > suggest we consider this. I also spoke to some folks who are more
> > > > knowledgeable about customer implementations, and they also say
> > > > this would be an issue.
> > > >
> > > > As you know, we have a separate feature being discussed - support
> > > > for PVLAN - so, PVLAN support via DVS is a must-have requirement.
> > >
> > > +1 - yes please.
> 


