cloudstack-dev mailing list archives

From Justin Grudzien <>
Subject Re: Network architecture question
Date Wed, 10 Apr 2013 13:43:00 GMT
I looked at security groups and I am not sure how they solve my problem. Sure, they provide
guest isolation, but that is through the virtual router, correct? The underlying physical network
--outside of CloudStack-- is still layer 2, and that is what I am concerned with. When defining
which IPs my guests sit on, CloudStack assumes those are available, or tagged, on every
host in my zone. If every host is tagged with the guest network, then broadcast packets,
like ARP, will hit every box, regardless of whether a VM runs on it at all. My network engineers
are worried that any kind of broadcast storm, or spanning-tree loop, could take the whole
cloud down. Does this make sense, or am I still missing something?

What we are looking at is creating a zone per physical rack of servers, implementing the shared
network offering. This allows my underlying network to be layer 3 between cabinets and limits
my layer 2 guest traffic to far fewer servers. Between cabinets I will use routing for VMs
to talk to each other. One problem this introduces is that CloudStack doesn't let me mount
the same secondary storage for images, so I have to replicate that data. It would be nice to
be able to mount the images across all zones but leave the snapshots local to the zone.

We have been intensively building and rebuilding CloudStack for the last three weeks, and nowhere
have I seen the ability to pin a guest subnet to a rack (pod) of servers. This is what suggests
that the guest networks must be tagged on all physical host ports, and why I am concerned about
the large layer 2 domain.

Sorry this was long-winded; some of these concepts are difficult to convey over email.


Sent from my iPhone

On Apr 9, 2013, at 12:26 PM, Chiradeep Vittal <> wrote:

> You can do bonded NICs in a basic zone. The limitation with a basic zone is
> that the VMs cannot have multiple NICs. Did you need multiple NICs for
> your VMs?
> If you need advanced network services such as static NAT and load
> balancing, advanced networking is probably your best bet (currently,
> unless you want to invest in a Netscaler for these services).
> Not sure that VXLAN will solve your problems since that has scaling
> problems as well. On vSphere an NX1000v DVS can only handle about 64
> hypervisors IIRC.
> On 4/9/13 5:39 AM, "Justin Grudzien" <> wrote:
>> We have 2 pairs of bonded 10G NICs on each box. Wouldn't that require an
>> advanced network? Is it possible to do the security groups with small L2
>> networks in advanced networking?
>> Justin 
>> Sent from my iPhone
>> On Apr 9, 2013, at 12:38 AM, Chiradeep Vittal
>> <> wrote:
>>> Have you considered using a basic zone?
>>> With security groups you can have *lots* (thousands of) with very small
>>> L2
>>> networks.
>>> On 4/8/13 10:28 PM, "Justin Grudzien" <> wrote:
>>>> My team has been working for three weeks with CloudStack architecture
>>>> design and we are struggling to put together a network architecture
>>>> that
>>>> we feel will scale. From everything I can tell, CloudStack requires a
>>>> very large layer 2 network when using shared guest networks. We are
>>>> looking to deploy almost a thousand physical hosts across 25 cabinets
>>>> with over 4000 VMs in the next 18 months and having a broadcast domain
>>>> this large feels problematic.
>>>> How have others solved this problem? I don't have a need or a desire
>>>> for
>>>> isolation and even if I had 100 guest networks I would still have to
>>>> tag
>>>> their VLANs into every host port. There doesn't seem to be a way to
>>>> tie a
>>>> network to anything smaller than a zone.
>>>> One solution we are looking into is Cisco's 1000v and utilizing VXLANs.
>>>> This will allow us to scale down the broadcast domains. I don't think
>>>> CloudStack has support for configuring VXLAN settings? Any
>>>> comments
>>>> or suggestions would be appreciated.
>>>> Justin
