cloudstack-users mailing list archives

From Remi Bergsma <RBerg...@schubergphilis.com>
Subject Re: Network architecture
Date Wed, 05 Jul 2017 16:36:00 GMT
Hi,

My advice is to make it as resilient as possible while keeping it simple. Using a single 10G
NIC towards primary storage means all your VMs will go down, be halted, or risk corruption when
that switch is rebooted for maintenance, dies, etc. I’d always use an MLAG/port channel
with 2x10G towards different switches. Then you can also run them active/active if your switches
support it. We’re using Arista, and that handles this well. Having redundancy on public
without redundancy on the backend doesn’t really help, in my opinion.
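On a Linux host, the 2x10G MLAG setup described above can be sketched roughly as an LACP bond; this is a minimal illustration, not from the thread, and the interface names (ens1f0/ens1f1) and bond name are placeholders:

```shell
# Minimal sketch: 802.3ad (LACP) bond over two 10G NICs, assuming the
# switch side is configured as an MLAG/port-channel across two switches.
# Interface names ens1f0/ens1f1 are placeholders for your 10G ports.
ip link add bond10g type bond mode 802.3ad miimon 100 lacp_rate fast
ip link set ens1f0 down
ip link set ens1f1 down
ip link set ens1f0 master bond10g
ip link set ens1f1 master bond10g
ip link set bond10g up
ip link set ens1f0 up
ip link set ens1f1 up

# Verify both slaves joined the same aggregator
cat /proc/net/bonding/bond10g
```

With mode 802.3ad both links carry traffic (active/active) as long as the switch pair presents itself as one logical port channel; otherwise fall back to active-backup mode.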

Is there a specific reason to use XenServer? KVM is very mature these days and I’d recommend
it over XenServer. I have hundreds of both running, and in my experience KVM is faster on the
same hardware and has fewer issues to deal with. XenServer will work, for sure. I just think
KVM (for example on CentOS 7) will give you a better experience.
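If you want to evaluate that route, a quick sanity check that a CentOS 7 box is ready to act as a KVM hypervisor (generic Linux/libvirt checks, not CloudStack-specific) could look like:

```shell
# Confirm hardware virtualization is exposed by the CPU
grep -Ec '(vmx|svm)' /proc/cpuinfo   # a count > 0 means VT-x/AMD-V is present

# Confirm the kvm kernel modules are loaded (expect kvm_intel or kvm_amd)
lsmod | grep kvm

# libvirt ships a validator that checks the whole virtualization stack
virt-host-validate qemu
```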

Regards,
Remi



On 04/07/2017, 22:15, "Grégoire Lamodière" <g.lamodiere@dimsi.fr> wrote:

    Dear All,
    
    In the process of implementing a new CS advanced zone (4.9.2), I am wondering about the
best network architecture to implement.
    Any idea / advice would be highly appreciated.
    
    1/ Each host has 4 networks adapters, 2 x 1 Gbe, 2 x 10 Gbe
    2/ The PR Store is nfs based 10 Gbe
    3/ The sec Store is nfs based 10 Gbe
    4/ Maximum network offering is 1 Gbit to Internet
    5/ Hypervisor Xen 7
    6/ Hardware Hp Blade c7000
    
    Right now, my choice would be :
    
    1/ Bond the 2 gigabit network cards and use the bond for mgmt + public
    2/ Use 1 10 Gbe for storage network (operations on sec Store)
    3/ Use 1 10 Gbe for guest traffic (and pr store traffic by design)
    
    This architecture sounds good in terms of performance (using 10 Gbe where it makes sense,
redundancy on mgmt + public with the bond).
    
    Another option would be to bond the 2 10 Gbe interfaces and use Xen labels to manage
storage and guest traffic on the same physical network. This choice would give us failover on
storage and guest traffic, but I am wondering whether performance would be badly affected.
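For reference, the bonded-10G option can be sketched with the XenServer xe CLI; the UUIDs and network names below are placeholders to be looked up on the host, not values from this thread:

```shell
# Sketch: create a bond network, then an LACP bond over the two 10G PIFs.
# Replace the <...> placeholders with values from 'xe pif-list' / 'xe network-list'.
xe network-create name-label=bond-10g
xe bond-create network-uuid=<bond-network-uuid> \
   pif-uuids=<pif-uuid-10g-1>,<pif-uuid-10g-2> mode=lacp
```

Separate networks attached on top of the bond (one labelled for storage, one for guest traffic) then give CloudStack distinct traffic labels while sharing the same physical pair.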
    
    Do you have any feedback on this ?
    
    Thanks all.
    
    Best Regards.
    
    ---
    Grégoire Lamodière
    T/ + 33 6 76 27 03 31
    F/ + 33 1 75 43 89 71
    
    
