cloudstack-dev mailing list archives

From John Burwell <>
Subject Re: SSVM Network Configuration Issue
Date Wed, 05 Dec 2012 17:10:51 GMT

I was wondering if anyone else is experiencing this problem when using secondary storage on
a devcloud-style VM with host-only and NAT adapters.  One aspect of this issue that seems
interesting is the following route table from the SSVM:

root@s-5-TEST:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
                                                UGH   0      0        0 eth1
                *                               U     0      0        0 eth2
                *                               U     0      0        0 eth1
                *                               U     0      0        0 eth3
link-local      *                               U     0      0        0 eth0
default                                         UG    0      0        0 eth2
(the destination and gateway addresses were stripped when this message was archived)

In particular, the gateways for the management and guest networks do not match the
configuration provided to the management server (i.e. the gateway the SSVM reports for
each network is not the gateway configured for that network).  With this configuration,
the SSVM has a socket connection to the management server, but is in alert state.  Finally,
when I remove the host-only NIC and use only a NAT adapter, the SSVM's networking works as
expected, leading me to believe that the segregated network configuration is at the root of
the problem.
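One quick sanity check on the SSVM is which NIC carries the default route, since outbound
traffic should leave through the NAT-side (public) interface.  A minimal sketch of that
check; the route output below is illustrative sample data, not the actual SSVM addresses
(which were stripped from this archive):

```shell
# Sample `route -n`-style output; 10.0.3.2 is a placeholder NAT gateway,
# not an address taken from the SSVM in question.
route_table='Destination     Gateway         Genmask         Flags Metric Ref Use Iface
default         10.0.3.2        0.0.0.0         UG    0      0   0   eth2
link-local      *               255.255.0.0     U     0      0   0   eth0'

# Pick out the interface ($NF, the last field) on the "default" row.
default_iface=$(printf '%s\n' "$route_table" | awk '$1 == "default" { print $NF }')
echo "$default_iface"
```

On a live SSVM the same awk filter can be fed from `route -n` directly; if the default
route does not point at the NAT adapter, outbound connections will fail the way described
above.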

Until I can get the networking on the SSVM configured, I am unable to complete the testing
of the S3-backed Secondary Storage enhancement.
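For readers without access to the zone1-interfaces attachment (which was not preserved in
this archive), a devcloud-style VM of the kind described below typically pairs a static
host-only NIC with a DHCP NAT NIC.  A sketch of such an /etc/network/interfaces, with
placeholder addresses that are assumptions rather than the actual configuration:

```
# Sketch of a devcloud-style /etc/network/interfaces.
# All addresses are placeholders; the real zone1-interfaces file was attached
# to the original message and is not reproduced here.
auto lo
iface lo inet loopback

# eth0: VirtualBox host-only adapter, static address for management/guest traffic
auto eth0
iface eth0 inet static
    address 192.168.56.10      # placeholder host-only address
    netmask 255.255.255.0

# eth1: VirtualBox NAT adapter, DHCP, provides outbound Internet access
auto eth1
iface eth1 inet dhcp
```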

Thank you for your help,

On Dec 3, 2012, at 4:46 PM, John Burwell <> wrote:

> All,
> I am setting up a multi-zone devcloud configuration on VirtualBox 4.2.4 using Ubuntu
> 12.04.1 and Xen 4.1.  I have configured the base management server VM (zone1) to serve
> as both zone 1 and the management server (running MySQL), with eth0 as a host-only
> adapter with a static IP and eth1 as a NAT adapter (see the attached zone1-interfaces
> file for the exact network configuration on the VM).  The management and guest networks
> are configured as follows:
> Zone 1
> Management: gw dns (?)
> Guest: gw dns
> Zone 2
> Management: gw dns (?)
> Guest: gw dns
> The management server deploys and starts without error.  I then populate it using the
> attached Marvin configuration file (zone1.devcloud.cfg) and restart the management
> server so that the global configuration option changes take effect.  Following the
> restart, the CPVM and SSVM start without error.  Unfortunately, they drop into alert
> status, and the SSVM is unable to connect outbound through the guest network (very
> important for my tests because I am testing S3-backed secondary storage).
> From the diagnostic checks I have performed on the management server and the SSVM, it
> appears that the daemon on the SSVM is connecting back to the management server.  I have
> attached a set of diagnostic information from the management server
> (mgmtsvr-zone1-diagnostics.log) and the SSVM (ssvm-zone1-diagnostics.log) that includes
> the results of ifconfig, route, netstat, and ping checks, as well as other information
> (e.g. the contents of /var/cache/cloud/cmdline on the SSVM).  Finally, I have attached
> the vmops log from the management server (vmops-zone1.log).
> What changes need to be made to the management server configuration in order to start
> an SSVM that can reach the secondary storage NFS volumes, the management server, and
> hosts on the Internet?
> Thanks for your help,
> -John
> <ssvm-zone1-diagnostics.log>
> <vmops-zone1.tar.gz>
> <mgmtsvr-zone1-diagnostics.log>
> <zone1-interfaces>
> <zone1.devcloud.cfg>
