cloudstack-users mailing list archives

From Erik Weber <terbol...@gmail.com>
Subject Re: Ways to monitor Virtual Router disk space
Date Sun, 22 Mar 2015 21:35:52 GMT
On Sat, Mar 21, 2015 at 2:20 PM, Rene Moser <mail@renemoser.net> wrote:

> Hi Erik
>
> On 03/20/2015 09:17 PM, Erik Weber wrote:
>
>> I've had a few incidents where conntrack logging has filled the /var
>> partition, breaking provisioning of new VMs (unable to save password).
>>
>> And this got me thinking that there must be a way to monitor VR disk
>> space..
>>
>
> We have had the same problem.
>
> We created some tooling for that a while ago:
> https://github.com/swisstxt/cloudstack-nagios, which helps you monitor
> CloudStack VRs in Nagios or Icinga.
>
> But recently we switched to Ansible for managing the running VRs
> (security updates, config changes, package installs). So you can
> basically write a playbook that sets up the monitoring on the VRs.
>
> I created an example project. It uses a "dynamic inventory" by fetching
> all the routers via the API. See
> https://github.com/resmo/ansible-cloudstack-routers
>
> You can run the playbooks on a cron schedule or manually. Check mode
> (aka dry-run) shows what would change, and you can also limit the
> targets, e.g. updating the backup routers first and then the masters.
>
> Hope that helps :)
>
>
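
For anyone else bitten by the conntrack issue: even a trivial cron job on
the VR would have caught my full /var. Something along these lines (just a
sketch -- the 90% threshold is arbitrary and it only prints, relying on
cron to mail any output; this is not what cloudstack-nagios actually
ships):

    #!/bin/sh
    # Warn when any partition on the VR crosses a usage threshold.
    THRESHOLD=90
    df -P | awk 'NR > 1 { sub(/%/, "", $5); print $5, $6 }' |
    while read pct mount; do
      if [ "$pct" -ge "$THRESHOLD" ]; then
        # cron mails anything written to stdout to the job owner
        echo "VR $(hostname): ${mount} is at ${pct}% usage"
      fi
    done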
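
And the run pattern described above maps onto plain ansible-playbook
flags. The inventory script, playbook and group names here are my
assumptions, not necessarily what the example project uses:

    # dry-run against everything the dynamic inventory returns
    ansible-playbook -i cloudstack.py site.yml --check

    # then apply for real, backup routers first, masters after
    ansible-playbook -i cloudstack.py site.yml --limit routers_backup
    ansible-playbook -i cloudstack.py site.yml --limit routers_master
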
I do have a small problem, though. I'm not entirely sure if it's my setup
or if it's normally like this, but here goes:

My hypervisors have 6 interfaces, eth0-eth5. They are bonded in pairs in
the following way:

eth0 + eth1 = xapi2, label=cloud-private, usage=management network on
native vlan, public network on tagged vlan
eth2 + eth3 = xapi0, label=cloud-backup, usage=guest network, currently not
in use
eth4 + eth5 = xapi1, label=cloud-guest, usage=guest network, vlan tagged

Additionally, I have the xapi3 bridge, which consists only of virtual
interfaces (i.e. systemvm interfaces) and no physical ones. That makes it
really hard to access any systemvm from anything other than the hypervisor
host that is actually running the VM.
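
For what it's worth, this is how the bridge membership can be checked on
the host (which of the first two applies depends on whether the host runs
the Linux bridge or the Open vSwitch backend):

    # Linux bridge backend: list interfaces attached to xapi3
    brctl show xapi3

    # Open vSwitch backend
    ovs-vsctl list-ports xapi3

    # from the XAPI side, an empty PIF-uuids confirms no physical NICs
    xe network-list bridge=xapi3 params=name-label,PIF-uuids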

This is running on CCP 4.3.2, if it matters, and it is in production, so
I'm hesitant to mess with it.

Any idea whether this is how it's supposed to be, or if something is fubar
in my setup? And if this is how it's supposed to be, how do others access
their systemvms from outside the hypervisor?

-- 
Erik
