From: Erik Weber
To: users@cloudstack.apache.org
Date: Sun, 22 Mar 2015 22:35:52 +0100
Subject: Re: Ways to monitor Virtual Router disk space

On Sat, Mar 21, 2015 at 2:20 PM, Rene Moser wrote:

> Hi Erik
>
> On 03/20/2015 09:17 PM, Erik Weber wrote:
>
>> I've had a few incidents where conntrack logging has filled the /var
>> partition and broken provisioning of new VMs (unable to save password).
>>
>> And this got me thinking that there must be a way to monitor VR disk
>> space..
>
> We have had the same problem.
>
> We created some tools for that a while ago, like
> https://github.com/swisstxt/cloudstack-nagios, which help you monitor
> CloudStack VRs in nagios or icinga.
>
> But recently we switched to Ansible for managing the running VRs
> (security updates, config changes, package installs). So you can
> basically write a playbook that sets up the monitoring on the VRs.
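The check itself could presumably be as simple as something like this
(untested sketch; the 80% threshold is arbitrary and the nagios-style exit
codes are just one convention):

  #!/bin/sh
  # Warn when /var usage on the VR crosses a threshold.
  THRESHOLD=80
  USAGE=$(df -P /var | awk 'NR==2 { sub("%", "", $5); print $5 }')
  if [ "$USAGE" -ge "$THRESHOLD" ]; then
      echo "CRITICAL - /var at ${USAGE}% (threshold ${THRESHOLD}%)"
      exit 2
  fi
  echo "OK - /var at ${USAGE}%"
  exit 0

The playbook's job would then mostly be distributing that to every router
and wiring it into nagios/icinga or cron.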
> I created an example project. It uses a "dynamic inventory", fetching all
> the routers through the API. See
> https://github.com/resmo/ansible-cloudstack-routers
>
> You can run the playbooks on a schedule from a cronjob, or manually, using
> check mode (aka dry-run) to see what would have changed, and you are also
> able to limit the targets, e.g. updating the backup routers first and then
> the masters, etc.
>
> Hope that helps :)

I do have a small problem though. I'm not entirely sure if it's my setup or
if it usually is like this, but here goes..

My hypervisors have 6 interfaces, eth0-eth5. They are bonded in pairs in the
following way:

eth0 + eth1 = xapi2, label=cloud-private, usage=management network on native
vlan, public network on tagged vlan
eth2 + eth3 = xapi0, label=cloud-backup, usage=guest network, currently not
in use
eth4 + eth5 = xapi1, label=cloud-guest, usage=guest network, vlan tagged

Additionally I have the xapi3 bridge, which consists only of virtual
interfaces (i.e. systemvm interfaces) and no physical interfaces (see the PS
below for how I'm checking this). That makes it really hard to access any
systemvm from anything other than the actual hypervisor host that is running
the VM.

This is running on CCP 4.3.2 if it matters, and it's in production, so I'm
hesitant to mess with it..

Any idea if this is how it's supposed to be, or if something is fubar in my
setup? If this is how it's supposed to be, how do others access their
systemvms from outside the hypervisor?

--
Erik
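PS: for completeness, the bridge layout above can be inspected on the
XenServer host itself with plain xe/brctl (nothing CloudStack-specific):

  # map the XAPI network labels to their Linux bridges
  xe network-list params=name-label,bridge
  # show which physical NICs and VIFs are attached to a given bridge
  brctl show xapi3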