Subject: Re: Bandwidth Shaping - Ubuntu 12.04.3 KVM
From: Marty Sweet
To: dev@cloudstack.apache.org
Date: Mon, 28 Oct 2013 21:48:45 +0000

To add to my last reply, I have just created the following compute offerings as examples. All tests were run with the VM launched on the same hypervisor:

------------------------------------------------------------------------
=== Offering with 50Mbps ===
# tc class show dev vnet2
class htb 1:1 root prio 0 rate 51200Kbit ceil 51200Kbit burst 1593b cburst 1593b

Phy -> VM = 53Mbits/sec
VM -> Phy = 1.92Mbits/sec
------------------------------------------------------------------------
=== Offering with 100Mbps ===
# tc class show dev vnet2
class htb 1:1 root prio 0 rate 102400Kbit ceil 102400Kbit burst 1587b cburst 1587b

Phy -> VM = 101Mbits/sec
VM -> Phy = 2.04Mbits/sec
------------------------------------------------------------------------
=== Offering with 500Mbps ===
# tc class show dev vnet2
class htb 1:1 root prio 0 rate 512000Kbit ceil 512000Kbit burst 1536b cburst 1536b

Phy -> VM = 8.24Mbits/sec
VM -> Phy = 2.60Mbits/sec
------------------------------------------------------------------------
=== Offering with 1000Mbps ===
# tc class show dev vnet2
class htb 1:1 root prio 0 rate 1024Mbit ceil 1024Mbit burst 1408b cburst 1408b

Phy -> VM = 8.88Mbits/sec
VM -> Phy = 2.39Mbits/sec
------------------------------------------------------------------------
=== Offering with 0Mbps === (in the Network Rate field)
# tc class show dev vnet2
(No output)

Phy -> VM = 280Mbits/sec
VM -> Phy = 693Mbits/sec
------------------------------------------------------------------------

It appears that at <=100Mbps the inbound shaping works as expected; beyond that, the results are somewhat interesting. Setting 0Mbps has also given varied results: Phy -> another VM (on the same hypervisor) only gets 57-160Mbits/sec, with ~799Mbits/sec outbound from the VM.

Thanks,
Marty

On Mon, Oct 28, 2013 at 9:00 PM, Marty Sweet wrote:
> Yeah, from the libvirt website (http://libvirt.org/cgroups.html):
>
> Network tuning
>
> The net_cls is not currently used. Instead traffic filter policies are
> set directly against individual virtual network interfaces.
>
> However, when bandwidth limiting is applied, I can't see any obvious rules
> with any of the 'tc' commands.
>
> Thanks for your help on this,
> Marty
>
> On Mon, Oct 28, 2013 at 8:36 PM, Marcus Sorensen wrote:
>> It just uses the libvirt XML, which uses cgroups, which uses tc rules.
>>
>> On Mon, Oct 28, 2013 at 2:25 PM, Marty Sweet wrote:
>> > Hi Marcus,
>> >
>> > My earlier email mentioned those configurations; unfortunately they do
>> > not really comply with what was set out in the compute offering.
>> > After setting the compute offering network limit and stop/starting the
>> > VMs, the lines do not appear, and outbound traffic has returned to
>> > normal speeds, but inbound is proving an issue.
>> >
>> > I have also tried rebooting the hypervisor hosts, with no success.
>> > How is this traffic shaping implemented? Is it just via KVM and virsh,
>> > or does cloudstack run custom tc rules?
>> >
>> > Thanks,
>> > Marty
>> >
>> > On Mon, Oct 28, 2013 at 8:19 PM, Marcus Sorensen wrote:
>> >> Check the XML that was generated when the VM in question was started:
>> >>
>> >> # virsh dumpxml i-2-15-VM | egrep "inbound|outbound"
>> >>
>> >> See if the settings match what you put in your network offering or
>> >> properties (whichever applies to your situation).
>> >>
>> >> On Oct 28, 2013 1:44 PM, "Marty Sweet" wrote:
>> >> > Thanks for the links. While I have set 0 for all the properties, the
>> >> > following results still occur:
>> >> >
>> >> > Guest -> Other Server: >900Mbps (as expected)
>> >> > Other Server -> Guest (so inbound to the VM): varies depending on
>> >> > the hypervisor host: 121, 405, 233, 234Mbps
>> >> >
>> >> > Each hypervisor has 2 NICs in an LACP bond; this was working
>> >> > perfectly before 4.2.0 :(
>> >> >
>> >> > Thanks,
>> >> > Marty
>> >> >
>> >> > On Mon, Oct 28, 2013 at 2:44 PM, Marcus Sorensen wrote:
>> >> > > Yeah, the bandwidth limiting for KVM was dropped into 4.2. You
>> >> > > just need to tweak your settings, whether it's on network
>> >> > > offerings or global.
>> >> > >
>> >> > > On Mon, Oct 28, 2013 at 8:25 AM, Wei ZHOU wrote:
>> >> > > > Please read this article: http://support.citrix.com/article/CTX132019
>> >> > > > Hope this helps you.
>> >> > > >
>> >> > > > 2013/10/28 Marty Sweet
>> >> > > >
>> >> > > >> Hi Guys,
>> >> > > >>
>> >> > > >> Following my upgrade from 4.1.1 -> 4.2.0, I have noticed that
>> >> > > >> VM traffic is now limited to 2Mbits.
>> >> > > >> My compute offerings were already set to 1000 for the network
>> >> > > >> limit, and I have created new offerings to ensure this wasn't
>> >> > > >> the issue (this fixed it for someone on the mailing list).
>> >> > > >>
>> >> > > >> Is there anything that I am missing?
>> >> > > >> I can't remember reading about this as a bug fix or new
>> >> > > >> feature.
>> >> > > >> If there is a way to resolve or disable it, that would be most
>> >> > > >> appreciated - I have been going round in circles for hours.
>> >> > > >>
>> >> > > >> Thanks,
>> >> > > >> Marty
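[Editor's note on the tc output quoted above, not part of the original thread.] One pattern in the `tc class show` results stands out: the configured HTB burst shrinks as the rate grows (1593b at 50Mbps down to 1408b at 1024Mbit), staying around a single MTU. The tc-htb(8) man page advises that the burst buffer should hold at least rate/HZ bytes so the class can actually sustain its rate between timer ticks, which could explain why inbound throughput collapses above 100Mbps. A quick sketch comparing the observed bursts against that rule of thumb (HZ = 250 is an assumption; it is the stock CONFIG_HZ on many Ubuntu 12.04 server kernels):

```python
# Compare the burst values from the `tc class show dev vnet2` output in the
# thread against the tc-htb(8) rule of thumb: burst >= rate / HZ bytes.
# HZ = 250 is an assumed kernel timer frequency, not taken from the thread.

HZ = 250

def min_burst_bytes(rate_kbit, hz=HZ):
    """Rule-of-thumb minimum HTB burst (bytes) to sustain rate_kbit between ticks."""
    rate_bytes_per_sec = rate_kbit * 1000 // 8
    return rate_bytes_per_sec // hz

# (rate as shown by tc in Kbit, burst actually configured in bytes)
observed = [(51200, 1593), (102400, 1587), (512000, 1536), (1024000, 1408)]

for rate_kbit, burst in observed:
    need = min_burst_bytes(rate_kbit)
    print(f"rate {rate_kbit}Kbit: configured burst {burst}b, "
          f"rule-of-thumb minimum {need}b")
```

Every configured burst is orders of magnitude below the rule-of-thumb minimum, and it gets smaller as the rate rises. If this is the cause, manually raising the burst on a test VM's interface (for example, something along the lines of `tc class change dev vnet2 classid 1:1 htb rate 512mbit burst 256k`, adjusted to the actual class) should restore the expected inbound throughput and would confirm the diagnosis.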