Subject: Re: Adding LXC support to Cloudstack
From: Alex Karasulu
To: cloudstack-dev@incubator.apache.org
Date: Fri, 1 Feb 2013 12:23:59 +0200

Hi Chiradeep,

On Thu, Jan 31, 2013 at 11:44 PM, Chiradeep Vittal <
Chiradeep.Vittal@citrix.com> wrote:

> Any updates / help?
>

Just as an update, Phong has made some progress creating LXC-based virtual
machines via the libvirt interface. I haven't caught up to him myself; I've
just started stepping through the code to see how it works. We have weekly
meetings on Tuesdays to see where we are, and I'll see about getting a
formal update to the list by then.

> I'd like to point out that the secondary storage process
> (NfsSecondaryStorageResource) can run outside a system vm as well (bare
> metal). It has an "inSystemVm" flag that turns on/off various things.
>

This is good to know. I know Phong and I both had some questions about
storage matters.
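For anyone on the list curious what this looks like at the libvirt level,
here is a rough sketch of creating an LXC domain through the Python
bindings. To be clear, this is just an illustration, not code from Phong's
branch, and the container name, memory size, and rootfs path are made up:

    # Illustrative sketch only -- not CloudStack code. The container name,
    # memory size, and rootfs path are assumptions made for the example.
    import libvirt

    DOMAIN_XML = """
    <domain type='lxc'>
      <name>example-container</name>
      <memory unit='KiB'>524288</memory>
      <os>
        <type arch='x86_64'>exe</type>
        <init>/sbin/init</init>
      </os>
      <devices>
        <filesystem type='mount'>
          <source dir='/srv/containers/example/rootfs'/>
          <target dir='/'/>
        </filesystem>
        <console type='pty'/>
      </devices>
    </domain>
    """

    # Note the lxc:/// URI -- same libvirt API as qemu:///system, just a
    # different driver underneath.
    conn = libvirt.open('lxc:///')
    dom = conn.createXML(DOMAIN_XML, 0)  # defines and starts a transient domain
    print('Running container domain: %s' % dom.name())
    conn.close()

The same XML can also be handed to virsh with a connection URI of lxc:///
if you just want to poke at the driver from the command line.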
> Alternatively you can run LocalSecondaryStorageResource instead -- this
> executes inside the management server and expects the NFS server to be
> mounted on the management server.
> But not all features are supported (esp. zone-to-zone copy).
>
> With the storage refactor, you may not even need either resource as long
> as all you need is to copy images to primary storage from some store
> (e.g., a web server).
>

Thanks for the heads up and offer to help. After meeting with Phong next
week we'll report back to the list.

Regards,
Alex

> On 1/8/13 4:42 PM, "Alex Karasulu" wrote:
>
> >On Wed, Jan 9, 2013 at 1:25 AM, Phong Nguyen wrote:
> >
> >> Thank you all for your responses.
> >>
> >> Chip: I have started a design document and will keep it updated with
> >> our discussions.
> >>
> >> https://cwiki.apache.org/confluence/display/CLOUDSTACK/LXC+Support+in+Cloudstack
> >>
> >> Chiradeep: I think option #2 as you have suggested is a good idea. I'll
> >> be looking at this part soon in my dev setup, thanks for the advice.
> >>
> >> Alex: Would be great to work with you if you are interested.
> >
> >Yes, I'll contact you offline for minor coordination details and every so
> >often we can report back to the mailing list.
> >
> >> In terms of collaborating, since I'm a non-committer, would the best
> >> option be to develop on github? I'm assuming branch commit privileges
> >> is only for committers?
> >
> >Yep but with git it makes little difference.
> >
> >> Thanks,
> >> -Phong
> >>
> >> On Tue, Jan 8, 2013 at 1:47 AM, Chiradeep Vittal <
> >> Chiradeep.Vittal@citrix.com> wrote:
> >>
> >> > On 1/7/13 1:17 PM, "Alex Karasulu" wrote:
> >> >
> >> > >On Mon, Jan 7, 2013 at 11:15 PM, Alex Karasulu wrote:
> >> > >
> >> > >> On Mon, Jan 7, 2013 at 11:13 PM, Alex Karasulu wrote:
> >> > >>
> >> > >>> Hi Phong,
> >> > >>>
> >> > >>> On Mon, Jan 7, 2013 at 10:02 PM, Phong Nguyen wrote:
> >> > >>>
> >> > >>>> Hi,
> >> > >>>>
> >> > >>>> We are interested in adding LXC support to Cloudstack.
> >> > >>>
> >> > >>> I've also been interested in Cloudstack support for LXC. I checked
> >> > >>> a few days ago for it and was disappointed when I could not find
> >> > >>> it but found support for it in OpenStack instead :P. I wanted to
> >> > >>> inquire about adding LXC support thinking this might be a good
> >> > >>> starting point for my getting involved in the code. At this point,
> >> > >>> I have nothing further to contribute besides the link you already
> >> > >>> found, but I thought if others saw more people interested then LXC
> >> > >>> support might be considered.
> >> > >>
> >> > >> Here's a bit more chatter on this topic but as we see it's not been
> >> > >> implemented. Rip for the picking ...
> >> > >>
> >> > >> http://goo.gl/x60At
> >> > >
> >> > >s/Rip/Ripe/ damn autocorrect on pad.
> >> > >
> >> > >>>> I've searched around for container support for Cloudstack and was
> >> > >>>> able to find one posting related to OpenVZ (over a year ago):
> >> > >>>>
> >> > >>>> http://sourceforge.net/mailarchive/message.php?msg_id=28030821
> >> > >>>
> >> > >>> BTW OpenVZ is great stuff but I've found the fact that you need a
> >> > >>> custom Kernel a bit of a problem.
> >> > >>> LXC is much better in this sense since it's already present in
> >> > >>> every kernel past 2.6.26 (or 2.6.29?) but that's besides the
> >> > >>> point of this thread. Sorry for digressing.
> >> > >>>
> >> > >>>> Is there any current, on-going, or future work planned in this
> >> > >>>> area? Are there any architectural changes since then that would
> >> > >>>> affect the suggestions in this posting? Any other suggestions
> >> > >>>> greatly appreciated.
> >> > >>>
> >> > >>> I too am interested in these details.
> >> > >>>
> >> > >>> Thanks,
> >> > >>> Alex
> >> >
> >> > I like the concept of more hypervisors being supported!
> >> > Having said that, the most perplexing thing that stumps people on
> >> > such a quest is the need to have a system vm image for the new
> >> > hypervisor.
> >> >
> >> > There's a couple of approaches for this:
> >> > 1. Assume a multi-hypervisor zone with enough XS/KVM/VMWare
> >> >    hypervisors to run the standard system vm image
> >> > 2. Make the system vm optional. This requires some code changes (not
> >> >    major)
> >> >    - make the console proxy optional
> >> >    - run the secondary storage daemon on baremetal (next to the
> >> >      management server)
> >> > Option #2 will suffice for running vms without complex network
> >> > services.
> >
> >--
> >Best Regards,
> >-- Alex

--
Best Regards,
-- Alex