Message-ID: <5045CC96.2090908@widodh.nl>
Date: Tue, 04 Sep 2012 11:40:38 +0200
From: Wido den Hollander
To: cloudstack-dev@incubator.apache.org
Subject: Re: [Discuss] VM Snapshot
On 09/03/2012 03:47 PM, Mice Xia wrote:
> Dear all,
>
> Sorry for the late response. Here is a brief update on this feature,
> committed in the branch 'vm-snapshot'. Feedback is welcome.
>
> [Progress]
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/VM+Snapshots
> finished the major parts for XenServer/VMware
> ran some simple tests on VMware, XenServer free edition and XenServer
> enterprise edition
>
> [ToDo]
> continue to harden the existing code, especially exception handling
> add capacity/limit/usage related code
> unit tests / full testing
> KVM support (TBD, needs a patch to the libvirt Java bindings)

Reading the wiki I found this:

"For the current libvirt Java bindings, the APIs required by VM snapshots
lack arguments (they are not fully implemented). For example, CloudStack
needs to call the snapshot-create API with the --redefine parameter to
re-associate snapshot metadata with a VM. Solutions could be to either
implement this in libvirt or to work around it by calling virsh directly."

Please, do NOT call virsh! I'm in the process of getting those calls out
of the Agent. I'll dive into the libvirt Java bindings for this and fix
what is necessary; I've got some good contacts with the libvirt guys.

Which methods in libvirt-java are you referring to that need fixing? It
should be trivial to fix them.

Wido

>
> Regards
> Mice
>
> 2012/8/10 Matthew Patton
>
>> On Wed, 08 Aug 2012 21:51:50 -0400, Edison Su wrote:
>>
>>> From: Mice Xia [mailto:mice_xia@**tcloudcomputing.com]
>>>
>> ...
>>
>>> For the following scenarios I need some suggestions:
>>> a) 'VM snapshot, detach volume and attach it to another VM, roll back
>>> snapshot'
>>
>> This is not a problem. A volume and its (several) snapshots are
>> described in the HYPERVISOR's native metadata file closely associated
>> with the volume. If the hypervisor doesn't actually manage that
>> relationship and the associated housekeeping tasks, CloudStack has no
>> business trying to implement it.
>> The option is simply not available to the user.
>>
>> Cloud automation is NOT about inventing new things; it's about
>> automating what is currently possible within the capabilities of the
>> particular hypervisor and storage mixture at hand. Where cloud software
>> goes wrong is in thinking it's the master of everything. It's NOT, and
>> never will be. All configuration and runtime state must be pulled from
>> the actual devices (hypervisors, storage arrays, network gear, etc.)
>> because things will change underneath it. It is unforgivably naive to
>> assume there won't be outside influence. Of course we'd all like to
>> pretend that the cloud knows all and is always authoritative, but if
>> you write the software under that assumption, users will be mighty
>> ticked off when it's not the case and things break left and right.
>>
>>> b) 'VM snapshot, detach and destroy volume, roll back snapshot'
>>
>> That's not a valid operation. A snapshot is ALWAYS associated with its
>> parent 'disk'. If you delete the parent, all snapshots are deleted with
>> it.
>>
>>> Three candidate solutions that I can figure out now:
>>> 1) disallow detaching volumes if the specified VM has VM snapshots.
>>
>> Uh, no.
>>
>>> 2) allow snapshot/rollback; for a), this will result in two volumes,
>>> one attached to another VM, one attached to the VM that rolled back
>>> from the snapshot; for
>>
>> This is a matter for the hypervisor or storage array. AFAIK even ESX
>> doesn't let you do this, even if the parent disk is marked RO, since it
>> puts an exclusive lock on the volume. Storage arrays pull this off by
>> thin-cloning the source, so the VM thinks it has its own disk even when
>> it didn't start out that way.
>>
>> Again, CloudStack isn't about doing storage trickery. It's about making
>> the proper calls to the hypervisor or (maybe?) the storage array to do
>> such fancy cloning.
>> So if you want to provision a VM and attach a particular
>> volume/snapshot sequence that isn't already locked for use, then no
>> problem; treat it like you own it outright and launch. Any successive
>> creation/destruction of snapshots is the purview of that one active VM
>> and doesn't require any fancy footwork. If, on the other hand, the
>> source volume+snapshot is already open for use, then you have to ask
>> the storage provider for a thick or thin clone, and you lose the
>> ability to go back in time without trashing the private copy and
>> re-attaching/re-cloning to the original.
>>
>> The use case for cloud is 99.95% "dumb". Let's not complicate the
>> situation unnecessarily. If qcow/rbd/lvm can't do it with their
>> standard command set, then it's not available. If you're using
>> 3par/emc/netapp as the storage and they do have the proper calls, then
>> you can permit such things. Actually, let me rephrase that. Any fancy
>> cloning or snapshotting calls MUST be sent to the hypervisor only!! It
>> is up to the hypervisor to issue the qcow/rbd/lvm/netapp/emc commands
>> to get it done. Why? Hypervisor vendors actually test this stuff and
>> have the resources and incentive to make sure it works. Cloud software,
>> again, has no business trying to take over the hypervisor's role. If
>> KVM/Xen are deficient in some aspect, then fix the hypervisor; do not
>> try to use cloud software to band-aid the matter.
>>
>> This early in the CloudStack lifecycle, complicated or exotic
>> operations should probably require deliberate manual involvement
>> anyhow.
>>
>>> For VM-based snapshots, we should not allow the user to dynamically
>>> change (attach or detach) a VM's disks if there are VM-based
>>> snapshots taken on this VM.
>>
>> Again, no. It is not up to the cloud to manage/track volume/snapshot
>> state. It instead queries the hypervisor for that information.
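[Editorial illustration of the dispatch principle argued above: the orchestrator only decides which hypervisor-level command to invoke and never drives qcow/rbd/lvm tooling itself. This is a minimal sketch, not CloudStack code; the function name and flavor strings are hypothetical, and the real hypervisor CLIs take more arguments than shown.]

```python
# Hypothetical sketch: map a generic 'snapshot this VM' request onto the
# native command of the hypervisor flavor. Note that no branch ever shells
# out to qemu-img, rbd, or lvcreate directly -- that bookkeeping stays with
# the hypervisor, exactly as argued above.

def build_snapshot_command(flavor, vm_name, snap_name):
    """Return the hypervisor-level command (as an argv list) for taking a
    VM snapshot on the given hypervisor flavor."""
    if flavor == "kvm":
        # libvirt decides how to snapshot the underlying qcow2/rbd/lvm disks
        return ["virsh", "snapshot-create-as", vm_name, snap_name]
    if flavor == "xen":
        return ["xe", "vm-snapshot", f"vm={vm_name}",
                f"new-name-label={snap_name}"]
    if flavor == "esx":
        # vim-cmd actually takes a numeric VM id; simplified for the sketch
        return ["vim-cmd", "vmsvc/snapshot.create", vm_name, snap_name]
    raise ValueError(f"unsupported hypervisor flavor: {flavor}")

print(build_snapshot_command("kvm", "i-2-5-VM", "snap1"))
```

[If a storage array offers fancier cloning, that capability would be surfaced through the hypervisor branch for that host, not as a fourth code path talking to the array.]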
>>
>> It seems to me (admittedly as a very new member here) that enormous
>> feature creep is being introduced. Cloud automation is really fairly
>> straightforward.
>>
>> For any hypervisor:
>>   where is it (IP, logical/physical location)
>>   what flavor (KVM, Xen, ESX) and version - tells us which command set
>>     to use
>>   what network gear is attached, and what ports need to be modified to
>>     add/drop VLAN membership at the switch level
>>   what is its utilization
>>
>> For any storage provider:
>>   how to interface with it
>>   what storage containers are defined
>>   to which hypervisors they should be attached, and with what
>>     attachment parameters
>>
>> For any VM:
>>   to which account does it belong
>>   which disks it needs, and on which storage container to place them
>>   where to find the master disk templates (if any)
>>   what network bridges/portgroups it needs to have interfaces on
>>   what MAC to assign the various interfaces (or have the hypervisor
>>     auto-assign)
>>   what VLANs to assign to each interface or interface alias
>>   what IP address to assign to the interfaces
>>   are there any locality or performance factors to influence the
>>     selection of the deployment destination
>>
>> Basically, all the cloud does is dynamically create what amounts to a
>> VMware .vmx file or a libvirt instantiation script and run it against
>> the chosen hypervisor host, and, if needed, prep the hypervisor with
>> network underpinnings and attach appropriate storage in order to run
>> the guest. Yes, I realize I simplified a bit, but there isn't a whole
>> lot more to it.
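[Editorial note on the libvirt gap Wido raises earlier in the thread: "redefining" a snapshot means handing libvirt a previously saved <domainsnapshot> XML document together with the CREATE_REDEFINE flag, e.g. a snapshotCreateXML(xml, flags) call once the Java bindings expose the flags argument. The sketch below, which is not CloudStack or libvirt-java code, builds such a metadata document with only the Python standard library; the function name is hypothetical, and the flag constant mirrors libvirt's first snapshot-creation flag bit.]

```python
# Sketch of the metadata side of 'snapshot-create --redefine': the
# orchestrator saves a <domainsnapshot> document when the snapshot is
# taken, and later hands the same document back to libvirt with the
# REDEFINE flag to re-associate the snapshot with a VM.
import xml.etree.ElementTree as ET

# Mirrors libvirt's VIR_DOMAIN_SNAPSHOT_CREATE_REDEFINE (first flag bit)
VIR_DOMAIN_SNAPSHOT_CREATE_REDEFINE = 1 << 0

def snapshot_metadata_xml(snap_name, creation_time, state="running"):
    """Build minimal snapshot metadata XML, following libvirt's
    domainsnapshot format, for re-registering an existing snapshot."""
    root = ET.Element("domainsnapshot")
    ET.SubElement(root, "name").text = snap_name
    ET.SubElement(root, "creationTime").text = str(creation_time)
    ET.SubElement(root, "state").text = state
    return ET.tostring(root, encoding="unicode")

xml_doc = snapshot_metadata_xml("snap1", 1346751638)
print(xml_doc)
# This document plus the flag above are the two arguments the bindings
# would need to accept for the redefine path to work without virsh.
```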