From: Edison Su <Edison.su@citrix.com>
To: cloudstack-dev@incubator.apache.org
CC: Tiberiu Ungureanu
Date: Fri, 20 Jul 2012 10:24:00 -0700
Subject: RE: Support for Citrix StorageLink in Cloudstack

Copy_vhd_from_secondary:
1. Takes three parameters:
   - the URL of the template, e.g. nfs://your-secondary-storage/path/templatename.vhd
   - the SR uuid of the destination primary storage
   - the name-label of the VHD to be created
2. Mounts the secondary storage.
3. Creates a VHD file on the primary storage.
4. If the destination primary storage is on NFS or EXT, dd the template to the primary VHD.
5. If the destination primary storage is on another device, e.g. LVM, dd the template to the primary VHD.
   Caveat: also copy the last 512 bytes (the VHD footer) to the primary VHD, to make sure the primary VHD has a correct VHD footer.
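Not tested, but here is a rough sketch of the flow above for a block-backed primary SR, reusing the dom0 VBD trick from the patch quoted below. The variable names, the mount handling, and the use of vhd-util / xe vdi-create are my assumptions, not the real script's code:

#!/bin/bash
# sketch of the copy_vhd_from_secondary steps described above
templateurl=$1      # e.g. nfs://your-secondary-storage/path/templatename.vhd
sruuid=$2           # uuid of the destination primary storage SR
namelabel=$3        # name-label of the VHD to be created

# 2. mount the secondary storage
nfspath=${templateurl#nfs://}
nfsserver=${nfspath%%/*}
nfsdir=$(dirname "/${nfspath#*/}")
mntpt=$(mktemp -d)
mount -t nfs "${nfsserver}:${nfsdir}" "$mntpt"
srcvhd="${mntpt}/$(basename "$nfspath")"

# 3. create a VDI of the template's virtual size on the primary SR
virtsize=$(vhd-util query -n "$srcvhd" -v)          # virtual size in MB
vdiuuid=$(xe vdi-create sr-uuid="$sruuid" name-label="$namelabel" \
          type=user virtual-size=${virtsize}MiB)

# expose the new VDI as a block device in dom0
hostuuid=$(xe host-list name-label="$(hostname)" params=uuid --minimal)
domuuid=$(xe vm-list is-control-domain=true resident-on="$hostuuid" params=uuid --minimal)
vbduuid=$(xe vbd-create vm-uuid="$domuuid" vdi-uuid="$vdiuuid" device=autodetect)
xe vbd-plug uuid="$vbduuid"
destdev=/dev/$(xe vbd-param-get uuid="$vbduuid" param-name=device)

# 4./5. copy the template onto the new VDI
dd if="$srcvhd" of="$destdev" bs=2M

# caveat from step 5: re-copy the last 512 bytes (the VHD footer) so the
# copy on the primary ends with a correct footer
size=$(stat -c %s "$srcvhd")
dd if="$srcvhd" of="$destdev" bs=512 count=1 conv=notrunc \
   skip=$(( size / 512 - 1 )) seek=$(( size / 512 - 1 ))

xe vbd-unplug uuid="$vbduuid"
xe vbd-destroy uuid="$vbduuid"
umount "$mntpt"
rmdir "$mntpt"
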
> -----Original Message-----
> From: Lucas Hughes [mailto:lucas.hughes@ecommerce.com]
> Sent: Friday, July 20, 2012 8:17 AM
> To: cloudstack-dev@incubator.apache.org
> Cc: Tiberiu Ungureanu
> Subject: Support for Citrix StorageLink in Cloudstack
>
> Hello,
>
> We are using CloudStack in an environment with Citrix Xen (Enterprise
> Edition) and Dell EqualLogic SANs.
>
> We are trying to take advantage of the Citrix StorageLink features,
> especially the one that allows us to have a thin-provisioned volume/LUN
> on the SAN per virtual machine, rather than LVM over iSCSI and one big
> LUN on the SAN for all the VMs, and we want to use the SAN-level
> snapshots available in CloudStack.
>
> For the first part of our problem, we found that using the "Presetup"
> configuration for iSCSI connections on our SAN with StorageLink
> technology allows us to have the desired outcome when creating VMs from
> an ISO. However, if we want to use templates, things are not working.
>
> Further investigation showed that CloudStack's model involves copying
> all data from the "primary" storage to the "secondary" storage when
> creating a template, and copying from "secondary" storage to "primary"
> storage when creating a VM from a template (and also copying from
> "primary1" to "secondary" to "primary2" when moving a VM from one
> primary storage to another). It turns out that the scripts that
> facilitate this copying and that reside on the Xen host (in
> /opt/xensource/bin, specifically copy_vhd_to_secondarystorage.sh and
> copy_vhd_from_secondarystorage.sh) do not support StorageLink
> operations.
>
> From our perspective (and please correct me if I am wrong), it appears
> these are the only scripts that need to be modified to add StorageLink
> support to CloudStack. We have successfully modified
> copy_vhd_to_secondarystorage.sh to correctly copy from a StorageLink LUN
> to the secondary storage (patch included below [1]), and we can now
> successfully migrate VMs from a StorageLink LUN to an LVM-over-iSCSI
> LUN. We lack the proper understanding of the technology to modify
> copy_vhd_from_secondarystorage.sh to perform the reverse operation (and
> here we need your help). We can provide a test environment for anybody
> who is able to help us.
>
> Also, if this problem has already been solved but is not yet committed
> to a "production" version of CloudStack, we would gladly perform beta
> testing in our environment. We are aware there is a bug open about this
> issue in CloudStack (CS-11486).
>
> We have not yet pursued the second part of our problem (SAN-level
> snapshots in CloudStack), but as we feel these problems are related, we
> are willing to offer all the help we can in fixing them.
>
> [1]
>
> -- start here --
>
> [root@a11-3-05 bin]# diff copy_vhd_to_secondarystorage.sh.orig copy_vhd_to_secondarystorage.sh
> 41c41
> < echo "2#no uuid of the source sr"
> ---
> > echo "2#no uuid of the source vdi"
> 85a86,106
> > elif [ $type == "cslg" -o $type == "equal" ]; then
> > idstr=$(xe host-list name-label=$(hostname) params=uuid)
> > hostuuid=$(echo $idstr | awk -F: '$1 != ""{print $2}' | awk '{print $1}')
> > CONTROL_DOMAIN_UUID=$(xe vm-list is-control-domain=true resident-on=$hostuuid params=uuid | awk '$1 == "uuid"{print $5}')
> > vbd_uuid=$(xe vbd-create vm-uuid=${CONTROL_DOMAIN_UUID} vdi-uuid=${vdiuuid} device=autodetect)
> > if [ $? -ne 0 ]; then
> > echo "999#failed to create VBD for vdi uuid ${uuid}"
> > cleanup
> > exit 0
> > fi
> > xe vbd-plug uuid=${vbd_uuid}
> > svhdfile=/dev/$(xe vbd-param-get uuid=${vbd_uuid} param-name=device)
> > dd if=${svhdfile} of=${vhdfile} bs=2M
> > if [ $? -ne 0 ]; then
> > echo "998#failed to dd $svhdfile to $vhdfile"
> > xe vbd-unplug uuid=${vbd_uuid}
> > xe vbd-destroy uuid=${vbd_uuid}
> > cleanup
> > exit 0
> > fi
> >
> 123a145,147
> > xe vbd-unplug uuid=${vbd_uuid}
> > xe vbd-destroy uuid=${vbd_uuid}
> >
> -- end here --
>
> Lucas Hughes
> Cloud Engineer
> Ecommerce Inc
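A small illustrative add-on for whoever adapts this for the reverse direction: a correct VHD footer is the last 512 bytes of the image and starts with the ASCII cookie "conectix", so after the copy you can sanity-check it like this ($destdev and $size are whatever device and byte count your copy step used, i.e. placeholders, not variables from the existing scripts):

# read the sector where the footer should have landed; the first 8 bytes
# of a valid VHD footer are the cookie "conectix"
dd if=$destdev bs=512 count=1 skip=$(( size / 512 - 1 )) 2>/dev/null | head -c 8
# for a file-based copy, "vhd-util check -n /path/to/copy.vhd" does a fuller check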