From: cs user <acldstkusr@gmail.com>
To: users@cloudstack.apache.org
Date: Wed, 22 Oct 2014 10:17:54 +0100
Subject: Fwd: users Digest of: get.18005

Apologies, new member of the list having problems replying to an existing message! :-)

We're having the same issue as France (original message forwarded below): XenServer 6.0.1 with the latest hotfixes, CloudStack 4.3 and 4.4. A snapshot of a volume on local disk is to be transferred to S3, but when the XenServer host attempts the PUT request the snapshot is no longer present, so the request fails. Snapshots of volumes stored on primary storage work fine and are uploaded to the S3-backed secondary storage as expected.

We also tried upgrading the hosts to XenServer 6.2, but it has the same issue. On another environment with CloudStack 4.3 and XenServer 6.0.1 (no recent Xen hotfixes and no S3-backed secondary storage), local snapshots work fine.
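For anyone wanting to reproduce the ordering, a watcher along these lines can be run in dom0 while a snapshot job executes, to show that the VHD is already gone by the time the S3 PUT runs. This is only a minimal sketch; the device path is the one from our s3xen log further down and will differ per SR and VHD:

    #!/usr/bin/env python
    # Minimal sketch: poll a snapshot VHD device path during a snapshot job
    # and report when it appears and disappears. The path below is an
    # example taken from our logs; adjust it for your own SR/VHD.
    import os
    import time

    PATH = ("/dev/VG_XenStorage-b1802ff2-5a63-d1e7-04de-dea7ba7eab27/"
            "VHD-0cef2f39-03e8-458d-bf19-7a2294c40ac7")

    last = None
    while True:
        present = os.path.exists(PATH)
        if present != last:
            print("%s exists=%s" % (time.strftime("%H:%M:%S"), present))
            last = present
        time.sleep(0.5)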
We see this in the management server logs (I have removed references to any hosts from the logs):

> WARN [c.c.h.x.r.CitrixResourceBase] (DirectAgent-269:ctx-18eb6dbd) Task failed! Task record:
>     uuid: 54c09e4f-cf52-4bc2-7e3f-e3743a63ff92
>     nameLabel: Async.host.call_plugin
>     nameDescription:
>     allowedOperations: []
>     currentOperations: {}
>     created: Tue Oct 21 14:18:50 BST 2014
>     finished: Tue Oct 21 14:18:49 BST 2014
>     status: failure
>     residentOn: com.xensource.xenapi.Host@1dd85be5
>     progress: 1.0
>     type:
>     result:
>     errorInfo: [XENAPI_PLUGIN_FAILURE, getSnapshotSize, CommandException, 2]
>     otherConfig: {}
>     subtaskOf: com.xensource.xenapi.Task@aaf13f6f
>     subtasks: []
>
> WARN [c.c.h.x.r.CitrixResourceBase] (DirectAgent-269:ctx-18eb6dbd) callHostPlugin failed for cmd: getSnapshotSize with args snapshotUuid: b8b3b99a-66b3-461f-b5db-53ad0adec391, isISCSI: true, primaryStorageSRUuid: 8649181d-bf39-40b9-4e88-9e1800f8a2bc, due to Task failed! Task record:
>     [... same task record as above ...]
>
> Task failed! Task record:
>     [... same task record as above ...]
>         at com.cloud.hypervisor.xen.resource.CitrixResourceBase.checkForSuccess(CitrixResourceBase.java:3293)
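For what it's worth, the trailing 2 in [XENAPI_PLUGIN_FAILURE, getSnapshotSize, CommandException, 2] is presumably the exit status of the command the plugin ran on the host, and the -2 in the vhd-util error in France's message below looks like -ENOENT; both would fit the VHD path being gone by the time the plugin queries it. Roughly the kind of host-side call I assume is failing (the vhd-util invocation here is illustrative, not the plugin's actual code):

    #!/usr/bin/env python
    # Rough sketch of the kind of host-side call that would surface as
    # "CommandException, 2": query the snapshot VHD with vhd-util and fail
    # with the command's exit status when the path no longer exists.
    # Illustrative only; not the real vmopsSnapshot plugin code.
    import subprocess

    def get_snapshot_size(vhd_path):
        proc = subprocess.Popen(
            ["vhd-util", "query", "-n", vhd_path, "-v"],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = proc.communicate()
        if proc.returncode != 0:
            # On a deleted LV, libvhd's vhd_open fails with -ENOENT (-2)
            # and vhd-util exits non-zero; the xapi plugin would wrap that
            # in a CommandException carrying the exit status.
            raise Exception("CommandException, %d: %s"
                            % (proc.returncode, err.strip()))
        return out.strip()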
On the XenServer host we see the following in /var/log/cloud/vmops.log:

> 2014-10-20 16:21:59 DEBUG [root] #### VMOPS enter create_secondary_storage_folder ####
> 2014-10-20 16:21:59 DEBUG [root] create_secondary_storage_folder, args: {'newFolder': 'snapshots/2/257', 'remoteMountPath': 'hostname:/CS_TEST_SEC_01'}
> 2014-10-20 16:21:59 DEBUG [root] Successfully mounted hostname:/CS_TEST_SEC_01 to /var/run/cloud_mount/0d932145-892f-4960-8153-a30f4132fdc3
> 2014-10-20 16:21:59 DEBUG [root] Successfully unmounted /var/run/cloud_mount/0d932145-892f-4960-8153-a30f4132fdc3
> 2014-10-20 16:21:59 DEBUG [root] #### VMOPS exit create_secondary_storage_folder ####
> 2014-10-20 16:22:03 DEBUG [root] #### VMOPS enter gethostvmstats ####
> 2014-10-20 16:22:03 DEBUG [statsLogger] Calling URL: http://localhost/rrd_updates?session_id=OpaqueRef:79767c0a-a353-4f9c-cbb4-fba8a124bb83&host=true&cf=AVERAGE&interval=60&start=1413818423
> 2014-10-20 16:22:03 DEBUG [statsLogger] Size of returned XML: 6408
> 2014-10-20 16:22:03 DEBUG [root] #### VMOPS exit gethostvmstats ####
> 2014-10-20 16:22:36 DEBUG [root] #### VMOPS enter gethostvmstats ####
> 2014-10-20 16:22:36 DEBUG [statsLogger] Calling URL: http://localhost/rrd_updates?session_id=OpaqueRef:693d7e20-000d-81ee-58e5-10b795e5a330&host=false&cf=AVERAGE&interval=60&start=1413818456
> 2014-10-20 16:22:36 DEBUG [statsLogger] Size of returned XML: 6408
> 2014-10-20 16:22:36 DEBUG [root] #### VMOPS exit gethostvmstats ####
> 2014-10-20 16:22:36 DEBUG [root] primarySRPath: /dev/VG_XenStorage-5e7f09b4-de48-81f1-509c-b0d4dc3571eb
> 2014-10-20 16:22:38 DEBUG [root] #### VMOPS enter deleteSnapshotBackup ####
> 2014-10-20 16:22:38 DEBUG [root] Calling deleteSnapshotBackup with {'localMountPoint': '/var/run/cloud_mount/ccd200be-0636-32f6-ad98-fb8e2fa8c70c', 'backupUUID': '94f0826b-8bd4-4a75-ae06-5b9724bd3780', 'secondaryStorageMountPath': 'hostname:/CS_TEST_SEC_01', 'path': 'snapshots/2/257'}
> 2014-10-20 16:22:38 DEBUG [root] Successfully mounted hostname:/CS_TEST_SEC_01/snapshots to /var/run/cloud_mount/ccd200be-0636-32f6-ad98-fb8e2fa8c70c
> 2014-10-20 16:22:38 DEBUG [root] Chdired to /var/run/cloud_mount/ccd200be-0636-32f6-ad98-fb8e2fa8c70c/2/257
> 2014-10-20 16:22:38 DEBUG [root] checking existence of 94f0826b-8bd4-4a75-ae06-5b9724bd3780.vhd
> 2014-10-20 16:22:38 DEBUG [root] backupVHD 94f0826b-8bd4-4a75-ae06-5b9724bd3780.vhd exists.
> 2014-10-20 16:22:38 DEBUG [root] #### VMOPS exit deleteSnapshotBackup ####

And the following in /var/log/cloud/s3xen.log on the XenServer host:

> 2014-10-21 12:36:57 DEBUG [root] #### VMOPS enter s3 #### ####
> 2014-10-21 12:36:57 DEBUG [root] #### VMOPS Operation put on file /dev/VG_XenStorage-b1802ff2-5a63-d1e7-04de-dea7ba7eab27/VHD-0cef2f39-03e8-458d-bf19-7a2294c40ac7 from/in bucket cs-testing-01 key snapshots/2/257/VHD-0cef2f39-03e8-458d-bf19-7a2294c40ac7 ####
> 2014-10-21 12:36:57 DEBUG [root] #### VMOPS Traceback (most recent call last):
>   File "/etc/xapi.d/plugins/s3xen", line 414, in s3
>     client.put(bucket, key, filename, maxSingleUploadBytes)
>   File "/etc/xapi.d/plugins/s3xen", line 325, in put
>     raise Exception(
> Exception: Attempt to put /dev/VG_XenStorage-b1802ff2-5a63-d1e7-04de-dea7ba7eab27/VHD-0cef2f39-03e8-458d-bf19-7a2294c40ac7 that does not exist.
> ####
> 2014-10-21 12:36:57 DEBUG [root] #### VMOPS exit s3 with result false ####

It appears that XenServer deletes the file on the local file system before the S3 script has a chance to upload it. The snapshot does, however, appear to have been successfully transferred to the secondary storage staging area, after which the S3 transfer is attempted from the local LVM volume; when that fails, the snapshot is deleted, as you can see from the logs above.

So either the issue of XenServer deleting the snapshot too early needs to be resolved (has this behaviour changed recently?), or the S3 PUT needs to upload the file located in the staging area rather than the file on local disk, as already happens for primary storage volume snapshots. A sketch of the second option follows below.

Thanks in advance for any help with this issue.
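For illustration, the fallback I have in mind in the plugin's put path would look something like this. resolve_upload_source() is hypothetical, the staging layout is inferred from the vmops.log above, and client.put/filename/key are just the names from the traceback, so treat it as pseudocode rather than actual s3xen code:

    import os

    def resolve_upload_source(local_path, staging_root, key):
        """Prefer the local VHD; fall back to the copy that the earlier
        copy-to-secondary step already placed in the staging area.

        local_path:   e.g. /dev/VG_XenStorage-.../VHD-...
        staging_root: e.g. /var/run/cloud_mount/<uuid> (hypothetical)
        key:          S3 object key, e.g. snapshots/2/257/VHD-...
        """
        if os.path.exists(local_path):
            return local_path
        # Naming of the staged file is inferred from the logs and may differ.
        staged = os.path.join(staging_root, key + ".vhd")
        if os.path.exists(staged):
            return staged
        raise Exception("Attempt to put %s that does not exist." % local_path)

    # Hypothetical use at the point of the failing call:
    #   source = resolve_upload_source(filename, staging_root, key)
    #   client.put(bucket, key, source, maxSingleUploadBytes)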
> ---------- Forwarded message ----------
> From: France
> To: users@cloudstack.apache.org
> Date: Wed, 8 Oct 2014 12:45:13 +0200
> Subject: Fwd: ACS 4.1.1 -> 4.3.1 = errorInfo: [XENAPI_PLUGIN_FAILURE, getSnapshotSize, CommandException, 2] && vhd-util: libvhd::vhd_open: failed to open /dev/VG_XenStorage-197c3580-fc82-35e5-356d-1f0909a9cfd9/VHD-3d732e11-8697-4a45-9518-44d4154ee6d6: -2
>
> Sending it here. Maybe here someone will respond.
>
> > Begin forwarded message:
> >
> > From: France
> > Subject: ACS 4.1.1 -> 4.3.1 = errorInfo: [XENAPI_PLUGIN_FAILURE, getSnapshotSize, CommandException, 2] && vhd-util: libvhd::vhd_open: failed to open /dev/VG_XenStorage-197c3580-fc82-35e5-356d-1f0909a9cfd9/VHD-3d732e11-8697-4a45-9518-44d4154ee6d6: -2
> > Date: 7 Oct 2014 19:04:15 GMT+2
> > To: "dev@cloudstack.apache.org"
> > Reply-To: dev@cloudstack.apache.org
> >
> > Hi guys,
> >
> > After upgrading ACS from 4.1.1 to 4.3.1, upgrading XS 6.0.2 to the latest hotfixes, and manually replacing NFSSR.py with the one from ACS 4.3.1, I see this error, amongst others.
> > The snapshot is created successfully, and I have created templates from it and used them to create new VMs, but there is an error in the log. (Also, snapshots are not working on volumes which had snapshots before the upgrade, but I will deal with that issue later. The two issues might even be related.)
> >
> > What should I check to provide you with more info about the issue, so we can address it?
> > A Google search came up empty for this issue. :-( Please help me sort this out.
> > To me, it looks like XenServer removed /dev/VG_XenStorage-197c3580-fc82-35e5-356d-1f0909a9cfd9/VHD-3d732e11-8697-4a45-9518-44d4154ee6d6 after the copy to secondary storage but before CloudStack ordered the removal.
> > Should I open a bug for it?
> >
> > This is where I believe the snapshot was ordered:
> > /////////////////////////////////////
> > 2014-10-07 16:51:16,919 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (http-6443-exec-42:ctx-6a64da65 ctx-e324e472) submit async job-2171, details: AsyncJobVO {id:2171, userId: 28, accountId: 30, instanceType: Snapshot, instanceId: 977, cmd: org.apache.cloudstack.api.command.user.snapshot.CreateSnapshotCmd, cmdInfo: {"id":"977","response":"json","sessionkey":"XXXXX","cmdEventType":"SNAPSHOT.CREATE","ctxUserId":"28","httpmethod":"GET","_":"1412693528063","volumeid":"72d5b956-fbaf-44c2-8aeb-df49354ddbb3","quiescevm":"false","ctxAccountId":"30","ctxStartEventId":"60842"}, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 95545481387, completeMsid: null, lastUpdated: null, lastPolled: null, created: null}
> > 2014-10-07 16:51:16,920 INFO [o.a.c.f.j.i.AsyncJobMonitor] (Job-Executor-84:ctx-649dbc0e) Add job-2171 into job monitoring
> > 2014-10-07 16:51:16,920 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Job-Executor-84:ctx-649dbc0e) Executing AsyncJobVO {id:2171, [... same job record as in the submit line above ...]}
> > 2014-10-07 16:51:16,920 DEBUG [c.c.a.ApiServlet] (http-6443-exec-42:ctx-6a64da65 ctx-e324e472) ===END=== XX.XX.XX.XX -- GET command=createSnapshot&volumeid=72d5b956-fbaf-44c2-8aeb-df49354ddbb3&quiescevm=false&response=json&sessionkey=XXXXXX%3D&_=1412693528063
> > 2014-10-07 16:51:16,928 DEBUG [c.c.u.AccountManagerImpl] (Job-Executor-84:ctx-649dbc0e ctx-e324e472) Access to Acct[7ff6cd6b-7400-4d44-980b-9dc3115264eb-XXXXXX] granted to Acct[7ff6cd6b-7400-4d44-980b-9dc3115264eb-XXXX] by DomainChecker
> > 2014-10-07 16:51:16,953 INFO [o.a.c.a.c.u.s.CreateSnapshotCmd] (Job-Executor-84:ctx-649dbc0e ctx-e324e472) VOLSS: createSnapshotCmd starts:1412693476953
> > 2014-10-07 16:51:17,290 DEBUG [c.c.a.t.Request] (Job-Executor-84:ctx-649dbc0e ctx-e324e472) Seq 2-1824254062: Sending { Cmd , MgmtId: 95545481387, via: 2(x2.XXX), Ver: v1, Flags: 100011, [{"org.apache.cloudstack.storage.command.CreateObjectCommand":{"data":{"org.apache.cloudstack.storage.to.SnapshotObjectTO":{"volume":{"uuid":"72d5b956-fbaf-44c2-8aeb-df49354ddbb3","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"1c8dbf77-2b23-3244-91ee-5037cb2a55a8","id":208,"poolType":"IscsiLUN","host":"s2.v.XXX","path":"/iqn.2010-02.XXX:storage.XXX.s2/1","port":3260,"url":"IscsiLUN://s2.v.XXX//iqn.2010-02.XXX:storage.XXX.s2/1/?ROLE=Primary&STOREUUID=1c8dbf77-2b23-3244-91ee-5037cb2a55a8"}},"name":"ROOT-457","size":21474836480,"path":"5f54e049-7336-46bf-88be-c364624504f9","volumeId":556,"vmName":"i-30-457-VM","accountId":30,"format":"VHD","id":556,"deviceId":0,"hypervisorType":"XenServer"},"dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{... same data store as above ...}},"vmName":"i-30-457-VM","name":"MInecraftDebian_ROOT-457_20141007145115","hypervisorType":"XenServer","id":977,"quiescevm":false,"physicalSize":0}},"wait":0}}] }
> > 2014-10-07 16:51:17,291 DEBUG [c.c.a.t.Request] (Job-Executor-84:ctx-649dbc0e ctx-e324e472) Seq 2-1824254062: Executing: { Cmd , MgmtId: 95545481387, via: 2(x2.XXX), Ver: v1, Flags: 100011, [... same CreateObjectCommand payload as above ...] }
> > 2014-10-07 16:51:17,291 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-25:ctx-a5197893) Seq 2-1824254062: Executing request
> > 2014-10-07 16:51:18,352 DEBUG [c.c.a.m.AgentManagerImpl] (AgentManager-Handler-7:null) Ping from 21
> > 2014-10-07 16:51:20,075 DEBUG [c.c.a.ApiServlet] (http-6443-exec-34:ctx-c9163aa8) ===START=== XX.XX.XX.XX -- GET command=queryAsyncJobResult&jobId=89c49a6f-9d61-4bd8-a9ee-c41390ea3747&response=json&sessionkey=XXXXX%3D&_=1412693532730
> > 2014-10-07 16:51:20,165 DEBUG [c.c.a.ApiServlet] (http-6443-exec-34:ctx-c9163aa8 ctx-25e4cfa1) ===END=== XX.XX.XX.XX -- GET command=queryAsyncJobResult&jobId=89c49a6f-9d61-4bd8-a9ee-c41390ea3747&response=json&sessionkey=XXXXXX%3D&_=1412693532730
> > 2014-10-07 16:51:20,238 DEBUG [c.c.a.m.AgentManagerImpl] (AgentManager-Handler-12:null) SeqA 27-139339: Processing Seq 27-139339: { Cmd , MgmtId: -1, via: 27, Ver: v1, Flags: 11, [{"com.cloud.agent.api.ConsoleProxyLoadReportCommand":{"_proxyVmId":473,"_loadInfo":"{\n \"connections\": []\n}","wait":0}}] }
> > 2014-10-07 16:51:20,281 DEBUG [c.c.a.m.AgentManagerImpl] (AgentManager-Handler-12:null) SeqA 27-139339: Sending Seq 27-139339: { Ans: , MgmtId: 95545481387, via: 27, Ver: v1, Flags: 100010, [{"com.cloud.agent.api.AgentControlAnswer":{"result":true,"wait":0}}] }
> > 2014-10-07 16:51:21,767 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-25:ctx-a5197893) Seq 2-1824254062: Response Received:
> > 2014-10-07 16:51:21,767 DEBUG [c.c.a.t.Request] (DirectAgent-25:ctx-a5197893) Seq 2-1824254062: Processing: { Ans: , MgmtId: 95545481387, via: 2, Ver: v1, Flags: 10, [{"org.apache.cloudstack.storage.command.CreateObjectAnswer":{"data":{"org.apache.cloudstack.storage.to.SnapshotObjectTO":{"path":"d38b38f3-ebda-4cf7-9701-4ba22409644c","id":0,"quiescevm":false,"physicalSize":0}},"result":true,"wait":0}}] }
> > 2014-10-07 16:51:21,768 DEBUG [c.c.a.t.Request] (Job-Executor-84:ctx-649dbc0e ctx-e324e472) Seq 2-1824254062: Received: { Ans: , MgmtId: 95545481387, via: 2, Ver: v1, Flags: 10, { CreateObjectAnswer } }
> > 2014-10-07 16:51:22,156 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Job-Executor-84:ctx-649dbc0e ctx-e324e472) copyAsync inspecting src type SNAPSHOT copyAsync inspecting dest type SNAPSHOT
> > 2014-10-07 16:51:22,201 DEBUG [c.c.a.t.Request] (Job-Executor-84:ctx-649dbc0e ctx-e324e472) Seq 1-949601544: Sending { Cmd , MgmtId: 95545481387, via: 1(x3.XXX), Ver: v1, Flags: 100011, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.SnapshotObjectTO":{"path":"d38b38f3-ebda-4cf7-9701-4ba22409644c","volume":{"uuid":"72d5b956-fbaf-44c2-8aeb-df49354ddbb3","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"1c8dbf77-2b23-3244-91ee-5037cb2a55a8","id":208,"poolType":"IscsiLUN","host":"s2.v.XXX","path":"/iqn.2010-02.XXX:storage.XXX.s2/1","port":3260,"url":"IscsiLUN://s2.v.XXX//iqn.2010-02.XXX:storage.XXX.s2/1/?ROLE=Primary&STOREUUID=1c8dbf77-2b23-3244-91ee-5037cb2a55a8"}},"name":"ROOT-457","size":21474836480,"path":"5f54e049-7336-46bf-88be-c364624504f9","volumeId":556,"vmName":"i-30-457-VM","accountId":30,"format":"VHD","id":556,"deviceId":0,"hypervisorType":"XenServer"},"dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{... same data store as above ...}},"vmName":"i-30-457-VM","name":"MInecraftDebian_ROOT-457_20141007145115","hypervisorType":"XenServer","id":977,"quiescevm":false,"physicalSize":0}},"destTO":{"org.apache.cloudstack.storage.to.SnapshotObjectTO":{"path":"snapshots/30/556","volume":{... same volume object as in srcTO ...},"dataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://s6.XXX/mnt/rootPool/nfsSecondary","_role":"Image"}},"vmName":"i-30-457-VM","name":"MInecraftDebian_ROOT-457_20141007145115","hypervisorType":"XenServer","id":977,"quiescevm":false,"physicalSize":0}},"executeInSequence":false,"options":{"fullSnapshot":"true"},"wait":21600}}] }
> > 2014-10-07 16:51:22,201 DEBUG [c.c.a.t.Request] (Job-Executor-84:ctx-649dbc0e ctx-e324e472) Seq 1-949601544: Executing: { Cmd , MgmtId: 95545481387, via: 1(x3.XXX), Ver: v1, Flags: 100011, [... same CopyCommand payload as above ...] }
> > 2014-10-07 16:51:22,202 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-307:ctx-a2d8b59b) Seq 1-949601544: Executing request
> > 2014-10-07 16:51:23,072 DEBUG [c.c.a.ApiServlet] (http-6443-exec-33:ctx-7d98629a) ===START=== XX.XX.XX.XX -- GET command=queryAsyncJobResult&jobId=89c49a6f-9d61-4bd8-a9ee-c41390ea3747&response=json&sessionkey=XXXX%3D&_=1412693535726
> > 2014-10-07 16:51:23,091 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-307:ctx-a2d8b59b) Creating a not shared SR for nfs://s6.XXX/mnt/rootPool/nfsSecondary/snapshots/30/556
> > 2014-10-07 16:51:23,254 DEBUG [c.c.a.ApiServlet] (http-6443-exec-33:ctx-7d98629a ctx-b1c48235) ===END=== XX.XX.XX.XX -- GET command=queryAsyncJobResult&jobId=89c49a6f-9d61-4bd8-a9ee-c41390ea3747&response=json&sessionkey=XXXXXD&_=1412693535726
> > 2014-10-07 16:51:23,954 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-307:ctx-a2d8b59b) Checking 844c542f-2085-350d-8be6-2c2452eb3728 or SR 197c3580-fc82-35e5-356d-1f0909a9cfd9 on XS[f6d699aa-f58f-449e-9a51-6e965562f178-XX.XX.023]
> > 2014-10-07 16:51:23,965 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-307:ctx-a2d8b59b) Host XX.XX.023 OpaqueRef:72228e56-6e60-e1dc-677d-d1dd171045c7: Created a SR; UUID is 197c3580-fc82-35e5-356d-1f0909a9cfd9 device config is {serverpath=/mnt/rootPool/nfsSecondary/snapshots/30/556, server=s6.XXX}
> > /////////////////////////////////////
> >
> > At that time x3 hypervisor wrote the following to messages:
> > /////////////////////////////////////
> > Oct 7 16:51:34 x3 multipathd: dm-24: add map (uevent)
> > Oct 7 16:51:34 x3 multipathd: dm-24: add map (uevent)
> > Oct 7 16:51:34 x3 multipathd: dm-25: add map (uevent)
> > Oct 7 16:51:34 x3 multipathd: dm-25: add map (uevent)
> > Oct 7 16:51:34 x3 multipathd: dm-26: add map (uevent)
> > Oct 7 16:51:34 x3 multipathd: dm-27: add map (uevent)
> > Oct 7 16:51:34 x3 multipathd: dm-27: add map (uevent)
> > Oct 7 16:51:34 x3 tapdisk[1677]: tapdisk-control: init, 10 x 4k buffers
> > Oct 7 16:51:34 x3 tapdisk[1677]: I/O queue driver: lio
> > Oct 7 16:51:34 x3 tapdisk[1677]: tapdisk-log: started, level 0
> > Oct 7 16:51:34 x3 tapdisk[1677]: /dev/VG_XenStorage-3b6e386d-8736-6b6b-7006-f3f4df9bd586/VHD-d38b38f3-ebda-4cf7-9701-4ba22409644c version: tap 0x00010003, b: 10240, a: 0, f: 0, n: 0
> > Oct 7 16:51:34 x3 tapdisk[1677]: opened image /dev/VG_XenStorage-3b6e386d-8736-6b6b-7006-f3f4df9bd586/VHD-d38b38f3-ebda-4cf7-9701-4ba22409644c (1 users, state: 0x00000003, type: 4, ro)
> > Oct 7 16:51:34 x3 tapdisk[1677]: /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--f3f4df9bd586-VHD--a6e64bdc--9883--476f--87b1--07fc669eade7 version: tap 0x00010003, b: 10240, a: 227, f: 1, n: 0
> > Oct 7 16:51:34 x3 tapdisk[1677]: opened image /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--f3f4df9bd586-VHD--a6e64bdc--9883--476f--87b1--07fc669eade7 (1 users, state: 0x00000003, type: 4, ro)
> > Oct 7 16:51:34 x3 tapdisk[1677]: /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--f3f4df9bd586-VHD--19ef64e0--398a--43b4--ad52--1d36eedc2b81 version: tap 0x00010003, b: 10240, a: 9422, f: 8614, n: 0
> > Oct 7 16:51:34 x3 tapdisk[1677]: opened image /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--f3f4df9bd586-VHD--19ef64e0--398a--43b4--ad52--1d36eedc2b81 (1 users, state: 0x00000003, type: 4, ro)
> > Oct 7 16:51:34 x3 tapdisk[1677]: /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--f3f4df9bd586-VHD--dde0bd55--1ae3--44bf--be8d--dac901b49c66 version: tap 0x00010003, b: 10240, a: 770, f: 372, n: 0
> > Oct 7 16:51:34 x3 tapdisk[1677]: opened image /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--f3f4df9bd586-VHD--dde0bd55--1ae3--44bf--be8d--dac901b49c66 (1 users, state: 0x00000003, type: 4, ro)
> > Oct 7 16:51:34 x3 tapdisk[1677]: VBD CHAIN:
> > Oct 7 16:51:34 x3 tapdisk[1677]: /dev/VG_XenStorage-3b6e386d-8736-6b6b-7006-f3f4df9bd586/VHD-d38b38f3-ebda-4cf7-9701-4ba22409644c: type:vhd(4) storage:lvm(3)
> > Oct 7 16:51:34 x3 tapdisk[1677]: /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--f3f4df9bd586-VHD--a6e64bdc--9883--476f--87b1--07fc669eade7: type:vhd(4) storage:lvm(3)
> > Oct 7 16:51:34 x3 tapdisk[1677]: /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--f3f4df9bd586-VHD--19ef64e0--398a--43b4--ad52--1d36eedc2b81: type:vhd(4) storage:lvm(3)
> > Oct 7 16:51:34 x3 tapdisk[1677]: /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--f3f4df9bd586-VHD--dde0bd55--1ae3--44bf--be8d--dac901b49c66: type:vhd(4) storage:lvm(3)
> > Oct 7 16:51:34 x3 tapdisk[1677]: bdev: capacity=41943040 sector_size=512/512 flags=0
> > Oct 7 16:51:34 x3 multipathd: tdg: add path (uevent)
> > Oct 7 16:51:34 x3 multipathd: tdg: failed to store path info
> > Oct 7 16:51:34 x3 multipathd: uevent trigger error
> > Oct 7 16:51:34 x3 kernel: block tdg: sector-size: 512/512 capacity: 41943040
> > Oct 7 16:51:34 x3 kernel: blkback: event-channel 85
> > Oct 7 16:51:34 x3 kernel: blkback: ring-ref 8
> > Oct 7 16:51:34 x3 kernel: blkback: protocol 1 (x86_32-abi)
> > Oct 7 16:51:34 x3 multipathd: xvda: add path (uevent)
> > Oct 7 16:51:34 x3 multipathd: xvda: failed to store path info
> > Oct 7 16:51:34 x3 multipathd: uevent trigger error
> > Oct 7 16:51:34 x3 kernel: blkfront: xvda: barriers enabled
> > Oct 7 16:51:34 x3 kernel: xvda: xvda1 xvda2 < xvda5 >
> > Oct 7 16:51:35 x3 vbd.uevent[add](backend/vbd/0/51712): wrote /xapi/0/hotplug/vbd/51712/hotplug = 'online'
> > Oct 7 16:51:35 x3 vbd.uevent[add](backend/vbd/0/51712): wrote backend/vbd/0/51712/hotplug-status = 'connected'
> > Oct 7 16:51:37 x3 tapdisk[2033]: tapdisk-control: init, 10 x 4k buffers
> > Oct 7 16:51:37 x3 tapdisk[2033]: I/O queue driver: lio
> > Oct 7 16:51:37 x3 tapdisk[2033]: tapdisk-log: started, level 0
> > Oct 7 16:51:37 x3 tapdisk[2033]: /var/run/sr-mount/197c3580-fc82-35e5-356d-1f0909a9cfd9/3d732e11-8697-4a45-9518-44d4154ee6d6.vhd version: tap 0x00010003, b: 10240, a: 0, f: 0, n: 87
> > Oct 7 16:51:37 x3 tapdisk[2033]: opened image /var/run/sr-mount/197c3580-fc82-35e5-356d-1f0909a9cfd9/3d732e11-8697-4a45-9518-44d4154ee6d6.vhd (1 users, state: 0x00000001, type: 4, rw)
> > Oct 7 16:51:37 x3 tapdisk[2033]: VBD CHAIN:
> > Oct 7 16:51:37 x3 tapdisk[2033]: /var/run/sr-mount/197c3580-fc82-35e5-356d-1f0909a9cfd9/3d732e11-8697-4a45-9518-44d4154ee6d6.vhd: type:vhd(4) storage:nfs(1)
> > Oct 7 16:51:37 x3 tapdisk[2033]: bdev: capacity=41943040 sector_size=512/512 flags=0
> > Oct 7 16:51:37 x3 multipathd: tdi: add path (uevent)
> > Oct 7 16:51:37 x3 multipathd: tdi: failed to store path info
> > Oct 7 16:51:37 x3 multipathd: uevent trigger error
> > Oct 7 16:51:37 x3 kernel: block tdi: sector-size: 512/512 capacity: 41943040
> > Oct 7 16:51:37 x3 multipathd: xvdb: add path (uevent)
> > Oct 7 16:51:37 x3 multipathd: xvdb: failed to store path info
> > Oct 7 16:51:37 x3 multipathd: uevent trigger error
> > Oct 7 16:51:37 x3 kernel: blkback: event-channel 87
> > Oct 7 16:51:37 x3 kernel: blkback: ring-ref 69
> > Oct 7 16:51:37 x3 kernel: blkback: protocol 1 (x86_32-abi)
> > Oct 7 16:51:37 x3 kernel: blkfront: xvdb: barriers enabled
> > Oct 7 16:51:37 x3 kernel: xvdb: unknown partition table
> > Oct 7 16:51:37 x3 vbd.uevent[add](backend/vbd/0/51728): wrote /xapi/0/hotplug/vbd/51728/hotplug = 'online'
> > Oct 7 16:51:37 x3 vbd.uevent[add](backend/vbd/0/51728): wrote backend/vbd/0/51728/hotplug-status = 'connected'
> > Oct 7 16:51:38 x3 sparse_dd: [ info|x3.XXX|0||sparse_dd] streaming from raw /dev/xvda using BAT from /dev/VG_XenStorage-3b6e386d-8736-6b6b-7006-f3f4df9bd586/VHD-d38b38f3-ebda-4cf7-9701-4ba22409644c (relative to None) to raw file:///dev/xvdb
> > /////////////////////////////////////
> >
> > This is the actual error, which I believe comes after the copy of the snapshot to secondary storage:
> > /////////////////////////////////////
> > 2014-10-07 17:58:49,920 WARN [c.c.h.x.r.CitrixResourceBase] (DirectAgent-307:ctx-a2d8b59b) Task failed! Task record:
> >     uuid: 1258a9a9-1b7b-027c-5bfd-a210407b8ba5
> >     nameLabel: Async.host.call_plugin
> >     nameDescription:
> >     allowedOperations: []
> >     currentOperations: {}
> >     created: Tue Oct 07 17:58:48 CEST 2014
> >     finished: Tue Oct 07 17:58:49 CEST 2014
> >     status: failure
> >     residentOn: com.xensource.xenapi.Host@c612622b
> >     progress: 1.0
> >     type:
> >     result:
> >     errorInfo: [XENAPI_PLUGIN_FAILURE, getSnapshotSize, CommandException, 2]
> >     otherConfig: {}
> >     subtaskOf: com.xensource.xenapi.Task@aaf13f6f
> >     subtasks: []
> >
> > 2014-10-07 17:58:49,929 WARN [c.c.h.x.r.CitrixResourceBase] (DirectAgent-307:ctx-a2d8b59b) callHostPlugin failed for cmd: getSnapshotSize with args snapshotUuid: 3d732e11-8697-4a45-9518-44d4154ee6d6, isISCSI: true, primaryStorageSRUuid: 197c3580-fc82-35e5-356d-1f0909a9cfd9, due to Task failed! Task record:
> >     [... same task record as above; it appears twice more in the log ...]
> >
> >         at com.cloud.hypervisor.xen.resource.CitrixResourceBase.checkForSuccess(CitrixResourceBase.java:3764)
> >         at com.cloud.hypervisor.xen.resource.CitrixResourceBase.callHostPluginAsync(CitrixResourceBase.java:3971)
> >         at com.cloud.hypervisor.xen.resource.XenServerStorageProcessor.getSnapshotSize(XenServerStorageProcessor.java:1161)
> >         at com.cloud.hypervisor.xen.resource.XenServerStorageProcessor.backupSnapshot(XenServerStorageProcessor.java:1348)
> >         at com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.execute(StorageSubsystemCommandHandlerBase.java:90)
> >         at com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.handleStorageCommands(StorageSubsystemCommandHandlerBase.java:50)
> >         at com.cloud.hypervisor.xen.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:609)
> >         at com.cloud.hypervisor.xen.resource.XenServer56Resource.executeRequest(XenServer56Resource.java:59)
> >         at com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:216)
> >         at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> >         at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> >         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> >         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
> >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >         at java.lang.Thread.run(Thread.java:701)
> > 2014-10-07 17:58:49,945 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-307:ctx-a2d8b59b) Host XX.XX.023 OpaqueRef:72228e56-6e60-e1dc-677d-d1dd171045c7: Removing SR
> > 2014-10-07 17:58:49,976 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-307:ctx-a2d8b59b) Host XX.XX.023 OpaqueRef:417a2b3b-8042-359e-6394-180cbf82f600: Unplugging pbd
> > 2014-10-07 17:58:50,423 DEBUG [c.c.a.ApiServlet] (http-6443-exec-42:ctx-b31e9f9b) ===START=== XX.XX.XX.XX -- GET command=queryAsyncJobResult&jobId=89c49a6f-9d61-4bd8-a9ee-c41390ea3747&response=json&sessionkey=vnXXXXXXX%3D&_=1412697583219
> > 2014-10-07 17:58:50,446 DEBUG [c.c.a.ApiServlet] (http-6443-exec-42:ctx-b31e9f9b ctx-b14e1b8b) ===END=== XX.XX.XX.XX -- GET command=queryAsyncJobResult&jobId=89c49a6f-9d61-4bd8-a9ee-c41390ea3747&response=json&sessionkey=vnXXXXXXX%3D&_=1412697583219
> > 2014-10-07 17:58:50,529 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-307:ctx-a2d8b59b) Host XX.XX.023 OpaqueRef:72228e56-6e60-e1dc-677d-d1dd171045c7: Forgetting
> > 2014-10-07 17:58:50,535 DEBUG [c.c.s.StatsCollector] (StatsCollector-3:ctx-222d1581) VmStatsCollector is running...
> > 2014-10-07 17:58:50,555 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-418:ctx-665da9a5) Seq 1-949601730: Executing request
> > 2014-10-07 17:58:50,979 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-418:ctx-665da9a5) Vm cpu utilization 0.98
> > 2014-10-07 17:58:50,979 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-418:ctx-665da9a5) Vm cpu utilization 7.2124999999999995
> > 2014-10-07 17:58:50,979 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-418:ctx-665da9a5) Vm cpu utilization 1.3299999999999998
> > 2014-10-07 17:58:50,979 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-418:ctx-665da9a5) Vm cpu utilization 0.27
> > 2014-10-07 17:58:51,013 WARN [c.c.h.x.r.CitrixResourceBase] (DirectAgent-418:ctx-665da9a5) Error while collecting disk stats from :
> > You gave an invalid object reference. The object may have recently been deleted. The class parameter gives the type of reference given, and the handle parameter echoes the bad value given.
> >         at com.xensource.xenapi.Types.checkResponse(Types.java:209)
> >         at com.xensource.xenapi.Connection.dispatch(Connection.java:368)
> >         at com.cloud.hypervisor.xen.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:909)
> >         at com.xensource.xenapi.VBDMetrics.getIoReadKbs(VBDMetrics.java:210)
> >         at com.cloud.hypervisor.xen.resource.CitrixResourceBase.getVmStats(CitrixResourceBase.java:2863)
> >         at com.cloud.hypervisor.xen.resource.CitrixResourceBase.execute(CitrixResourceBase.java:2763)
> >         at com.cloud.hypervisor.xen.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:493)
> >         at com.cloud.hypervisor.xen.resource.XenServer56Resource.executeRequest(XenServer56Resource.java:59)
> >         at com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:216)
> >         at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
> >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
> >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
> >         at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
> >         at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
> >         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> >         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
> >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >         at java.lang.Thread.run(Thread.java:701)
> > 2014-10-07 17:58:51,036 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-418:ctx-665da9a5) Seq 1-949601730: Response Received:
> > 2014-10-07 17:58:51,036 DEBUG [c.c.a.t.Request] (StatsCollector-3:ctx-222d1581) Seq 1-949601730: Received: { Ans: , MgmtId: 95545481387, via: 1, Ver: v1, Flags: 10, { GetVmStatsAnswer } }
> > 2014-10-07 17:58:51,061 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-165:ctx-3c5b0bdc) Seq 2-1824254287: Executing request
> > 2014-10-07 17:58:51,627 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-165:ctx-3c5b0bdc) Vm cpu utilization 0.53
> > 2014-10-07 17:58:51,627 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-165:ctx-3c5b0bdc) Vm cpu utilization 37.46
> > 2014-10-07 17:58:51,627 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-165:ctx-3c5b0bdc) Vm cpu utilization 0.62
> > 2014-10-07 17:58:51,627 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-165:ctx-3c5b0bdc) Vm cpu utilization 3.375
> > 2014-10-07 17:58:51,627 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-165:ctx-3c5b0bdc) Vm cpu utilization 0.54
> > 2014-10-07 17:58:51,627 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-165:ctx-3c5b0bdc) Vm cpu utilization 60.28
> > 2014-10-07 17:58:51,627 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-165:ctx-3c5b0bdc) Vm cpu utilization 0.63
> > 2014-10-07 17:58:51,660 WARN [c.c.h.x.r.CitrixResourceBase] (DirectAgent-165:ctx-3c5b0bdc) Error while collecting disk stats from :
> > You gave an invalid object reference. The object may have recently been deleted. The class parameter gives the type of reference given, and the handle parameter echoes the bad value given.
> >         [... same stack trace as above ...]
> > 2014-10-07 17:58:51,683 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-165:ctx-3c5b0bdc) Seq 2-1824254287: Response Received:
> > 2014-10-07 17:58:51,683 DEBUG [c.c.a.t.Request] (StatsCollector-3:ctx-222d1581) Seq 2-1824254287: Received: { Ans: , MgmtId: 95545481387, via: 2, Ver: v1, Flags: 10, { GetVmStatsAnswer } }
> > 2014-10-07 17:58:51,708 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-16:ctx-3d6b458a) Seq 3-1392049173: Executing request
> > 2014-10-07 17:58:51,733 DEBUG [c.c.h.x.r.XenServerStorageProcessor] (DirectAgent-307:ctx-a2d8b59b) Successfully destroyed snapshot on volume: 5f54e049-7336-46bf-88be-c364624504f9 execept this current snapshot d38b38f3-ebda-4cf7-9701-4ba22409644c
> > 2014-10-07 17:58:51,734 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-307:ctx-a2d8b59b) Seq 1-949601544: Response Received:
> > 2014-10-07 17:58:51,734 DEBUG [c.c.a.t.Request] (DirectAgent-307:ctx-a2d8b59b) Seq 1-949601544: Processing: { Ans: , MgmtId: 95545481387, via: 1, Ver: v1, Flags: 10, [{"org.apache.cloudstack.storage.command.CopyCmdAnswer":{"newData":{"org.apache.cloudstack.storage.to.SnapshotObjectTO":{"path":"snapshots/30/556/3d732e11-8697-4a45-9518-44d4154ee6d6","id":0,"quiescevm":false,"physicalSize":0}},"result":true,"wait":0}}] }
> > 2014-10-07 17:58:51,734 DEBUG [c.c.a.t.Request] (Job-Executor-84:ctx-649dbc0e ctx-e324e472) Seq 1-949601544: Received: { Ans: , MgmtId: 95545481387, via: 1, Ver: v1, Flags: 10, { CopyCmdAnswer } }
> > 2014-10-07 17:58:51,784 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Job-Executor-84:ctx-649dbc0e ctx-e324e472) Complete async job-2171, jobStatus: SUCCEEDED, resultCode: 0, result: org.apache.cloudstack.api.response.SnapshotResponse/snapshot/{"id":"a5df5f35-c5f6-4644-a569-9bd4a26e500d","account":"martinG","domainid":"1268ca9a-7ec5-4154-9839-9b90699dfa4b","domain":"ROOT","snapshottype":"MANUAL","volumeid":"72d5b956-fbaf-44c2-8aeb-df49354ddbb3","volumename":"ROOT-457","volumetype":"ROOT","created":"2014-10-07T16:51:15+0200","name":"MInecraftDebian_ROOT-457_20141007145115","intervaltype":"MANUAL","state":"Allocated","tags":[],"revertable":false}
> > /////////////////////////////////////
> >
> > At that time the x3 hypervisor wrote this to messages:
> > /////////////////////////////////////
> > Oct 7 17:58:42 x3 multipathd: xvdb: remove path (uevent)
> > Oct 7 17:58:42 x3 multipathd: xvdb: spurious uevent, path not in pathvec
> > Oct 7 17:58:42 x3 multipathd: uevent trigger error
> > Oct 7 17:58:42 x3 vbd.uevent[remove](backend/vbd/0/51728): removed /xapi/0/hotplug/vbd/51728/hotplug
> > Oct 7 17:58:42 x3 vbd.uevent[remove](backend/vbd/0/51728): removed backend/vbd/0/51728/hotplug-status
> > Oct 7 17:58:43 x3 multipathd: tdi: remove path (uevent)
> > Oct 7 17:58:43 x3 multipathd: tdi: spurious uevent, path not in pathvec
> > Oct 7 17:58:43 x3 multipathd: uevent trigger error
> > Oct 7 17:58:43 x3 tapdisk[2033]: gaps written/skipped: 0/0
> > Oct 7 17:58:43 x3 tapdisk[2033]: /var/run/sr-mount/197c3580-fc82-35e5-356d-1f0909a9cfd9/3d732e11-8697-4a45-9518-44d4154ee6d6.vhd: b: 10240, a: 9510, f: 9510, n: 39029120
> > Oct 7 17:58:43 x3 tapdisk[2033]: closed image /var/run/sr-mount/197c3580-fc82-35e5-356d-1f0909a9cfd9/3d732e11-8697-4a45-9518-44d4154ee6d6.vhd (0 users, state: 0x00000000, type: 4)
> > Oct 7 17:58:43 x3 tapdisk[2033]: tapdisk-log: closing after 0 errors
> > Oct 7 17:58:43 x3 tapdisk[2033]: tapdisk-syslog: 11 messages, 893 bytes, xmits: 12, failed: 0, dropped: 0
> > Oct 7 17:58:43 x3 tapdisk[2033]: tapdisk-control: draining 1 connections
> > Oct 7 17:58:43 x3 tapdisk[2033]: tapdisk-control: done
> > Oct 7 17:58:44 x3 ovs-vswitchd: 46807|timeval|WARN|9 ms poll interval (0 ms user, 0 ms system) is over 14 times the weighted mean interval 1 ms (54312674 samples)
> > Oct 7 17:58:44 x3 ovs-vswitchd: 46808|coverage|INFO|Skipping details of duplicate event coverage for hash=735285a1 in epoch 54312674
> > Oct 7 17:58:44 x3 multipathd: xvda: remove path (uevent)
> > Oct 7 17:58:44 x3 multipathd: xvda: spurious uevent, path not in pathvec
> > Oct 7 17:58:44 x3 multipathd: uevent trigger error
> > Oct 7 17:58:44 x3 vbd.uevent[remove](backend/vbd/0/51712): removed /xapi/0/hotplug/vbd/51712/hotplug
> > Oct 7 17:58:44 x3 vbd.uevent[remove](backend/vbd/0/51712): removed backend/vbd/0/51712/hotplug-status
> > Oct 7 17:58:45 x3 multipathd: tdg: remove path (uevent)
> > Oct 7 17:58:45 x3 multipathd: tdg: spurious uevent, path not in pathvec
> > Oct 7 17:58:45 x3 tapdisk[1677]: gaps written/skipped: 0/0
> > Oct 7 17:58:45 x3 multipathd: uevent trigger error
> > Oct 7 17:58:45 x3 tapdisk[1677]: /dev/VG_XenStorage-3b6e386d-8736-6b6b-7006-f3f4df9bd586/VHD-d38b38f3-ebda-4cf7-9701-4ba22409644c: b: 10240, a: 0, f: 0, n: 0
> > Oct 7 17:58:45 x3 tapdisk[1677]: closed image /dev/VG_XenStorage-3b6e386d-8736-6b6b-7006-f3f4df9bd586/VHD-d38b38f3-ebda-4cf7-9701-4ba22409644c (0 users, state: 0x00000002, type: 4)
> > Oct 7 17:58:45 x3 tapdisk[1677]: gaps written/skipped: 0/0
> > Oct 7 17:58:45 x3 tapdisk[1677]: /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--f3f4df9bd586-VHD--a6e64bdc--9883--476f--87b1--07fc669eade7: b: 10240, a: 227, f: 1, n: 0
> > Oct 7 17:58:45 x3 tapdisk[1677]: closed image /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--f3f4df9bd586-VHD--a6e64bdc--9883--476f--87b1--07fc669eade7 (0 users, state: 0x00000002, type: 4)
> > Oct 7 17:58:45 x3 tapdisk[1677]: gaps written/skipped: 0/0
> > Oct 7 17:58:45 x3 tapdisk[1677]: /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--f3f4df9bd586-VHD--19ef64e0--398a--43b4--ad52--1d36eedc2b81: b: 10240, a: 9422, f: 8614, n: 0
> > Oct 7 17:58:45 x3 tapdisk[1677]: closed image /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--f3f4df9bd586-VHD--19ef64e0--398a--43b4--ad52--1d36eedc2b81 (0 users, state: 0x00000002, type: 4)
> > Oct 7 17:58:45 x3 tapdisk[1677]: gaps written/skipped: 0/0
> > Oct 7 17:58:45 x3 tapdisk[1677]: /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--f3f4df9bd586-VHD--dde0bd55--1ae3--44bf--be8d--dac901b49c66: b: 10240, a: 770, f: 372, n: 0
> > Oct 7 17:58:45 x3 tapdisk[1677]: closed image /dev/mapper/VG_XenStorage--3b6e386d--8736--6b6b--7006--f3f4df9bd586-VHD--dde0bd55--1ae3--44bf--be8d--dac901b49c66 (0 users, state: 0x00000002, type: 4)
> > Oct 7 17:58:45 x3 tapdisk[1677]: tapdisk-log: closing after 0 errors
> > Oct 7 17:58:46 x3 tapdisk[1677]: tapdisk-syslog: 14 messages, 1425 bytes, xmits: 15, failed: 0, dropped: 0
> > Oct 7 17:58:45 x3 tapdisk[1677]: tapdisk-control: draining 1 connections
> > Oct 7 17:58:45 x3 tapdisk[1677]: tapdisk-control: done
> > Oct 7 17:58:46 x3 multipathd: dm-24: remove map (uevent)
> > Oct 7 17:58:46 x3 multipathd: dm-24: devmap not registered, can't remove
> > Oct 7 17:58:46 x3 multipathd: dm-24: remove map (uevent)
> > Oct 7 17:58:46 x3 multipathd: dm-24: devmap not registered, can't remove
> > Oct 7 17:58:46 x3 multipathd: dm-27: remove map (uevent)
> > Oct 7 17:58:46 x3 multipathd: dm-27: devmap not registered, can't remove
> > Oct 7 17:58:46 x3 multipathd: dm-27: remove map (uevent)
> > Oct 7 17:58:46 x3 multipathd: dm-27: devmap not registered, can't remove
> > Oct 7 17:58:46 x3 multipathd: dm-25: remove map (uevent)
> > Oct 7 17:58:46 x3 multipathd: dm-25: devmap not registered, can't remove
> > Oct 7 17:58:47 x3 multipathd: dm-25: remove map (uevent)
> > Oct 7 17:58:47 x3 multipathd: dm-25: devmap not registered, can't remove
> > Oct 7 17:58:47 x3 multipathd: dm-26: remove map (uevent)
> > Oct 7 17:58:47 x3 multipathd: dm-26: devmap not registered, can't remove
> > Oct 7 17:58:47 x3 multipathd: dm-26: remove map (uevent)
> > Oct 7 17:58:47 x3 multipathd: dm-26: devmap not registered, can't remove
> > Oct 7 17:58:49 x3 vhd-util: libvhd::vhd_open: failed to open /dev/VG_XenStorage-197c3580-fc82-35e5-356d-1f0909a9cfd9/VHD-3d732e11-8697-4a45-9518-44d4154ee6d6: -2
> > /////////////////////////////////////
> >
> > vhd-utils on x3:
> >
> > [root@x3 log]# find / -name vhd-util
> > /usr/sbin/vhd-util
> > /opt/xensource/bin/vhd-util
> > /opt/cloud/bin/vhd-util
> > find: /proc/7815/net: Invalid argument
> > [root@x3 log]# md5sum /usr/sbin/vhd-util
> > c849eae34a76f18163ffeea7ae1008fe  /usr/sbin/vhd-util
> > [root@x3 log]# md5sum /opt/xensource/bin/vhd-util
> > 2f3b434842d25d9672cc0a75d103ae90  /opt/xensource/bin/vhd-util
> > [root@x3 log]# md5sum /opt/cloud/bin/vhd-util
> > 2f3b434842d25d9672cc0a75d103ae90  /opt/cloud/bin/vhd-util
> >
> > Also in XenCenter I can still see MInecraftDebian_ROOT-457_20141007145115.
> >
> > Thank you for your time.
> > Regards,
> > F.