cloudstack-users mailing list archives

From S. Brüseke - proIO GmbH <s.brues...@proio.com>
Subject RE: Snapshot and secondary storage utilisation.
Date Mon, 10 Jul 2017 07:55:05 GMT
Hi Makran,

please take a look at the global setting "snapshot.delta.max". As far as I understand, for scheduled
snapshots ACS uses deltas to minimize time and transferred data. So after the first full snapshot
has been taken, each following one is only a delta until you hit snapshot.delta.max.
Because ACS needs the full snapshot for as long as any of its deltas is still needed, you will see
the vhd file on your secondary storage, but not in the UI.
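If you want to check or lower that limit, you can query the global setting via CloudMonkey; a rough sketch, assuming CloudMonkey is set up against your management server (exact output depends on the CloudMonkey version):

# show the current value of the global setting
cloudmonkey list configurations name=snapshot.delta.max

# lower it if you want full snapshots more often (may need a management server restart to take effect)
cloudmonkey update configuration name=snapshot.delta.max value=8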

Hope that helped.

Mit freundlichen Grüßen / With kind regards,

Swen Brüseke

-----Original Message-----
From: Makrand [mailto:makrandsanap@gmail.com]
Sent: Monday, 10 July 2017 09:14
To: users@cloudstack.apache.org
Subject: Snapshot and secondary storage utilisation.

Hi all,

My setup is: ACS 4.4, XenServer 6.2 SP1, 4 TB of secondary storage coming from NFS.

I am observing some issues with the way .vhd files are stored and cleaned up on secondary storage.
Let's take the example of VM-813. It has a 250 GB root disk (disk ID 1015). The snapshot is scheduled
to run once every week (Saturday night) and is supposed to keep only 1 snapshot. From the GUI I am
seeing it only keeps the latest week's snapshot.

But resource utilization in the CS GUI is increasing day by day. So I ran du -smh and found
there are multiple vhd files of different sizes under secondary storage.

Here is a snippet:

root@gcx-bom-cloudstack:/mnt/secondary2/snapshots/22# du -smhh *
1.5K    1002
1.5K    1003
1.5K    1004
243G    1015
1.5K    1114

root@gcx-bom-cloudstack:/mnt/secondary2/snapshots/22# ls -lht *
1015:
total 243G
-rw-r--r-- 1 nobody nogroup  32G Jul  8 21:19 8a7e6580-5191-4eb0-9eb1-3ec8e75ce104.vhd
-rw-r--r-- 1 nobody nogroup  40G Jul  1 21:30 f52b82b0-0eaf-4297-a973-1f5477c10b5e.vhd
-rw-r--r-- 1 nobody nogroup  43G Jun 24 21:35 3dc72a3b-91ad-45ae-b618-9aefb7565edb.vhd
-rw-r--r-- 1 nobody nogroup  40G Jun 17 21:30 c626a9c5-1929-4489-b181-6524af1c88ad.vhd
-rw-r--r-- 1 nobody nogroup  29G Jun 10 21:16 697cf9bd-4433-426d-a4a1-545f03aae3e6.vhd
-rw-r--r-- 1 nobody nogroup  29G Jun  3 21:00 bff859b3-a51c-4186-8c19-1ba94f99f9e7.vhd
-rw-r--r-- 1 nobody nogroup  43G May 27 21:35 127e3f6e-4fa5-45ed-a95d-7d0b850a053d.vhd
-rw-r--r-- 1 nobody nogroup  60G May 20 22:01 619fe1ed-6807-441c-9526-526486d7a6d2.vhd
-rw-r--r-- 1 nobody nogroup  35G May 13 21:23 71b0d6a8-3c93-493f-b82c-732b7a808f6d.vhd
-rw-r--r-- 1 nobody nogroup  31G May  6 21:19 ccbfb3ec-abd8-448c-ba79-36631b227203.vhd
-rw-r--r-- 1 nobody nogroup  32G Apr 29 21:18 52215821-ed4d-4283-9aed-9f9cc5acd5bd.vhd
-rw-r--r-- 1 nobody nogroup  38G Apr 22 21:26 4cb6ea42-8450-493a-b6f2-5be5b0594a30.vhd
-rw-r--r-- 1 nobody nogroup 248G Apr 16 00:44 243f50d6-d06a-47af-ab45-e0b8599aac8d.vhd


I observed the same behavior for the root disks of 4 other VMs. So the number of vhds keeps growing
on secondary storage, and one will eventually run out of secondary storage space.

Simple question:

1) Why is CloudStack creating multiple vhd files? Shouldn't it keep only one
vhd on secondary storage, as defined in the snapshot policy?

Any thoughts? As explained earlier, from the GUI I am only seeing last week's snapshot as backed up.
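To compare what the management server reports against what is on disk, I can also list the snapshots via CloudMonkey and peek at the snapshots table; a rough sketch, assuming database access on the management server (the volume UUID below is a placeholder, and table/column names may differ between ACS versions):

# API view of the snapshots for this volume (placeholder UUID)
cloudmonkey list snapshots volumeid=<uuid-of-disk-1015>

# management server's internal records, including rows already marked removed
mysql -u cloud -p -e "select id, name, status, removed from cloud.snapshots where volume_id = 1015;"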



--
Makrand


- proIO GmbH -
Geschäftsführer: Swen Brüseke
Sitz der Gesellschaft: Frankfurt am Main

USt-IdNr. DE 267 075 918
Registergericht: Frankfurt am Main - HRB 86239



