From: Bryan Whitehead
To: cloudstack-users@incubator.apache.org
Date: Fri, 1 Feb 2013 16:02:46 -0800
Subject: SystemVM offline causes extreme snapshot bloat

For some reason my storage systemVM was in a state where things "worked" but automated snapshots were not working. The step where libvirt created a snapshot on the primary storage succeeded, but the step that copies the snapshot parent to the secondary storage did not. I got automated snapshots working again by rebooting the storage systemVM. However, when I look at libvirt I can see there are many snapshots left on the primary storage: a 12G filesystem has a qcow2 file of 220GB.
virsh snapshot-list --parent i-3-14-VM
 Name                                                         Creation Time              State    Parent
 ------------------------------------------------------------
 24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130125025313  2013-01-25 02:53:13 +0000  running
 24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130126025313  2013-01-26 02:53:13 +0000  running  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130125025313
 24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130127025313  2013-01-27 02:53:13 +0000  running  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130126025313
 24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130128025313  2013-01-28 02:53:13 +0000  running  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130127025313
 24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130129025313  2013-01-29 02:53:13 +0000  running  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130128025313
 24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130130025313  2013-01-30 02:53:13 +0000  running  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130129025313
 24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130131025313  2013-01-31 02:53:13 +0000  running  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130130025313
 24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130201025313  2013-02-01 02:53:13 +0000  running  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130131025313
 24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130201223643  2013-02-01 22:36:43 +0000  running  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130201025313

The actual qcow2 file looks like a much bigger wreck:

qemu-img info /gluster/qcow2/images/7ec192c1-ecee-4f5e-8e3d-567f7878ceb1
image: /gluster/qcow2/images/7ec192c1-ecee-4f5e-8e3d-567f7878ceb1
file format: qcow2
virtual size: 100G (107374182400 bytes)
disk size: 211G
cluster_size: 65536
backing file: /gluster/qcow2/images/cb151441-209c-4f43-a2a4-b390c9bb9768 (actual path: /gluster/qcow2/images/cb151441-209c-4f43-a2a4-b390c9bb9768)
Snapshot list:
ID  TAG                                                          VM SIZE  DATE                 VM CLOCK
1   24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20121230025415  1.0G     2012-12-30 02:54:16  785:39:01.680
2   24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20121231025415  1.0G     2012-12-31 02:54:16  809:35:35.417
3   24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130101025415  1.0G     2013-01-01 02:54:16  833:24:20.046
4   24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130102025415  1.0G     2013-01-02 02:54:15  857:13:02.766
5   24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130103025415  1.0G     2013-01-03 02:54:15  881:01:42.467
6   24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130104025415  1.0G     2013-01-04 02:54:15  904:50:29.840
7   24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130105025415  1.0G     2013-01-05 02:54:16  928:39:17.670
8   24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130106025415  1.0G     2013-01-06 02:54:16  952:28:07.139
9   24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130107025415  1.1G     2013-01-07 02:54:16  976:16:58.396
10  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130108025415  1.1G     2013-01-08 02:54:15  1000:05:45.624
11  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130109025415  1.1G     2013-01-09 02:54:15  1023:54:25.482
12  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130110025415  1.1G     2013-01-10 02:54:16  1047:43:17.503
13  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130111025415  1.1G     2013-01-11 02:54:16  1071:32:03.747
14  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130112025415  1.1G     2013-01-12 02:54:16  1095:20:52.220
15  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130113025415  1.1G     2013-01-13 02:54:16  1119:09:43.387
16  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130114025415  1.1G     2013-01-14 02:54:16  1142:58:32.657
17  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130115025415  1.1G     2013-01-15 02:54:16  1166:47:20.084
18  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130116025416  1.1G     2013-01-16 02:54:16  1190:36:06.729
19  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130117025416  1.1G     2013-01-17 02:54:16  1214:24:53.904
20  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130118025415  1.1G     2013-01-18 02:54:16  1238:13:35.640
21  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130119025415  1.1G     2013-01-19 02:54:16  1262:02:17.839
22  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130120025415  1.1G     2013-01-20 02:54:16  1285:51:01.114
23  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130121025415  1.1G     2013-01-21 02:54:16  1309:39:49.956
24  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130122025416  1.1G     2013-01-22 02:54:16  1333:28:39.703
25  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130123025415  1.1G     2013-01-23 02:54:16  1357:17:25.426
26  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130124025416  1.1G     2013-01-24 02:54:16  1381:06:04.731
27  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130125025313  517M     2013-01-25 02:53:13  04:46:52.307
28  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130126025313  678M     2013-01-26 02:53:13  28:44:35.134
29  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130127025313  918M     2013-01-27 02:53:13  52:42:00.249
30  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130128025313  1.1G     2013-01-28 02:53:13  76:38:55.428
31  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130129025313  1.2G     2013-01-29 02:53:13  100:35:27.978
32  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130130025313  1.2G     2013-01-30 02:53:13  124:31:55.179
33  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130131025313  1.3G     2013-01-31 02:53:13  148:28:17.744
34  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130201025313  1.3G     2013-02-01 02:53:13  172:24:33.630
35  24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20130201223643  1.3G     2013-02-01 22:36:43  192:04:17.498

Suggestions on how I can clean up these extra snapshots?

-Bryan
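PS: the kind of cleanup I have in mind would look something like the sketch below (untested, dry run only): internal qcow2 snapshots can be dropped one at a time with "qemu-img snapshot -d TAG IMAGE" while the VM is stopped, or with "virsh snapshot-delete" against the running domain. The script just turns a pasted two-row sample of the "qemu-img snapshot -l" listing above into the matching delete commands and prints them; nothing is deleted, and CloudStack's own snapshot records would presumably still need separate attention.

```shell
# Dry-run sketch: turn a "qemu-img snapshot -l" listing into the
# matching "qemu-img snapshot -d" delete commands. The sample below is
# a two-row excerpt pasted from the real listing; in practice you would
# feed in the live output of: qemu-img snapshot -l "$IMG"
# Nothing is deleted here -- the commands are only echoed.
IMG=/gluster/qcow2/images/7ec192c1-ecee-4f5e-8e3d-567f7878ceb1

sample='Snapshot list:
ID  TAG                                                          VM SIZE  DATE                 VM CLOCK
1   24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20121230025415  1.0G     2012-12-30 02:54:16  785:39:01.680
2   24661080-ae91-468d-b8f0-030b5b36f786_ROOT-14_20121231025415  1.0G     2012-12-31 02:54:16  809:35:35.417'

# Skip the two header lines, keep the TAG column, print delete commands.
printf '%s\n' "$sample" | awk 'NR > 2 { print $2 }' | while read -r tag; do
    echo "qemu-img snapshot -d $tag $IMG"
done
```

Running it for real would mean dropping the echo, feeding the live listing instead of the pasted sample, and making sure the VM is shut down first (qemu-img must not race a running guest); "virsh snapshot-delete i-3-14-VM NAME" would be the equivalent while the domain is up.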