From dev@cloudstack.apache.org Thu Jun 13 13:24:20 2019
Date: Thu, 13 Jun 2019 14:24:07 +0100 (BST)
From: Andrei Mikhailovsky
To: dev@cloudstack.apache.org
Subject: Re: Concurrent Volume Snapshots

Hi Rohit,

I have updated some of those options to increase the timeout to two days rather than the default of a few hours. However, those options only control the timeout of the snapshot process. What I was wondering is whether there is an option to allow simultaneous snapshotting of the volumes on a single VM. I would like all of the VM's volumes to be copied over to the secondary storage at the same time, rather than one after another.
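For reference, global settings like the ones Rohit listed can be changed via the API's updateConfiguration call as well as the UI. Below is a minimal sketch of building such a signed request; the endpoint, the API/secret keys, and the 172800-second (2-day) value are placeholders, and the signing steps follow the documented CloudStack scheme (sort parameters, lowercase, HMAC-SHA1, base64):

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params, secret_key):
    """Build a CloudStack API query string and its signature:
    sort parameters by key, URL-encode the values, then HMAC-SHA1
    the lowercased string with the secret key and base64-encode it."""
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='*')}"
        for k, v in sorted(params.items())
    )
    digest = hmac.new(
        secret_key.encode(), query.lower().encode(), hashlib.sha1
    ).digest()
    return query, base64.b64encode(digest).decode()

# Hypothetical credentials; raise backup.snapshot.wait to 2 days (172800 s)
params = {
    "command": "updateConfiguration",
    "name": "backup.snapshot.wait",
    "value": 172800,
    "apiKey": "YOUR_API_KEY",
    "response": "json",
}
query, signature = sign_request(params, "YOUR_SECRET_KEY")
url = (
    "http://mgmt-server:8080/client/api?"
    f"{query}&signature={urllib.parse.quote(signature)}"
)
print(url)
```

The same change can be made per setting for wait, copy.volume.wait, and vm.job.lock.timeout; a management-server restart may be needed for some settings to take effect.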
Cheers

----- Original Message -----
> From: "Rohit Yadav"
> To: "dev"
> Sent: Thursday, 13 June, 2019 14:02:21
> Subject: Re: Concurrent Volume Snapshots
>
> You can try experimenting with the following global settings:
>
> wait
> backup.snapshot.wait
> copy.volume.wait
> vm.job.lock.timeout
>
> Regards,
> Rohit Yadav
> Software Architect, ShapeBlue
> https://www.shapeblue.com
>
> ________________________________
> From: Andrei Mikhailovsky
> Sent: Thursday, June 13, 2019 6:27:23 PM
> To: dev
> Subject: Concurrent Volume Snapshots
>
> Hello everyone,
>
> I am running into snapshot issues on large volumes. The hypervisor is KVM,
> the storage backend is Ceph (RBD), and the ACS version is 4.11.2. Here is
> my issue:
>
> I've got several VMs with 3-6 volumes of 2 TB each, and a recurring schedule
> set up to take a snapshot of each volume once a month. Snapshotting a single
> volume takes a long time (on the order of 20 hours). As a result, when the
> schedule kicks in, it only manages to snapshot the first volume; the
> snapshots of the other volumes fail due to the async job timeout. From what
> I have discovered, ACS takes only one volume snapshot at a time, and I can't
> find a setting to enable concurrent snapshotting, so it can't snapshot all
> of a VM's volumes at the same time. This is problematic for many reasons,
> the main one being that upon recovery of multiple volumes, the data on them
> will not be consistent.
>
> Is there a way around this? Perhaps there is a setting I can't find that
> disables this odd behaviour of the volume snapshots?
>
> Cheers
>
> Andrei
>
> rohit.yadav@shapeblue.com
> www.shapeblue.com
> Amadeus House, Floral Street, London WC2E 9DP, UK
> @shapeblue
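Since createSnapshot is an asynchronous API call, one client-side workaround worth trying is to fire the calls for all of a VM's volumes at once and collect the job ids, with the caveat that the management server may still serialize the jobs per VM. A rough sketch, using a stub in place of a real CloudStack API client (the command names are real API commands, everything else is a placeholder):

```python
from concurrent.futures import ThreadPoolExecutor

def snapshot_all_volumes(api, vm_id):
    """Issue createSnapshot for every volume of a VM concurrently and
    return the async job responses. createSnapshot returns a job id
    immediately; whether the jobs run in parallel is up to the server."""
    volumes = api("listVolumes", virtualmachineid=vm_id)
    with ThreadPoolExecutor(max_workers=len(volumes)) as pool:
        jobs = list(
            pool.map(lambda v: api("createSnapshot", volumeid=v["id"]), volumes)
        )
    return jobs

# Stub standing in for a real CloudStack API wrapper, for illustration only
def fake_api(command, **kwargs):
    if command == "listVolumes":
        return [{"id": f"vol-{i}"} for i in range(4)]
    return {"jobid": f"job-{kwargs['volumeid']}"}

print(snapshot_all_volumes(fake_api, "vm-1"))
```

Even with the calls dispatched together, the jobs would still have to be polled with queryAsyncJobResult, and the per-VM job lock (vm.job.lock.timeout) may cause them to queue on the server side rather than run truly in parallel.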