cloudstack-dev mailing list archives

From Rafael Weingärtner <rafaelweingart...@gmail.com>
Subject Re: Squeeze another PR (#2398) in 4.11 milestone
Date Tue, 09 Jan 2018 18:01:31 GMT
Khosrow, I have seen this issue as well. It happens when there are problems
transferring the snapshot from the primary to the secondary storage.
However, we need to clarify one thing: we already delete snapshots on the
primary storage, but we always leave the last one behind. The problem is
that if an error happens during the transfer of the VHD from the primary to
the secondary storage, the failed snapshot VDI is left behind on the
primary storage (for XenServer). These failed snapshots can accumulate over
time and cause the problem you described, because XenServer will not be
able to coalesce the VHD files of the VM. Therefore, what you are
addressing in this PR are the cases where an exception happens during the
transfer from primary to secondary storage.
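
To make the mechanism concrete, here is a minimal sketch (not the actual
CloudStack or PR #2398 code; all function names here are hypothetical) of
the cleanup pattern described above: if copying the snapshot VHD from
primary to secondary storage raises an exception, the snapshot VDI on the
primary storage is deleted instead of being left behind, so stale VHDs
cannot accumulate and block XenServer's coalescing:

```python
# Hedged illustration of the cleanup-on-failure pattern; the helper
# callables (transfer, delete_on_primary) are stand-ins, not real
# CloudStack APIs.

class TransferError(Exception):
    """Raised when the VHD copy from primary to secondary storage fails."""


def backup_snapshot(vdi_uuid, transfer, delete_on_primary):
    """Copy a snapshot VDI to secondary storage.

    On failure, remove the now-useless snapshot VDI from primary
    storage so it does not pile up in the VDI chain.
    Returns True on success, False if the transfer failed.
    """
    try:
        transfer(vdi_uuid)
        return True
    except TransferError:
        # Without this cleanup, the failed snapshot VDI would remain
        # on primary storage and eventually prevent VHD coalescing.
        delete_on_primary(vdi_uuid)
        return False


# Usage with stub callables simulating a failed transfer:
deleted = []

def failing_transfer(vdi_uuid):
    raise TransferError("error while copying VHD to secondary storage")

ok = backup_snapshot("vdi-123", failing_transfer, deleted.append)
```
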

On Tue, Jan 9, 2018 at 3:25 PM, Khosrow Moossavi <kmoossavi@cloudops.com>
wrote:

> Hi community
>
> We've found [1] and fixed [2] an issue in 4.10 regarding snapshots
> remaining on primary storage (XenServer + Swift), which causes the VDI
> chain to fill up over time so that the user cannot take another snapshot.
>
> Please include this in 4.11 milestone if you see fit.
>
> [1]: https://issues.apache.org/jira/browse/CLOUDSTACK-10222
> [2]: https://github.com/apache/cloudstack/pull/2398
>
> Thanks
> Khosrow
>



-- 
Rafael Weingärtner
