cloudstack-issues mailing list archives

From "Joris van Lieshout (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CLOUDSTACK-7319) Copy Snapshot command too heavy on XenServer Dom0 resources when using dd to copy incremental snapshots
Date Tue, 12 Aug 2014 11:23:11 GMT

    [ https://issues.apache.org/jira/browse/CLOUDSTACK-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093971#comment-14093971
] 

Joris van Lieshout commented on CLOUDSTACK-7319:
------------------------------------------------

We believe Hotfix 4 for XS62 SP1 contains a similar fix, but for the sparse dd process used
for the first copy of a chain.

http://support.citrix.com/article/CTX140417

== begin quote ==
Copying a virtual disk between SRs uses the unbuffered I/O to avoid polluting the pagecache
in the Control Domain (dom0). This reduces the dom0 vCPU overhead and allows the pagecache
to work more effectively for other operations.
== end quote ==
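To illustrate the change being discussed (adding O_DIRECT so dd bypasses the dom0 pagecache), here is a minimal sketch of a buffered vs. unbuffered copy. The file names and sizes are made up for illustration; they are not the actual paths used by the vmopsSnapshot plugin.

```shell
# Create a hypothetical 1 MiB source file standing in for a snapshot VHD
dd if=/dev/zero of=./snapshot_src.img bs=4096 count=256 2>/dev/null

# Original behavior: buffered copy; reads and writes pass through the
# dom0 pagecache, evicting cache entries other workloads depend on:
#   dd if=./snapshot_src.img of=./snapshot_dst.img bs=4096

# Patched behavior: O_DIRECT on both input and output bypasses the
# pagecache entirely (O_DIRECT requires block-aligned I/O, hence the
# explicit bs=4096)
dd if=./snapshot_src.img of=./snapshot_dst.img bs=4096 \
   iflag=direct oflag=direct 2>/dev/null

# Verify the copy is byte-identical
cmp ./snapshot_src.img ./snapshot_dst.img && echo "copy OK"
```

The trade-off is that direct I/O can be slower for the copy itself, but it stops the snapshot copy from competing with the rest of dom0 for cache.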

> Copy Snapshot command too heavy on XenServer Dom0 resources when using dd to copy incremental
snapshots
> -------------------------------------------------------------------------------------------------------
>
>                 Key: CLOUDSTACK-7319
>                 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7319
>             Project: CloudStack
>          Issue Type: Bug
>      Security Level: Public(Anyone can view this level - this is the default.) 
>          Components: Snapshot, XenServer
>    Affects Versions: 4.0.0, 4.0.1, 4.0.2, 4.1.0, 4.1.1, 4.2.0, Future, 4.2.1, 4.3.0,
4.4.0, 4.5.0, 4.3.1, 4.4.1
>            Reporter: Joris van Lieshout
>            Priority: Critical
>
> We noticed that the dd process was way too aggressive on Dom0, causing all kinds of problems on a XenServer with medium workloads.
> ACS uses the dd command to copy incremental snapshots to secondary storage. This process is too heavy on Dom0 resources; it impacts DomU performance and can even lead to domain freezes (including Dom0) of more than a minute. We've found that this is because the Dom0 kernel caches the read and write operations of dd.
> Some of the issues we have seen as a consequence of this are:
> - DomU performance degradation/freezes
> - OVS freezing and not forwarding any traffic
> - including LACPDUs, resulting in the bond going down
> - keepalived heartbeat packets between RRVMs not being sent/received, resulting in a flapping RRVM master state
> - breaking snapshot copy processes
> - the XenServer heartbeat script reaching its timeout and fencing the server
> - poolmaster connection loss
> - ACS marking the host as down and fencing the instances even though they are still running on the original host, resulting in the same instance running on two hosts in one cluster
> - VHD corruption as a result of some of the issues mentioned above
> We've developed a patch to the XenServer script /etc/xapi.d/plugins/vmopsSnapshot that adds the direct flag to both the input and output files (iflag=direct oflag=direct).
> Our tests have shown that Dom0 load during snapshot copy is way lower.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
