Hello! I’m hoping someone can help me troubleshoot the following issue:
I have a client with a 960 GB data volume that contains their VM's Exchange data store. When a snapshot starts, I found that a process named "sparse_dd" is launched on one of my Compute Nodes. The output of "sparse_dd" is then sent through another Compute Node's xapi before being placed into the "snapshot store" on Secondary Storage. This appears to be part of the bottleneck: all of our systems are connected via gigabit links, and creating a snapshot should not take 15+ hours. The following is the behavior I have observed in my environment:
1) A snapshot is started (either manually or on schedule).
2) Compute Node 1 "processes the snapshot" by exposing the VDI, from which "sparse_dd" then creates a "thin-provisioned" snapshot.
3) The output of sparse_dd is delivered over HTTP to xapi on Compute Node 2, where the Management Server has mounted Secondary Storage.
4) Compute Node 2 (receiving the snapshot via xapi) stores the snapshot in the Secondary Storage mount point.
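To put numbers behind the "should not take 15+ hours" claim, here is a back-of-the-envelope check. The figures are my own assumptions (roughly 110 MB/s sustained for a gigabit link after protocol overhead, and a worst-case full copy of the 960 GB volume), not measurements from the environment:

```python
# Rough throughput sanity check -- assumed numbers, not measured values.
volume_gb = 960          # size of the data volume
link_mb_per_s = 110      # assumed sustained GbE throughput after overhead
observed_hours = 15      # how long the snapshot actually takes

volume_mb = volume_gb * 1024

# Time a full, uncompressed copy of the whole volume should take on GbE.
ideal_hours = volume_mb / link_mb_per_s / 3600

# Throughput actually achieved if the full volume moved in 15 hours.
effective_mb_per_s = volume_mb / (observed_hours * 3600)

print(f"ideal full-copy time:  {ideal_hours:.1f} h")        # about 2.5 h
print(f"effective throughput:  {effective_mb_per_s:.0f} MB/s")  # about 18 MB/s
```

Even copying every byte of the volume should finish in roughly 2.5 hours on gigabit; 15+ hours works out to under 20 MB/s, which suggests the limit is sparse_dd or the xapi relay hop rather than the raw link speed.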
Based on this behavior, I have devised the following logic that I believe CloudStack is using:
1) CloudStack creates a “snapshot VDI” via XenServer Pool Master’s API.
2) CloudStack finds a Compute Node that can mount Secondary Storage.
3) CloudStack finds a Compute Node that can run “sparse_dd”.
4) CloudStack uses the available Compute Node to stream the VDI to xapi on the Compute Node that mounted Secondary Storage.
I should mention that the Compute Node that runs sparse_dd and the one that mounts Secondary Storage are not always the same, nor are they consistent between snapshots. The Management Server appears to simply round-robin through the list of Compute Nodes and use the first one that is available.
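The selection behavior I'm describing can be sketched roughly as follows. This is purely illustrative (my guess at the logic, with hypothetical node names), not CloudStack's actual implementation:

```python
def pick_first_available(nodes, is_available, start=0):
    """Round-robin from position `start`, returning (node, next_start).

    Models the behavior I'm seeing: walk the node list in order and
    take the first node that reports itself available.
    """
    n = len(nodes)
    for i in range(n):
        idx = (start + i) % n
        if is_available(nodes[idx]):
            return nodes[idx], (idx + 1) % n
    raise RuntimeError("no Compute Node available")

# Hypothetical example: node1 is busy, so node2 gets picked.
nodes = ["node1", "node2", "node3"]
busy = {"node1"}
node, cursor = pick_first_available(nodes, lambda h: h not in busy)
print(node)  # -> node2
```

If CloudStack runs this kind of selection independently for "node to run sparse_dd" and "node that mounts Secondary Storage", that would explain why two different nodes end up in the path and why the pairing varies from snapshot to snapshot.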
Does anyone have any input on this issue, or insight into how CloudStack/XenServer snapshots operate?