cloudstack-users mailing list archives

From Jeremy Peterson <jpeter...@acentek.net>
Subject RE: Recreating SystemVM's
Date Fri, 30 Jun 2017 19:00:34 GMT
I checked again 

'118', '5', '1', '2017-06-30 12:59:10', NULL, NULL, '100', 'DOWNLOADED', NULL, '108bc844-305d-445d-88e7-accc3c846374', '108bc844-305d-445d-88e7-accc3c846374', '0', '0', 'Ready', '2', '2017-06-30 13:00:47'
'119', '6', '1', '2017-06-30 13:03:32', NULL, NULL, '100', 'DOWNLOADED', NULL, '5e848189-2cfb-4599-9f0e-87a3a720374a', '5e848189-2cfb-4599-9f0e-87a3a720374a', '0', '0', 'Ready', '2', '2017-06-30 13:04:48'
'120', '7', '1', '2017-06-30 13:41:12', NULL, NULL, '100', 'DOWNLOADED', NULL, 'fbfabe33-d3f9-49b9-a2b0-1c12c9bae1ff', 'fbfabe33-d3f9-49b9-a2b0-1c12c9bae1ff', '0', '0', 'Ready', '2', '2017-06-30 13:42:25'
'121', '8', '1', '2017-06-30 18:01:08', NULL, NULL, '100', 'DOWNLOADED', NULL, 'd3428251-926f-4ebb-80c5-1d7743bdfb83', 'd3428251-926f-4ebb-80c5-1d7743bdfb83', '0', '0', 'Ready', '2', '2017-06-30 18:02:17'

All four are back with new dates.
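A quick way to sanity-check those rows without eyeballing each column (a sketch only; the field positions are an assumption based on the column order pasted above, where field 8 is the download state and field 14 is the state):

```shell
#!/bin/sh
# Count how many of the pasted template_spool_ref rows are DOWNLOADED/Ready.
# "\047" is the octal escape for a single quote inside the awk string.
spool_check() {
  awk -F', ' '$8 == "\047DOWNLOADED\047" && $14 == "\047Ready\047" { n++ }
              END { print n " of " NR " rows DOWNLOADED/Ready" }' <<'EOF'
'118', '5', '1', '2017-06-30 12:59:10', NULL, NULL, '100', 'DOWNLOADED', NULL, '108bc844-305d-445d-88e7-accc3c846374', '108bc844-305d-445d-88e7-accc3c846374', '0', '0', 'Ready', '2', '2017-06-30 13:00:47'
'119', '6', '1', '2017-06-30 13:03:32', NULL, NULL, '100', 'DOWNLOADED', NULL, '5e848189-2cfb-4599-9f0e-87a3a720374a', '5e848189-2cfb-4599-9f0e-87a3a720374a', '0', '0', 'Ready', '2', '2017-06-30 13:04:48'
'120', '7', '1', '2017-06-30 13:41:12', NULL, NULL, '100', 'DOWNLOADED', NULL, 'fbfabe33-d3f9-49b9-a2b0-1c12c9bae1ff', 'fbfabe33-d3f9-49b9-a2b0-1c12c9bae1ff', '0', '0', 'Ready', '2', '2017-06-30 13:42:25'
'121', '8', '1', '2017-06-30 18:01:08', NULL, NULL, '100', 'DOWNLOADED', NULL, 'd3428251-926f-4ebb-80c5-1d7743bdfb83', 'd3428251-926f-4ebb-80c5-1d7743bdfb83', '0', '0', 'Ready', '2', '2017-06-30 18:02:17'
EOF
}
spool_check
```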

I am going to restart cloudstack-management.

Do I need to delete anything else?  I noticed in the web UI that the SystemVM template for XenServer did not delete.

Jeremy


-----Original Message-----
From: Jeremy Peterson [mailto:jpeterson@acentek.net] 
Sent: Friday, June 30, 2017 8:45 AM
To: users@cloudstack.apache.org
Subject: RE: Recreating SystemVM's

	118	5	1	2017-06-30 12:59:10			100	DOWNLOADED		108bc844-305d-445d-88e7-accc3c846374	108bc844-305d-445d-88e7-accc3c846374	0	0	Ready	2	2017-06-30 13:00:47
	119	6	1	2017-06-30 13:03:32			100	DOWNLOADED		5e848189-2cfb-4599-9f0e-87a3a720374a	5e848189-2cfb-4599-9f0e-87a3a720374a	0	0	Ready	2	2017-06-30 13:04:48
	120	7	1	2017-06-30 13:41:12			100	DOWNLOADED		fbfabe33-d3f9-49b9-a2b0-1c12c9bae1ff	fbfabe33-d3f9-49b9-a2b0-1c12c9bae1ff	0	0	Ready	2	2017-06-30 13:42:25

The 3rd template has been created.

Jeremy


-----Original Message-----
From: Jeremy Peterson [mailto:jpeterson@acentek.net] 
Sent: Friday, June 30, 2017 8:11 AM
To: users@cloudstack.apache.org
Subject: RE: Recreating SystemVM's

We are seeing the templates being re-downloaded at this time.

'118', '5', '1', '2017-06-30 12:59:10', NULL, NULL, '100', 'DOWNLOADED', NULL, '108bc844-305d-445d-88e7-accc3c846374', '108bc844-305d-445d-88e7-accc3c846374', '0', '0', 'Ready', '2', '2017-06-30 13:00:47'
'119', '6', '1', '2017-06-30 13:03:32', NULL, NULL, '100', 'DOWNLOADED', NULL, '5e848189-2cfb-4599-9f0e-87a3a720374a', '5e848189-2cfb-4599-9f0e-87a3a720374a', '0', '0', 'Ready', '2', '2017-06-30 13:04:48'

Two have completed.


Jeremy


-----Original Message-----
From: Jeremy Peterson [mailto:jpeterson@acentek.net] 
Sent: Wednesday, June 28, 2017 6:03 PM
To: users@cloudstack.apache.org
Subject: Re: Recreating SystemVM's

I will check that out in the morning.

Sorry, I sent my email before seeing your response.

Here's to hoping that deleting the lines in the DB will recreate the volumes and that nothing breaks. I will back up the DB lines I delete so I know what to re-add if all **** hits the fan.

Jeremy
________________________________________
From: Dag Sonstebo <Dag.Sonstebo@shapeblue.com>
Sent: Wednesday, June 28, 2017 5:36 PM
To: users@cloudstack.apache.org
Subject: Re: Recreating SystemVM's

- See my email from earlier today, it outlines how to troubleshoot and potentially solve your issue.
- 8a4039f2-bb71-11e4-8c76-0050569b1662 is your system VM template UUID from the vm_template table.
- 886aed91-da26-4337-9d60-a40e628eb16b is most likely your VM volume ID, check your volumes table.
- d4085d91-22fa-4965-bfa7-d1a1800f6aa7 is the path where CloudStack believes the system VM template should be, from template_spool_ref. It's not there, we can go another 50 emails down the line - it will still not be there. Again see my mail from earlier today.
- Ignore anything to do with DEPRECATED. The storage SR UUIDs are NOT deprecated, Citrix deprecated the "host" field for the "host-uuid" field a long time ago and marked it as such. Google it.
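Dag's third point can be sketched as a path check. The SR uuid and template path below are the values quoted in this thread; note (an assumption worth flagging) that on LVM-based iSCSI SRs the VDI lives as a logical volume rather than a file under /var/run/sr-mount, so the listing may legitimately come up empty there:

```shell
#!/bin/sh
# Where CloudStack expects the system VM template on primary storage:
#   /var/run/sr-mount/<SR uuid>/<template_spool_ref local path>
sr_uuid="2a00a50b-764b-ce7f-589c-c67b353957da"   # FlexSAN1-LUN0 SR uuid (from this thread)
tmpl="d4085d91-22fa-4965-bfa7-d1a1800f6aa7"      # local path column from template_spool_ref
expected="/var/run/sr-mount/${sr_uuid}/${tmpl}"
echo "expected template location: ${expected}.*"
# On a pool member:  ls -l "${expected}".*   (no match means the file really is gone)
```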

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 28/06/2017, 18:35, "Jeremy Peterson" <jpeterson@acentek.net> wrote:

    OK, so since this email yesterday nothing has changed.  Is there somewhere that the storage UUID is set to DEPRECATED in XenServer while CloudStack still knows of that UUID?

    In my case I'll pull the FlexSan1-LUN0 UUID of 2a00a50b-764b-ce7f-589c-c67b353957da in XenCenter.  And cloudstack says

    2017-06-26 10:32:30,784 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-12:ctx-17d037ae) SR retrieved for FlexSAN1-LUN0
    2017-06-26 10:32:30,793 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-12:ctx-17d037ae) Checking FlexSAN1-LUN0 or SR 2a00a50b-764b-ce7f-589c-c67b353957da on XS[130e2063-0ec1-4150-a61e-ff9a526eb842-10.90.2.116]


    Ok so the SR and UUID's are correct.

    2017-06-28 11:16:13,043 DEBUG [c.c.a.t.Request] (Work-Job-Executor-97:ctx-8dea9e5e job-448/job-198627 ctx-040fa9e4) Seq 18-6819294260769085930: Sending  { Cmd , MgmtId: 345050411715, via: 18(Flex-Xen5.flexhost.local), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"d4085d91-22fa-4965-bfa7-d1a1800f6aa7","origUrl":"http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2","uuid":"8a4039f2-bb71-11e4-8c76-0050569b1662","id":1,"format":"VHD","accountId":1,"checksum":"2b15ab4401c2d655264732d3fc600241","hvm":false,"displayText":"SystemVM Template (XenServer)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN0","id":8,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN0/?ROLE=Primary&STOREUUID=FlexSAN1-LUN0"}},"name":"routing-1","hypervisorType":"XenServer"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"886aed91-da26-4337-9d60-a40e628eb16b","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN0","id":8,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN0/?ROLE=Primary&STOREUUID=FlexSAN1-LUN0"}},"name":"ROOT-32569","size":2689602048,"volumeId":37488,"vmName":"s-32569-VM","accountId":1,"format":"VHD","provisioningType":"THIN","id":37488,"deviceId":0,"hypervisorType":"XenServer"}},"executeInSequence":true,"options":{},"wait":0}}] }

    2017-06-28 11:16:13,043 DEBUG [c.c.a.t.Request] (Work-Job-Executor-97:ctx-8dea9e5e job-448/job-198627 ctx-040fa9e4) Seq 18-6819294260769085930: Executing:  { Cmd , MgmtId: 345050411715, via: 18(Flex-Xen5.flexhost.local), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"d4085d91-22fa-4965-bfa7-d1a1800f6aa7","origUrl":"http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2","uuid":"8a4039f2-bb71-11e4-8c76-0050569b1662","id":1,"format":"VHD","accountId":1,"checksum":"2b15ab4401c2d655264732d3fc600241","hvm":false,"displayText":"SystemVM Template (XenServer)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN0","id":8,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN0/?ROLE=Primary&STOREUUID=FlexSAN1-LUN0"}},"name":"routing-1","hypervisorType":"XenServer"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"886aed91-da26-4337-9d60-a40e628eb16b","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN0","id":8,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN0/?ROLE=Primary&STOREUUID=FlexSAN1-LUN0"}},"name":"ROOT-32569","size":2689602048,"volumeId":37488,"vmName":"s-32569-VM","accountId":1,"format":"VHD","provisioningType":"THIN","id":37488,"deviceId":0,"hypervisorType":"XenServer"}},"executeInSequence":true,"options":{},"wait":0}}] }
    2017-06-28 11:16:13,050 DEBUG [c.c.h.x.r.XenServerStorageProcessor] (DirectAgent-25:ctx-b958a2b9) Catch Exception com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid: d4085d91-22fa-4965-bfa7-d1a1800f6aa7 failed due to The uuid you supplied was invalid.


    What the heck, why is uuid "8a4039f2-bb71-11e4-8c76-0050569b1662" being used? Or maybe I should be looking at uuid "886aed91-da26-4337-9d60-a40e628eb16b" - that's still not the UUID from above.  Even the last uuid displayed, d4085d91-22fa-4965-bfa7-d1a1800f6aa7, is not the UUID the SR retrieved for FlexSAN1-LUN0, which it is clearly calling in the command.

    Does anyone think I should move all my VMs to the other 3 LUNs, then remove that primary storage from CloudStack, remove the storage repository from XenCenter, restart cloudstack-management (just for kicks), add it back to XenCenter, and add it back to CloudStack? Then see if it pulls the new UUID and everything works?

    Anyone ???

    I know Dag has been helping but does anyone else have any suggestions?

    Thanks Dag for all the help.


    Jeremy


    -----Original Message-----
    From: Jeremy Peterson [mailto:jpeterson@acentek.net]
    Sent: Tuesday, June 27, 2017 11:15 AM
    To: users@cloudstack.apache.org
    Subject: RE: Recreating SystemVM's

    http://prntscr.com/for3zv

    My LUN's don't show errors and going through each LUN state shows OK the same as FlexSAN1-LUN0 shows in the above.

    iSCSI

    When I go into XenCenter I look at the storage UUID's and they list the following.

    FlexSAN1-LUN0       2a00a50b-764b-ce7f-589c-c67b353957da
    FlexSAN1-LUN1       befd4536-fdf1-6ab6-0adb-19ae532e0ee8
    FlexSAN2-LUN0       94d4494c-1317-4ffc-f0e6-a9210b0a0daf
    FlexSAN2-LUN1       469b6dcd-8466-3d03-de0e-cc3983e1b6e2

    When I issue the xe sr-list params=all uuid="insert the UUID's from above" let's just do it for one for now to see
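    The same check can be looped over all four SR uuids listed above in one go (a sketch; the actual xe call is commented out so this is safe to paste, and should be uncommented on a pool member):

```shell
#!/bin/sh
# Run "xe sr-list params=all uuid=..." for each CloudStack primary storage SR.
check_srs() {
  for sr in \
      2a00a50b-764b-ce7f-589c-c67b353957da \
      befd4536-fdf1-6ab6-0adb-19ae532e0ee8 \
      94d4494c-1317-4ffc-f0e6-a9210b0a0daf \
      469b6dcd-8466-3d03-de0e-cc3983e1b6e2; do
    echo "== SR ${sr} =="
    # xe sr-list params=all uuid="${sr}"   # uncomment on a XenServer pool member
  done
}
check_srs
```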


    [root@Flex-Xen5 ~]# xe sr-list params=all uuid=2a00a50b-764b-ce7f-589c-c67b353957da
    uuid ( RO)                    : 2a00a50b-764b-ce7f-589c-c67b353957da
                  name-label ( RW): FlexSAN1-LUN0
            name-description ( RW): iSCSI SR [10.83.0.2 (*; LUN 0: 21J003K: 2792.1 GB (DELL))]
                        host ( RO): <shared>
          allowed-operations (SRO): VDI.create; VDI.snapshot; PBD.create; PBD.destroy; plug; update; VDI.destroy; scan; VDI.clone; VDI.resize; unplug
          current-operations (SRO):
                        VDIs (SRO): 05ada428-0603-4e64-8511-44b21eac4339; dd826a7d-f061-4f51-a1c0-e8f02f3a03a1; 8d5eaab3-4625-44fb-98d4-a03bd6195871; b3773d2d-fbd2-4ecb-a40d-f3509cae821a; 89e39540-52be-4625-b986-0a28f9b425f3; 97b38c2c-4491-4159-af60-d0be3632e015; b5c52291-3843-4ded-96a1-57c842e7cda2; 333d4b31-588a-4bc7-b526-a7949bf8ead2; dd5e4c19-332d-4f81-9cf9-ac19cb9d5f36; add51d49-6864-42f3-a7c1-8dcd940d7727; 1a7d06c4-75df-4ac9-942f-c4a738e7911b; 079a3d6f-441b-4875-8a47-e2bbd0780d7d; 4e94e067-65db-44b3-9d8f-13b69bc9d5e3; 14b1921d-cbe5-478a-9484-a8ddcda2105b; 7fd37916-9d08-41a1-9f57-a323f6675ccd; ba531bb1-fe42-4c48-9565-550cc7bd14ba; 5097eaf3-f77d-45b5-aae1-c03b903f4cc2; f42420ba-5815-4afb-b9d9-1fc6fbdcf93f; c3315fa9-5eb0-46e6-b0f4-9c5aaffae51e; 64431128-5f0c-42a9-a331-ee8c6410ee92; 641f9f81-8ff0-4a8c-9fa8-38a01868400e; 4e0b3ed9-1811-4edd-a391-64d91280a086; 9b40b350-f1cb-43f0-94ac-80950914bc8b; 2a2a6af0-d04f-42c6-8699-9a0e4732432d; 8fb1d6d3-b18c-46e0-b88c-74fa9c0ac422; da902bf0-54bc-4532-a96b-189faf87bfb2; 23f4637b-a094-427a-930f-5d86390fdf32; d9f0e205-880d-4901-a1be-348695d41c9a; 76d5a986-6ba5-4900-bc4e-cc1448e829c4; 265ac46c-cb40-413e-84a4-72c3667441e1; 2cbeeb77-edf4-4763-9962-57ad41d8e5ae; f0d93155-31e3-475b-9d3f-7d940a12a630; 15e0cf67-4101-4f3a-8e2e-6e47a2ceb514; a734a50b-522d-4fc1-80e3-813240b479bd; 749ecb9d-afe0-4bca-b1e2-7eda4a9ac85d; 0518285e-eff8-4b28-93ce-2499fc695c90; cf05e792-6dea-49fb-9baf-375ace73a7c0; 0e96f0c8-bb57-4f13-9e47-d4f0f787ce82; ba40c202-37df-4ef8-91dd-9d6fbd12e863; ad443b41-c51f-4e90-9d22-b60a7788327d; 136ce79a-d59c-4d70-a940-0999b5c98024; 7c58e49d-bb62-4b99-8b76-0fd5a42a60ee; cc36d832-c82d-42dc-9885-9698f2ce9b6f; ea1fd35d-e2a6-4243-be95-40d6d18f1fee; 4d5c3042-e835-4f6d-a427-000b6236dfb5; 7dea7d36-7945-4245-9c66-8536e8f340dd; 809e445d-fab3-49e9-bd67-b6db41cdba2a; 89be4309-ca08-4083-93eb-8bf9da665970; 4ca2f46b-5e1a-47f0-b735-e49713ec6893; 531cc147-67ff-4c08-ba66-07e7a7530d58; 9dd4b7e1-4df5-41c5-a502-38206790790e; 
23cf3395-5ad5-42c6-92e5-3b7a3d3f78ad; 9f723f6a-33a1-4bd0-b3ae-de15974e143a; 372c1410-cb08-403e-9623-317a271ac718; 75bc80b0-9404-412c-a733-94ec404b11c4; 5a7620fd-95db-4581-8bb9-fcadcca700d0; d20fda45-95f5-41cf-b6b0-993dce7b35a6; a29d44b4-9301-4445-a49c-3aef277829bd; 9e2e76c0-9ac8-4459-8757-7b4a036b6ff1; b91c3c4d-720e-4073-bf17-7fc7b91f8ca2; cf0ffec0-d55a-48c7-9e88-a6e2a09d08a8; 874c097a-224c-4007-b8bd-79386ef9e40d; 47b2f5cd-fbf7-46be-8a7b-5e7afff67f45; eb9e49df-3daa-4b39-961b-1198c08fddb2; e4d3bd56-8114-495d-bc32-a7bc2863576d; bbc045f6-b52d-4844-956a-f577dccf886d; c7850ed9-517d-48c4-8119-944f18784bd4; c56a6379-dcc7-4619-9002-6eeb3fa610a5; d0959518-1825-4096-8aaf-8928e265447d; 18c4dc9c-c8b3-4530-b540-46d6e6f59658; 2c8c6910-f798-43ad-90a9-ed900cdf01b5; f458506c-f7b1-49b3-a89b-61984c578b13; 97e622b9-8663-4252-be23-25809ce8e09e; 37e6a084-ef4c-4d02-ae00-4636250f4a22; 67951972-1e90-4bff-b327-6a32a70d03bf
                        PBDs (SRO): 914bb245-5521-8199-9d49-ffa2ddecde97; 2b9d9b41-9af0-a951-94fd-c8045e342857; a5921971-5f25-56b2-3e1c-98ac4f16ea99; 71700c22-a739-b846-0870-811a35b9a5ef; 79bee0e4-b6bc-65bb-d823-75dfb40b3ae0; 7cd09f16-0fab-631e-cbd2-b1bee9cb3e50
          virtual-allocation ( RO): 799153324032
        physical-utilisation ( RO): 1040518742016
               physical-size ( RO): 2997937504256
                        type ( RO): lvmoiscsi
                content-type ( RO):
                      shared ( RW): true
               introduced-by ( RO): <not in database>
                other-config (MRW):
                   sm-config (MRO): allocation: thick; use_vhd: true; multipathable: true; devserial:
                       blobs ( RO):
         local-cache-enabled ( RO): false
                        tags (SRW):


    Now I don't see mount points, but I see a ton of VDIs and a few PBDs.

    None of those match up with the /var/run/sr-mount/ folders

    Now I issued xe pbd-list params=all

    https://pastebin.com/RCpP62Wp

    And pasted that all above.

    I see that show up as sr-uuid, but it's under host ( RO) [DEPRECATED]

    [root@Flex-Xen5 xapi]# xe pbd-list params=all | grep c8dc1be9-a9f8-6592-bd03-ee3a7a59dea7
                   sr-uuid ( RO): c8dc1be9-a9f8-6592-bd03-ee3a7a59dea7
                   sr-uuid ( RO): c8dc1be9-a9f8-6592-bd03-ee3a7a59dea7
                   sr-uuid ( RO): c8dc1be9-a9f8-6592-bd03-ee3a7a59dea7
                   sr-uuid ( RO): c8dc1be9-a9f8-6592-bd03-ee3a7a59dea7
                   sr-uuid ( RO): c8dc1be9-a9f8-6592-bd03-ee3a7a59dea7
                   sr-uuid ( RO): c8dc1be9-a9f8-6592-bd03-ee3a7a59dea7

    You can see the below shows DEPRECATED.

    uuid ( RO)                  : 4d557776-e8ad-fe27-01d3-04177bda7a9d
         host ( RO) [DEPRECATED]: 3630a571-847f-4881-a7ce-0230213540ea
                 host-uuid ( RO): 3630a571-847f-4881-a7ce-0230213540ea
           host-name-label ( RO): Flex-Xen3.flexhost.local
                   sr-uuid ( RO): c8dc1be9-a9f8-6592-bd03-ee3a7a59dea7
             sr-name-label ( RO): MGMT-HA
             device-config (MRO): server: 10.90.2.51; serverpath: /ha; options:
        currently-attached ( RO): true
              other-config (MRW): storage_driver_domain: OpaqueRef:e9ebe476-92dc-3bdc-2231-01c6aad8e0f4

    But that is for my HA and even that is checked fine.

    So I searched up a little more.

    uuid ( RO)                  : 8e832f3d-2131-4359-1491-5fa550b9192a
         host ( RO) [DEPRECATED]: b34f086e-fabf-471e-9feb-8f54362d7d0f
                 host-uuid ( RO): b34f086e-fabf-471e-9feb-8f54362d7d0f
           host-name-label ( RO): Flex-Xen4.flexhost.local
                   sr-uuid ( RO): befd4536-fdf1-6ab6-0adb-19ae532e0ee8
             sr-name-label ( RO): FlexSAN1-LUN1
             device-config (MRO): multihomelist: 10.83.0.2:3260,10.83.1.2:3260,10.83.1.3:3260,10.83.0.3:3260; targetIQN: iqn.1984-05.com.dell:powervault.md3200i.6d4ae520007d01af000000004f918538; target: 10.83.0.2; SCSIid: 36d4ae520007d01af0000f06753f6b6d9; port: 3260
        currently-attached ( RO): true
              other-config (MRW): storage_driver_domain: OpaqueRef:686f8ed6-a6e9-1e28-9406-d28281145481; iscsi_sessions: 4; mpath-36d4ae520007d01af0000f06753f6b6d9: [4, 4]; multipathed: true

    I see DEPRECATED a lot in that output. So the above is Xen4 doing FlexSAN1-LUN1.


    I'll post xe pbd-list params=all here since it is a lot of data


    Flex-XEN1.flexhost.local    xe pbd-list params=all
    https://pastebin.com/sV3WVDNm
    Flex-XEN2.flexhost.local    xe pbd-list params=all
    https://pastebin.com/U5v5XHWr
    Flex-XEN3.flexhost.local    xe pbd-list params=all
    https://pastebin.com/S65ExwLe
    Flex-XEN4.flexhost.local    xe pbd-list params=all
    https://pastebin.com/4cvhpnWi
    Flex-XEN5.flexhost.local    xe pbd-list params=all
    https://pastebin.com/RCpP62Wp
    Flex-XEN6.flexhost.local    xe pbd-list params=all
    https://pastebin.com/fLmTGuCt
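    Gathering those per-host listings can be done in one pass (a sketch that assumes root ssh access to the host names above; the ssh call is commented out so the loop is safe to paste):

```shell
#!/bin/sh
# Collect "xe pbd-list params=all" from every pool member.
collect_pbds() {
  for h in Flex-XEN1 Flex-XEN2 Flex-XEN3 Flex-XEN4 Flex-XEN5 Flex-XEN6; do
    echo "== ${h}.flexhost.local =="
    # ssh "root@${h}.flexhost.local" xe pbd-list params=all
  done
}
collect_pbds
```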

    I keep going back to that host avoid set - is that it?

    I restarted cloudstack-management yesterday and watched the logs start up:

    2017-06-26 10:32:29,478 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-3:ctx-e004f48d) SR retrieved for FlexSAN2-LUN0
    2017-06-26 10:32:29,483 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-3:ctx-e004f48d) Checking FlexSAN2-LUN0 or SR 94d4494c-1317-4ffc-f0e6-a9210b0a0daf on XS[3630a571-847f-4881-a7ce-0230213540ea-10.90.2.113]
    2017-06-26 10:32:29,483 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-2:ctx-621d9637) SR retrieved for FlexSAN2-LUN0
    2017-06-26 10:32:29,488 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-2:ctx-621d9637) Checking FlexSAN2-LUN0 or SR 94d4494c-1317-4ffc-f0e6-a9210b0a0daf on XS[130e2063-0ec1-4150-a61e-ff9a526eb842-10.90.2.116]
    2017-06-26 10:32:29,514 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-1:ctx-f06f9d5d) SR retrieved for FlexSAN2-LUN0
    2017-06-26 10:32:29,521 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-1:ctx-f06f9d5d) Checking FlexSAN2-LUN0 or SR 94d4494c-1317-4ffc-f0e6-a9210b0a0daf on XS[b34f086e-fabf-471e-9feb-8f54362d7d0f-10.90.2.114]
    2017-06-26 10:32:29,522 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-8:ctx-1ad53014) SR retrieved for FlexSAN2-LUN0
    2017-06-26 10:32:29,532 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-8:ctx-1ad53014) Checking FlexSAN2-LUN0 or SR 94d4494c-1317-4ffc-f0e6-a9210b0a0daf on XS[c4c912c1-bd19-4fc5-b35a-120e99f8c5b8-10.90.2.115]
    2017-06-26 10:32:29,935 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-6:ctx-560ad386) SR retrieved for FlexSAN2-LUN1
    2017-06-26 10:32:29,942 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-6:ctx-560ad386) Checking FlexSAN2-LUN1 or SR 469b6dcd-8466-3d03-de0e-cc3983e1b6e2 on XS[130e2063-0ec1-4150-a61e-ff9a526eb842-10.90.2.116]
    2017-06-26 10:32:29,945 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-7:ctx-8593eae7) SR retrieved for FlexSAN2-LUN1
    2017-06-26 10:32:29,951 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-7:ctx-8593eae7) Checking FlexSAN2-LUN1 or SR 469b6dcd-8466-3d03-de0e-cc3983e1b6e2 on XS[c4c912c1-bd19-4fc5-b35a-120e99f8c5b8-10.90.2.115]
    2017-06-26 10:32:30,012 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-4:ctx-957ac3dc) SR retrieved for FlexSAN2-LUN1
    2017-06-26 10:32:30,017 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-4:ctx-957ac3dc) Checking FlexSAN2-LUN1 or SR 469b6dcd-8466-3d03-de0e-cc3983e1b6e2 on XS[3630a571-847f-4881-a7ce-0230213540ea-10.90.2.113]
    2017-06-26 10:32:30,129 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-2:ctx-3a737dae) SR retrieved for FlexSAN2-LUN1
    2017-06-26 10:32:30,134 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-2:ctx-3a737dae) Checking FlexSAN2-LUN1 or SR 469b6dcd-8466-3d03-de0e-cc3983e1b6e2 on XS[b34f086e-fabf-471e-9feb-8f54362d7d0f-10.90.2.114]
    2017-06-26 10:32:30,368 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-8:ctx-4950376c) SR retrieved for FlexSAN1-LUN1
    2017-06-26 10:32:30,374 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-8:ctx-4950376c) Checking FlexSAN1-LUN1 or SR befd4536-fdf1-6ab6-0adb-19ae532e0ee8 on XS[130e2063-0ec1-4150-a61e-ff9a526eb842-10.90.2.116]
    2017-06-26 10:32:30,383 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-10:ctx-4580ed74) SR retrieved for FlexSAN1-LUN1
    2017-06-26 10:32:30,390 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-10:ctx-4580ed74) Checking FlexSAN1-LUN1 or SR befd4536-fdf1-6ab6-0adb-19ae532e0ee8 on XS[c4c912c1-bd19-4fc5-b35a-120e99f8c5b8-10.90.2.115]
    2017-06-26 10:32:30,534 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-11:ctx-084f98b1) SR retrieved for FlexSAN1-LUN1
    2017-06-26 10:32:30,542 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-11:ctx-084f98b1) Checking FlexSAN1-LUN1 or SR befd4536-fdf1-6ab6-0adb-19ae532e0ee8 on XS[3630a571-847f-4881-a7ce-0230213540ea-10.90.2.113]
    2017-06-26 10:32:30,651 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-3:ctx-4a7034e4) SR retrieved for FlexSAN1-LUN1
    2017-06-26 10:32:30,657 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-3:ctx-4a7034e4) Checking FlexSAN1-LUN1 or SR befd4536-fdf1-6ab6-0adb-19ae532e0ee8 on XS[b34f086e-fabf-471e-9feb-8f54362d7d0f-10.90.2.114]
    2017-06-26 10:32:30,784 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-12:ctx-17d037ae) SR retrieved for FlexSAN1-LUN0
    2017-06-26 10:32:30,793 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-12:ctx-17d037ae) Checking FlexSAN1-LUN0 or SR 2a00a50b-764b-ce7f-589c-c67b353957da on XS[130e2063-0ec1-4150-a61e-ff9a526eb842-10.90.2.116]
    2017-06-26 10:32:30,794 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-13:ctx-dd2e3276) SR retrieved for FlexSAN1-LUN0
    2017-06-26 10:32:30,807 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-13:ctx-dd2e3276) Checking FlexSAN1-LUN0 or SR 2a00a50b-764b-ce7f-589c-c67b353957da on XS[c4c912c1-bd19-4fc5-b35a-120e99f8c5b8-10.90.2.115]
    2017-06-26 10:32:31,041 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-1:ctx-ca95ad39) SR retrieved for FlexSAN1-LUN0
    2017-06-26 10:32:31,047 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-1:ctx-ca95ad39) Checking FlexSAN1-LUN0 or SR 2a00a50b-764b-ce7f-589c-c67b353957da on XS[3630a571-847f-4881-a7ce-0230213540ea-10.90.2.113]
    2017-06-26 10:32:31,218 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-14:ctx-8a0d0199) SR retrieved for FlexSAN1-LUN0
    2017-06-26 10:32:31,225 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-14:ctx-8a0d0199) Checking FlexSAN1-LUN0 or SR 2a00a50b-764b-ce7f-589c-c67b353957da on XS[b34f086e-fabf-471e-9feb-8f54362d7d0f-10.90.2.114]
    2017-06-26 10:32:31,311 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-15:ctx-1f0a0810) SR retrieved for FlexSAN2-LUN0
    2017-06-26 10:32:31,315 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-15:ctx-1f0a0810) Checking FlexSAN2-LUN0 or SR 94d4494c-1317-4ffc-f0e6-a9210b0a0daf on XS[1d627d1b-5c76-4db8-9347-35e9f171cacf-10.90.2.112]
    2017-06-26 10:32:32,007 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-13:ctx-f910d06c) SR retrieved for FlexSAN2-LUN1
    2017-06-26 10:32:32,014 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-13:ctx-f910d06c) Checking FlexSAN2-LUN1 or SR 469b6dcd-8466-3d03-de0e-cc3983e1b6e2 on XS[1d627d1b-5c76-4db8-9347-35e9f171cacf-10.90.2.112]
    2017-06-26 10:32:32,770 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-23:ctx-7cd2afeb) SR retrieved for FlexSAN1-LUN1
    2017-06-26 10:32:32,788 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-23:ctx-7cd2afeb) Checking FlexSAN1-LUN1 or SR befd4536-fdf1-6ab6-0adb-19ae532e0ee8 on XS[1d627d1b-5c76-4db8-9347-35e9f171cacf-10.90.2.112]
    2017-06-26 10:32:32,923 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-6:ctx-64452aa3) SR retrieved for FlexSAN2-LUN0
    2017-06-26 10:32:32,927 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-6:ctx-64452aa3) Checking FlexSAN2-LUN0 or SR 94d4494c-1317-4ffc-f0e6-a9210b0a0daf on XS[20b55a3c-3470-4d8e-8797-853b4b2b439f-10.90.2.111]
    2017-06-26 10:32:33,212 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-24:ctx-70211e0b) SR retrieved for FlexSAN1-LUN0
    2017-06-26 10:32:33,217 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-24:ctx-70211e0b) Checking FlexSAN1-LUN0 or SR 2a00a50b-764b-ce7f-589c-c67b353957da on XS[1d627d1b-5c76-4db8-9347-35e9f171cacf-10.90.2.112]
    2017-06-26 10:32:33,440 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-5:ctx-e286e17d) SR retrieved for FlexSAN2-LUN1
    2017-06-26 10:32:33,445 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-5:ctx-e286e17d) Checking FlexSAN2-LUN1 or SR 469b6dcd-8466-3d03-de0e-cc3983e1b6e2 on XS[20b55a3c-3470-4d8e-8797-853b4b2b439f-10.90.2.111]
    2017-06-26 10:32:33,966 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-32:ctx-82780b73) SR retrieved for FlexSAN1-LUN1
    2017-06-26 10:32:33,972 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-32:ctx-82780b73) Checking FlexSAN1-LUN1 or SR befd4536-fdf1-6ab6-0adb-19ae532e0ee8 on XS[20b55a3c-3470-4d8e-8797-853b4b2b439f-10.90.2.111]
    2017-06-26 10:32:34,621 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-20:ctx-f0a94586) SR retrieved for FlexSAN1-LUN0
    2017-06-26 10:32:34,626 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-20:ctx-f0a94586) Checking FlexSAN1-LUN0 or SR 2a00a50b-764b-ce7f-589c-c67b353957da on XS[20b55a3c-3470-4d8e-8797-853b4b2b439f-10.90.2.111]

    OK, so that turned into more than I thought it would be.

    Since I am working on Xen5 I found the 10.90.2.115 data above and used the XS of c4c912c1-bd19-4fc5-b35a-120e99f8c5b8


    [root@Flex-Xen5 xapi]# xe pbd-list params=all | grep c4c912c1-bd19-4fc5-b35a-120e99f8c5b8
         host ( RO) [DEPRECATED]: c4c912c1-bd19-4fc5-b35a-120e99f8c5b8
                 host-uuid ( RO): c4c912c1-bd19-4fc5-b35a-120e99f8c5b8
         host ( RO) [DEPRECATED]: c4c912c1-bd19-4fc5-b35a-120e99f8c5b8
                 host-uuid ( RO): c4c912c1-bd19-4fc5-b35a-120e99f8c5b8
         host ( RO) [DEPRECATED]: c4c912c1-bd19-4fc5-b35a-120e99f8c5b8
                 host-uuid ( RO): c4c912c1-bd19-4fc5-b35a-120e99f8c5b8
         host ( RO) [DEPRECATED]: c4c912c1-bd19-4fc5-b35a-120e99f8c5b8
                 host-uuid ( RO): c4c912c1-bd19-4fc5-b35a-120e99f8c5b8
         host ( RO) [DEPRECATED]: c4c912c1-bd19-4fc5-b35a-120e99f8c5b8
                 host-uuid ( RO): c4c912c1-bd19-4fc5-b35a-120e99f8c5b8
         host ( RO) [DEPRECATED]: c4c912c1-bd19-4fc5-b35a-120e99f8c5b8
                 host-uuid ( RO): c4c912c1-bd19-4fc5-b35a-120e99f8c5b8
         host ( RO) [DEPRECATED]: c4c912c1-bd19-4fc5-b35a-120e99f8c5b8
                 host-uuid ( RO): c4c912c1-bd19-4fc5-b35a-120e99f8c5b8
         host ( RO) [DEPRECATED]: c4c912c1-bd19-4fc5-b35a-120e99f8c5b8
                 host-uuid ( RO): c4c912c1-bd19-4fc5-b35a-120e99f8c5b8

    Then I also used the storage repository UUID 94d4494c-1317-4ffc-f0e6-a9210b0a0daf

    [root@Flex-Xen5 xapi]# xe pbd-list params=all |grep  94d4494c-1317-4ffc-f0e6-a9210b0a0daf
                   sr-uuid ( RO): 94d4494c-1317-4ffc-f0e6-a9210b0a0daf
                   sr-uuid ( RO): 94d4494c-1317-4ffc-f0e6-a9210b0a0daf
                   sr-uuid ( RO): 94d4494c-1317-4ffc-f0e6-a9210b0a0daf
                   sr-uuid ( RO): 94d4494c-1317-4ffc-f0e6-a9210b0a0daf
                   sr-uuid ( RO): 94d4494c-1317-4ffc-f0e6-a9210b0a0daf
                   sr-uuid ( RO): 94d4494c-1317-4ffc-f0e6-a9210b0a0daf

    So those two show up as

    uuid ( RO)                  : 230c8ed1-07c4-e4a1-d6c3-c4ad72aac0e5
         host ( RO) [DEPRECATED]: c4c912c1-bd19-4fc5-b35a-120e99f8c5b8
                 host-uuid ( RO): c4c912c1-bd19-4fc5-b35a-120e99f8c5b8
           host-name-label ( RO): Flex-Xen5.flexhost.local
                   sr-uuid ( RO): 94d4494c-1317-4ffc-f0e6-a9210b0a0daf
             sr-name-label ( RO): FlexSAN2-LUN0
             device-config (MRO): multiSession: 10.83.0.5,3260,iqn.1984-05.com.dell:powervault.md3200i.6d4ae520007cf51a000000004f910f60|10.83.1.5,3260,iqn.1984-05.com.dell:powervault.md3200i.6d4ae520007cf51a0
    00000004f910f60|10.83.1.4,3260,iqn.1984-05.com.dell:powervault.md3200i.6d4ae520007cf51a000000004f910f60|10.83.0.4,3260,iqn.1984-05.com.dell:powervault.md3200i.6d4ae520007cf51a000000004f910f60|; target: 10
    .83.0.5; multihomelist: 10.83.1.4:3260,10.83.0.4:3260,10.83.0.5:3260,10.83.1.5:3260; targetIQN: *; SCSIid: 36d4ae520007cf51a00001c4853f5e93e; port: 3260
        currently-attached ( RO): true
              other-config (MRW): storage_driver_domain: OpaqueRef:be743e6a-5612-9431-8bc0-fe7e33f75df1; iscsi_sessions: 4; mpath-36d4ae520007cf51a00001c4853f5e93e: [4, 4]; multipathed: true




    So what do I need to update to remove the deprecated entries, how do I tell CloudStack not to use those UUIDs, and why is it only using those for system VMs?

    Jeremy

    -----Original Message-----
    From: Dag Sonstebo [mailto:Dag.Sonstebo@shapeblue.com]
    Sent: Monday, June 26, 2017 12:44 PM
    To: users@cloudstack.apache.org
    Subject: Re: Recreating SystemVM's

    OK, how do your LUNs show up in XenCentre? Do they show up as healthy? If not, you can right-click them and select "repair".

    Which storage protocol are you using? NFS/iSCSI/FCoE?

    For the mounts you can see, check them with "xe sr-list params=all uuid=c8dc1be9-a9f8-6592-bd03-ee3a7a59dea7" etc.

    Can you also do a "xe pbd-list params=all" on your XenServers?

    Regards,
    Dag Sonstebo
    Cloud Architect
    ShapeBlue

    On 26/06/2017, 18:13, "Jeremy Peterson" <jpeterson@acentek.net> wrote:

        Ok so how do I recreate these correct sr-mounts that are missing???  See the output from each host.


        [root@Flex-Xen1 ~]# cd /var/run/sr-mount/
        [root@Flex-Xen1 sr-mount]# ls
        9b52e80e-92ef-2a5b-2b0f-6381b2b035c6  c8dc1be9-a9f8-6592-bd03-ee3a7a59dea7

        [root@Flex-Xen2 ~]# cd /var/run/sr-mount/
        [root@Flex-Xen2 sr-mount]# ls
        4fde7baa-1963-3656-915e-c6ddb1ca15ad  c8dc1be9-a9f8-6592-bd03-ee3a7a59dea7

        [root@Flex-Xen3 ~]# cd /var/run/sr-mount/
        [root@Flex-Xen3 sr-mount]# ls
        c8dc1be9-a9f8-6592-bd03-ee3a7a59dea7

        [root@Flex-Xen4 ~]# cd /var/run/sr-mount/
        [root@Flex-Xen4 sr-mount]# ls
        bb48b5cc-e7b0-ba00-db1e-7f1d5e47559a  c8dc1be9-a9f8-6592-bd03-ee3a7a59dea7

        [root@Flex-Xen5 ~]# cd /var/run/sr-mount/
        [root@Flex-Xen5 sr-mount]# ls
        c8dc1be9-a9f8-6592-bd03-ee3a7a59dea7

        [root@Flex-Xen6 ~]# cd /var/run/sr-mount/
        [root@Flex-Xen6 sr-mount]# ls
        c8dc1be9-a9f8-6592-bd03-ee3a7a59dea7

        Since I see the storage in XenCenter, shouldn't these mounts look like what you described below?


        Jeremy


        -----Original Message-----
        From: Dag Sonstebo [mailto:Dag.Sonstebo@shapeblue.com]
        Sent: Monday, June 26, 2017 10:33 AM
        To: users@cloudstack.apache.org
        Subject: Re: Recreating SystemVM's

        Please see my last email - you've not carried out the checks discussed.

        1) From previous email you have provided output from template_spool_ref;
        '53', '5', '1', '2015-04-13 12:50:37', NULL, NULL, '100', 'DOWNLOADED', NULL, 'ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf', 'ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf', '0', '0', 'Ready', '2', '2015-04-13 13:16:11'
        '52', '6', '1', '2015-04-13 12:50:31', NULL, NULL, '100', 'DOWNLOADED', NULL, 'bed64043-2208-415c-ad32-02ffeb4802d7', 'bed64043-2208-415c-ad32-02ffeb4802d7', '0', '0', 'Ready', '2', '2015-04-13 13:14:51'
        '57', '7', '1', '2015-04-21 22:21:44', NULL, NULL, '100', 'DOWNLOADED', NULL, 'f2bbd9ea-3237-4119-8c03-8c0c570d153b', 'f2bbd9ea-3237-4119-8c03-8c0c570d153b', '0', '0', 'Ready', '2', '2015-04-21 22:22:40'
        '86', '8', '1', '2015-06-25 18:52:57', NULL, NULL, '100', 'DOWNLOADED', NULL, 'd4085d91-22fa-4965-bfa7-d1a1800f6aa7', 'd4085d91-22fa-4965-bfa7-d1a1800f6aa7', '0', '0', 'Ready', '2', '2015-06-25 18:54:05'

        From there you have the paths where CloudStack believes it should find your system VM templates downloaded on primary storage. That includes the UUID which you are seeing as invalid.

        2) So for each primary storage LUN, do a folder listing and see if the VHD file is there - in this case you can see your UUID d4085d91-22fa-4965-bfa7-d1a1800f6aa7 (from your logs and from the query above) is being reported as invalid, so you need to check whether that VHD file is in place.

        Your pool IDs:
            '5', 'RSFD-P01-C01-PRI3', 'FlexSAN2-LUN0'
            '6', 'RSFD-P01-C01-PRI4', 'FlexSAN2-LUN1'
            '7', 'RSFD-P01-C01-PRI2', 'FlexSAN1-LUN1'
            '8', 'RSFD-P01-C01-PRI1', 'FlexSAN1-LUN0'

        with corresponding UUIDs:

            uuid ( RO)          : befd4536-fdf1-6ab6-0adb-19ae532e0ee8
                name-label ( RW): FlexSAN1-LUN1
            uuid ( RO)          : 469b6dcd-8466-3d03-de0e-cc3983e1b6e2
                name-label ( RW): FlexSAN2-LUN1
            uuid ( RO)          : 94d4494c-1317-4ffc-f0e6-a9210b0a0daf
                name-label ( RW): FlexSAN2-LUN0
            uuid ( RO)          : 2a00a50b-764b-ce7f-589c-c67b353957da
                name-label ( RW): FlexSAN1-LUN0

        So you are looking for - *DO FOLDER LISTINGS TO CHECK THESE ARE PRESENT*:

        /var/run/sr-mount/94d4494c-1317-4ffc-f0e6-a9210b0a0daf/ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf.*
        /var/run/sr-mount/469b6dcd-8466-3d03-de0e-cc3983e1b6e2/bed64043-2208-415c-ad32-02ffeb4802d7.*
        /var/run/sr-mount/befd4536-fdf1-6ab6-0adb-19ae532e0ee8/f2bbd9ea-3237-4119-8c03-8c0c570d153b.*
        /var/run/sr-mount/2a00a50b-764b-ce7f-589c-c67b353957da/d4085d91-22fa-4965-bfa7-d1a1800f6aa7.*

        3) If they are NOT there then you have wiped them during your tidyup - which means CloudStack thinks it should have templates ready to roll out system VMs from but XenServer can't find them.
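
        The folder-listing check above can be sketched as a small shell helper - a minimal sketch assuming file-based SRs, a `.vhd` extension, and the SR/template UUID pairs listed earlier in this email (`check_template` and the `SR_BASE` override are illustrative helpers, not CloudStack tooling):

```shell
# Check whether each expected system VM template VHD is present on its primary SR.
# SR_BASE defaults to the XenServer mount root but can be overridden for testing.
check_template() {
  sr_uuid="$1"; vhd_uuid="$2"
  path="${SR_BASE:-/var/run/sr-mount}/${sr_uuid}/${vhd_uuid}.vhd"
  if [ -e "$path" ]; then
    echo "PRESENT $path"
  else
    echo "MISSING $path"
  fi
}

# SR UUID followed by the template UUID CloudStack expects on that pool
check_template 94d4494c-1317-4ffc-f0e6-a9210b0a0daf ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf
check_template 469b6dcd-8466-3d03-de0e-cc3983e1b6e2 bed64043-2208-415c-ad32-02ffeb4802d7
check_template befd4536-fdf1-6ab6-0adb-19ae532e0ee8 f2bbd9ea-3237-4119-8c03-8c0c570d153b
check_template 2a00a50b-764b-ce7f-589c-c67b353957da d4085d91-22fa-4965-bfa7-d1a1800f6aa7
```

        Any line printed as MISSING corresponds to a template CloudStack believes is downloaded but XenServer cannot find.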

        Regards,
        Dag Sonstebo
        Cloud Architect
        ShapeBlue

        On 26/06/2017, 14:41, "Jeremy Peterson" <jpeterson@acentek.net> wrote:

            OK, so a couple of things I've noted.


            [root@Flex-Xen5 5]# df -h
            Filesystem            Size  Used Avail Use% Mounted on
            /dev/sda1             4.0G  3.2G  648M  84% /
            none                  1.9G  132K  1.9G   1% /dev/shm
            /opt/xensource/packages/iso/XenCenter.iso
                                   56M   56M     0 100% /var/xen/xc-install
            10.90.2.51:/ha/c8dc1be9-a9f8-6592-bd03-ee3a7a59dea7
                                   14G  2.6G   11G  20% /var/run/sr-mount/c8dc1be9-a9f8-6592-bd03-ee3a7a59dea7
            secstor.flexhost.local:/mnt/Volume_0/NFS/
                                   11T  3.9T  6.8T  37% /var/cloud_mount/87156045-e430-3fe3-aa4b-3d41c1af8df2

            I see my secstor mount, but sr-mount shows only the HA mount, not my primary storage LUNs?
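
            (A hedged aside on why the primary LUNs may not show up here at all: iSCSI SRs on XenServer are typically LVM-based, and LVM SRs have no mount point under /var/run/sr-mount - only file-based SRs such as the NFS HA SR appear there. Assuming the standard XenServer naming convention, a VDI on an LVM SR is a logical volume, and its device path can be composed as below; the helper name is hypothetical:)

```shell
# On an LVM-based SR each VDI is a logical volume named VDI-<vdi-uuid> inside a
# volume group named VG_XenStorage-<sr-uuid> (standard XenServer convention),
# so there is nothing under /var/run/sr-mount to browse.
lvm_vdi_path() {
  sr_uuid="$1"; vdi_uuid="$2"
  echo "/dev/VG_XenStorage-${sr_uuid}/VDI-${vdi_uuid}"
}

# Sample UUIDs from this thread - probe the result with lvs/lvdisplay on the host
lvm_vdi_path 2a00a50b-764b-ce7f-589c-c67b353957da d4085d91-22fa-4965-bfa7-d1a1800f6aa7
```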

            Results from the DB query show the storage pool UUIDs and the CloudStack names to be correct.

            SELECT id,name,uuid FROM cloud.storage_pool;

            '5', 'RSFD-P01-C01-PRI3', 'FlexSAN2-LUN0'
            '6', 'RSFD-P01-C01-PRI4', 'FlexSAN2-LUN1'
            '7', 'RSFD-P01-C01-PRI2', 'FlexSAN1-LUN1'
            '8', 'RSFD-P01-C01-PRI1', 'FlexSAN1-LUN0'

            Whereas the

            xe sr-list params=uuid,name-label

            uuid ( RO)          : befd4536-fdf1-6ab6-0adb-19ae532e0ee8
                name-label ( RW): FlexSAN1-LUN1
            ...
            uuid ( RO)          : 469b6dcd-8466-3d03-de0e-cc3983e1b6e2
                name-label ( RW): FlexSAN2-LUN1
            ...
            uuid ( RO)          : 94d4494c-1317-4ffc-f0e6-a9210b0a0daf
                name-label ( RW): FlexSAN2-LUN0
            ...
            uuid ( RO)          : 2a00a50b-764b-ce7f-589c-c67b353957da
                name-label ( RW): FlexSAN1-LUN0


            shows the UUIDs and names are the same as above. So that tells me that CloudStack and XenServer both know about the storage.

            I can launch VMs from secondary storage to primary fine, as noted below.

            I can deploy VMs from templates and ISOs, so that tells me I can access secondary storage.

            This all goes back to my job call: the "path" is saying UUID d4085d91-22fa-4965-bfa7-d1a1800f6aa7, and apparently that is invalid.  I still have no idea what that UUID is.  I was thinking primary storage UUID?  I was thinking template UUID?  I've rebooted all the hosts. I've rebooted the management server.  I've validated secondary storage is mounted.  I've applied updates to XenServer. I've restarted cloudstack-management. I've deployed from template. I've deployed from ISO. Everything looks clean, except those stupid system VMs keep recycling.  And all they ever do is complain the UUID is invalid.  And then InsufficientServerCapacityException.

            2017-06-26 08:14:19,231 DEBUG [c.c.a.t.Request] (Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1) Seq 1-6981705322331213134: Sending  { Cmd , MgmtId: 345050411715, via: 1(Flex-Xen2.flexhost.local), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":

            {"path":"d4085d91-22fa-4965-bfa7-d1a1800f6aa7"

            ,"origUrl":"http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2","uuid":"8a4039f2-bb71-11e4-8c76-0050569b1662","id":1,"format":"VHD","accountId":1,"checksum":"2b15ab4401c2d655264732d3fc600241","hvm":false,"displayText":"SystemVM Template (XenServer)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN0","id":8,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN0/?ROLE=Primary&STOREUUID=FlexSAN1-LUN0"}},"name":"routing-1","hypervisorType":"XenServer"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"6547906c-c0c6-408f-9d9a-44d8250305b4","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN0","id":8,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN0/?ROLE=Primary&STOREUUID=FlexSAN1-LUN0"}},"name":"ROOT-29510","size":2689602048,"volumeId":34429,"vmName":"s-29510-VM","accountId":1,"format":"VHD","provisioningType":"THIN","id":34429,"deviceId":0,"hypervisorType":"XenServer"}},"executeInSequence":true,"options":{},"wait":0}}] }
            2017-06-26 08:14:19,231 DEBUG [c.c.a.t.Request] (Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1) Seq 1-6981705322331213134: Executing:  { Cmd , MgmtId: 345050411715, via: 1(Flex-Xen2.flexhost.local), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"d4085d91-22fa-4965-bfa7-d1a1800f6aa7","origUrl":"http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2","uuid":"8a4039f2-bb71-11e4-8c76-0050569b1662","id":1,"format":"VHD","accountId":1,"checksum":"2b15ab4401c2d655264732d3fc600241","hvm":false,"displayText":"SystemVM Template (XenServer)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN0","id":8,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN0/?ROLE=Primary&STOREUUID=FlexSAN1-LUN0"}},"name":"routing-1","hypervisorType":"XenServer"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"6547906c-c0c6-408f-9d9a-44d8250305b4","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN0","id":8,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN0/?ROLE=Primary&STOREUUID=FlexSAN1-LUN0"}},"name":"ROOT-29510","size":2689602048,"volumeId":34429,"vmName":"s-29510-VM","accountId":1,"format":"VHD","provisioningType":"THIN","id":34429,"deviceId":0,"hypervisorType":"XenServer"}},"executeInSequence":true,"options":{},"wait":0}}] }
            2017-06-26 08:14:19,231 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-310:ctx-b7042ce7) Seq 1-6981705322331213134: Executing request
            2017-06-26 08:14:19,236 DEBUG [c.c.h.x.r.XenServerStorageProcessor] (DirectAgent-310:ctx-b7042ce7) Catch Exception com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid: d4085d91-22fa-4965-bfa7-d1a1800f6aa7 failed due to The uuid you supplied was invalid.
            2017-06-26 08:14:19,236 WARN  [c.c.h.x.r.XenServerStorageProcessor] (DirectAgent-310:ctx-b7042ce7) Unable to create volume; Pool=volumeTO[uuid=6547906c-c0c6-408f-9d9a-44d8250305b4|path=null|datastore=PrimaryDataStoreTO[uuid=FlexSAN1-LUN0|name=null|id=8|pooltype=PreSetup]]; Disk:
            com.cloud.utils.exception.CloudRuntimeException: Catch Exception com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid: d4085d91-22fa-4965-bfa7-d1a1800f6aa7 failed due to The uuid you supplied was invalid.



            2017-06-26 08:14:19,354 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-30:ctx-00f1af56 job-342/job-183150) Unable to complete AsyncJobVO {id:183150, userId: 1, accountId: 1, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAABAAAAAAAAAAEAAAAAAABXi3QAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAHBwcHBwcHBwcHA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 345050411715, completeMsid: null, lastUpdated: null, lastPolled: null, created: Mon Jun 26 08:14:15 CDT 2017}, job origin:342
            com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[ConsoleProxy|v-22411-VM]Scope=interface com.cloud.dc.DataCenter; id=1


            So that makes me look at the management-server.log for a job - let's find one quickly.

            cat /var/log/cloudstack/management/management-server.log | grep ctx-302e3fd5

            And I see the following.


            2017-06-26 08:14:19,370 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] (Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1) Found pools matching tags: [Pool[5|PreSetup], Pool[6|PreSetup], Pool[7|PreSetup], Pool[8|PreSetup]]
            2017-06-26 08:14:19,371 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1) Checking if storage pool is suitable, name: null ,poolId: 5
            2017-06-26 08:14:19,371 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1) StoragePool is in avoid set, skipping this pool
            2017-06-26 08:14:19,372 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1) Checking if storage pool is suitable, name: null ,poolId: 6
            2017-06-26 08:14:19,372 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1) StoragePool is in avoid set, skipping this pool
            2017-06-26 08:14:19,373 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1) Checking if storage pool is suitable, name: null ,poolId: 7
            2017-06-26 08:14:19,373 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1) StoragePool is in avoid set, skipping this pool
            2017-06-26 08:14:19,373 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1) Checking if storage pool is suitable, name: null ,poolId: 8
            2017-06-26 08:14:19,373 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] (Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1) StoragePool is in avoid set, skipping this pool
            2017-06-26 08:14:19,373 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] (Work-Job-Executor-18:ctx-302e3fd5 job-1042/job-183149 ctx-835c48e1) ClusterScopeStoragePoolAllocator returning 0 suitable storage pools

            It again sees my PreSetup pools 5, 6, 7 and 8 - the same as my LUN information from the DB.

            But why does it say avoid set?

            Is that my problem?  The system VMs are checking the avoid set, saying nope, nope, nope, and then throwing insufficient resources?

            I think I'm right here.

            Any idea where the avoid set is stored and how to clear it?  Or reset it for 5, 6, 7, 8?


            Jeremy


            -----Original Message-----
            From: Jeremy Peterson [mailto:jpeterson@acentek.net]
            Sent: Sunday, June 25, 2017 10:56 PM
            To: users@cloudstack.apache.org
            Subject: Re: Recreating SystemVM's

            If I SSH into the XenServers and do a df -h, I do not see those folders mounted, but again I'll check when I get in in the morning.

            Jeremy
            ________________________________________
            From: Jeremy Peterson <jpeterson@acentek.net>
            Sent: Sunday, June 25, 2017 10:15 PM
            To: users@cloudstack.apache.org
            Subject: Re: Recreating SystemVM's

            Thank you for the exact information provided I'll check this out in the morning.

            Jeremy





            -------- Original message --------
            From: Dag Sonstebo <Dag.Sonstebo@shapeblue.com>
            Date: 6/25/17 5:47 PM (GMT-06:00)
            To: users@cloudstack.apache.org
            Subject: Re: Recreating SystemVM's

            Mount points are in /var/run/sr-mount/.

            Find your primary pool name-labels with "SELECT id,name,uuid FROM cloud.storage_pool;"

            Match the pool name label to XenServer mount point with "xe sr-list params=uuid,name-label" on the xenservers.

            From that you should find /var/run/sr-mount/<xe provided UUID of storage pool here>/

            Under this path you should find the system VM template entries - which are the same as the "local_path" from template_spool_ref.
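
            The matching described above can be sketched as a small join - the rows below are hypothetical stand-ins for the `xe sr-list` output, and `sr_mount_for` is an illustrative helper, not a real command:

```shell
# Stand-in for "xe sr-list params=uuid,name-label", flattened to "sr-uuid name-label"
xe_srs='befd4536-fdf1-6ab6-0adb-19ae532e0ee8 FlexSAN1-LUN1
469b6dcd-8466-3d03-de0e-cc3983e1b6e2 FlexSAN2-LUN1
94d4494c-1317-4ffc-f0e6-a9210b0a0daf FlexSAN2-LUN0
2a00a50b-764b-ce7f-589c-c67b353957da FlexSAN1-LUN0'

# Resolve a pool name-label (the uuid column from cloud.storage_pool) to the
# /var/run/sr-mount path where a file-based SR would be mounted.
sr_mount_for() {
  label="$1"
  sr_uuid=$(printf '%s\n' "$xe_srs" | awk -v l="$label" '$2 == l { print $1 }')
  [ -n "$sr_uuid" ] && echo "/var/run/sr-mount/$sr_uuid"
}

sr_mount_for FlexSAN1-LUN0   # -> /var/run/sr-mount/2a00a50b-764b-ce7f-589c-c67b353957da
```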

            Regards,
            Dag Sonstebo
            Cloud Architect
            ShapeBlue

            On 25/06/2017, 22:17, "Jeremy Peterson" <jpeterson@acentek.net> wrote:

                I'm having issues finding the primary storage mounts in XenServer. I see the LVs, but no mount points to navigate to in order to discover files.

                Also, I didn't see actual paths in that DB query. Did you?





                -------- Original message --------
                From: Dag Sonstebo <Dag.Sonstebo@shapeblue.com>
                Date: 6/25/17 4:30 AM (GMT-06:00)
                To: users@cloudstack.apache.org
                Subject: Re: Recreating SystemVM's

                As per previous email - you need to check that the path you have for your system templates in template_spool_ref exists on your primary storage. You have admitted primary storage was tidied up ungracefully so you are trying to work out if the templates are actually still on primary storage like your DB thinks they are.

                Regards,
                Dag Sonstebo
                Cloud Architect
                ShapeBlue

                On 23/06/2017, 20:09, "Jeremy Peterson" <jpeterson@acentek.net> wrote:

                    Ok so Primary storage.

                    Since I am able to deploy new VMs from ISO and template, access to the secondary storage must be good.

                    Ok so my XenServers show my Storage LUN's have no failures.

                    SELECT * FROM cloud.template_store_ref;

                    https://drive.google.com/open?id=0B5IXhrpPAT9qRVRGVmY3TkR3ZGM

                    Now what gets me is that the last_updated is the same day my problems started - when my storage PIF was throwing errors.  I lost network connectivity to my iSCSI primary storage, and all of my VMs dropped and came back online.

                    What should I validate in this query as being correct?  All of the instance VMs came back (except the one we talked about yesterday), but my system VMs are still down.


                    Jeremy

                    -----Original Message-----
                    From: Dag Sonstebo [mailto:Dag.Sonstebo@shapeblue.com]
                    Sent: Thursday, June 22, 2017 11:50 AM
                    To: users@cloudstack.apache.org
                    Subject: Re: Recreating SystemVM's

                    In short there's no reason for CloudStack to download the system VM template from secondary to primary again if it's there and working - hence your 2015 dates. Template_spool_ref shows the download state to primary, template_store_ref shows status to secondary.

                    You can access your primary storage directly from command line on your XenServers - so can you check all those paths on your primary storage pools?

                    Regards,
                    Dag Sonstebo
                    Cloud Architect
                    ShapeBlue

                    On 22/06/2017, 17:22, "Jeremy Peterson" <jpeterson@acentek.net> wrote:

                        1.       I redownloaded the system VM template because my system VMs were not launching; I thought something might be off with it.

                        2.       '1', 'routing-1', 'SystemVM Template (XenServer)', '8a4039f2-bb71-11e4-8c76-0050569b1662', '0', '0', 'SYSTEM', '0', '64', 'http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2', 'VHD', '2015-02-23 09:35:05', NULL, '1', '2b15ab4401c2d655264732d3fc600241', 'SystemVM Template (XenServer)', '0', '0', '184', '1', '0', '1', '0', 'XenServer', NULL, NULL, '0', '2689602048', 'Active', '0', NULL, '0'

                        a.       Ok so that tells me that my template id is 1

                        3.

                        a.       '53', '5', '1', '2015-04-13 12:50:37', NULL, NULL, '100', 'DOWNLOADED', NULL, 'ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf', 'ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf', '0', '0', 'Ready', '2', '2015-04-13 13:16:11'

                        b.      '52', '6', '1', '2015-04-13 12:50:31', NULL, NULL, '100', 'DOWNLOADED', NULL, 'bed64043-2208-415c-ad32-02ffeb4802d7', 'bed64043-2208-415c-ad32-02ffeb4802d7', '0', '0', 'Ready', '2', '2015-04-13 13:14:51'

                        c.       '57', '7', '1', '2015-04-21 22:21:44', NULL, NULL, '100', 'DOWNLOADED', NULL, 'f2bbd9ea-3237-4119-8c03-8c0c570d153b', 'f2bbd9ea-3237-4119-8c03-8c0c570d153b', '0', '0', 'Ready', '2', '2015-04-21 22:22:40'

                        d.      '86', '8', '1', '2015-06-25 18:52:57', NULL, NULL, '100', 'DOWNLOADED', NULL, 'd4085d91-22fa-4965-bfa7-d1a1800f6aa7', 'd4085d91-22fa-4965-bfa7-d1a1800f6aa7', '0', '0', 'Ready', '2', '2015-06-25 18:54:05'

                                                                                       i.      This is from select * from cloud.template_spool_ref where template_id=1;

                                                                                     ii.      In my logs I can see the third template.



                        2017-06-22 09:23:49,103 DEBUG [c.c.a.t.Request] (Work-Job-Executor-117:ctx-03e815cd job-1042/job-154364 ctx-0c08652e) Seq 18-3646226848309841098: Sending  { Cmd , MgmtId: 345050411715, via: 18(Flex-Xen5.flexhost.local), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"f2bbd9ea-3237-4119-8c03-8c0c570d153b","origUrl":"http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2","uuid":"8a4039f2-bb71-11e4-8c76-0050569b1662","id":1,"format":"VHD","accountId":1,"checksum":"2b15ab4401c2d655264732d3fc600241","hvm":false,"displayText":"SystemVM Template (XenServer)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN1","id":7,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN1","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN1/?ROLE=Primary&STOREUUID=FlexSAN1-LUN1"}},"name":"routing-1","hypervisorType":"XenServer"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"7726194f-821c-4df2-90cf-f61ac06a362d","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN1","id":7,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN1","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN1/?ROLE=Primary&STOREUUID=FlexSAN1-LUN1"}},"name":"ROOT-23819","size":2689602048,"volumeId":28738,"vmName":"s-23819-VM","accountId":1,"format":"VHD","provisioningType":"THIN","id":28738,"deviceId":0,"hypervisorType":"XenServer"}},"executeInSequence":true,"options":{},"wait":0}}] }

                        2017-06-22 09:23:49,103 DEBUG [c.c.a.t.Request] (Work-Job-Executor-117:ctx-03e815cd job-1042/job-154364 ctx-0c08652e) Seq 18-3646226848309841098: Executing:  { Cmd , MgmtId: 345050411715, via: 18(Flex-Xen5.flexhost.local), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"f2bbd9ea-3237-4119-8c03-8c0c570d153b","origUrl":"http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2","uuid":"8a4039f2-bb71-11e4-8c76-0050569b1662","id":1,"format":"VHD","accountId":1,"checksum":"2b15ab4401c2d655264732d3fc600241","hvm":false,"displayText":"SystemVM Template (XenServer)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN1","id":7,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN1","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN1/?ROLE=Primary&STOREUUID=FlexSAN1-LUN1"}},"name":"routing-1","hypervisorType":"XenServer"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"7726194f-821c-4df2-90cf-f61ac06a362d","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN1","id":7,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN1","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN1/?ROLE=Primary&STOREUUID=FlexSAN1-LUN1"}},"name":"ROOT-23819","size":2689602048,"volumeId":28738,"vmName":"s-23819-VM","accountId":1,"format":"VHD","provisioningType":"THIN","id":28738,"deviceId":0,"hypervisorType":"XenServer"}},"executeInSequence":true,"options":{},"wait":0}}] }

                        2017-06-22 09:23:49,104 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-124:ctx-ab245ccb) Seq 18-3646226848309841098: Executing request

                        2017-06-22 09:23:49,109 DEBUG [c.c.h.x.r.XenServerStorageProcessor] (DirectAgent-124:ctx-ab245ccb) Catch Exception com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid: f2bbd9ea-3237-4119-8c03-8c0c570d153b failed due to The uuid you supplied was invalid.

                        2017-06-22 09:23:49,110 WARN  [c.c.h.x.r.XenServerStorageProcessor] (DirectAgent-124:ctx-ab245ccb) Unable to create volume; Pool=volumeTO[uuid=7726194f-821c-4df2-90cf-f61ac06a362d|path=null|datastore=PrimaryDataStoreTO[uuid=FlexSAN1-LUN1|name=null|id=7|pooltype=PreSetup]]; Disk:

                        com.cloud.utils.exception.CloudRuntimeException: Catch Exception com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid: f2bbd9ea-3237-4119-8c03-8c0c570d153b failed due to The uuid you supplied was invalid.



                        And again I see another deployment of a VM.



                        2017-06-22 09:23:49,918 DEBUG [c.c.a.t.Request] (Work-Job-Executor-125:ctx-7dda0875 job-342/job-154365 ctx-79440317) Seq 19-2522578741280910778: Sending  { Cmd , MgmtId: 345050411715, via: 19(Flex-Xen1.flexhost.local), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"d4085d91-22fa-4965-bfa7-d1a1800f6aa7","origUrl":"http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2","uuid":"8a4039f2-bb71-11e4-8c76-0050569b1662","id":1,"format":"VHD","accountId":1,"checksum":"2b15ab4401c2d655264732d3fc600241","hvm":false,"displayText":"SystemVM Template (XenServer)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN0","id":8,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN0/?ROLE=Primary&STOREUUID=FlexSAN1-LUN0"}},"name":"routing-1","hypervisorType":"XenServer"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"a2456229-2942-4d9b-9bff-c6d9ea004fbd","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN0","id":8,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN0/?ROLE=Primary&STOREUUID=FlexSAN1-LUN0"}},"name":"ROOT-22411","size":2689602048,"volumeId":27330,"vmName":"v-22411-VM","accountId":1,"format":"VHD","provisioningType":"THIN","id":27330,"deviceId":0,"hypervisorType":"XenServer"}},"executeInSequence":true,"options":{},"wait":0}}] }

                        2017-06-22 09:23:49,918 DEBUG [c.c.a.t.Request] (Work-Job-Executor-125:ctx-7dda0875 job-342/job-154365 ctx-79440317) Seq 19-2522578741280910778: Executing:  { Cmd , MgmtId: 345050411715, via: 19(Flex-Xen1.flexhost.local), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"d4085d91-22fa-4965-bfa7-d1a1800f6aa7","origUrl":"http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2","uuid":"8a4039f2-bb71-11e4-8c76-0050569b1662","id":1,"format":"VHD","accountId":1,"checksum":"2b15ab4401c2d655264732d3fc600241","hvm":false,"displayText":"SystemVM Template (XenServer)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN0","id":8,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN0/?ROLE=Primary&STOREUUID=FlexSAN1-LUN0"}},"name":"routing-1","hypervisorType":"XenServer"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"a2456229-2942-4d9b-9bff-c6d9ea004fbd","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN1-LUN0","id":8,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN1-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN1-LUN0/?ROLE=Primary&STOREUUID=FlexSAN1-LUN0"}},"name":"ROOT-22411","size":2689602048,"volumeId":27330,"vmName":"v-22411-VM","accountId":1,"format":"VHD","provisioningType":"THIN","id":27330,"deviceId":0,"hypervisorType":"XenServer"}},"executeInSequence":true,"options":{},"wait":0}}] }

                        2017-06-22 09:23:49,918 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-4:ctx-8db8a7ec) Seq 19-2522578741280910778: Executing request

                        2017-06-22 09:23:49,925 DEBUG [c.c.h.x.r.XenServerStorageProcessor] (DirectAgent-4:ctx-8db8a7ec) Catch Exception com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid: d4085d91-22fa-4965-bfa7-d1a1800f6aa7 failed due to The uuid you supplied was invalid.

                        2017-06-22 09:23:49,925 WARN  [c.c.h.x.r.XenServerStorageProcessor] (DirectAgent-4:ctx-8db8a7ec) Unable to create volume; Pool=volumeTO[uuid=a2456229-2942-4d9b-9bff-c6d9ea004fbd|path=null|datastore=PrimaryDataStoreTO[uuid=FlexSAN1-LUN0|name=null|id=8|pooltype=PreSetup]]; Disk:

                        com.cloud.utils.exception.CloudRuntimeException: Catch Exception com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid: d4085d91-22fa-4965-bfa7-d1a1800f6aa7 failed due to The uuid you supplied was invalid.



                        4.       download_state shows DOWNLOADED and download_pct is 100 for all of my IDs.

                        5.       My storage is a firmware-based Dell 3200, so there is no OS to log into to view the install_path.  How do I validate that?



                        Now, is it weird that my system VM install date is 2015, yet the command I used above completed successfully?



                        Jeremy





                        -----Original Message-----
                        From: Dag Sonstebo [mailto:Dag.Sonstebo@shapeblue.com]
                        Sent: Thursday, June 22, 2017 9:58 AM
                        To: users@cloudstack.apache.org
                        Subject: Re: Recreating SystemVM's



                        1) You've not told us why you chose to redownload the system VM template - can you elaborate?

                        2) Can you run: "SELECT * FROM cloud.vm_template where name like '%system%' and hypervisor_type='XenServer';"

                        3) "So after cleaning up the 5TB worth of data (last Friday).." - did you check all your disk chains to ensure you didn't wipe a base disk? If not, chances are you wiped a template disk that CloudStack now thinks is still there.

                        Check this in template_spool_ref - work out from point 2) above what your template ID is, as well as what your primary storage pool ID is, something like this: "SELECT * FROM cloud.template_spool_ref where template_id=XYZ and pool_id=12345;"

                        What is the downloaded state?

                        Check the install_path on your primary storage - does it exist?



                        Regards,

                        Dag Sonstebo

                        Cloud Architect

                        ShapeBlue



                        On 22/06/2017, 15:14, "Jeremy Peterson" <jpeterson@acentek.net<mailto:jpeterson@acentek.net>> wrote:



                            Sorry - I am using 4.5.0; I mistyped my version.



                            http://prntscr.com/fmuluj



                            My router.template.xenserver shows

                                        SystemVM Template (XenServer)



                            If I go to Templates -> SystemVM Template (XenServer), the Status is Download Complete and Ready shows Yes



                            This is the command I ran on my management server to redownload the system VM template.

                                        /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt -m /secondary -u http://cloudstack.apt-get.eu/systemvm/4.5/systemvm64template-4.5-xen.vhd.bz2 -h xenserver -F



                            i-153-446-VM was a working VM that powered off during this whole set of problems and had not been able to power back on since.  The oddity is that I get the error "insufficient capacity" in the UI, yet during the call to start that VM it finds a valid host for memory and CPU but then errors with "UUID is invalid".  I am more focused on the SSVM and console proxy VM not starting at this time, as I have replaced i-153-446 with a new VM.  Now this is puzzling: I can launch new VMs from ISOs and templates.



                                        Display name     test-launch-from-template

                                        Name    test-launch-from-template

                                        State     Running

                                        Template             CentOS 7 40GB

                                        Dynamically Scalable       Yes

                                        OS Type               CentOS 7

                                        Hypervisor          XenServer

                                        Attached ISO

                                        Compute offering            2vCPU,4GB RAM,HA

                                        # of CPU Cores  2

                                        CPU (in MHz)     2000

                                        Memory (in MB)              4096

                                        VGPU

                                        HA Enabled         Yes

                                        Group

                                        Zone name         Rushford

                                        Host       Flex-Xen4.flexhost.local

                                        Domain ROOT

                                        Account               admin

                                        Created                22 Jun 2017 08:33:30



                            I suspected storage a while back and noticed that the SSVM was recreating its 2.5GB disk over and over on all of my storage LUNs.  So after cleaning up the 5TB worth of data (last Friday) I don't see a storage issue with my SAN iSCSI connections.



                            http://prntscr.com/fmus2i



                            Again, thanks Dag for your response - here's hoping some of that helps track down what's broken.



                            What's killing me is the volume of logs.  It seems like it's creating multiple system VMs at the same time.





                            Jeremy



                            -----Original Message-----

                            From: Dag Sonstebo [mailto:Dag.Sonstebo@shapeblue.com]

                            Sent: Thursday, June 22, 2017 6:49 AM

                            To: users@cloudstack.apache.org

                            Subject: Re: Recreating SystemVM's



                            OK, you seem to have a handful of issues here.



                            1) You have stated at the start of this thread you are using CloudStack 4.9.0 and XS6.5.

                            In this log dump - https://pastebin.com/2DhzFVDZ - all your downloads are for 4.5 system VM templates, e.g.



                            2017-06-21 10:46:16,440 DEBUG [c.c.a.t.Request] (Work-Job-Executor-39:ctx-ca13c13b job-1042/job-147411 ctx-ebfa1fb6) Seq 15-7914231920173516250: Executing:  { Cmd , MgmtId: 345050411715, via: 15(Flex-Xen3.flexhost.local), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf","origUrl":"http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2","uuid"

                            .



                            Your MySQL query confirms this:



                            - - -



                            SELECT * FROM cloud.vm_template where type='SYSTEM';



                                        1              routing-1             SystemVM Template (XenServer)            8a4039f2-bb71-11e4-8c76-0050569b1662                0              0              SYSTEM                0              64                http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2        VHD       2015-02-23 09:35:05                               1              2b15ab4401c2d655264732d3fc600241     SystemVM Template (XenServer)            0                0              184         1              0              1              0              XenServer                                           0              2689602048                Active   0                              0

                                        3              routing-3             SystemVM Template (KVM)       8a46062a-bb71-11e4-8c76-0050569b1662              0                0              SYSTEM                0              64           http://download.cloud.com/templates/4.5/systemvm64template-4.5-kvm.qcow2.bz2         QCOW2                2015-02-23 09:35:05                        1              aa9f501fecd3de1daeb9e2f357f6f002                SystemVM Template (KVM)       0              0              15           1              0              1              0              KVM                                      0                              Active   0                              0

                                        8              routing-8             SystemVM Template (vSphere)                8a4e70c6-bb71-11e4-8c76-0050569b1662                0              0              SYSTEM                0              64                http://download.cloud.com/templates/4.5/systemvm64template-4.5-vmware.ova        OVA       2015-02-23 09:35:05                               1              3106a79a4ce66cd7f6a7c50e93f2db57      SystemVM Template (vSphere)                0                0              15           1              0              1              0              VMware                                              0                              Active   0                                1

                                        9              routing-9             SystemVM Template (HyperV)  8a5184e6-bb71-11e4-8c76-0050569b1662             0                0              SYSTEM                0              64           http://download.cloud.com/templates/4.5/systemvm64template-4.5-hyperv.vhd.zip          VHD       2015-02-23 09:35:05                        1              70bd30ea02ee9ed67d2c6b85c179cee9                SystemVM Template (HyperV)  0              0              15           1              0              1              0              Hyperv                                 0                              Active   0                              0

                                        10           routing-10           SystemVM Template (LXC)          5bb9e71c-bb72-11e4-8c76-0050569b1662             0                0              SYSTEM                0              64           http://download.cloud.com/templates/4.5/systemvm64template-4.5-kvm.qcow2.bz2         QCOW2                2015-02-23 09:40:56                        1              aa9f501fecd3de1daeb9e2f357f6f002                SystemVM Template (LXC)          0              0              15           1              0              1              0              LXC                                         0                              Active   0                              0





                            - - -



                            In addition you have also stated "I redeployed systemcl64template-5.6-xen.vhd.bz2 last week does that not recreated the uuid ?"



                            So the questions here are:

                            - why are you using 4.5 templates with 4.9? Did you recently upgrade or was this put in wrong to start off with?

                            - what are you trying to do with "systemcl64template-5.6-xen.vhd.bz2"? My guess is this is a typo? If you were trying to install the 4.6 template what process did you follow?

                            - following on from this can you do a MySQL query listing the uploaded template? Can you also check what the status is of this in your GUI - is it uploaded to the zone in question and in a READY state? You can also check this in the template_store_ref table.

                            - what is your global setting for "router.template.xenserver" currently set to?
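                            A rough sketch of those two DB checks (table and column names are from the CloudStack schema; the LIKE pattern is just an example):

```sql
-- Is a 4.6 system VM template registered and ready on secondary storage?
SELECT t.id, t.name, t.url, s.download_state, s.state
FROM cloud.vm_template t
JOIN cloud.template_store_ref s ON s.template_id = t.id
WHERE t.url LIKE '%4.6%';

-- Current value of the router.template.xenserver global setting:
SELECT name, value FROM cloud.configuration
WHERE name = 'router.template.xenserver';
```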



                            I get the impression your environment is possibly managing to limp along using 4.5 system VM templates - if so I'm surprised if anything is working. For 4.9 you should be using 4.6 templates (e.g. http://packages.shapeblue.com.s3-eu-west-1.amazonaws.com/systemvmtemplate/4.6/new/systemvm64template-4.6-xen.vhd.bz2 ) - although I think maybe this is what you are trying to achieve?
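                            For reference, installing the 4.6 XenServer template would use the same installer invoked earlier in this thread - a sketch, assuming /secondary is still your secondary storage mount point:

```shell
# Assumption: /secondary is the secondary storage mount point, as in the
# earlier cloud-install-sys-tmplt run in this thread.
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
  -m /secondary \
  -u http://packages.shapeblue.com.s3-eu-west-1.amazonaws.com/systemvmtemplate/4.6/new/systemvm64template-4.6-xen.vhd.bz2 \
  -h xenserver -F
```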



                            2) VM i-153-446 - as you can see from the logs there's not a lot to go by: "Unable to start i-153-446-VM due to ". However - you haven't told us if this is a new VM or an existing one. If it's new it won't necessarily be able to start until you have the SSVM sorted in your Rushford zone. For further troubleshooting you should also check the logs on the XS host where this VM was trying to start.



                            3) Your issues could be storage related - do all SRs (like FlexSAN2-LUN0) show as connected to your XS hosts in XenCenter? If not, can you repair them from XenCenter?



                            Regards,

                            Dag Sonstebo

                            Cloud Architect

                            ShapeBlue



                            On 21/06/2017, 19:39, "Jeremy Peterson" <jpeterson@acentek.net> wrote:



                                And combing through the logs I see that one of my VMs, i-153-446, is trying to launch: it's passed all the CPU and memory checks and found primary storage, but then when it goes to deploy I am getting a catch exception: insufficient storage.







                                2017-06-21 13:10:19,219 DEBUG [c.c.h.x.r.CitrixResourceBase] (DirectAgent-252:ctx-717133c7) The VM is in stopped state, detected problem during startup : i-153-446-VM

                                2017-06-21 13:10:19,219 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-252:ctx-717133c7) Seq 19-2522578741280901601: Response Received:

                                2017-06-21 13:10:19,219 DEBUG [c.c.a.t.Request] (DirectAgent-252:ctx-717133c7) Seq 19-2522578741280901601: Processing:  { Ans: , MgmtId: 345050411715, via: 19, Ver: v1, Flags: 10, [{"com.cloud.agent.api.StartAnswer":{"vm":{"id":446,"name":"i-153-446-VM","bootloader":"PyGrub","type":"User","cpus":4,"minSpeed":500,"maxSpeed":2000,"minRam":3221225472,"maxRam":12884901888,"arch":"x86_64","os":"Windows Server 2012 R2 (64-bit)","platformEmulator":"Windows Server 2012 R2 (64-bit)","bootArgs":"","enableHA":true,"limitCpuUse":false,"enableDynamicallyScaleVm":false,"vncPassword":"a9QSgiSW/+6iG3aaGfgaJw==","params":{"memoryOvercommitRatio":"4","platform":"viridian:true;acpi:1;apic:true;viridian_reference_tsc:true;viridian_time_ref_count:true;pae:true;videoram:8;device_id:0002;nx:true;vga:std","Message.ReservedCapacityFreed.Flag":"true","cpuOvercommitRatio":"4","hypervisortoolsversion":"xenserver61"},"uuid":"896c1d67-3f1d-4f0b-bd2c-1548cd637faf","disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"69eba19d-6f5d-4f5b-94eb-69cf467ab25e","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN2-LUN0","id":5,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN2-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN2-LUN0/?ROLE=Primary&STOREUUID=FlexSAN2-LUN0"}},"name":"ROOT-446","size":85899345920,"path":"8a64ab1c-3528-4b8a-bb9a-ef164c7f6385","volumeId":537,"vmName":"i-153-446-VM","accountId":153,"format":"VHD","provisioningType":"THIN","id":537,"deviceId":0,"hypervisorType":"XenServer"}},"diskSeq":0,"path":"8a64ab1c-3528-4b8a-bb9a-ef164c7f6385","type":"ROOT","_details":{"managed":"false","storagePort":"0","storageHost":"localhost","volumeSize":"85899345920"}},{"data":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"id":0,"format":"ISO","accountId":0,"hvm":false}},"diskSeq":3,"type":"ISO"}],"nics":[{"deviceId":0,"networkRateMbps":1000,"defaultNic":true,"pxeDisable":false,"n
icUuid":"8ecdff5f-2e67-406c-8145-71a437b35ccb","uuid":"d6010ef5-ae7e-4b5a-a8fe-b33d701742dd","ip":"192.168.211.211","netmask":"255.255.255.0","gateway":"192.168.211.1","mac":"02:00:20:1c:00:05","dns1":"208.74.240.5","dns2":"208.74.247.245","broadcastType":"Vlan","type":"Guest","broadcastUri":"vlan://1611","isolationUri":"vlan://1611","isSecurityGroupEnabled":false,"name":"GUEST-PUB"}],"vcpuMaxLimit":16},"_iqnToPath":{},"result":false,"details":"Unable to start i-153-446-VM due to ","wait":0}}] }

                                2017-06-21 13:10:19,219 DEBUG [c.c.a.t.Request] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Seq 19-2522578741280901601: Received:  { Ans: , MgmtId: 345050411715, via: 19, Ver: v1, Flags: 10, { StartAnswer } }

                                2017-06-21 13:10:19,230 INFO  [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Unable to start VM on Host[-19-Routing] due to Unable to start i-153-446-VM due to

                                2017-06-21 13:10:19,237 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] (Work-Job-Executor-95:ctx-0d080cef job-1042/job-148142) Done executing com.cloud.vm.VmWorkStart for job-148142

                                2017-06-21 13:10:19,240 DEBUG [o.a.c.f.j.i.SyncQueueManagerImpl] (Work-Job-Executor-95:ctx-0d080cef job-1042/job-148142) Sync queue (128149) is currently empty

                                2017-06-21 13:10:19,240 INFO  [o.a.c.f.j.i.AsyncJobMonitor] (Work-Job-Executor-95:ctx-0d080cef job-1042/job-148142) Remove job-148142 from job monitoring

                                2017-06-21 13:10:19,245 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Cleaning up resources for the vm VM[User|i-153-446-VM] in Starting state

                                2017-06-21 13:10:19,246 DEBUG [c.c.a.t.Request] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Seq 19-2522578741280901602: Sending  { Cmd , MgmtId: 345050411715, via: 19(Flex-Xen1.flexhost.local), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":false,"vmName":"i-153-446-VM","wait":0}}] }

                                2017-06-21 13:10:19,247 DEBUG [c.c.a.t.Request] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Seq 19-2522578741280901602: Executing:  { Cmd , MgmtId: 345050411715, via: 19(Flex-Xen1.flexhost.local), Ver: v1, Flags: 100011, [{"com.cloud.agent.api.StopCommand":{"isProxy":false,"executeInSequence":false,"checkBeforeCleanup":false,"vmName":"i-153-446-VM","wait":0}}] }

                                2017-06-21 13:10:19,247 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-82:ctx-9ee1039e) Seq 19-2522578741280901602: Executing request

                                2017-06-21 13:10:19,250 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-82:ctx-9ee1039e) Seq 19-2522578741280901602: Response Received:

                                2017-06-21 13:10:19,250 DEBUG [c.c.a.t.Request] (DirectAgent-82:ctx-9ee1039e) Seq 19-2522578741280901602: Processing:  { Ans: , MgmtId: 345050411715, via: 19, Ver: v1, Flags: 10, [{"com.cloud.agent.api.StopAnswer":{"result":true,"details":"VM does not exist","wait":0}}] }

                                2017-06-21 13:10:19,250 DEBUG [c.c.a.t.Request] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Seq 19-2522578741280901602: Received:  { Ans: , MgmtId: 345050411715, via: 19, Ver: v1, Flags: 10, { StopAnswer } }

                                2017-06-21 13:10:19,254 DEBUG [c.c.n.NetworkModelImpl] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Service SecurityGroup is not supported in the network id=298

                                2017-06-21 13:10:19,256 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Changing active number of nics for network id=298 on -1

                                2017-06-21 13:10:19,257 WARN  [o.a.c.s.SecondaryStorageManagerImpl] (secstorage-1:ctx-b13acfc7) Exception while trying to start secondary storage vm

                                com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[SecondaryStorageVm|s-22605-VM]Scope=interface com.cloud.dc.DataCenter; id=1

                                        at com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:941)

                                        at com.cloud.vm.VirtualMachineManagerImpl.orchestrateStart(VirtualMachineManagerImpl.java:4471)

                                        at sun.reflect.GeneratedMethodAccessor246.invoke(Unknown Source)

                                        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

                                        at java.lang.reflect.Method.invoke(Method.java:606)

                                        at com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)

                                        at com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:4627)

                                        at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:103)

                                        at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:536)

                                        at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)

                                        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)

                                        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)

                                        at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)

                                        at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)

                                        at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:493)

                                        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)

                                        at java.util.concurrent.FutureTask.run(FutureTask.java:262)

                                        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

                                        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

                                        at java.lang.Thread.run(Thread.java:745)

                                2017-06-21 13:10:19,259 INFO  [o.a.c.s.SecondaryStorageManagerImpl] (secstorage-1:ctx-b13acfc7) Unable to start secondary storage vm for standby capacity, secStorageVm vm Id : 22605, will recycle it and start a new one

                                2017-06-21 13:10:19,259 DEBUG [c.c.a.SecondaryStorageVmAlertAdapter] (secstorage-1:ctx-b13acfc7) received secondary storage vm alert

                                2017-06-21 13:10:19,259 DEBUG [c.c.a.SecondaryStorageVmAlertAdapter] (secstorage-1:ctx-b13acfc7) Secondary Storage Vm creation failure, zone: Rushford

                                2017-06-21 13:10:19,260 WARN  [o.a.c.alerts] (secstorage-1:ctx-b13acfc7)  alertType:: 19 // dataCenterId:: 1 // podId:: null // clusterId:: null // message:: Secondary Storage Vm creation failure. zone: Rushford, error details: null

                                2017-06-21 13:10:19,267 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Asking VpcVirtualRouter to release NicProfile[1052-446-987e8cca-ec26-46cd-aec2-a1f4b2283dff-192.168.211.211-null

                                2017-06-21 13:10:19,267 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Successfully released network resources for the vm VM[User|i-153-446-VM]

                                2017-06-21 13:10:19,267 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Successfully cleanued up resources for the vm VM[User|i-153-446-VM] in Starting state

                                2017-06-21 13:10:19,269 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Root volume is ready, need to place VM in volume's cluster

                                2017-06-21 13:10:19,274 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Deploy avoids pods: [], clusters: [], hosts: [19]

                                2017-06-21 13:10:19,274 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) DeploymentPlanner allocation algorithm: com.cloud.deploy.UserDispersingPlanner@4cafa203

                                2017-06-21 13:10:19,276 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 8000, requested ram: 12884901888

                                2017-06-21 13:10:19,276 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Is ROOT volume READY (pool already allocated)?: Yes

                                2017-06-21 13:10:19,276 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) DeploymentPlan has host_id specified, choosing this host and making no checks on this host: 19

                                2017-06-21 13:10:19,276 INFO  [o.a.c.s.PremiumSecondaryStorageManagerImpl] (secstorage-1:ctx-b13acfc7) Primary secondary storage is not even started, wait until next turn

                                2017-06-21 13:10:19,276 ERROR [c.c.a.AlertManagerImpl] (Email-Alerts-Sender-25:null)  Failed to send email alert javax.mail.MessagingException: Could not connect to SMTP host: spam.acentek.net, port: 465 (java.net.ConnectException: Connection refused)

                                2017-06-21 13:10:19,276 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) The specified host is in avoid set

                                2017-06-21 13:10:19,276 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Cannnot deploy to specified host, returning.

                                2017-06-21 13:10:19,297 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) VM state transitted from :Starting to Stopped with event: OperationFailedvm's original host id: 1 new host id: null host id before state transition: 19

                                2017-06-21 13:10:19,300 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Hosts's actual total CPU: 48000 and CPU after applying overprovisioning: 192000

                                2017-06-21 13:10:19,300 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Hosts's actual total RAM: 128790209280 and RAM after applying overprovisioning: 515160834048

                                2017-06-21 13:10:19,300 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) release cpu from host: 19, old used: 24000,reserved: 0, actual total: 48000, total with overprovisioning: 192000; new used: 16000,reserved:0; movedfromreserved: false,moveToReserveredfalse

                                2017-06-21 13:10:19,300 DEBUG [c.c.c.CapacityManagerImpl] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) release mem from host: 19, old used: 38654705664,reserved: 0, total: 515160834048; new used: 25769803776,reserved:0; movedfromreserved: false,moveToReserveredfalse

                                2017-06-21 13:10:19,332 ERROR [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Invocation exception, caused by: com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-153-446-VM]Scope=interface com.cloud.dc.DataCenter; id=1

                                2017-06-21 13:10:19,332 INFO  [c.c.v.VmWorkJobHandlerProxy] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145 ctx-f018393f) Rethrow exception com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-153-446-VM]Scope=interface com.cloud.dc.DataCenter; id=1

                                2017-06-21 13:10:19,332 DEBUG [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145) Done with run of VM work job: com.cloud.vm.VmWorkStart for VM 446, job origin: 148144

                                2017-06-21 13:10:19,332 ERROR [c.c.v.VmWorkJobDispatcher] (Work-Job-Executor-100:ctx-e1276898 job-148144/job-148145) Unable to complete AsyncJobVO {id:148145, userId: 2, accountId: 2, instanceType: null, instanceId: null, cmd: com.cloud.vm.VmWorkStart, cmdInfo: rO0ABXNyABhjb20uY2xvdWQudm0uVm1Xb3JrU3RhcnR9cMGsvxz73gIAC0oABGRjSWRMAAZhdm9pZHN0ADBMY29tL2Nsb3VkL2RlcGxveS9EZXBsb3ltZW50UGxhbm5lciRFeGNsdWRlTGlzdDtMAAljbHVzdGVySWR0ABBMamF2YS9sYW5nL0xvbmc7TAAGaG9zdElkcQB-AAJMAAtqb3VybmFsTmFtZXQAEkxqYXZhL2xhbmcvU3RyaW5nO0wAEXBoeXNpY2FsTmV0d29ya0lkcQB-AAJMAAdwbGFubmVycQB-AANMAAVwb2RJZHEAfgACTAAGcG9vbElkcQB-AAJMAAlyYXdQYXJhbXN0AA9MamF2YS91dGlsL01hcDtMAA1yZXNlcnZhdGlvbklkcQB-AAN4cgATY29tLmNsb3VkLnZtLlZtV29ya5-ZtlbwJWdrAgAESgAJYWNjb3VudElkSgAGdXNlcklkSgAEdm1JZEwAC2hhbmRsZXJOYW1lcQB-AAN4cAAAAAAAAAACAAAAAAAAAAIAAAAAAAABvnQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAAAAAAAAAXBzcgAOamF2YS5sYW5nLkxvbmc7i-SQzI8j3wIAAUoABXZhbHVleHIAEGphdmEubGFuZy5OdW1iZXKGrJUdC5TgiwIAAHhwAAAAAAAAAAFzcQB-AAgAAAAAAAAAE3BwcHEAfgAKcHNyABFqYXZhLnV0aWwuSGFzaE1hcAUH2sHDFmDRAwACRgAKbG9hZEZhY3RvckkACXRocmVzaG9sZHhwP0AAAAAAAAx3CAAAABAAAAABdAAKVm1QYXNzd29yZHQAHHJPMEFCWFFBRG5OaGRtVmtYM0JoYzNOM2IzSmt4cA, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: null, initMsid: 345050411715, completeMsid: null, lastUpdated: null, lastPolled: null, created: Wed Jun 21 13:10:16 CDT 2017}, job origin:148144

                                com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-153-446-VM]Scope=interface com.cloud.dc.DataCenter; id=1





                                My logs are just rolling with these errors.



                                Jeremy





                                -----Original Message-----

                                From: Jeremy Peterson [mailto:jpeterson@acentek.net]

                                Sent: Wednesday, June 21, 2017 1:10 PM

                                To: users@cloudstack.apache.org; S. Brüseke - proIO GmbH <s.brueseke@proio.com>

                                Subject: RE: Recreating SystemVM's



                                Why does my DEBUG show uuid 8a4039f2-bb71-11e4-8c76-0050569b1662, but below in my catch exception it shows uuid ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf?



                                See below.



                                2017-06-21 10:46:16,431 DEBUG [c.c.a.t.Request] (Work-Job-Executor-45:ctx-7cdfe536 job-342/job-147412 ctx-39b2bc63) Seq 1-6981705322331112844:

                                Sending  { Cmd , MgmtId: 345050411715, via: 1(Flex-Xen2.flexhost.local), Ver: v1,

                                Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":

                                {"path":"ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf","origUrl":"http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2",

                                "uuid":"8a4039f2-bb71-11e4-8c76-0050569b1662","id":1,"format":"VHD","accountId":1,"checksum":"2b15ab4401c2d655264732d3fc600241","hvm":false,

                                "displayText":"SystemVM Template (XenServer)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN2-LUN0",

                                "id":5,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN2-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN2-LUN0/?ROLE=Primary&STOREUUID=FlexSAN2-LUN0"}},

                                "name":"routing-1","hypervisorType":"XenServer"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{

                                "uuid":"a2456229-2942-4d9b-9bff-c6d9ea004fbd","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{

                                "uuid":"FlexSAN2-LUN0","id":5,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN2-LUN0","port":0,"url":

                                "PreSetup://localhost/FlexSAN2-LUN0/?ROLE=Primary&STOREUUID=FlexSAN2-LUN0"}},"name":"ROOT-22411","size":2689602048,"volumeId":27330,"vmName":

                                "v-22411-VM","accountId":1,"format":"VHD","provisioningType":"THIN","id":27330,"deviceId":0,"hypervisorType":"XenServer"}},"executeInSequence":true,"options":{},"wait":0}}] }

                                2017-06-21 10:46:16,431 DEBUG [c.c.a.t.Request] (Work-Job-Executor-45:ctx-7cdfe536 job-342/job-147412 ctx-39b2bc63) Seq 1-6981705322331112844:



                                2017-06-21 10:46:16,444 WARN  [c.c.h.x.r.XenServerStorageProcessor] (DirectAgent-152:ctx-385c99e9) Unable to create volume; Pool=volumeTO[uuid=a2456229-2942-4d9b-9bff-c6d9ea004fbd|path=null|datastore=PrimaryDataStoreTO[uuid=FlexSAN2-LUN0|name=null|id=5|pooltype=PreSetup]];

                                Disk:com.cloud.utils.exception.CloudRuntimeException: Catch Exception com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for

                                uuid: ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf failed due to The uuid you supplied was invalid.





                                Now if I check the DB to find out what my template's uuid should be.



                                SELECT * FROM cloud.vm_template where type='SYSTEM';



                                        1              routing-1             SystemVM Template (XenServer)            8a4039f2-bb71-11e4-8c76-0050569b1662                0              0              SYSTEM                0              64                http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2        VHD       2015-02-23 09:35:05                               1              2b15ab4401c2d655264732d3fc600241     SystemVM Template (XenServer)            0                0              184         1              0              1              0              XenServer                                           0              2689602048                Active   0                              0

                                        3              routing-3             SystemVM Template (KVM)       8a46062a-bb71-11e4-8c76-0050569b1662              0                0              SYSTEM                0              64           http://download.cloud.com/templates/4.5/systemvm64template-4.5-kvm.qcow2.bz2         QCOW2                2015-02-23 09:35:05                        1              aa9f501fecd3de1daeb9e2f357f6f002                SystemVM Template (KVM)       0              0              15           1              0              1              0              KVM                                      0                              Active   0                              0

                                        8              routing-8             SystemVM Template (vSphere)                8a4e70c6-bb71-11e4-8c76-0050569b1662                0              0              SYSTEM                0              64                http://download.cloud.com/templates/4.5/systemvm64template-4.5-vmware.ova        OVA       2015-02-23 09:35:05                               1              3106a79a4ce66cd7f6a7c50e93f2db57      SystemVM Template (vSphere)                0                0              15           1              0              1              0              VMware                                              0                              Active   0                                1

                                        9              routing-9             SystemVM Template (HyperV)  8a5184e6-bb71-11e4-8c76-0050569b1662             0                0              SYSTEM                0              64           http://download.cloud.com/templates/4.5/systemvm64template-4.5-hyperv.vhd.zip          VHD       2015-02-23 09:35:05                        1              70bd30ea02ee9ed67d2c6b85c179cee9                SystemVM Template (HyperV)  0              0              15           1              0              1              0              Hyperv                                 0                              Active   0                              0

                                        10           routing-10           SystemVM Template (LXC)          5bb9e71c-bb72-11e4-8c76-0050569b1662             0                0              SYSTEM                0              64           http://download.cloud.com/templates/4.5/systemvm64template-4.5-kvm.qcow2.bz2         QCOW2                2015-02-23 09:40:56                        1              aa9f501fecd3de1daeb9e2f357f6f002                SystemVM Template (LXC)          0              0              15           1              0              1              0              LXC                                         0                              Active   0                              0





                                OK, so that shows me my system template's UUID is 8a4039f2-bb71-11e4-8c76-0050569b1662, and that lines up correctly with the UUID in my debug output, 8a4039f2-bb71-11e4-8c76-0050569b1662.
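                                One thing worth noting: the VDI path in the error (ab6f3bcd-...) is not the template UUID above. It is the install path of the template's cached copy on the primary storage pool, which CloudStack tracks in template_spool_ref. A read-only sketch to inspect that cache entry (table and column names as in a stock 4.x schema; verify against your own DB first):

```sql
-- Show where CloudStack thinks template 1 (routing-1) is cached on
-- each primary pool; install_path should match a real VDI on the SR.
SELECT id, pool_id, template_id, install_path, download_state, state
FROM cloud.template_spool_ref
WHERE template_id = 1;
```

                                If install_path names a VDI that no longer exists on the SR, that would produce exactly this "uuid you supplied was invalid" failure.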



                                Suggestions? Ideas? Thoughts?



                                Thank you.





                                Jeremy





                                -----Original Message-----

                                From: Jeremy Peterson [mailto:jpeterson@acentek.net]

                                Sent: Wednesday, June 21, 2017 11:58 AM

                                To: users@cloudstack.apache.org; S. Brüseke - proIO GmbH <s.brueseke@proio.com>

                                Subject: RE: Recreating SystemVM's



                                You are correct I had 2 hosts disabled when I tried to launch that VM.  But my hosts all show state Up.



                                Here's Flex-Xen1.flexhost.local:

                                http://prntscr.com/fmi0tw



                                Here's the info page of the host:



                                http://prntscr.com/fmi16g



                                Resource state:        Enabled

                                State:      Up



                                I did a force reconnect on all hosts and that cleared the avoid set error.



                                But now I am getting "UUID invalid" when trying to launch a VM.  This is what's happening to the system VMs.



                                https://pastebin.com/2DhzFVDZ



                                You can see it errors: "The uuid you supplied was invalid."



                                Now, I see the above command specified the host and storage, but the UUID is "uuid":"8a4039f2-bb71-11e4-8c76-0050569b1662".



                                How can I see what that ties to ?
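                                One way to see what a UUID ties to: nearly every CloudStack table carries a uuid column, so it can be matched against the likely tables directly. A read-only sketch (stock 4.x schema assumed):

```sql
-- Find which template (if any) owns this UUID.
SELECT id, name, type, uuid
FROM cloud.vm_template
WHERE uuid = '8a4039f2-bb71-11e4-8c76-0050569b1662';
```

                                The same pattern works against cloud.volumes, cloud.vm_instance, and so on if the template table returns nothing.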



                                I redeployed systemvm64template-5.6-xen.vhd.bz2 last week; does that not recreate the UUID?



                                Jeremy





                                -----Original Message-----

                                From: Dag Sonstebo [mailto:Dag.Sonstebo@shapeblue.com]

                                Sent: Wednesday, June 21, 2017 11:12 AM

                                To: users@cloudstack.apache.org; S. Brüseke - proIO GmbH <s.brueseke@proio.com>

                                Subject: Re: Recreating SystemVM's



                                Hi Jeremy,



                                You have 6 hosts: "List of hosts in ascending order of number of VMs: [15, 17, 19, 1, 16, 18]" - my guess is you have disabled hosts 16+18 for their reboot.

                                You immediately have the rest of the hosts in an avoid set: "Deploy avoids pods: [], clusters: [], hosts: [17, 1, 19, 15]".



                                So you need to work out why those hosts are considered invalid. Do they show up as live in your CloudStack GUI? Are they all enabled, as well as out of maintenance mode?
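                                Those states can also be read straight from the database; a quick read-only sketch (stock schema assumed):

```sql
-- A host is only usable by the allocator when status = 'Up'
-- and resource_state = 'Enabled'.
SELECT id, name, status, resource_state
FROM cloud.host
WHERE type = 'Routing';
```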



                                Regards,

                                Dag Sonstebo

                                Cloud Architect

                                ShapeBlue



                                On 21/06/2017, 15:13, "Jeremy Peterson" <jpeterson@acentek.net> wrote:



                                    So this morning I reconnected all hosts.



                                    I also disabled my two hosts that need to reboot, powered on a VM, and now I am getting an Insufficient Resources error.



                                    What's odd is that the Host Allocator is returning 0 suitable hosts:



                                    2017-06-21 08:43:53,695 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348) Root volume is ready, need to place VM in volume's cluster

                                    2017-06-21 08:43:53,695 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348) Vol[537|vm=446|ROOT] is READY, changing deployment plan to use this pool's dcId: 1 , podId: 1 , and clusterId: 1

                                    2017-06-21 08:43:53,702 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348) Deploy avoids pods: [], clusters: [], hosts: [17, 1, 19, 15]

                                    2017-06-21 08:43:53,703 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348) DeploymentPlanner allocation algorithm: com.cloud.deploy.UserDispersingPlanner@4cafa203

                                    2017-06-21 08:43:53,703 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:1, requested cpu: 8000, requested ram: 12884901888

                                    2017-06-21 08:43:53,703 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348) Is ROOT volume READY (pool already allocated)?: Yes

                                    2017-06-21 08:43:53,703 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348) This VM has last host_id specified, trying to choose the same host: 1

                                    2017-06-21 08:43:53,704 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348) The last host of this VM is in avoid set

                                    2017-06-21 08:43:53,704 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348) Cannot choose the last host to deploy this VM

                                    2017-06-21 08:43:53,704 DEBUG [c.c.d.FirstFitPlanner] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348) Searching resources only under specified Cluster: 1

                                    2017-06-21 08:43:53,714 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348) Checking resources in Cluster: 1 under Pod: 1

                                    2017-06-21 08:43:53,714 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348 FirstFitRoutingAllocator) Looking for hosts in dc: 1  pod:1  cluster:1

                                    2017-06-21 08:43:53,718 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348 FirstFitRoutingAllocator) List of hosts in ascending order of number of VMs: [15, 17, 19, 1, 16, 18]

                                    2017-06-21 08:43:53,718 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348 FirstFitRoutingAllocator) FirstFitAllocator has 4 hosts to check for allocation: [Host[-15-Routing], Host[-17-Routing], Host[-19-Routing], Host[-1-Routing]]

                                    2017-06-21 08:43:53,727 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348 FirstFitRoutingAllocator) Found 4 hosts for allocation after prioritization: [Host[-15-Routing], Host[-17-Routing], Host[-19-Routing], Host[-1-Routing]]

                                    2017-06-21 08:43:53,727 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348 FirstFitRoutingAllocator) Looking for speed=8000Mhz, Ram=12288

                                    2017-06-21 08:43:53,727 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348 FirstFitRoutingAllocator) Host name: Flex-Xen3.flexhost.local, hostId: 15 is in avoid set, skipping this and trying other available hosts

                                    2017-06-21 08:43:53,727 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348 FirstFitRoutingAllocator) Host name: Flex-Xen4.flexhost.local, hostId: 17 is in avoid set, skipping this and trying other available hosts

                                    2017-06-21 08:43:53,727 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348 FirstFitRoutingAllocator) Host name: Flex-Xen1.flexhost.local, hostId: 19 is in avoid set, skipping this and trying other available hosts

                                    2017-06-21 08:43:53,727 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348 FirstFitRoutingAllocator) Host name: Flex-Xen2.flexhost.local, hostId: 1 is in avoid set, skipping this and trying other available hosts

                                    2017-06-21 08:43:53,727 DEBUG [c.c.a.m.a.i.FirstFitAllocator] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348 FirstFitRoutingAllocator) Host Allocator returning 0 suitable hosts

                                    2017-06-21 08:43:53,727 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348) No suitable hosts found

                                    2017-06-21 08:43:53,727 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348) No suitable hosts found under this Cluster: 1

                                    2017-06-21 08:43:53,728 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348) Could not find suitable Deployment Destination for this VM under any clusters, returning.

                                    2017-06-21 08:43:53,728 DEBUG [c.c.d.FirstFitPlanner] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348) Searching resources only under specified Cluster: 1

                                    2017-06-21 08:43:53,729 DEBUG [c.c.d.FirstFitPlanner] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348) The specified cluster is in avoid set, returning.

                                    2017-06-21 08:43:53,736 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348) Deploy avoids pods: [], clusters: [1], hosts: [17, 1, 19, 15]

                                    2017-06-21 08:43:53,737 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348) DeploymentPlanner allocation algorithm: com.cloud.deploy.UserDispersingPlanner@4cafa203

                                    2017-06-21 08:43:53,737 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348) Trying to allocate a host and storage pools from dc:1, pod:1,cluster:null, requested cpu: 8000, requested ram: 12884901888

                                    2017-06-21 08:43:53,737 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348) Is ROOT volume READY (pool already allocated)?: No

                                    2017-06-21 08:43:53,737 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348) This VM has last host_id specified, trying to choose the same host: 1

                                    2017-06-21 08:43:53,739 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-71:ctx-f01a90b9 job-146764/job-146768 ctx-66c78348) The last host of this VM is in avoid set





                                    All oddities.



                                    So I did a force reconnect on all 6 hosts and enabled the two hosts that were pending updates.



                                    Jeremy



                                    -----Original Message-----

                                    From: Jeremy Peterson [mailto:jpeterson@acentek.net]

                                    Sent: Tuesday, June 20, 2017 12:33 PM

                                    To: users@cloudstack.apache.org; S. Brüseke - proIO GmbH <s.brueseke@proio.com>

                                    Subject: RE: Recreating SystemVM's



                                    Ok so my issues have not gone away.



                                    I have two hosts that have not rebooted yet. Tonight I will be putting those hosts into maintenance, migrating VMs away from them, and then rebooting each host and installing a couple of XenServer updates.



                                    One thing: I am not getting the CANNOT_ATTACH_NETWORK error anymore, which is cool, but...



                                    https://drive.google.com/open?id=0B5IXhrpPAT9qQ0FFUmRyRjN4NlE



                                    Take a look at the creation of VM s-20685:



                                    2017-06-20 12:15:48,083 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-82:ctx-c39fa1f8 job-1042/job-138603 ctx-c17ce6fc) Found a potential host id: 1 name: Flex-Xen2.flexhost.local and associated storage pools for this VM

                                    2017-06-20 12:15:48,084 DEBUG [c.c.d.DeploymentPlanningManagerImpl] (Work-Job-Executor-82:ctx-c39fa1f8 job-1042/job-138603 ctx-c17ce6fc) Returning Deployment Destination: Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(25604|ROOT-->Pool(5))]

                                    2017-06-20 12:15:48,084 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-82:ctx-c39fa1f8 job-1042/job-138603 ctx-c17ce6fc) Deployment found  - P0=VM[SecondaryStorageVm|s-20685-VM], P0=Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] : Dest[Zone(1)-Pod(1)-Cluster(1)-Host(1)-Storage(Volume(25604|ROOT-->Pool(5))]



                                    So it found a host and a storage pool.



                                    Networks were already created on lines 482-484.



                                    But then, look: it fails on volume creation because the UUID is invalid:





                                    2017-06-20 12:15:48,262 DEBUG [c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-88:ctx-c51dafa0 job-342/job-138604 ctx-75edebb0) VM is being created in podId: 1

                                    2017-06-20 12:15:48,264 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-88:ctx-c51dafa0 job-342/job-138604 ctx-75edebb0) Network id=200 is already implemented

                                    2017-06-20 12:15:48,269 DEBUG [c.c.n.g.PodBasedNetworkGuru] (Work-Job-Executor-82:ctx-c39fa1f8 job-1042/job-138603 ctx-c17ce6fc) Allocated a nic NicProfile[81905-20685-0493941d-d193-4325-84bc-d325a8900332-10.90.2.207-null for VM[SecondaryStorageVm|s-20685-VM]

                                    2017-06-20 12:15:48,280 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-82:ctx-c39fa1f8 job-1042/job-138603 ctx-c17ce6fc) Network id=203 is already implemented

                                    2017-06-20 12:15:48,290 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-88:ctx-c51dafa0 job-342/job-138604 ctx-75edebb0) Network id=202 is already implemented

                                    2017-06-20 12:15:48,316 DEBUG [c.c.n.g.StorageNetworkGuru] (Work-Job-Executor-82:ctx-c39fa1f8 job-1042/job-138603 ctx-c17ce6fc) Allocated a storage nic NicProfile[81906-20685-0493941d-d193-4325-84bc-d325a8900332-10.83.2.205-null for VM[SecondaryStorageVm|s-20685-VM]

                                    2017-06-20 12:15:48,336 DEBUG [o.a.c.e.o.VolumeOrchestrator] (Work-Job-Executor-82:ctx-c39fa1f8 job-1042/job-138603 ctx-c17ce6fc) Checking if we need to prepare 1 volumes for VM[SecondaryStorageVm|s-20685-VM]

                                    2017-06-20 12:15:48,342 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-82:ctx-c39fa1f8 job-1042/job-138603 ctx-c17ce6fc) template 1 is already in store:5, type:Image

                                    2017-06-20 12:15:48,344 DEBUG [o.a.c.s.i.TemplateDataFactoryImpl] (Work-Job-Executor-82:ctx-c39fa1f8 job-1042/job-138603 ctx-c17ce6fc) template 1 is already in store:5, type:Primary

                                    2017-06-20 12:15:48,346 DEBUG [o.a.c.e.o.NetworkOrchestrator] (Work-Job-Executor-88:ctx-c51dafa0 job-342/job-138604 ctx-75edebb0) Network id=201 is already implemented

                                    2017-06-20 12:15:48,372 DEBUG [c.c.d.d.DataCenterIpAddressDaoImpl] (Work-Job-Executor-88:ctx-c51dafa0 job-342/job-138604 ctx-75edebb0) Releasing ip address for instance=49817

                                    2017-06-20 12:15:48,381 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-82:ctx-c39fa1f8 job-1042/job-138603 ctx-c17ce6fc) copyAsync inspecting src type TEMPLATE copyAsync inspecting dest type VOLUME

                                    2017-06-20 12:15:48,386 DEBUG [c.c.a.t.Request] (Work-Job-Executor-82:ctx-c39fa1f8 job-1042/job-138603 ctx-c17ce6fc) Seq 16-3622864425242874354: Sending  { Cmd , MgmtId: 345050411715, via: 16(Flex-Xen6.flexhost.local), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf","origUrl":"http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2","uuid":"8a4039f2-bb71-11e4-8c76-0050569b1662","id":1,"format":"VHD","accountId":1,"checksum":"2b15ab4401c2d655264732d3fc600241","hvm":false,"displayText":"SystemVM Template (XenServer)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN2-LUN0","id":5,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN2-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN2-LUN0/?ROLE=Primary&STOREUUID=FlexSAN2-LUN0"}},"name":"routing-1","hypervisorType":"XenServer"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"4dba9def-2657-430e-8cd8-9369aebcaa25","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN2-LUN0","id":5,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN2-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN2-LUN0/?ROLE=Primary&STOREUUID=FlexSAN2-LUN0"}},"name":"ROOT-20685","size":2689602048,"volumeId":25604,"vmName":"s-20685-VM","accountId":1,"format":"VHD","provisioningType":"THIN","id":25604,"deviceId":0,"hypervisorType":"XenServer"}},"executeInSequence":true,"options":{},"wait":0}}] }

                                    2017-06-20 12:15:48,386 DEBUG [c.c.a.t.Request] (Work-Job-Executor-82:ctx-c39fa1f8 job-1042/job-138603 ctx-c17ce6fc) Seq 16-3622864425242874354: Executing:  { Cmd , MgmtId: 345050411715, via: 16(Flex-Xen6.flexhost.local), Ver: v1, Flags: 100111, [{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.TemplateObjectTO":{"path":"ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf","origUrl":"http://download.cloud.com/templates/4.5/systemvm64template-4.5-xen.vhd.bz2","uuid":"8a4039f2-bb71-11e4-8c76-0050569b1662","id":1,"format":"VHD","accountId":1,"checksum":"2b15ab4401c2d655264732d3fc600241","hvm":false,"displayText":"SystemVM Template (XenServer)","imageDataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN2-LUN0","id":5,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN2-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN2-LUN0/?ROLE=Primary&STOREUUID=FlexSAN2-LUN0"}},"name":"routing-1","hypervisorType":"XenServer"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"4dba9def-2657-430e-8cd8-9369aebcaa25","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"FlexSAN2-LUN0","id":5,"poolType":"PreSetup","host":"localhost","path":"/FlexSAN2-LUN0","port":0,"url":"PreSetup://localhost/FlexSAN2-LUN0/?ROLE=Primary&STOREUUID=FlexSAN2-LUN0"}},"name":"ROOT-20685","size":2689602048,"volumeId":25604,"vmName":"s-20685-VM","accountId":1,"format":"VHD","provisioningType":"THIN","id":25604,"deviceId":0,"hypervisorType":"XenServer"}},"executeInSequence":true,"options":{},"wait":0}}] }

                                    2017-06-20 12:15:48,386 DEBUG [c.c.a.m.DirectAgentAttache] (DirectAgent-74:ctx-0acdd419) Seq 16-3622864425242874354: Executing request

                                    2017-06-20 12:15:48,387 DEBUG [c.c.n.g.PodBasedNetworkGuru] (Work-Job-Executor-88:ctx-c51dafa0 job-342/job-138604 ctx-75edebb0) Allocated a nic NicProfile[49817-12662-629b85e7-ce19-4568-9df7-143c76d24300-10.90.2.204-null for VM[ConsoleProxy|v-12662-VM]





                                    So how do I check UUIDs to validate that they are correct?



                                    2017-06-20 12:15:48,391 DEBUG [c.c.h.x.r.XenServerStorageProcessor] (DirectAgent-74:ctx-0acdd419) Catch Exception com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid: ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf failed due to The uuid you supplied was invalid.

                                    2017-06-20 12:15:48,391 WARN  [c.c.h.x.r.XenServerStorageProcessor] (DirectAgent-74:ctx-0acdd419) Unable to create volume; Pool=volumeTO[uuid=4dba9def-2657-430e-8cd8-9369aebcaa25|path=null|datastore=PrimaryDataStoreTO[uuid=FlexSAN2-LUN0|name=null|id=5|pooltype=PreSetup]]; Disk:

                                    com.cloud.utils.exception.CloudRuntimeException: Catch Exception com.xensource.xenapi.Types$UuidInvalid :VDI getByUuid for uuid: ab6f3bcd-4c3e-4a7a-9f8b-45a822dbaaaf failed due to The uuid you supplied was invalid.
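                                    If the template_spool_ref row for this pool points at a VDI that no longer exists on the SR, one remediation pattern is to flag the stale cached copy so CloudStack re-seeds the template from secondary storage. This is a hedged sketch, not an official procedure; take a database backup first and verify the ids against your own environment:

```sql
-- Sketch only: mark the stale cached template copy on pool 5 for
-- garbage collection so it gets re-copied from secondary storage.
UPDATE cloud.template_spool_ref
SET marked_for_gc = 1
WHERE template_id = 1 AND pool_id = 5;
```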









                                    Jeremy



                                    -----Original Message-----

                                    From: Jeremy Peterson [mailto:jpeterson@acentek.net]

                                    Sent: Thursday, June 15, 2017 4:20 PM

                                    To: users@cloudstack.apache.org; S. Brüseke - proIO GmbH <s.brueseke@proio.com>

                                    Subject: RE: Recreating SystemVM's



                                    What type of networking are you using on the XenServers?

                                        XenServers are connected with 6 NICs per host, connected to separate Nexus 5K switches

                                        NIC 0 and NIC 1 are Bond 0+1, 10Gb NICs

                                        NIC 2 and NIC 3 are Bond 2+3, 10Gb NICs

                                        NIC 4 and NIC 5 are Bond 4+5, 2Gb NICs

                                        Cloudstack is running Advanced networking

                                        Bond 0+1 is primary storage

                                        Bond 2+3 is secondary storage

                                        Bond 4+5 is Management

                                    What version of os does the ms run on?

                                        CentOS release 6.9 (Final)

                                    What are the systemvm templates defined in your env?

                                        http://cloudstack.apt-get.eu/systemvm/4.5/systemvm64template-4.5-xen.vhd.bz2

                                    What is the version of the systemvm.iso?

                                        Successfully installed system VM template  to /secondary/template/tmpl/1/1/

                                        I just reinstalled the systemvms from the above 4.5-xen.vhd.

                                    What is the capacity you have in your (test) environment?

                                        This is a production environment, and currently CloudStack shows the following:

                                        Public IP Addresses 61%

                                        VLAN 35%

                                        Management IP Addresses 20%

                                        Primary Storage 44%

                                        CPU 21%

                                        Memory 5%

                                        Of course, Secondary Storage shows 0%.

                                    What is the host os version for the hypervisors?

                                        XenServer 6.5 SP1

                                    What is the management network range?

                                        management.network.cidr 10.90.1.0/24

                                    What are the other physical networks?

                                        ?? Not sure what more you need

                                    What storage do you use?

                                        Primary - ISCSI

                                        Secondary - NFS

                                    Is it reachable from the systemvm?

                                        All of my CS management servers have internet access.

                                    Is the big bad internet reachable for your SSVM's public interface?

                                        My SSVM does not come online, but yes, the public network is the same as the VR public VLAN, and all instances behind VRs are connected to the internet at this time.



                                    Jeremy





                                    -----Original Message-----

                                    From: Daan Hoogland [mailto:daan.hoogland@shapeblue.com]

                                    Sent: Thursday, June 15, 2017 9:34 AM

                                    To: users@cloudstack.apache.org; S. Brüseke - proIO GmbH <s.brueseke@proio.com>

                                    Subject: Re: Recreating SystemVM's



                                    Your problem might be what Swen says, Jeremy, but it could also be a wrong systemvm offering or a fault in your management network definition.

                                    I am going to sum up some trivialities, so bear with me:



                                    What type of networking are you using on the XenServers?

                                    What version of os does the ms run on?

                                    What are the systemvm templates defined in your env?

                                    What is the version of the systemvm.iso?

                                    What is the capacity you have in your (test) environment?

                                    What is the host os version for the hypervisors?

                                    What is the management network range?

                                    What are the other physical networks?

                                    What storage do you use?

                                    Is it reachable from the systemvm?

                                    Is the big bad internet reachable for your SSVM's public interface?



                                    And of course,



                                    How is the weather, where you are at?



                                    I am not sure which of these questions is going to lead you in the right direction, but one of them should.



                                    On 15/06/17 13:56, "S. Brüseke - proIO GmbH" <s.brueseke@proio.com> wrote:



                                        I once had a similar problem with my systemvms, and the root cause was that the global settings referred to the wrong systemvm template. I am not sure if this helps you, but I wanted to mention it.



                                        Mit freundlichen Grüßen / With kind regards,



                                        Swen



                                        -----Ursprüngliche Nachricht-----

                                        Von: Jeremy Peterson [mailto:jpeterson@acentek.net]

                                        Gesendet: Donnerstag, 15. Juni 2017 01:55

                                        An: users@cloudstack.apache.org

                                        Betreff: RE: Recreating SystemVM's



                                        Hahaha.  The best response ever.



                                        I dug through these emails, and someone had sort of the same log messages (cannot attach network) and blamed XenServer. OK, I'm cool with that, but why oh why is it only system VMs?



                                        Jeremy

                                        ________________________________________

                                        From: Imran Ahmed [imran@eaxiom.net]

                                        Sent: Wednesday, June 14, 2017 6:22 PM

                                        To: users@cloudstack.apache.org

                                        Subject: RE: Recreating SystemVM's



                                        Yes,



                                        -----Original Message-----

                                        From: Jeremy Peterson [mailto:jpeterson@acentek.net]

                                        Sent: Wednesday, June 14, 2017 9:59 PM

                                        To: users@cloudstack.apache.org

                                        Subject: RE: Recreating SystemVM's



                                        Is there anyone out there reading these messages?



                                        Am I just not seeing responses?



                                        Jeremy





                                        -----Original Message-----

                                        From: Jeremy Peterson [mailto:jpeterson@acentek.net]

                                        Sent: Wednesday, June 14, 2017 8:12 AM

                                        To: users@cloudstack.apache.org

                                        Subject: RE: Recreating SystemVM's



                                        I opened a ticket since this is still an issue: CLOUDSTACK-9960



                                        Jeremy



                                        -----Original Message-----

                                        From: Jeremy Peterson [mailto:jpeterson@acentek.net]

                                        Sent: Sunday, June 11, 2017 9:10 AM

                                        To: users@cloudstack.apache.org

                                        Subject: Re: Recreating SystemVM's



                                        Any other suggestions?



                                        I am going to schedule XenServer updates. But this all points back to CANNOT_ATTACH_NETWORK.



                                        I've verified nothing is active on the Public IP space that those two VM's were living on.



                                        Jeremy

                                        ________________________________________

                                        From: Jeremy Peterson <jpeterson@acentek.net>

                                        Sent: Friday, June 9, 2017 9:58 AM

                                        To: users@cloudstack.apache.org

                                        Subject: RE: Recreating SystemVM's



                                        I see the vm's try to create on a host that I just removed from maintenance mode to install updates and here are the logs



                                        I don't see anything that sticks out to me as a failure message.



                                        Jun  9 09:53:54 Xen3 SM: [13068] ['ip', 'route', 'del', '169.254.0.0/16']

                                        Jun  9 09:53:54 Xen3 SM: [13068]   pread SUCCESS

                                        Jun  9 09:53:54 Xen3 SM: [13068] ['ifconfig', 'xapi12', '169.254.0.1', 'netmask', '255.255.0.0']

                                        Jun  9 09:53:54 Xen3 SM: [13068]   pread SUCCESS

                                        Jun  9 09:53:54 Xen3 SM: [13068] ['ip', 'route', 'add', '169.254.0.0/16', 'dev', 'xapi12', 'src', '169.254.0.1']

                                        Jun  9 09:53:54 Xen3 SM: [13068]   pread SUCCESS

                                        Jun  9 09:53:54 Xen3 SM: [13071] ['ip', 'route', 'del', '169.254.0.0/16']

                                        Jun  9 09:53:54 Xen3 SM: [13071]   pread SUCCESS

                                        Jun  9 09:53:54 Xen3 SM: [13071] ['ifconfig', 'xapi12', '169.254.0.1', 'netmask', '255.255.0.0']

                                        Jun  9 09:53:54 Xen3 SM: [13071]   pread SUCCESS

                                        Jun  9 09:53:54 Xen3 SM: [13071] ['ip', 'route', 'add', '169.254.0.0/16', 'dev', 'xapi12', 'src', '169.254.0.1']

                                        Jun  9 09:53:54 Xen3 SM: [13071]   pread SUCCESS





                                        Jun  9 09:54:00 Xen3 SM: [13115] on-slave.multi: {'vgName':

                                        'VG_XenStorage-469b6dcd-8466-3d03-de0e-cc3983e1b6e2', 'lvName1':

                                        'VHD-633338a7-6c40-4aa6-b88e-c798b6fdc04d', 'action1':

                                        'deactivateNoRefcount', 'action2': 'cleanupLock', 'uuid2':

                                        '633338a7-6c40-4aa6-b88e-c798b6fdc04d', 'ns2':

                                        'lvm-469b6dcd-8466-3d03-de0e-cc3983e1b6e2'}

                                        Jun  9 09:54:00 Xen3 SM: [13115] LVMCache created for

                                        VG_XenStorage-469b6dcd-8466-3d03-de0e-cc3983e1b6e2

                                        Jun  9 09:54:00 Xen3 SM: [13115] on-slave.action 1: deactivateNoRefcount

                                        Jun  9 09:54:00 Xen3 SM: [13115] LVMCache: will initialize now

                                        Jun  9 09:54:00 Xen3 SM: [13115] LVMCache: refreshing

                                        Jun  9 09:54:00 Xen3 SM: [13115] ['/usr/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags', '/dev/VG_XenStorage-469b6dcd-8466-3d03-de0e-cc3983e1b6e2']

                                        Jun  9 09:54:00 Xen3 SM: [13115]   pread SUCCESS

                                        Jun  9 09:54:00 Xen3 SM: [13115] ['/usr/sbin/lvchange', '-an', '/dev/VG_XenStorage-469b6dcd-8466-3d03-de0e-cc3983e1b6e2/VHD-633338a7-6c40-4aa6-b88e-c798b6fdc04d']

                                        Jun  9 09:54:00 Xen3 SM: [13115]   pread SUCCESS

                                        Jun  9 09:54:00 Xen3 SM: [13115] ['/sbin/dmsetup', 'status', 'VG_XenStorage--469b6dcd--8466--3d03--de0e--cc3983e1b6e2-VHD--633338a7--6c40--4aa6--b88e--c798b6fdc04d']

                                        Jun  9 09:54:00 Xen3 SM: [13115]   pread SUCCESS

                                        Jun  9 09:54:00 Xen3 SM: [13115] on-slave.action 2: cleanupLock



                                        Jun  9 09:54:16 Xen3 SM: [13230] ['ip', 'route', 'del', '169.254.0.0/16']

                                        Jun  9 09:54:16 Xen3 SM: [13230]   pread SUCCESS

                                        Jun  9 09:54:16 Xen3 SM: [13230] ['ifconfig', 'xapi12', '169.254.0.1', 'netmask', '255.255.0.0']

                                        Jun  9 09:54:16 Xen3 SM: [13230]   pread SUCCESS

                                        Jun  9 09:54:16 Xen3 SM: [13230] ['ip', 'route', 'add', '169.254.0.0/16', 'dev', 'xapi12', 'src', '169.254.0.1']

                                        Jun  9 09:54:16 Xen3 SM: [13230]   pread SUCCESS

                                        Jun  9 09:54:19 Xen3 updatempppathd: [15446] The garbage collection routine returned: 0

                                        Jun  9 09:54:23 Xen3 SM: [13277] ['ip', 'route', 'del', '169.254.0.0/16']

                                        Jun  9 09:54:23 Xen3 SM: [13277]   pread SUCCESS

                                        Jun  9 09:54:23 Xen3 SM: [13277] ['ifconfig', 'xapi12', '169.254.0.1', 'netmask', '255.255.0.0']

                                        Jun  9 09:54:23 Xen3 SM: [13277]   pread SUCCESS

                                        Jun  9 09:54:23 Xen3 SM: [13277] ['ip', 'route', 'add', '169.254.0.0/16', 'dev', 'xapi12', 'src', '169.254.0.1']

                                        Jun  9 09:54:23 Xen3 SM: [13277]   pread SUCCESS



                                        Jeremy





                                        -----Original Message-----

                                        From: Jeremy Peterson [mailto:jpeterson@acentek.net]

                                        Sent: Friday, June 9, 2017 9:53 AM

                                        To: users@cloudstack.apache.org

                                        Subject: RE: Recreating SystemVM's



                                        I am checking SMlog now on all hosts.



                                        Jeremy





                                        -----Original Message-----

                                        From: Rajani Karuturi [mailto:rajani@apache.org]

                                        Sent: Friday, June 9, 2017 9:00 AM

                                        To: Users <users@cloudstack.apache.org>

                                        Subject: Re: Recreating SystemVM's



                                        On the XenServer log, did you check what is causing

                                        "HOST_CANNOT_ATTACH_NETWORK"?
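One way to dig for that on a host is a grep over the XAPI logs. A minimal sketch, demonstrated here against a sample line (on a real host, point it at /var/log/xensource.log and /var/log/SMlog; paths may vary by XenServer version):

```shell
# Sketch: grep XenServer logs for the attach failure. Demonstrated against a
# sample file; on a real host use /var/log/xensource.log instead.
printf '%s\n' 'errorInfo: [HOST_CANNOT_ATTACH_NETWORK, OpaqueRef:65d0c844-bd70-81e9-4518-8809e1dc0ee7]' > /tmp/xensource.sample
# -c counts matching lines; swap for "-B2 -A5" to see surrounding context.
grep -c 'HOST_CANNOT_ATTACH_NETWORK' /tmp/xensource.sample
```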



                                        ~Rajani

                                        http://cloudplatform.accelerite.com/



                                        On Fri, Jun 9, 2017 at 7:00 PM, Jeremy Peterson <jpeterson@acentek.net>

                                        wrote:



                                        > 08:28:43        select * from vm_instance where name like 's-%' limit

                                        > 10000     7481 row(s) returned    0.000 sec / 0.032 sec

                                        >

                                        > All VMs' 'state' returned Destroyed, apart from the current VM 7873,

                                        > which is in a Stopped state, but that one goes Destroyed and a new one gets created.

                                        >

                                        > Any other suggestions?

                                        >

                                        > Jeremy

                                        >

                                        >

                                        > -----Original Message-----

                                        > From: Jeremy Peterson [mailto:jpeterson@acentek.net]

                                        > Sent: Thursday, June 8, 2017 12:47 AM

                                        > To: users@cloudstack.apache.org

                                        > Subject: Re: Recreating SystemVM's

                                        >

                                        > I'll make that change in the am.

                                        >

                                        > Today I put a host in maintenance and rebooted because proxy and

                                        > secstore vm were constantly being created on that host and still no

                                        change.

                                        >

                                        > Let you know tomorrow.

                                        >

                                        > Jeremy

                                        >

                                        >

                                        > Sent from my Verizon, Samsung Galaxy smartphone

                                        >

                                        >

                                        > -------- Original message --------

                                        > From: Rajani Karuturi <rajani@apache.org>

                                        > Date: 6/8/17 12:07 AM (GMT-06:00)

                                        > To: Users <users@cloudstack.apache.org>

                                        > Subject: Re: Recreating SystemVM's

                                        >

                                        > Did you check SMLog on xenserver?

                                        > unable to destroy task(com.xensource.xenapi.Task@256829a8) on

                                        > host(b34f086e-fabf-471e-9feb-8f54362d7d0f) due to You gave an invalid

                                        > object reference.  The object may have recently been deleted.  The

                                        > class parameter gives the type of reference given, and the handle

                                        > parameter echoes the bad value given.

                                        >

                                        > Looks like Destroy of SSVM failed. What state is SSVM in? mark it as

                                        > Destroyed in cloud DB and wait for cloudstack to create a new SSVM.
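Marking it Destroyed in the DB might look like the following. The table and column names are as I recall them from the `cloud` schema (verify before running), the VM id is hypothetical, and the command is printed rather than executed on purpose:

```shell
# Sketch (assumed schema): mark a stuck SSVM Destroyed so CloudStack builds a
# fresh one. VM_ID is a hypothetical example; printed, not executed.
VM_ID=5398
SQL="UPDATE vm_instance SET state='Destroyed' WHERE id=$VM_ID AND type='SecondaryStorageVm'"
echo "mysql -u cloud -p cloud -e \"$SQL\""
```

The `type` filter is there as a guard so a mistyped id can't hit a user VM.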

                                        >

                                        > ~Rajani

                                        > http://cloudplatform.accelerite.com/

                                        >

                                        > On Thu, Jun 8, 2017 at 1:11 AM, Jeremy Peterson

                                        > <jpeterson@acentek.net>

                                        > wrote:

                                        >

                                        > > Probably agreed.

                                        > >

                                        > > But I ran toolstack restart on all hypervisors and v-3193 just tried

                                        > > to create and fail along with s-5398.

                                        > >

                                        > > The PIF error went away. But VM's are still recreating

                                        > >

                                        > > https://pastebin.com/4n4xBgMT

                                        > >

                                        > > New log from this afternoon.

                                        > >

                                        > > My catalina.out is over 4GB

                                        > >

                                        > > Jeremy

                                        > >

                                        > >

                                        > > -----Original Message-----

                                        > > From: Makrand [mailto:makrandsanap@gmail.com]

                                        > > Sent: Wednesday, June 7, 2017 12:52 AM

                                        > > To: users@cloudstack.apache.org

                                        > > Subject: Re: Recreating SystemVM's

                                        > >

                                        > > Hi there,

                                        > >

                                        > > Looks more like hypervisor issue.

                                        > >

                                        > > Just run *xe-toolstack-restart* on the hosts where these VMs are trying

                                        > > to start, or if you don't have too many hosts, better to run it on all

                                        > > members including the master. Most I/O-related issues are squared off by a

                                        > > toolstack bounce.
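Across a pool, that can be scripted along these lines (host names are hypothetical; the loop prints the commands so you can eyeball them before running for real):

```shell
# Sketch: bounce the toolstack on every pool member (host names hypothetical).
# xe-toolstack-restart restarts XAPI only; running VMs are not rebooted.
for h in xen1 xen2 xen3; do
  echo "ssh root@$h xe-toolstack-restart"
done
```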

                                        > >

                                        > > --

                                        > > Makrand

                                        > >

                                        > >

                                        > > On Wed, Jun 7, 2017 at 3:01 AM, Jeremy Peterson

                                        > > <jpeterson@acentek.net>

                                        > > wrote:

                                        > >

                                        > > > Ok so I pulled this from Sunday morning.

                                        > > >

                                        > > > https://pastebin.com/nCETw1sC

                                        > > >

                                        > > >

                                        > > > errorInfo: [HOST_CANNOT_ATTACH_NETWORK,

                                        > > > OpaqueRef:65d0c844-bd70-81e9-4518-8809e1dc0ee7,

                                        > > > OpaqueRef:0093ac3f-9f3a-37e1-9cdb-581398d27ba2]

                                        > > >

                                        > > > XenServer error.

                                        > > >

                                        > > > Now this still gets me because all of the other VM's launched just

                                        > fine.

                                        > > >

                                        > > > Going into XenCenter I see an error at the bottom: "This PIF is a

                                        > > > bond slave and cannot be plugged."
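To see which PIFs XAPI considers bond slaves (and whether the bond masters line up with what CloudStack is trying to plug), the `xe` CLI can be queried. The parameter names below exist on XenServer 6.x to my knowledge, but verify on your version; printed here rather than executed since it needs a pool member:

```shell
# Sketch: inspect bond membership on a XenServer host (run on a pool member).
# bond-slave-of is non-empty for slave PIFs, which cannot be plugged directly.
echo "xe pif-list params=uuid,device,VLAN,bond-slave-of"
echo "xe bond-list params=uuid,master,slaves"
```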

                                        > > >

                                        > > > ???

                                        > > >

                                        > > > If I go to networking on the hosts I see the storage vlans and

                                        > > > bonds are all there.

                                        > > >

                                        > > > I see my GUEST-PUB bond is there and LACP is setup correct.

                                        > > >

                                        > > > Any suggestions ?

                                        > > >

                                        > > >

                                        > > > Jeremy

                                        > > >

                                        > > >

                                        > > > -----Original Message-----

                                        > > > From: Jeremy Peterson [mailto:jpeterson@acentek.net]

                                        > > > Sent: Tuesday, June 6, 2017 9:23 AM

                                        > > > To: users@cloudstack.apache.org

                                        > > > Subject: RE: Recreating SystemVM's

                                        > > >

                                        > > > Thank you all for those responses.

                                        > > >

                                        > > > I'll comb through my management-server.log and post a pastebin if

                                        > > > I'm scratching my head.

                                        > > >

                                        > > > Jeremy

                                        > > >

                                        > > > -----Original Message-----

                                        > > > From: Rajani Karuturi [mailto:rajani@apache.org]

                                        > > > Sent: Tuesday, June 6, 2017 6:53 AM

                                        > > > To: users@cloudstack.apache.org

                                        > > > Subject: Re: Recreating SystemVM's

                                        > > >

                                        > > > If the zone is enabled, cloudstack should recreate them automatically.

                                        > > >

                                        > > > ~ Rajani

                                        > > >

                                        > > > http://cloudplatform.accelerite.com/

                                        > > >

                                        > > > On June 6, 2017 at 11:37 AM, Erik Weber (terbolous@gmail.com)

                                        > > > wrote:

                                        > > >

                                        > > > CloudStack should recreate automatically, check the mgmt server

                                        > > > logs for hints of why it doesn't happen.

                                        > > >

                                        > > > --

                                        > > > Erik

                                        > > >

                                        > > > tir. 6. jun. 2017 kl. 04.29 skrev Jeremy Peterson

                                        > > > <jpeterson@acentek.net>:

                                        > > >

                                        > > > I had an issue Sunday morning with cloudstack 4.9.0 and xenserver

                                        > 6.5.0.

                                        > > > My hosts stopped sending LACP PDUs, which caused a network drop to

                                        > > > iSCSI primary storage.

                                        > > >

                                        > > > So all my instances recovered via HA enabled.

                                        > > >

                                        > > > But my console proxy and secondary storage system VM's got stuck

                                        > > > in a boot state that would not power on.

                                        > > >

                                        > > > At this time they are expunged and gone.

                                        > > >

                                        > > > How do I tell cloudstack-management to recreate system VM's?
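With the zone enabled, a management-service restart is usually enough of a nudge, after which the management log shows the systemvms being rebuilt. A minimal sketch (service name and log path are the standard package defaults; on older el6 installs it would be `service cloudstack-management restart` instead):

```shell
# Sketch: restart the management service to trigger systemvm recreation,
# then watch the log. Printed, not executed (needs the management host).
echo "systemctl restart cloudstack-management"
echo "tail -f /var/log/cloudstack/management/management-server.log"
```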

                                        > > >

                                        > > > I'm drawing a blank since deploying CS two years ago and just

                                        > > > keeping things running and adding hosts and more storage

                                        > > > everything has been so stable.

                                        > > >

                                        > > > Jeremy

                                        > > >

                                        > >

                                        >







                                        - proIO GmbH -

                                        Geschäftsführer: Swen Brüseke

                                        Sitz der Gesellschaft: Frankfurt am Main



                                        USt-IdNr. DE 267 075 918

                                        Registergericht: Frankfurt am Main - HRB 86239






                                        This e-mail may contain confidential and/or privileged information.

                                        If you are not the intended recipient (or have received this e-mail in error) please notify

                                        the sender immediately and destroy this e-mail.

                                        Any unauthorized copying, disclosure or distribution of the material in this e-mail is strictly forbidden.











                                    daan.hoogland@shapeblue.com

                                    www.shapeblue.com<http://www.shapeblue.com>

                                    53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue













                                Dag.Sonstebo@shapeblue.com

                                www.shapeblue.com<http://www.shapeblue.com>

                                53 Chandos Place, Covent Garden, London  WC2N 4HSUK @shapeblue












