cloudstack-users mailing list archives

From Justyn Shull <just...@codero.com>
Subject Unsupported data object ... no need to delete from object in store ref table [Cloudstack 4.4.0]
Date Mon, 08 Sep 2014 17:18:00 GMT
These errors seem to be repeating pretty frequently in the cloudstack management server logs:

…
2014-09-08 09:31:25,463 DEBUG [c.c.s.StorageManagerImpl] (StorageManager-Scavenger-3:ctx-afc06fa1)
Secondary storage garbage collector found 0 templates to cleanup on template_store_ref for
store: 899e97d7-8d84-4b3a-99ce-a8c301a1407f
2014-09-08 09:31:25,465 DEBUG [c.c.s.StorageManagerImpl] (StorageManager-Scavenger-3:ctx-afc06fa1)
Secondary storage garbage collector found 0 snapshots to cleanup on snapshot_store_ref for
store: 899e97d7-8d84-4b3a-99ce-a8c301a1407f
2014-09-08 09:31:25,467 DEBUG [c.c.s.StorageManagerImpl] (StorageManager-Scavenger-3:ctx-afc06fa1)
Secondary storage garbage collector found 0 volumes to cleanup on volume_store_ref for store:
899e97d7-8d84-4b3a-99ce-a8c301a1407f
2014-09-08 09:31:25,611 DEBUG [c.c.a.m.ClusteredAgentAttache] (StorageManager-Scavenger-3:ctx-afc06fa1)
Seq 21-471189111013644472: Forwarding Seq 21-471189111013644472:  { Cmd , MgmtId: 182571079363322,
via: 21(hv-23-1.phx), Ver: v1, Flags: 100011, [{"org.apache.cloudstack.storage.command.DeleteCommand":{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"4b4a1d43-61a7-428f-b56a-f3d879607aeb","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"a26cba11-e71a-57c8-ee7d-8809ec6a29db","id":217,"poolType":"LVM","host":"10.48.100.123","path":"lvm","port":0,"url":"LVM://10.48.100.123/lvm/?ROLE=Primary&STOREUUID=a26cba11-e71a-57c8-ee7d-8809ec6a29db"}},"name":"ROOT-5821","size":2621440000,"path":"e7a42e13-8a2c-4f72-9c07-365c5d2de4fd","volumeId":5834,"vmName":"r-5821-VM","accountId":541,"format":"VHD","id":5834,"deviceId":0,"cacheMode":"NONE","hypervisorType":"XenServer"}},"wait":0}}]
} to 33862771676063
2014-09-08 09:31:25,688 DEBUG [c.c.a.t.Request] (StorageManager-Scavenger-3:ctx-afc06fa1)
Seq 21-471189111013644472: Received:  { Ans: , MgmtId: 182571079363322, via: 21, Ver: v1,
Flags: 10, { Answer } }
2014-09-08 09:31:25,712 WARN  [o.a.c.s.d.ObjectInDataStoreManagerImpl] (StorageManager-Scavenger-3:ctx-afc06fa1)
Unsupported data object (VOLUME, org.apache.cloudstack.storage.datastore.PrimaryDataStoreImpl@1fb13f28),
no need to delete from object in store ref table
2014-09-08 09:31:25,772 DEBUG [c.c.a.m.ClusteredAgentAttache] (StorageManager-Scavenger-3:ctx-afc06fa1)
Seq 21-471189111013644473: Forwarding Seq 21-471189111013644473:  { Cmd , MgmtId: 182571079363322,
via: 21(hv-23-1.phx), Ver: v1, Flags: 100011, [{"org.apache.cloudstack.storage.command.DeleteCommand":{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"a7c67da8-a5d5-4729-b134-90312f1777b4","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"a26cba11-e71a-57c8-ee7d-8809ec6a29db","id":217,"poolType":"LVM","host":"10.48.100.123","path":"lvm","port":0,"url":"LVM://10.48.100.123/lvm/?ROLE=Primary&STOREUUID=a26cba11-e71a-57c8-ee7d-8809ec6a29db"}},"name":"ROOT-5827","size":2621440000,"path":"fa53ba6c-1eee-4fe4-802b-7f3cc8d02239","volumeId":5840,"vmName":"r-5827-VM","accountId":635,"format":"VHD","id":5840,"deviceId":0,"cacheMode":"NONE","hypervisorType":"XenServer"}},"wait":0}}]
} to 33862771676063
2014-09-08 09:31:25,809 DEBUG [c.c.a.t.Request] (StorageManager-Scavenger-3:ctx-afc06fa1)
Seq 21-471189111013644473: Received:  { Ans: , MgmtId: 182571079363322, via: 21, Ver: v1,
Flags: 10, { Answer } }
2014-09-08 09:31:25,833 WARN  [o.a.c.s.d.ObjectInDataStoreManagerImpl] (StorageManager-Scavenger-3:ctx-afc06fa1)
Unsupported data object (VOLUME, org.apache.cloudstack.storage.datastore.PrimaryDataStoreImpl@532147ce),
no need to delete from object in store ref table
...


It repeats the same message for several different volumes. All of the ones I've checked
so far appear to be root volumes for virtual routers. I checked on the hypervisor hosting the
first volume above, and the LV is definitely still present in the local storage pool.
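
To tie that LV back to the database, I'm matching the on-disk name against the
"path" value that the DeleteCommand above carries. Assuming the stock "cloud"
schema (column names may differ slightly between versions), the check is roughly:

-- assumes the LV name matches volumes.path, per the DeleteCommand payload above
select id, name, path, pool_id, state, removed
from volumes
where path = 'e7a42e13-8a2c-4f72-9c07-365c5d2de4fd';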

Looking into the database, I noticed the virtual router has two volumes associated with it,
on two different pools:

select pool_id, instance_id, name, state, created, removed from volumes where instance_id = 5821;

pool_id  instance_id  name       state    created              removed
217      5821         ROOT-5821  Destroy  2014-08-10 05:55:07  NULL
208      5821         ROOT-5821  Ready    2014-08-22 01:51:34  NULL
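
To see what those two pools actually are, I'm going to run something along these
lines (assuming the standard storage_pool columns; exact names may vary by version):

-- assuming the stock storage_pool table layout in the "cloud" database
select id, name, pool_type, host_address, path, scope, removed
from storage_pool
where id in (208, 217);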



The volume in the Destroy state is the one generating these errors. At first glance it
doesn't look like there are multiple copies of the VR running across the two hosts involved,
but I don't know whether anything else failed to get cleaned up properly.
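
To double-check that only one copy of the router exists and where it is supposed
to be running, I'm looking at something like this (again assuming the standard
vm_instance columns):

-- assuming the stock vm_instance table layout; 5821 is the router's VM id (r-5821-VM)
select id, instance_name, state, host_id, last_host_id, removed
from vm_instance
where id = 5821;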

Anyone have any ideas on where to go from here?

Thanks,


--
Justyn Shull
DevOps