cloudstack-users mailing list archives

From Daan Hoogland <daan.hoogl...@gmail.com>
Subject Re: new primary storage
Date Mon, 20 Jan 2020 11:33:05 GMT
Why do you think that, Charlie? Is it in the logs like that somewhere?

On Mon, Jan 20, 2020 at 9:52 AM Charlie Holeowsky <
charlie.holeowsky@gmail.com> wrote:

> Hi Daan,
> In fact, I do find the volume file (39148fe1-842b-433a-8a7f-85e90f316e04) in
> the repository with id = 3 (the new one), but it seems to me that CloudStack
> goes looking for the volume under its "old" name (path), which no longer
> exists...
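>
> (As a quick cross-check, and purely as a sketch based on the volumes table
> quoted further down, the volumes CloudStack believes live on the new pool
> can be listed with:
>
> -- pool_id 3 = the new primary storage, taken from the records below
> select id, name, uuid, path, state
> from volumes
> where pool_id = 3 and removed is null;
>
> The file names on the NFS export of pool 3 should then match the "path"
> values returned here.)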
>
> On Sat, 18 Jan 2020 at 21:41, Daan Hoogland <
> daan.hoogland@gmail.com> wrote:
>
>> Charlie,
>> forgive me for not replying in a timely manner. This can happen if the disk
>> was migrated, in this case probably from the primary storage with id 1 to the
>> one with id 3. The second record (pool_id 1) is removed, so you can ignore
>> that one. The first one looks legitimate: you should be able to find that disk
>> on your primary storage with id 3.
>> hope this helps.
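>>
>> (As an illustration only, reusing the instance id 148 from the records
>> quoted below, the old and the new record can be compared side by side with:
>>
>> -- instance_id 148 / DATA-148 taken from the output below
>> select id, pool_id, last_pool_id, uuid, path, state, removed
>> from volumes
>> where instance_id = 148 and name = 'DATA-148';
>>
>> The Expunged row with pool_id 1 is the pre-migration record; the Ready row
>> with pool_id 3 is the live one.)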
>>
>> On Thu, Jan 16, 2020 at 2:07 PM Charlie Holeowsky <
>> charlie.holeowsky@gmail.com> wrote:
>>
>>> Hi Daan and users,
>>> To explain better, here are the two records related to the disk that
>>> generates the error message.
>>>
>>> The first query returns the record of the disk currently in use: its
>>> "uuid" equals the name searched for by
>>> com.cloud.hypervisor.kvm.storage.LibvirtStoragePool.getPhysicalDisk, while
>>> its "path" field is different.
>>>
>>> The second query returns a record whose "path" equals the "uuid" of the
>>> volume in use, but whose own "uuid" is NULL (and whose state is Expunged).
>>>
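>>> (For illustration, a join along these lines, assuming nothing beyond the
>>> volumes table shown here, lists any live volume whose "uuid" still matches
>>> the "path" of an expunged record, which is exactly the mismatch described
>>> above:
>>>
>>> -- v = the live record, o = the leftover expunged record
>>> select v.id, v.uuid, v.path, o.id as old_id, o.state as old_state
>>> from volumes v
>>> join volumes o on o.path = v.uuid
>>> where v.removed is null and o.state = 'Expunged';)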
>>>
>>> mysql> select * from volumes where
>>> uuid='d93d3c0a-3859-4473-951d-9b5c5912c767'\G
>>> *************************** 1. row ***************************
>>>                         id: 213
>>>                 account_id: 2
>>>                  domain_id: 1
>>>                    pool_id: 3
>>>               last_pool_id: 1
>>>                instance_id: 148
>>>                  device_id: 1
>>>                       name: DATA-148
>>>                       uuid: d93d3c0a-3859-4473-951d-9b5c5912c767
>>>                       size: 53687091200
>>>                     folder: /srv/primary
>>>                       path: 39148fe1-842b-433a-8a7f-85e90f316e04
>>>                     pod_id: NULL
>>>             data_center_id: 1
>>>                 iscsi_name: NULL
>>>                    host_ip: NULL
>>>                volume_type: DATADISK
>>>                  pool_type: NULL
>>>           disk_offering_id: 34
>>>                template_id: NULL
>>> first_snapshot_backup_uuid: NULL
>>>                recreatable: 0
>>>                    created: 2019-11-26 10:41:46
>>>                   attached: NULL
>>>                    updated: 2019-11-26 10:41:50
>>>                    removed: NULL
>>>                      state: Ready
>>>                 chain_info: NULL
>>>               update_count: 2
>>>                  disk_type: NULL
>>>     vm_snapshot_chain_size: NULL
>>>                     iso_id: NULL
>>>             display_volume: 1
>>>                     format: QCOW2
>>>                   min_iops: NULL
>>>                   max_iops: NULL
>>>              hv_ss_reserve: NULL
>>>          provisioning_type: thin
>>> 1 row in set (0.00 sec)
>>>
>>> mysql> select * from volumes where
>>> path='d93d3c0a-3859-4473-951d-9b5c5912c767'\G
>>> *************************** 1. row ***************************
>>>                         id: 212
>>>                 account_id: 2
>>>                  domain_id: 1
>>>                    pool_id: 1
>>>               last_pool_id: NULL
>>>                instance_id: 148
>>>                  device_id: 1
>>>                       name: DATA-148
>>>                       uuid: NULL
>>>                       size: 53687091200
>>>                     folder: NULL
>>>                       path: d93d3c0a-3859-4473-951d-9b5c5912c767
>>>                     pod_id: NULL
>>>             data_center_id: 1
>>>                 iscsi_name: NULL
>>>                    host_ip: NULL
>>>                volume_type: DATADISK
>>>                  pool_type: NULL
>>>           disk_offering_id: 34
>>>                template_id: NULL
>>> first_snapshot_backup_uuid: NULL
>>>                recreatable: 0
>>>                    created: 2019-11-26 10:38:23
>>>                   attached: NULL
>>>                    updated: 2019-11-26 10:41:50
>>>                    removed: 2019-11-26 10:41:50
>>>                      state: Expunged
>>>                 chain_info: NULL
>>>               update_count: 8
>>>                  disk_type: NULL
>>>     vm_snapshot_chain_size: NULL
>>>                     iso_id: NULL
>>>             display_volume: 1
>>>                     format: QCOW2
>>>                   min_iops: NULL
>>>                   max_iops: NULL
>>>              hv_ss_reserve: NULL
>>>          provisioning_type: thin
>>> 1 row in set (0.00 sec)
>>>
>>> On Tue, 14 Jan 2020 at 15:05, Daan Hoogland <
>>> daan.hoogland@gmail.com> wrote:
>>>
>>>> So Charlie,
>>>> d93d3c0a-3859-4473-951d-9b5c5912c767 is actually a valid disk? Does it
>>>> exist on the backend NFS?
>>>> And the pool 9af0d1c6-85f2-3c55-94af-6ac17cb4024c, does it exist both in
>>>> CloudStack and on the backend?
>>>>
>>>> If both are answered with yes, you probably have a permissions issue,
>>>> which might be in the network.
>>>>
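>>>> (On the CloudStack side, a rough way to check the pool, assuming a stock
>>>> schema with the usual storage_pool table and column names, would be:
>>>>
>>>> -- table and column names assumed from a standard CloudStack schema
>>>> select id, name, uuid, host_address, path, status
>>>> from storage_pool
>>>> where uuid = '9af0d1c6-85f2-3c55-94af-6ac17cb4024c';
>>>>
>>>> Whether the export itself is reachable still has to be verified on the
>>>> host and on the NFS server.)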
>>>>
>>>> On Tue, Jan 14, 2020 at 10:21 AM Charlie Holeowsky <
>>>> charlie.holeowsky@gmail.com> wrote:
>>>>
>>>>> Hi Daan and users,
>>>>> the infrastructure is Linux-based. The management server, hosts and
>>>>> storage are all Ubuntu 16.04, except the new storage server, which runs
>>>>> Ubuntu 18.04. The hypervisor is QEMU/KVM, with NFS to share the storage.
>>>>>
>>>>> We also tried adding another primary storage and creating a VM that uses
>>>>> it: we found no problems, the statistics update and no error messages
>>>>> appear.
>>>>>
>>>>> Here is an excerpt of the logs from the most complete agent:
>>>>>
>>>>> 2020-01-14 09:01:45,749 INFO  [kvm.storage.LibvirtStorageAdaptor]
>>>>> (agentRequest-Handler-2:null) (logid:c3851d3a) Trying to fetch storage pool
>>>>> 171e90f4-511e-3b10-9310-b9eec0094be6 from libvirt
>>>>> 2020-01-14 09:01:45,752 INFO  [kvm.storage.LibvirtStorageAdaptor]
>>>>> (agentRequest-Handler-2:null) (logid:c3851d3a) Asking libvirt to refresh
>>>>> storage pool 171e90f4-511e-3b10-9310-b9eec0094be6
>>>>> 2020-01-14 09:01:46,641 INFO  [kvm.storage.LibvirtStorageAdaptor]
>>>>> (agentRequest-Handler-4:null) (logid:c3851d3a) Trying to fetch storage pool
>>>>> 9af0d1c6-85f2-3c55-94af-6ac17cb4024c from libvirt
>>>>> 2020-01-14 09:01:46,643 INFO  [kvm.storage.LibvirtStorageAdaptor]
>>>>> (agentRequest-Handler-4:null) (logid:c3851d3a) Asking libvirt to refresh
>>>>> storage pool 9af0d1c6-85f2-3c55-94af-6ac17cb4024c
>>>>> 2020-01-14 09:05:51,529 INFO  [kvm.storage.LibvirtStorageAdaptor]
>>>>> (agentRequest-Handler-1:null) (logid:2765ff88) Trying to fetch storage pool
>>>>> 9af0d1c6-85f2-3c55-94af-6ac17cb4024c from libvirt
>>>>> 2020-01-14 09:05:51,532 INFO  [kvm.storage.LibvirtStorageAdaptor]
>>>>> (agentRequest-Handler-1:null) (logid:2765ff88) Asking libvirt to refresh
>>>>> storage pool 9af0d1c6-85f2-3c55-94af-6ac17cb4024c
>>>>> 2020-01-14 09:10:47,286 INFO  [kvm.storage.LibvirtStorageAdaptor]
>>>>> (agentRequest-Handler-3:null) (logid:6d27b740) Trying to fetch storage pool
>>>>> 9af0d1c6-85f2-3c55-94af-6ac17cb4024c from libvirt
>>>>> 2020-01-14 09:10:47,419 WARN  [cloud.agent.Agent]
>>>>> (agentRequest-Handler-3:null) (logid:6d27b740) Caught:
>>>>> com.cloud.utils.exception.CloudRuntimeException: Can't find
>>>>> volume:d93d3c0a-3859-4473-951d-9b5c5912c767
>>>>> at
>>>>> com.cloud.hypervisor.kvm.storage.LibvirtStoragePool.getPhysicalDisk(LibvirtStoragePool.java:149)
>>>>> at
>>>>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.getVolumeStat(LibvirtGetVolumeStatsCommandWrapper.java:63)
>>>>> at
>>>>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.execute(LibvirtGetVolumeStatsCommandWrapper.java:52)
>>>>> at
>>>>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.execute(LibvirtGetVolumeStatsCommandWrapper.java:40)
>>>>> at
>>>>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
>>>>> at
>>>>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1450)
>>>>> at com.cloud.agent.Agent.processRequest(Agent.java:645)
>>>>> at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1083)
>>>>> at com.cloud.utils.nio.Task.call(Task.java:83)
>>>>> at com.cloud.utils.nio.Task.call(Task.java:29)
>>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>>> at
>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>>>> at
>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>> at java.lang.Thread.run(Thread.java:748)
>>>>> 2020-01-14 09:20:48,390 INFO  [kvm.storage.LibvirtStorageAdaptor]
>>>>> (agentRequest-Handler-4:null) (logid:ec72387b) Trying to fetch storage pool
>>>>> 9af0d1c6-85f2-3c55-94af-6ac17cb4024c from libvirt
>>>>> 2020-01-14 09:20:48,536 WARN  [cloud.agent.Agent]
>>>>> (agentRequest-Handler-4:null) (logid:ec72387b) Caught:
>>>>> com.cloud.utils.exception.CloudRuntimeException: Can't find
>>>>> volume:d93d3c0a-3859-4473-951d-9b5c5912c767
>>>>> at
>>>>> com.cloud.hypervisor.kvm.storage.LibvirtStoragePool.getPhysicalDisk(LibvirtStoragePool.java:149)
>>>>> at
>>>>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.getVolumeStat(LibvirtGetVolumeStatsCommandWrapper.java:63)
>>>>> at
>>>>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.execute(LibvirtGetVolumeStatsCommandWrapper.java:52)
>>>>> at
>>>>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.execute(LibvirtGetVolumeStatsCommandWrapper.java:40)
>>>>> at
>>>>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
>>>>> at
>>>>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1450)
>>>>> at com.cloud.agent.Agent.processRequest(Agent.java:645)
>>>>> at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1083)
>>>>> at com.cloud.utils.nio.Task.call(Task.java:83)
>>>>> at com.cloud.utils.nio.Task.call(Task.java:29)
>>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>>> at
>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>>>> at
>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>> at java.lang.Thread.run(Thread.java:748)
>>>>> 2020-01-14 09:25:15,259 INFO  [kvm.storage.LibvirtStorageAdaptor]
>>>>> (agentRequest-Handler-5:null) (logid:1a7e082e) Trying to fetch storage pool
>>>>> 9af0d1c6-85f2-3c55-94af-6ac17cb4024c from libvirt
>>>>> 2020-01-14 09:25:15,261 INFO  [kvm.storage.LibvirtStorageAdaptor]
>>>>> (agentRequest-Handler-5:null) (logid:1a7e082e) Asking libvirt to refresh
>>>>> storage pool 9af0d1c6-85f2-3c55-94af-6ac17cb4024c
>>>>>
>>>>>
>>>>> And here is the management server log:
>>>>>
>>>>> 2020-01-14 09:21:27,105 DEBUG [c.c.a.t.Request]
>>>>> (AgentManager-Handler-2:null) (logid:) Seq 15-705657766613619075:
>>>>> Processing:  { Ans: , MgmtId: 220777304233416, via: 15, Ver: v1, Flags: 10,
>>>>> [{"com.cloud.agent.api.Answer":{"result":false,"details":"com.cloud.utils.exception.CloudRuntimeException:
>>>>> Can't find volume:d93d3c0a-3859-4473-951d-9b5c5912c767\n\tat
>>>>> com.cloud.hypervisor.kvm.storage.LibvirtStoragePool.getPhysicalDisk(LibvirtStoragePool.java:149)\n\tat
>>>>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.getVolumeStat(LibvirtGetVolumeStatsCommandWrapper.java:63)\n\tat
>>>>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.execute(LibvirtGetVolumeStatsCommandWrapper.java:52)\n\tat
>>>>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.execute(LibvirtGetVolumeStatsCommandWrapper.java:40)\n\tat
>>>>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)\n\tat
>>>>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1450)\n\tat
>>>>> com.cloud.agent.Agent.processRequest(Agent.java:645)\n\tat
>>>>> com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1083)\n\tat
>>>>> com.cloud.utils.nio.Task.call(Task.java:83)\n\tat
>>>>> com.cloud.utils.nio.Task.call(Task.java:29)\n\tat
>>>>> java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat
>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat
>>>>> java.lang.Thread.run(Thread.java:748)\n","wait":0}}] }
>>>>> 2020-01-14 09:21:27,105 DEBUG [c.c.a.t.Request]
>>>>> (StatsCollector-6:ctx-fd801d0a) (logid:ec72387b) Seq 15-705657766613619075:
>>>>> Received:  { Ans: , MgmtId: 220777304233416, via: 15(csdell017), Ver: v1,
>>>>> Flags: 10, { Answer } }
>>>>> 2020-01-14 09:21:27,105 DEBUG [c.c.a.m.AgentManagerImpl]
>>>>> (StatsCollector-6:ctx-fd801d0a) (logid:ec72387b) Details from executing
>>>>> class com.cloud.agent.api.GetVolumeStatsCommand:
>>>>> com.cloud.utils.exception.CloudRuntimeException: Can't find
>>>>> volume:d93d3c0a-3859-4473-951d-9b5c5912c767
>>>>> at
>>>>> com.cloud.hypervisor.kvm.storage.LibvirtStoragePool.getPhysicalDisk(LibvirtStoragePool.java:149)
>>>>> at
>>>>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.getVolumeStat(LibvirtGetVolumeStatsCommandWrapper.java:63)
>>>>> at
>>>>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.execute(LibvirtGetVolumeStatsCommandWrapper.java:52)
>>>>> at
>>>>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVolumeStatsCommandWrapper.execute(LibvirtGetVolumeStatsCommandWrapper.java:40)
>>>>> at
>>>>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
>>>>> at
>>>>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1450)
>>>>> at com.cloud.agent.Agent.processRequest(Agent.java:645)
>>>>> at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1083)
>>>>> at com.cloud.utils.nio.Task.call(Task.java:83)
>>>>> at com.cloud.utils.nio.Task.call(Task.java:29)
>>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>>> at
>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>>>> at
>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>> at java.lang.Thread.run(Thread.java:748)
>>>>>
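>>>>> (For what it's worth, a sketch of how to see, on the management server,
>>>>> which database record the missing name corresponds to, assuming only the
>>>>> volumes table:
>>>>>
>>>>> -- the UUID below is the one reported in the "Can't find volume" error
>>>>> select id, uuid, path, pool_id, state, removed
>>>>> from volumes
>>>>> where uuid = 'd93d3c0a-3859-4473-951d-9b5c5912c767'
>>>>>    or path = 'd93d3c0a-3859-4473-951d-9b5c5912c767';)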
>>>>>
>>>>> On 09/01/20 12:58, Daan Hoogland wrote:
>>>>>
>>>>> Charlie, I think you'll have to explain a bit more about your environment
>>>>> to get an answer. What type of storage is it? Where did you migrate the VM
>>>>> from and to? What type(s) of hypervisors are you using? Though saying *the*
>>>>> agent logs suggests KVM, you are still leaving people guessing a lot.
>>>>>
>>>>>
>>>>
>>>> --
>>>> Daan
>>>>
>>>
>>
>> --
>> Daan
>>
>

-- 
Daan
