cloudstack-users mailing list archives

From Daznis <daz...@gmail.com>
Subject Re: Can't KVM migrate between local storage.
Date Wed, 16 May 2018 08:11:45 GMT
Hi Marc,


I have attached all the XML responses. I have redacted some details in the
XML files to hide sensitive information.




On Mon, May 14, 2018 at 9:25 PM, Marc-Aurèle Brothier <marco@exoscale.ch> wrote:
> Can you give us the result of those API calls:
>
> listZones
> listZones id=2
> listHosts
> listHosts id=5
> listStoragePools
> listStoragePools id=1
> listVirtualMachines id=19
> listVolumes id=70
>
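These can be run from the CloudMonkey CLI against the management server. A
minimal sketch, assuming a configured CloudMonkey profile; note that the list
APIs expect UUIDs, so the numeric IDs taken from the logs may first need to be
mapped to their UUIDs:

  cloudmonkey list zones
  cloudmonkey list zones id=<zone-uuid>
  cloudmonkey list hosts
  cloudmonkey list hosts id=<host-uuid>
  cloudmonkey list storagepools
  cloudmonkey list storagepools id=<pool-uuid>
  cloudmonkey list virtualmachines id=<vm-uuid>
  cloudmonkey list volumes id=<volume-uuid>
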
> On Mon, May 14, 2018 at 5:30 PM, Daznis <daznis@gmail.com> wrote:
>
>> Hi,
>>
>> It has 1 zone. I'm not sure how it got zoneid=2; adding the zone probably
>> failed the first time and it was added again. We have 4 hosts with local
>> storage on them for system VMs and VMs that need SSD storage, Ceph primary
>> storage for everything else, plus one secondary storage server.
>>
>> On Mon, May 14, 2018 at 5:38 PM, Marc-Aurèle Brothier <marco@exoscale.ch>
>> wrote:
>> > Hi Daznis,
>> >
>> > Reading the logs, I see some inconsistency in the values. Can you describe
>> > the infrastructure you set up? The thing that disturbs me is a zoneid=2
>> > and a destination pool id=1. Aren't you trying to migrate a volume of a
>> > VM between 2 regions/zones?
>> >
>> > On Sat, May 12, 2018 at 2:33 PM, Daznis <daznis@gmail.com> wrote:
>> >
>> >> Hi,
>> >> Actually that's the whole log; above it is just the job starting. I have
>> >> attached the missing part of the log. Which tables do you need from
>> >> the database?
>> >> There are multiple records with allocated/creating states inside
>> >> volume_store_ref. There is nothing that looks wrong with
>> >> volumes/snapshots/snapshot_store_ref.
>> >>
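Those tables can be inspected directly on the management server. A minimal
sketch, assuming the default CloudStack database name "cloud" and that 70 is
the internal numeric volume ID seen in the logs:

  # stuck entries for the volume on secondary (image) storage
  mysql -u cloud -p cloud -e "SELECT * FROM volume_store_ref WHERE volume_id = 70 \G"
  # the volume row itself
  mysql -u cloud -p cloud -e "SELECT * FROM volumes WHERE id = 70 \G"
  # snapshots taken from that volume
  mysql -u cloud -p cloud -e "SELECT * FROM snapshots WHERE volume_id = 70 \G"
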
>> >> On Thu, May 10, 2018 at 9:27 PM, Suresh Kumar Anaparti
>> >> <sureshkumar.anaparti@gmail.com> wrote:
>> >> > Hi Darius,
>> >> >
>> >> > From the logs, I could observe that the image volume is already in the
>> >> > creating state and the same volume is being used for copying the volume
>> >> > between pools, so the state transition failed. Could you please provide
>> >> > the complete log for the use case so we can root-cause the issue? Also,
>> >> > include the volumes and snapshots DB details for the mentioned volume
>> >> > and snapshot.
>> >> >
>> >> > -Suresh
>> >> >
>> >> >
>> >> > On Thu, May 10, 2018 at 1:22 PM, Daznis <daznis@gmail.com> wrote:
>> >> >
>> >> >> Snapshots work fine. I can make a snapshot, convert it to a template,
>> >> >> and start the VM on a new node from that template; that is how I moved
>> >> >> one VM when I needed to rebalance. But I want to fix the migration
>> >> >> process itself. I have attached the error log to this email. Maybe I'm
>> >> >> looking in the wrong place for the error?
>> >> >>
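For reference, that snapshot-to-template workaround maps roughly to the
following API calls (a sketch via CloudMonkey; all UUIDs and the template
name are placeholders):

  # snapshot the VM's root volume
  cloudmonkey create snapshot volumeid=<root-volume-uuid>
  # turn the snapshot into a template
  cloudmonkey create template snapshotid=<snapshot-uuid> name=migrate-tmpl \
      displaytext=migrate-tmpl ostypeid=<os-type-uuid>
  # deploy a new VM from that template on the destination host
  cloudmonkey deploy virtualmachine templateid=<template-uuid> \
      serviceofferingid=<offering-uuid> zoneid=<zone-uuid> hostid=<dest-host-uuid>
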
>> >> >> On Wed, May 9, 2018 at 9:23 PM, Marc-Aurèle Brothier <marco@exoscale.ch> wrote:
>> >> >> > Can you try to perform a snapshot of the volume on VMs that are on
>> >> >> > your host, to see if they get copied correctly over the NFS too?
>> >> >> >
>> >> >> > Otherwise you need to look into the management logs to catch the
>> >> >> > exception (stack trace) to have a better understanding of the issue.
>> >> >> >
>> >> >> > On Wed, May 9, 2018 at 1:58 PM, Daznis <daznis@gmail.com> wrote:
>> >> >> >
>> >> >> >> Hello,
>> >> >> >>
>> >> >> >>
>> >> >> >> Yeah, it's offline. I'm running version 4.9.2. It is within the
>> >> >> >> same zone, with the only NFS secondary storage.
>> >> >> >>
>> >> >> >> On Wed, May 9, 2018 at 10:49 AM, Marc-Aurèle Brothier <marco@exoscale.ch> wrote:
>> >> >> >> > Hi Darius,
>> >> >> >> >
>> >> >> >> > Are you trying to perform an offline migration within the same
>> >> >> >> > zone, meaning that the source and destination hosts have the same
>> >> >> >> > set of NFS secondary storage?
>> >> >> >> >
>> >> >> >> > Marc-Aurèle
>> >> >> >> >
>> >> >> >> > On Tue, May 8, 2018 at 3:37 PM, Daznis <daznis@gmail.com> wrote:
>> >> >> >> >
>> >> >> >> >> Hi,
>> >> >> >> >>
>> >> >> >> >>
>> >> >> >> >> I'm having an issue while migrating an offline VM disk between
>> >> >> >> >> local storages. The particular error that has me baffled is
>> >> >> >> >> "Can't find staging storage in zone". From what I have gathered,
>> >> >> >> >> "staging storage" refers to secondary storage in CloudStack, and
>> >> >> >> >> secondary storage is working perfectly fine on both the source
>> >> >> >> >> and destination nodes. Not sure where to go next. Any help would
>> >> >> >> >> be appreciated.
>> >> >> >> >>
>> >> >> >> >>
>> >> >> >> >> Regards,
>> >> >> >> >> Darius
>> >> >> >> >>
>> >> >> >>
>> >> >>
>> >>
>>
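For context, the operation failing in this thread is the offline volume
migration exposed by the migrateVolume API. A minimal sketch via CloudMonkey,
with placeholder UUIDs (the VM must be stopped for an offline migration of
its disk):

  cloudmonkey migrate volume volumeid=<volume-uuid> storageid=<destination-local-pool-uuid>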
