cloudstack-users mailing list archives

From claude bariot <clobar...@gmail.com>
Subject Re: Iscsi Lun for Primary storage
Date Mon, 17 Sep 2012 15:37:56 GMT
Please find the attached screenshot:

[image: Inline images 1]
I don't understand why the error message refers to secondary storage.
"Failed to copy the volume from secondary storage to the destination
primary storage pool."

regards


On 17 September 2012 11:40, claude bariot <clobariot@gmail.com> wrote:

> Hello,
>
> I just tested this, but no success.
> When I use the UI, it fails with the following message: "Failed to copy the
> volume from secondary storage to the destination primary storage"
>
> Why does it refer to secondary storage?
>
> regards
>
>
> On 14 September 2012 19:00, Ahmad Emneina <Ahmad.Emneina@citrix.com> wrote:
>
>> Programmatically via the API, if you have time. Query the volumes,
>> identify the ones you want moved, and move them one or two at a time to
>> avoid saturating your storage network. This can also be done manually via
>> the UI. Select your VM; if it's powered off you'll get an icon that looks
>> like a "+" with arrows shooting out of each end, and clicking that should
>> pop up a dialog prompting you for the storage you want to move the VM to.
>>
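A minimal sketch of what that API-driven approach could look like, assuming a management server URL and API/secret keys that are placeholders (none of these values come from this thread); it signs requests with the standard CloudStack HMAC-SHA1 scheme and uses the listVolumes and migrateVolume commands:

import base64
import hashlib
import hmac
import json
import urllib.parse
import urllib.request

# Placeholder endpoint and credentials -- substitute your own.
ENDPOINT = "http://management-server:8080/client/api"
API_KEY = "YOUR_API_KEY"
SECRET_KEY = "YOUR_SECRET_KEY"

def cloudstack_request(command, **params):
    """Sign a CloudStack API call (HMAC-SHA1 over the sorted, lowercased query) and send it."""
    params.update({"command": command, "apikey": API_KEY, "response": "json"})
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='')}" for k, v in sorted(params.items())
    )
    digest = hmac.new(SECRET_KEY.encode(), query.lower().encode(), hashlib.sha1).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode(), safe="")
    with urllib.request.urlopen(f"{ENDPOINT}?{query}&signature={signature}") as resp:
        return json.loads(resp.read().decode())

# 1) Query the volumes and note which ones still sit on the old primary storage.
volumes = cloudstack_request("listVolumes", listall="true")
for vol in volumes["listvolumesresponse"].get("volume", []):
    print(vol["id"], vol["name"], vol.get("storage"))

# 2) Move them one or two at a time, as suggested above, to avoid saturating
#    the storage network (fill in real ids from the listing).
# cloudstack_request("migrateVolume", volumeid="<volume-id>", storageid="<new-pool-id>")
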
>> On 9/14/12 9:34 AM, "claude bariot" <clobariot@gmail.com> wrote:
>>
>> >Thanks so much for your explanation.
>> >What is the best way to do the disk migration between 2 primary
>> >storage pools?
>> >
>> >regards
>> >
>> >On 14 September 2012 18:23, Ahmad Emneina <Ahmad.Emneina@citrix.com>
>> >wrote:
>> >
>> >> You need to enable the original primary storage, since that's where the VM
>> >> volumes are. Don't power on the VMs, but find their volumes and
>> >> migrate them to the new primary storage. After you have migrated them all
>> >> off, you can power them on and enable maintenance on the storage you want
>> >> removed.
>> >>
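Sketched end to end, and only as an illustration of the ordering described above, that sequence could look like this; `call` stands for any signed CloudStack API helper such as the cloudstack_request() sketch further up in the thread, the pool ids are placeholders, and the "storageid" field name is an assumption about the listVolumes response:

def drain_old_primary_storage(call, old_pool_id, new_pool_id):
    """Order-of-operations sketch: un-maintain the old pool, migrate volumes, re-maintain it.

    `call(command, **params)` is assumed to be a signed CloudStack API helper such as
    the cloudstack_request() sketch earlier in this thread; pool ids are placeholders.
    """
    # 1) Bring the original primary storage back up -- the VM volumes still live on it.
    call("cancelStorageMaintenance", id=old_pool_id)

    # 2) With the VMs still powered off, migrate their volumes off the old pool,
    #    one at a time ("storageid" on the volume is an assumption about the
    #    listVolumes response format).
    volumes = call("listVolumes", listall="true")["listvolumesresponse"].get("volume", [])
    for vol in volumes:
        if vol.get("storageid") == old_pool_id:
            call("migrateVolume", volumeid=vol["id"], storageid=new_pool_id)

    # 3) Only after everything has been moved: power the VMs back on and put the
    #    emptied pool into maintenance so it can be removed.
    call("enableStorageMaintenance", id=old_pool_id)
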
>> >> On 9/14/12 9:18 AM, "claude bariot" <clobariot@gmail.com> wrote:
>> >>
>> >> I tried another test:
>> >>
>> >> I have 3 network (shared) primary storage pools and 2 local primary storage pools.
>> >>
>> >> When I put the first network primary storage into maintenance mode, all
>> >> system VMs migrate to another network primary storage automatically...
>> >>
>> >> But the user VMs do not migrate to the other primary storage like the system VMs do.
>> >>
>> >> Can anyone help me, please?
>> >>
>> >>
>> >> On 14 September 2012 14:49, claude bariot <clobariot@gmail.com> wrote:
>> >> No VMs are running right now, because I had enabled maintenance mode for my
>> >> first PS.
>> >> Before doing this, I had added another PS (an iSCSI target)...
>> >>
>> >> Currently I have 2 PS in my cluster:
>> >> 1 in maintenance mode
>> >> 1 not in maintenance mode
>> >>
>> >> See the screenshot:
>> >> [image: inline screenshot]
>> >>
>> >>
>> >> Apparently the second PS is unusable, because I am unable to start any VM or
>> >> create a new VM.
>> >>
>> >> Any ideas?
>> >>
>> >>
>> >> On 14 September 2012 14:08, Mice Xia <weiran.xia1@gmail.com> wrote:
>> >> [storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-47:job-77) Cannot allocate this pool 204 for storage since its usage percentage: 0.9558435325173986 has crossed the pool.storage.capacity.disablethreshold: 0.85, skipping this pool
>> >> ---------
>> >>
>> >> Usage of your storage pool (id=204) has crossed 0.85, which is the
>> >> threshold above which VM allocation on that pool is disabled. You may need one
>> >> more PS, or remove some of your existing VMs to release space.
>> >>
>> >> Regards
>> >> Mice
>> >>
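As a quick cross-check, the numbers in the allocator log quoted further down reproduce that threshold comparison directly:

# Values taken from the allocator log for pool id 204 (quoted later in this thread).
total_size = 52432994304   # bytes
used_bytes = 50117738496   # bytes

usage_pct = used_bytes / total_size
disable_threshold = 0.85   # pool.storage.capacity.disablethreshold (global setting)

print(f"usage = {usage_pct:.16f}")                        # ~0.9558435325173986
print(f"pool skipped: {usage_pct > disable_threshold}")   # True -> allocator skips pool 204
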
>> >> 2012/9/14 claude bariot <clobariot@gmail.com>:
>> >> > I have another PS in my cluster. When I try to add a new instance, it
>> >> > fails with the following log messages:
>> >> >
>> >> >
>> >> > 2012-09-14 13:50:26,946 DEBUG [allocator.impl.FirstFitAllocator] (Job-Executor-47:job-77 FirstFitRoutingAllocator) Found a suitable host, adding to list: 11
>> >> > 2012-09-14 13:50:26,947 DEBUG [allocator.impl.FirstFitAllocator] (Job-Executor-47:job-77 FirstFitRoutingAllocator) Host Allocator returning 2 suitable hosts
>> >> > 2012-09-14 13:50:26,948 DEBUG [cloud.deploy.FirstFitPlanner] (Job-Executor-47:job-77) Checking suitable pools for volume (Id, Type): (27,ROOT)
>> >> > 2012-09-14 13:50:26,948 DEBUG [cloud.deploy.FirstFitPlanner] (Job-Executor-47:job-77) We need to allocate new storagepool for this volume
>> >> > 2012-09-14 13:50:26,948 DEBUG [cloud.deploy.FirstFitPlanner] (Job-Executor-47:job-77) Calling StoragePoolAllocators to find suitable pools
>> >> > 2012-09-14 13:50:26,949 DEBUG [storage.allocator.FirstFitStoragePoolAllocator] (Job-Executor-47:job-77) Looking for pools in dc: 1  pod:1  cluster:1
>> >> > 2012-09-14 13:50:26,951 DEBUG [storage.allocator.FirstFitStoragePoolAllocator] (Job-Executor-47:job-77) FirstFitStoragePoolAllocator has 2 pools to check for allocation
>> >> > 2012-09-14 13:50:26,951 DEBUG [storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-47:job-77) Checking if storage pool is suitable, name: cloud-primary ,poolId: 204
>> >> > 2012-09-14 13:50:26,951 DEBUG [storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-47:job-77) Is localStorageAllocationNeeded? false
>> >> > 2012-09-14 13:50:26,951 DEBUG [storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-47:job-77) Is storage pool shared? true
>> >> > 2012-09-14 13:50:26,952 DEBUG [storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-47:job-77) Attempting to look for pool 204 for storage, totalSize: 52432994304, usedBytes: 50117738496, usedPct: 0.9558435325173986, disable threshold: 0.85
>> >> > 2012-09-14 13:50:26,952 DEBUG [storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-47:job-77) Cannot allocate this pool 204 for storage since its usage percentage: 0.9558435325173986 has crossed the pool.storage.capacity.disablethreshold: 0.85, skipping this pool
>> >> > 2012-09-14 13:50:26,952 DEBUG [storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-47:job-77) Checking if storage pool is suitable, name: local-store1 ,poolId: 200
>> >> > 2012-09-14 13:50:26,952 DEBUG [storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-47:job-77) StoragePool status is not UP, status is: Maintenance, skipping this pool
>> >> > 2012-09-14 13:50:26,952 DEBUG [storage.allocator.FirstFitStoragePoolAllocator] (Job-Executor-47:job-77) FirstFitStoragePoolAllocator returning 0 suitable storage pools
>> >> > 2012-09-14 13:50:26,952 DEBUG [cloud.deploy.FirstFitPlanner] (Job-Executor-47:job-77) No suitable pools found for volume: Vol[27|vm=24|ROOT] under cluster: 1
>> >> > 2012-09-14 13:50:26,952 DEBUG [cloud.deploy.FirstFitPlanner] (Job-Executor-47:job-77) No suitable pools found
>> >> > 2012-09-14 13:50:26,952 DEBUG [cloud.deploy.FirstFitPlanner] (Job-Executor-47:job-77) No suitable storagePools found under this Cluster: 1
>> >> > 2012-09-14 13:50:26,952 DEBUG [cloud.deploy.FirstFitPlanner] (Job-Executor-47:job-77) Could not find suitable Deployment Destination for this VM under any clusters, returning.
>> >> > 2012-09-14 13:50:27,156 DEBUG [cloud.capacity.CapacityManagerImpl] (Job-Executor-47:job-77) VM state transitted from :Starting to Stopped with event: OperationFailedvm's original host id: null new host id: null host id before state transition: null
>> >> > 2012-09-14 13:50:27,376 DEBUG [cloud.capacity.CapacityManagerImpl] (Job-Executor-47:job-77) VM state transitted from :Stopped to Error with event: OperationFailedToErrorvm's original host id: null new host id: null host id before state transition: null
>> >> > 2012-09-14 13:50:28,041 ERROR [cloud.alert.AlertManagerImpl] (Job-Executor-47:job-77) Problem sending email alert
>> >> > 2012-09-14 13:50:28,270 INFO  [api.commands.DeployVMCmd] (Job-Executor-47:job-77) com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|i-2-24-VM]Scope=interface com.cloud.dc.DataCenter; id=1
>> >> > 2012-09-14 13:50:28,270 WARN  [cloud.api.ApiDispatcher] (Job-Executor-47:job-77) class com.cloud.api.ServerApiException : Unable to create a deployment for VM[User|i-2-24-VM]
>> >> > 2012-09-14 13:50:28,270 DEBUG [cloud.async.AsyncJobManagerImpl] (Job-Executor-47:job-77) Complete async job-77, jobStatus: 2, resultCode: 530, result: com.cloud.api.response.ExceptionResponse@75cb722f
>> >> > 2012-09-14 13:50:31,787 DEBUG [cloud.async.AsyncJobManagerImpl] (catalina-exec-17:null) Async job-77 completed
>> >> >
>> >> >
>> >> >
>> >> >
>> >> > On 14 September 2012 13:46, claude bariot <clobariot@gmail.com> wrote:
>> >> >
>> >> >> Yep.
>> >> >> The system VMs restarted fine on the available primary storage.
>> >> >>
>> >> >> But I would like to know: how can I use my other available PS?
>> >> >> regards
>> >> >>
>> >> >> On 14 September 2012 10:50, Mice Xia <mice_xia@tcloudcomputing.com> wrote:
>> >> >>
>> >> >>> If I recall correctly, this is by design. Maintenance is meant for
>> >> >>> scenarios where you want to power off the primary storage and replace
>> >> >>> hardware chips in it.
>> >> >>>
>> >> >>> When you put a primary storage into maintenance, the associated system VMs
>> >> >>> and virtual router get restarted on other available PS.
>> >> >>> User VMs will just stop.
>> >> >>>
>> >> >>> Regards
>> >> >>> Mice
>> >> >>>
>> >> >>> -----Original Message-----
>> >> >>> From: claude bariot [mailto:clobariot@gmail.com]
>> >> >>> Sent: Friday, September 14, 2012 4:09 PM
>> >> >>> To: cloudstack-users@incubator.apache.org
>> >> >>> Subject: Re: Iscsi Lun for Primary storage
>> >> >>>
>> >> >>> OK.
>> >> >>> Now I have 2 primary storage pools in my CS platform:
>> >> >>> 1 NFS share (older and running fine)
>> >> >>> 1 iSCSI target
>> >> >>>
>> >> >>> Problem:
>> >> >>> - When I enable maintenance mode for the NFS-share primary storage, I
>> >> >>>   saw the following:
>> >> >>>   . all system VM disks migrate automatically to the iSCSI share (the new
>> >> >>>     primary storage)
>> >> >>>   - but all VM instances were stopped, and their restart failed ...
>> >> >>>
>> >> >>> Why ?
>> >> >>>
>> >> >>>
>> >> >>> On 13 September 2012 20:51, Anthony Xu <Xuefei.Xu@citrix.com> wrote:
>> >> >>>
>> >> >>> > >- set node.startup to automatic in /etc/iscsi/iscsid.conf?
>> >> >>> > >- connect to the target, or will CS connect automatically after I add
>> >> >>> > a primary storage from the UI?
>> >> >>> > >- log in manually to the LUN target
>> >> >>> > >- run fdisk to partition the new disk (LUN)
>> >> >>> > >- format the disk, etc. ...
>> >> >>> >
>> >> >>> >
>> >> >>> > You don't need to do any of this; XenServer will do it automatically.
>> >> >>> >
>> >> >>> >
>> >> >>> > Anthony
>> >> >>> >
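For reference, adding that pool through the UI corresponds to a createStoragePool API call; a hypothetical parameter set for it (the zone/pod/cluster ids and the target hostname are placeholders, only the pool name and IQN path come from the message below) could look like this:

# Hypothetical createStoragePool parameters -- ids and target host are placeholders.
params = {
    "command":   "createStoragePool",
    "zoneid":    "<zone-id>",
    "podid":     "<pod-id>",
    "clusterid": "<cluster-id>",
    "name":      "cloud-primary",
    # iSCSI primary storage is addressed as iscsi://<target-host>/<iqn>/<lun>;
    # the hypervisor (XenServer here) logs in to the target and prepares the
    # storage repository itself, so no manual iscsiadm/fdisk/mkfs is needed.
    "url": "iscsi://openfiler-host/iqn.2012-09.com.openfiler:primay-st/0",
}
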
>> >> >>> >
>> >> >>> > -----Original Message-----
>> >> >>> > From: claude bariot [mailto:clobariot@gmail.com]
>> >> >>> > Sent: Thursday, September 13, 2012 6:16 AM
>> >> >>> > To: cloudstack-users@incubator.apache.org
>> >> >>> > Subject: Iscsi Lun for Primary storage
>> >> >>> >
>> >> >>> > I added an additional primary storage (using the CS UI), with the
>> >> >>> > following details:
>> >> >>> >
>> >> >>> > Name: cloud-primary
>> >> >>> > Type: IscsiLUN
>> >> >>> > Path: /iqn.2012-09.com.openfiler:primay-st/0
>> >> >>> >
>> >> >>> > I would like to know whether I should perform the following operations on
>> >> >>> > the Management Server:
>> >> >>> >
>> >> >>> >
>> >> >>> > - set node.startup to automatic in /etc/iscsi/iscsid.conf?
>> >> >>> > - connect to the target, or will CS connect automatically after I add a
>> >> >>> > primary storage from the UI?
>> >> >>> > - log in manually to the LUN target
>> >> >>> > - run fdisk to partition the new disk (LUN)
>> >> >>> > - format the disk, etc. ...
>> >> >>> >
>> >> >>> > regards
>> >> >>> >
>> >> >>>
>> >> >>
>> >> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >
>>
>>
>>
>>
>>
>>
>
