Subject: Re: Iscsi Lun for Primary storage
From: claude bariot <clobariot@gmail.com>
To: cloudstack-users@incubator.apache.org
Date: Fri, 14 Sep 2012 18:34:07 +0200

Thanks so much for your explanation.
What is the best way to migrate the disks between the two primary storages?
regards

On 14 September 2012 18:23, Ahmad Emneina wrote:
> You need to re-enable the original primary storage, since that's where the
> VM volumes are. Don't power on the VMs; instead, find their volumes and
> volume-migrate them to the new primary storage. After you have migrated
> them all off, you can power the VMs on and enable maintenance on the
> storage you want removed.
>
> On 9/14/12 9:18 AM, "claude bariot" <clobariot@gmail.com> wrote:
>
> I tried another test:
>
> I have 3 network primary storages and 2 local primary storages.
>
> When I put the first network primary storage into maintenance mode, all
> system VMs migrate to another network primary storage automatically...
>
> But the user VMs don't migrate to the other primary storage the way the
> system VMs do.
>
> Can anyone help me, please?
>
>
> On 14 September 2012 14:49, claude bariot <clobariot@gmail.com> wrote:
> No VMs are running right now, because I had enabled maintenance mode for
> my first PS.
> Before doing that, I had added another PS (an iSCSI target)...
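The volume-migrate step Ahmad describes above can also be scripted against the CloudStack API: `migrateVolume` takes a `volumeid` and the destination `storageid`, and every request is signed by sorting the parameters, lowercasing the query string, and computing an HMAC-SHA1 over it with your secret key. A minimal sketch, assuming a default management-server endpoint on port 8080; the API key, secret key, and UUIDs below are hypothetical placeholders:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params: dict, secret_key: str) -> str:
    # CloudStack signing: sort params by key, URL-encode the values,
    # lowercase the whole query string, HMAC-SHA1 it with the secret key,
    # then base64-encode the digest.
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='*')}"
        for k, v in sorted(params.items())
    )
    digest = hmac.new(secret_key.encode("utf-8"),
                      query.lower().encode("utf-8"),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode("utf-8")

params = {
    "command": "migrateVolume",
    "volumeid": "volume-uuid-here",     # placeholder: UUID from listVolumes
    "storageid": "new-pool-uuid-here",  # placeholder: UUID of the new PS
    "apikey": "your-api-key",           # placeholder
    "response": "json",
}
signature = sign_request(params, "your-secret-key")  # placeholder key

# Assumed default endpoint; adjust host/port for your install.
url = ("http://localhost:8080/client/api?"
       + "&".join(f"{k}={urllib.parse.quote(str(v))}"
                  for k, v in sorted(params.items()))
       + "&signature=" + urllib.parse.quote(signature))
print(url)
```

Repeating this for each volume found on the old pool (e.g. via `listVolumes` filtered by storage), then powering the VMs back on, matches the order of operations Ahmad outlines.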
>
> Actually I have 2 PS in my cluster:
> 1 is in maintenance mode
> 1 is not in maintenance mode
>
> see the screenshot:
> [cid:ii_139c4d08a3f76688]
>
> Apparently the second PS is unusable, because I am unable to start any VM
> or create a new one.
>
> Ideas?
>
>
> On 14 September 2012 14:08, Mice Xia <weiran.xia1@gmail.com> wrote:
> [storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-47:job-77)
> Cannot allocate this pool 204 for storage since its usage percentage:
> 0.9558435325173986 has crossed the
> pool.storage.capacity.disablethreshold: 0.85, skipping this pool
> ---------
>
> The usage of your storage pool (id=204) has crossed 0.85, which is the
> threshold above which VM allocation on the pool is disabled. Maybe you
> need one more PS, or remove some of your existing VMs to free up space.
>
> Regards
> Mice
>
> 2012/9/14 claude bariot <clobariot@gmail.com>:
> > I have another PS in my cluster. When I try to add a new instance, it
> > fails with the following log messages:
> >
> >
> > 2012-09-14 13:50:26,946 DEBUG [allocator.impl.FirstFitAllocator]
> > (Job-Executor-47:job-77 FirstFitRoutingAllocator) Found a suitable
> > host, adding to list: 11
> > 2012-09-14 13:50:26,947 DEBUG [allocator.impl.FirstFitAllocator]
> > (Job-Executor-47:job-77 FirstFitRoutingAllocator) Host Allocator
> > returning 2 suitable hosts
> > 2012-09-14 13:50:26,948 DEBUG [cloud.deploy.FirstFitPlanner]
> > (Job-Executor-47:job-77) Checking suitable pools for volume (Id, Type):
> > (27,ROOT)
> > 2012-09-14 13:50:26,948 DEBUG [cloud.deploy.FirstFitPlanner]
> > (Job-Executor-47:job-77) We need to allocate new storagepool for this
> > volume
> > 2012-09-14 13:50:26,948 DEBUG [cloud.deploy.FirstFitPlanner]
> > (Job-Executor-47:job-77) Calling StoragePoolAllocators to find suitable
> > pools
> > 2012-09-14 13:50:26,949 DEBUG
> > [storage.allocator.FirstFitStoragePoolAllocator] (Job-Executor-47:job-77)
> > Looking for pools in dc: 1 pod:1 cluster:1
> > 2012-09-14 13:50:26,951 DEBUG
> > [storage.allocator.FirstFitStoragePoolAllocator] (Job-Executor-47:job-77)
> > FirstFitStoragePoolAllocator has 2 pools to check for allocation
> > 2012-09-14 13:50:26,951 DEBUG
> > [storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-47:job-77)
> > Checking if storage pool is suitable, name: cloud-primary, poolId: 204
> > 2012-09-14 13:50:26,951 DEBUG
> > [storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-47:job-77)
> > Is localStorageAllocationNeeded? false
> > 2012-09-14 13:50:26,951 DEBUG
> > [storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-47:job-77)
> > Is storage pool shared? true
> > 2012-09-14 13:50:26,952 DEBUG
> > [storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-47:job-77)
> > Attempting to look for pool 204 for storage, totalSize: 52432994304,
> > usedBytes: 50117738496, usedPct: 0.9558435325173986, disable
> > threshold: 0.85
> > 2012-09-14 13:50:26,952 DEBUG
> > [storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-47:job-77)
> > Cannot allocate this pool 204 for storage since its usage
> > percentage: 0.9558435325173986 has crossed the
> > pool.storage.capacity.disablethreshold: 0.85, skipping this pool
> > 2012-09-14 13:50:26,952 DEBUG
> > [storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-47:job-77)
> > Checking if storage pool is suitable, name: local-storage1, poolId: 200
> > 2012-09-14 13:50:26,952 DEBUG
> > [storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-47:job-77)
> > StoragePool status is not UP, status is: Maintenance, skipping this pool
> > 2012-09-14 13:50:26,952 DEBUG
> > [storage.allocator.FirstFitStoragePoolAllocator] (Job-Executor-47:job-77)
> > FirstFitStoragePoolAllocator returning 0 suitable storage pools
> > 2012-09-14 13:50:26,952 DEBUG [cloud.deploy.FirstFitPlanner]
> > (Job-Executor-47:job-77) No suitable pools found for volume:
> > Vol[27|vm=24|ROOT] under cluster: 1
> > 2012-09-14 13:50:26,952 DEBUG [cloud.deploy.FirstFitPlanner]
> > (Job-Executor-47:job-77) No suitable pools found
> > 2012-09-14 13:50:26,952 DEBUG [cloud.deploy.FirstFitPlanner]
> > (Job-Executor-47:job-77) No suitable storagePools found under this
> > Cluster: 1
> > 2012-09-14 13:50:26,952 DEBUG [cloud.deploy.FirstFitPlanner]
> > (Job-Executor-47:job-77) Could not find suitable Deployment Destination
> > for this VM under any clusters, returning.
> > 2012-09-14 13:50:27,156 DEBUG [cloud.capacity.CapacityManagerImpl]
> > (Job-Executor-47:job-77) VM state transitted from :Starting to Stopped
> > with event: OperationFailed vm's original host id: null new host
> > id: null host id before state transition: null
> > 2012-09-14 13:50:27,376 DEBUG [cloud.capacity.CapacityManagerImpl]
> > (Job-Executor-47:job-77) VM state transitted from :Stopped to Error
> > with event: OperationFailedToError vm's original host id: null
> > new host id: null host id before state transition: null
> > 2012-09-14 13:50:28,041 ERROR [cloud.alert.AlertManagerImpl]
> > (Job-Executor-47:job-77) Problem sending email alert
> > 2012-09-14 13:50:28,270 INFO [api.commands.DeployVMCmd]
> > (Job-Executor-47:job-77)
> > com.cloud.exception.InsufficientServerCapacityException: Unable to
> > create a deployment for VM[User|i-2-24-VM]Scope=interface
> > com.cloud.dc.DataCenter; id=1
> > 2012-09-14 13:50:28,270 WARN [cloud.api.ApiDispatcher]
> > (Job-Executor-47:job-77) class com.cloud.api.ServerApiException : Unable
> > to create a deployment for VM[User|i-2-24-VM]
> > 2012-09-14 13:50:28,270 DEBUG [cloud.async.AsyncJobManagerImpl]
> > (Job-Executor-47:job-77) Complete async job-77, jobStatus: 2,
> > resultCode: 530, result: com.cloud.api.response.ExceptionResponse@75cb722f
> > 2012-09-14 13:50:31,787 DEBUG [cloud.async.AsyncJobManagerImpl]
> > (catalina-exec-17:null) Async job-77 completed
> >
> >
> >
> >
> > On 14 September 2012 13:46, claude bariot <clobariot@gmail.com> wrote:
> >
> >> Yep.
> >> The system VMs were restarted onto the available primary storage fine.
> >>
> >> But I would like to know how I can use my other available PS.
> >> regards
> >>
> >> On 14 September 2012 10:50, Mice Xia wrote:
> >>
> >>> If I recall correctly, this is by design. Maintenance is intended for
> >>> scenarios where you want to power off a primary storage and replace
> >>> hardware in it.
> >>>
> >>> When you put a primary storage into maintenance, the system VMs and
> >>> virtual routers associated with it get restarted on other available PS.
> >>> User VMs just stop.
> >>>
> >>> Regards
> >>> Mice
> >>>
> >>> -----Original Message-----
> >>> From: claude bariot [mailto:clobariot@gmail.com]
> >>> Sent: Friday, September 14, 2012 4:09 PM
> >>> To: cloudstack-users@incubator.apache.org
> >>> Subject: Re: Iscsi Lun for Primary storage
> >>>
> >>> Ok.
> >>> Now I have 2 primary storages in my CS platform:
> >>> 1 NFS share (older, and running fine)
> >>> 1 iSCSI target
> >>>
> >>> Problem:
> >>> - When I enable "maintenance mode" for the NFS-share primary storage,
> >>> I saw the following:
> >>>   - all system VM disks migrate automatically to the iSCSI share
> >>>     (the new primary storage)
> >>>   - but all VM instances were stopped, and restarting them failed...
> >>>
> >>> Why?
> >>>
> >>>
> >>> On 13 September 2012 20:51, Anthony Xu <Xuefei.Xu@citrix.com> wrote:
> >>>
> >>> > >- set node.startup to automatic in /etc/iscsi/iscsid.conf ?
> >>> > >- connect to the target ? or will CS connect automatically after I
> >>> add
> >>> > a primary storage from the UI ?
> >>> > >- log in manually to the LUN target
> >>> > >- run fdisk to partition the new disk (LUN)
> >>> > >- format the disk, etc.
> >>> >
> >>> >
> >>> > You don't need to do any of this; XenServer will do it automatically.
> >>> >
> >>> >
> >>> > Anthony
> >>> >
> >>> >
> >>> > -----Original Message-----
> >>> > From: claude bariot [mailto:clobariot@gmail.com]
> >>> > Sent: Thursday, September 13, 2012 6:16 AM
> >>> > To: cloudstack-users@incubator.apache.org
> >>> > Subject: Iscsi Lun for Primary storage
> >>> >
> >>> > I added an additional primary storage (using the CS UI), with the
> >>> > following details:
> >>> >
> >>> > *Name*: cloud-primary
> >>> > *Type*: IscsiLUN
> >>> > *Path*: /iqn.2012-09.com.openfiler:primay-st/0
> >>> >
> >>> > I would like to know whether I should do the following operations
> >>> > on the management server:
> >>> >
> >>> > - set node.startup to automatic in /etc/iscsi/iscsid.conf ?
> >>> > - connect to the target ? or will CS connect automatically after I
> >>> > add a primary storage from the UI ?
> >>> > - log in manually to the LUN target
> >>> > - run fdisk to partition the new disk (LUN)
> >>> > - format the disk, etc.
> >>> >
> >>> > regards
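For reference, the allocator's decision in the log excerpt earlier in the thread is a straightforward capacity check. A quick sketch that re-derives it, with the pool figures copied verbatim from the DEBUG lines (totalSize, usedBytes, and the global setting pool.storage.capacity.disablethreshold):

```python
# Figures for pool 204, taken from the allocator DEBUG log above.
total_bytes = 52_432_994_304
used_bytes = 50_117_738_496
disable_threshold = 0.85  # pool.storage.capacity.disablethreshold

# The allocator skips any pool whose usage fraction exceeds the threshold.
usage = used_bytes / total_bytes
print(f"usedPct: {usage:.16f}")  # the 0.9558... figure in the log

# How much space must be freed (or added) before the pool becomes
# allocatable again at the current threshold.
bytes_over = used_bytes - disable_threshold * total_bytes
print(f"need to free at least {bytes_over / 2**30:.1f} GiB")
```

So with the pool at roughly 96% full against an 85% cutoff, the allocator's refusal is expected; either free several GiB on pool 204, add another PS, or (temporarily, and at some risk of filling the pool) raise the threshold in the global settings.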