cloudstack-issues mailing list archives

From "ASF subversion and git services (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CLOUDSTACK-4325) [Zone-Wide-PrimaryStorage] CloudStack is failing to pickup zone wide primary storages if both cluster and zone wide storages are present
Date Wed, 14 Aug 2013 22:53:51 GMT

    [ https://issues.apache.org/jira/browse/CLOUDSTACK-4325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13740355#comment-13740355 ]

ASF subversion and git services commented on CLOUDSTACK-4325:
-------------------------------------------------------------

Commit 37d58313c9c90c2b3191b55b0cc6927d9f3d2077 in branch refs/heads/master from [~edison]
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=37d5831 ]

CLOUDSTACK-4325: if the user-dispersing algorithm is used, zone-wide storages are never picked up
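The commit message points at the user-dispersing allocation path. As a minimal illustration of this bug class (not the actual CloudStack code; all names here are invented), a reorder step that buckets candidate pools by cluster id can silently drop zone-wide pools, because their `cluster_id` is NULL:

```java
import java.util.*;

// Hypothetical sketch of the failure mode described in the commit message:
// a "user dispersing" reorder that groups candidate storage pools by cluster.
// Zone-wide pools carry a null clusterId, so a grouping keyed on clusterId
// can drop them from the candidate list entirely.
class PoolDispersionSketch {
    record Pool(long id, Long clusterId, String scope) {}

    // Buggy variant: skips pools whose clusterId is null (ZONE scope).
    static List<Pool> reorderBuggy(List<Pool> pools) {
        Map<Long, List<Pool>> byCluster = new LinkedHashMap<>();
        for (Pool p : pools) {
            if (p.clusterId() == null) continue;   // zone-wide pools vanish here
            byCluster.computeIfAbsent(p.clusterId(), k -> new ArrayList<>()).add(p);
        }
        List<Pool> out = new ArrayList<>();
        byCluster.values().forEach(out::addAll);
        return out;
    }

    // Fixed variant: zone-wide pools keep a bucket of their own.
    static List<Pool> reorderFixed(List<Pool> pools) {
        Map<Optional<Long>, List<Pool>> byCluster = new LinkedHashMap<>();
        for (Pool p : pools) {
            byCluster.computeIfAbsent(Optional.ofNullable(p.clusterId()),
                    k -> new ArrayList<>()).add(p);
        }
        List<Pool> out = new ArrayList<>();
        byCluster.values().forEach(out::addAll);
        return out;
    }
}
```

With a mix of cluster-scoped and zone-scoped pools, the buggy reorder returns only the cluster-scoped ones, which matches the symptom reported below.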

                
> [Zone-Wide-PrimaryStorage] CloudStack is failing to pickup zone wide primary storages if both cluster and zone wide storages are present
> ----------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: CLOUDSTACK-4325
>                 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4325
>             Project: CloudStack
>          Issue Type: Bug
>      Security Level: Public(Anyone can view this level - this is the default.) 
>          Components: Storage Controller
>    Affects Versions: 4.2.0
>         Environment: commit id # 8df22d1818c120716bea5fce39854da38f61055b
>            Reporter: venkata swamybabu budumuru
>            Assignee: edison su
>            Priority: Critical
>             Fix For: 4.2.0
>
>         Attachments: logs.tgz
>
>
> Steps to reproduce:
> 1. Have the latest CloudStack setup with at least 1 advanced zone.
> 2. The setup was created using the Marvin framework via APIs.
> 3. While creating the zone, I added 2 cluster-wide primary storages:
> - PS0
> - PS1
> mysql> select * from storage_pool where id<3\G
> *************************** 1. row ***************************
>                    id: 1
>                  name: PS0
>                  uuid: 5458182e-bfcb-351c-97ed-e7223bca2b8e
>             pool_type: NetworkFilesystem
>                  port: 2049
>        data_center_id: 1
>                pod_id: 1
>            cluster_id: 1
>            used_bytes: 4218878263296
>        capacity_bytes: 5902284816384
>          host_address: 10.147.28.7
>             user_info: NULL
>                  path: /export/home/swamy/primary.campo.kvm.1.zone
>               created: 2013-08-14 07:10:01
>               removed: NULL
>           update_time: NULL
>                status: Maintenance
> storage_provider_name: DefaultPrimary
>                 scope: CLUSTER
>            hypervisor: NULL
>               managed: 0
>         capacity_iops: NULL
> *************************** 2. row ***************************
>                    id: 2
>                  name: PS1
>                  uuid: 94634fe1-55f7-3fa8-aad9-5adc25246072
>             pool_type: NetworkFilesystem
>                  port: 2049
>        data_center_id: 1
>                pod_id: 1
>            cluster_id: 1
>            used_bytes: 4217960071168
>        capacity_bytes: 5902284816384
>          host_address: 10.147.28.7
>             user_info: NULL
>                  path: /export/home/swamy/primary.campo.kvm.2.zone
>               created: 2013-08-14 07:10:02
>               removed: NULL
>           update_time: NULL
>                status: Maintenance
> storage_provider_name: DefaultPrimary
>                 scope: CLUSTER
>            hypervisor: NULL
>               managed: 0
>         capacity_iops: NULL
> 2 rows in set (0.00 sec)
> Observations:
> (i) SSVM and CPVM volumes got created on pool_id=1
> 4. The zone was set up without any issues.
> 5. Added the following zone-wide primary storages:
> - test1
> - test2
> mysql> select * from storage_pool where id>7\G
> *************************** 1. row ***************************
>                    id: 8
>                  name: test1
>                  uuid: 4e612995-3cb1-344e-ba19-3992e3d37d3f
>             pool_type: NetworkFilesystem
>                  port: 2049
>        data_center_id: 1
>                pod_id: NULL
>            cluster_id: NULL
>            used_bytes: 4214658203648
>        capacity_bytes: 5902284816384
>          host_address: 10.147.28.7
>             user_info: NULL
>                  path: /export/home/swamy/test1
>               created: 2013-08-14 09:49:56
>               removed: NULL
>           update_time: NULL
>                status: Up
> storage_provider_name: DefaultPrimary
>                 scope: ZONE
>            hypervisor: KVM
>               managed: 0
>         capacity_iops: NULL
> *************************** 2. row ***************************
>                    id: 9
>                  name: test2
>                  uuid: 43a95e23-1ad6-30a9-9903-f68231dacec5
>             pool_type: NetworkFilesystem
>                  port: 2049
>        data_center_id: 1
>                pod_id: NULL
>            cluster_id: NULL
>            used_bytes: 4214658793472
>        capacity_bytes: 5902284816384
>          host_address: 10.147.28.7
>             user_info: NULL
>                  path: /export/home/swamy/test2
>               created: 2013-08-14 09:50:12
>               removed: NULL
>           update_time: NULL
>                status: Up
> storage_provider_name: DefaultPrimary
>                 scope: ZONE
>            hypervisor: KVM
>               managed: 0
>         capacity_iops: NULL
> 6. Created a non-ROOT domain user and deployed VMs.
> 7. Created 5 volumes as the above user (volume ids: 23, 24, 25, 26, 27).
> 8. Tried to attach volumes 23 & 24 to the above deployed VM.
> Observations :
> (ii) user VMs came up on pool_id=1 and router VMs came up on pool_id=2
> (iii) both DATADISKS (23 & 24) got attached, but they were allocated on pool_id=1. Zone-wide storages were never picked; it looks like the only way for that to happen is through storageTags.
> 9. Now placed the storages (PS0, PS1) in maintenance mode.
> Observations:
> (iv) PS0 and PS1 went to maintenance mode successfully.
> 10. Now tried to attach volume ids 25, 26, 27 to the VM; it failed with the following error:
> 2013-08-14 17:23:36,365 ERROR [cloud.async.AsyncJobManagerImpl] (Job-Executor-18:job-69 = [ 8ddd30f2-fbf4-45d5-b5c8-9e67fcfc085c ]) Unexpected exception while executing org.apache.cloudstack.api.command.user.volume.AttachVolumeCmd
> com.cloud.utils.exception.CloudRuntimeException: Unable to find storage pool when create volumetest123
> 	at com.cloud.storage.VolumeManagerImpl.createVolume(VolumeManagerImpl.java:677)
> 	at com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
> 	at com.cloud.storage.VolumeManagerImpl.createVolumeOnPrimaryStorage(VolumeManagerImpl.java:1538)
> 	at com.cloud.storage.VolumeManagerImpl.attachVolumeToVM(VolumeManagerImpl.java:1862)
> 	at com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
> 	at org.apache.cloudstack.api.command.user.volume.AttachVolumeCmd.execute(AttachVolumeCmd.java:122)
> 	at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:158)
> 	at com.cloud.async.AsyncJobManagerImpl$1.run(AsyncJobManagerImpl.java:531)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> 	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:679)
> 11. Tried to deploy a VM, but that failed as well with the following error:
> 2013-08-14 17:23:51,283 INFO  [user.vm.DeployVMCmd] (Job-Executor-19:job-70 = [ 5da32f69-bded-4d24-9423-97d18d426f5d ]) Unable to create a deployment for VM[User|58f3ba16-1481-4a72-bb7d-80d8b50830dc]
> com.cloud.exception.InsufficientServerCapacityException: Unable to create a deployment for VM[User|58f3ba16-1481-4a72-bb7d-80d8b50830dc]Scope=interface com.cloud.dc.DataCenter; id=1
> 	at org.apache.cloudstack.engine.cloud.entity.api.VMEntityManagerImpl.reserveVirtualMachine(VMEntityManagerImpl.java:209)
> 	at org.apache.cloudstack.engine.cloud.entity.api.VirtualMachineEntityImpl.reserve(VirtualMachineEntityImpl.java:198)
> 	at com.cloud.vm.UserVmManagerImpl.startVirtualMachine(UserVmManagerImpl.java:3404)
> 	at com.cloud.vm.UserVmManagerImpl.startVirtualMachine(UserVmManagerImpl.java:2965)
> 	at com.cloud.vm.UserVmManagerImpl.startVirtualMachine(UserVmManagerImpl.java:2951)
> 	at com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
> 	at org.apache.cloudstack.api.command.user.vm.DeployVMCmd.execute(DeployVMCmd.java:420)
> 	at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:158)
> 	at com.cloud.async.AsyncJobManagerImpl$1.run(AsyncJobManagerImpl.java:531)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> 	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:679)
> Expected Behaviour:
> ===============
> - When both PS0 and PS1 are in maintenance mode, it should automatically pick test1 and test2 for volumes or for new VM deployments.
> Attaching all the required logs along with the db dump to the bug.
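The expected behaviour described above amounts to a scope-aware fallback: prefer usable cluster-scoped pools, but when every cluster pool is in Maintenance, consider zone-scoped pools instead of failing the allocation outright. A minimal sketch of that idea, with invented names and the scope/status values taken from the report:

```java
import java.util.*;
import java.util.stream.Collectors;

// Hypothetical illustration of the expected fallback behaviour (not the
// actual CloudStack allocator): prefer CLUSTER-scoped pools that are Up,
// and fall back to ZONE-scoped pools when no cluster pool is usable.
class ScopeFallbackSketch {
    record Pool(long id, String scope, String status) {}

    static List<Pool> candidates(List<Pool> all) {
        List<Pool> clusterUp = all.stream()
                .filter(p -> p.scope().equals("CLUSTER") && p.status().equals("Up"))
                .collect(Collectors.toList());
        if (!clusterUp.isEmpty()) {
            return clusterUp;
        }
        // No usable cluster-wide pool (e.g. PS0/PS1 in Maintenance):
        // consider zone-wide pools (test1/test2) instead of failing.
        return all.stream()
                .filter(p -> p.scope().equals("ZONE") && p.status().equals("Up"))
                .collect(Collectors.toList());
    }
}
```

With the pools from the report (PS0/PS1 in Maintenance, test1/test2 Up), this returns the two zone-wide pools rather than an empty candidate list.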

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
