cloudstack-users mailing list archives

From Indra Pramana <in...@sg.or.id>
Subject Re: Problem in adding Ceph RBD as primary storage for CloudStack 4.1.0
Date Fri, 12 Jul 2013 06:16:41 GMT
Hi Prasanna,

Good day to you, and thank you for your e-mail.

Yes, the file exists. I can access the file from the management server and
from the two hypervisor hosts if I mount it manually.

[root@cs-nas-01 /mnt/vol1/sec-storage/template/tmpl/1/3]# ls -la
total 1418787
drwxr-xr-x  2 root  wheel          4 Jul 11 20:21 .
drwxr-xr-x  3 root  wheel          3 Jul 11 20:17 ..
-rw-r--r--  1 root  wheel  725811200 Jul 11 20:21 425b9e5a-fbc7-4637-a33a-fe9d0ed4fa98.qcow2
-rw-r--r--  1 root  wheel        295 Jul 11 20:21 template.properties
[root@cs-nas-01 /mnt/vol1/sec-storage/template/tmpl/1/3]# pwd
/mnt/vol1/sec-storage/template/tmpl/1/3
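As a quick cross-check, template.properties is a plain key=value file; here is a small sketch (the `filename` and `size` field names are assumptions based on typical CloudStack templates, not taken from this thread) that compares the recorded size with the qcow2 file on disk:

```python
import os

def check_template(dir_path, properties_name="template.properties"):
    """Parse a template.properties file (simple key=value lines) and
    compare the recorded size against the template file on disk.
    The 'filename' and 'size' field names are assumed, not verified."""
    props = {}
    with open(os.path.join(dir_path, properties_name)) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                props[key.strip()] = value.strip()
    filename = props.get("filename")
    recorded = int(props.get("size", -1))
    actual = os.path.getsize(os.path.join(dir_path, filename)) if filename else None
    return recorded == actual, props
```

If the sizes disagree, the download into secondary storage was likely incomplete.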


Any advice?

Looking forward to your reply, thank you.

Cheers.



On Fri, Jul 12, 2013 at 2:07 PM, Prasanna Santhanam <tsp@apache.org> wrote:

> Can you check whether there is a file at:
> nfs://103.25.200.19/mnt/vol1/sec-storage/template/tmpl/1/3/
>
> On Fri, Jul 12, 2013 at 01:59:34PM +0800, Indra Pramana wrote:
> > Hi Prasanna,
> >
> > Thanks for your e-mail.
> >
> > I have tried restarting the management server, but the problem still
> > persists. I even tried to redo the installation and configuration from
> > scratch last night, but the problem is still there.
> >
> > I also noted that at the beginning of the logs, I found some error
> > messages saying that the template cannot be downloaded to the pool. See
> > these logs:
> >
> > http://pastebin.com/BY1AVJ08
> >
> > It says it failed because it cannot get the volume from the pool. Could
> > it be related, i.e. could the absence of the template be the reason the
> > system VMs cannot be created and started?
> >
> > I have ensured that I downloaded the system VM template using
> > cloud-install-sys-tmplt and verified that the template is already there
> > on the secondary storage server.
> >
> > Any advice is appreciated.
> >
> > Looking forward to your reply, thank you.
> >
> > Cheers.
> >
> >
> >
> > On Fri, Jul 12, 2013 at 1:21 PM, Prasanna Santhanam <tsp@apache.org> wrote:
> >
> > > It looks like a previous attempt to start the systemVMs has failed,
> > > putting the NFS storage in the avoid set. Did you try restarting your
> > > management server?
> > >
> > > This line led me to the conclusion above:
> > > 2013-07-12 13:10:48,236 DEBUG
> > > [storage.allocator.AbstractStoragePoolAllocator] (secstorage-1:null)
> > > StoragePool is in avoid set, skipping this pool
> > >
> > >
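The avoid-set behaviour described above can be sketched generically (illustrative Python only; these names are not CloudStack's actual allocator classes): a pool that caused an earlier failure is skipped on later allocation passes until the in-memory state is cleared, e.g. by restarting the management server.

```python
def choose_pool(pools, avoid):
    """Return the first pool not in the avoid set, mimicking (loosely)
    the 'StoragePool is in avoid set, skipping this pool' log line."""
    for pool in pools:
        if pool in avoid:
            print(f"StoragePool {pool} is in avoid set, skipping this pool")
            continue
        return pool
    return None  # no suitable storage pool found

# A failed systemVM start would have added the NFS pool to the avoid set:
avoid = {"nfs-primary-1"}
selected = choose_pool(["nfs-primary-1", "rbd-primary-1"], avoid)
```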
> > > On Fri, Jul 12, 2013 at 01:16:53PM +0800, Indra Pramana wrote:
> > > > Dear Wido and all,
> > > >
> > > > I have managed to get the hosts, primary and secondary storage
> > > > running.
> > > >
> > > > - 2 KVM hypervisor hosts
> > > > - One RBD primary storage
> > > > - One NFS primary storage (for system VMs, since I understand that
> > > >   system VMs cannot use RBD)
> > > > - One NFS secondary storage
> > > >
> > > > However, now I am having a problem with the system VMs: the CPVM and
> > > > SSVM are unable to start.
> > > >
> > > > Excerpt from management-server.log file is here:
> > > > http://pastebin.com/ENkpCALY
> > > >
> > > > It seems that the VMs could not be created because the allocator was
> > > > unable to find suitable StoragePools.
> > > >
> > > > I understand that system VMs will be using the NFS primary storage
> > > > instead of RBD, so I have confirmed that I am able to mount the
> > > > primary storage via NFS and have read and write access, from both
> > > > the hypervisor and the management server.
> > > >
> > > > Any advice on how I can resolve the problem so that both system VMs
> > > > are created and started?
> > > >
> > > > Looking forward to your reply, thank you.
> > > >
> > > > Cheers.
> > > >
> > > >
> > > > On Fri, Jul 12, 2013 at 9:43 AM, Indra Pramana <indra@sg.or.id> wrote:
> > > >
> > > > > Hi Wido,
> > > > >
> > > > > Thanks for the advice, I'm now able to add the RBD pool as primary
> > > > > storage.
> > > > >
> > > > > Many thanks! :)
> > > > >
> > > > > Cheers.
> > > > >
> > > > >
> > > > > On Thursday, July 11, 2013, Wido den Hollander wrote:
> > > > >
> > > > >> Hi,
> > > > >>
> > > > >> On 07/10/2013 03:42 PM, Chip Childers wrote:
> > > > >>
> > > > >>> Cc'ing Wido, our resident Ceph expert. ;-)
> > > > >>>
> > > > >>>
> > > > >> Hehe ;)
> > > > >>
> > > > >>  On Wed, Jul 10, 2013 at 05:45:25PM +0800, Indra Pramana wrote:
> > > > >>>
> > > > >>>> Dear all,
> > > > >>>>
> > > > >>>> I am installing CloudStack 4.1.0 (upgraded from 4.0.2) and I
> > > > >>>> also have a Ceph cluster running. However, I am having issues
> > > > >>>> in adding the RBD as primary storage. I tried to follow the
> > > > >>>> instructions here, but was unable to make it work:
> > > > >>>>
> > > > >>>> http://ceph.com/docs/master/rbd/rbd-cloudstack/
> > > > >>>>
> > > > >>>> I have set up a pool on the Ceph cluster. The status of the
> > > > >>>> cluster is healthy. Since I am using Ubuntu 12.04.2 LTS
> > > > >>>> (Precise) for the hypervisors, I have also compiled libvirt
> > > > >>>> manually to ensure that version 0.9.13 is installed
> > > > >>>> (previously it was 0.9.8).
> > > > >>>>
> > > > >> You can also use the Ubuntu Cloud Archive, I still need to get
> > > > >> the docs updated for that.
> > > > >>
> > > > >> I described the process in a blogpost:
> > > > >> http://blog.widodh.nl/2013/06/a-quick-note-on-running-cloudstack-with-rbd-on-ubuntu-12-04/
> > > > >>
> > > > >>>> indra@hv-kvm-01:~/rbd$ ceph
> > > > >>>> ceph> health
> > > > >>>> HEALTH_OK
> > > > >>>>
> > > > >>>> indra@hv-kvm-01:~$ ceph osd lspools
> > > > >>>> 0 data,1 metadata,2 rbd,3 sc1,
> > > > >>>>
> > > > >>>> root@hv-kvm-01:/home/indra# libvirtd --version
> > > > >>>> libvirtd (libvirt) 0.9.13
> > > > >>>>
> > > > >>>> I tried to add Primary Storage into the CloudStack zone which I
> > > > >>>> have created:
> > > > >>>>
> > > > >>>> Add Primary Storage:
> > > > >>>>
> > > > >>>> Zone: my zone name
> > > > >>>> Pod: my pod name
> > > > >>>> Cluster: my cluster name
> > > > >>>> Name: ceph-rbd-pri-storage
> > > > >>>> Protocol: RBD
> > > > >>>> RADOS Monitor: my first Ceph monitor IP address
> > > > >>>> RADOS Pool: sc1 (the pool name on Ceph cluster)
> > > > >>>> RADOS User: client.admin
> > > > >>>> RADOS Secret: /etc/ceph/ceph.client.admin.keyring (keyring file
> > > > >>>> location)
> > > > >>>>
> > > > >>>
> > > > >> This is your problem. That shouldn't be the location of the file;
> > > > >> it should be the secret itself, which is a base64-encoded string.
> > > > >>
> > > > >> $ ceph auth list
> > > > >>
> > > > >> That should tell you what the secret is.
> > > > >>
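To make the point above concrete: the value CloudStack expects is the base64 string after `key =` in the keyring, not the keyring's path. A minimal parsing sketch (the key below is a made-up placeholder; on a real cluster `ceph auth get-key client.admin` prints the secret directly):

```python
import re

# Placeholder keyring contents; the key value here is NOT a real secret.
KEYRING = """\
[client.admin]
    key = AQPlaceholderBase64String==
    caps mon = "allow *"
"""

def secret_for(keyring_text, entity="client.admin"):
    """Return the base64 'key' for the given entity from a Ceph keyring."""
    in_section = False
    for line in keyring_text.splitlines():
        line = line.strip()
        if line.startswith("[") and line.endswith("]"):
            in_section = (line == "[%s]" % entity)
        elif in_section:
            m = re.match(r"key\s*=\s*(\S+)", line)
            if m:
                return m.group(1)
    return None
```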
> > > > >>  Storage Tags: rbd
> > > > >>
> > > > >> This is the error message when I tried to add the primary storage
> > > > >> by clicking OK:
> > > > >>
> > > > >> DB Exception on: com.mysql.jdbc.JDBC4PreparedStatement@4b2eb56:
> > > > >> INSERT INTO storage_pool (storage_pool.id, storage_pool.name,
> > > > >> storage_pool.uuid, storage_pool.pool_type, storage_pool.created,
> > > > >> storage_pool.update_time, storage_pool.data_center_id,
> > > > >> storage_pool.pod_id, storage_pool.available_bytes,
> > > > >> storage_pool.capacity_bytes, storage_pool.status,
> > > > >> storage_pool.scope, storage_pool.storage_provider_id,
> > > > >> storage_pool.host_address, storage_pool.path, storage_pool.port,
> > > > >> storage_pool.user_info, storage_pool.cluster_id) VALUES (217,
> > > > >> _binary'ceph-rbd-pri-storage',
> > > > >> _binary'a226c9a1-da78-3f3a-b5ac-e18b925c9634', 'RBD',
> > > > >> '2013-07-10 09:08:28', null, 2, 2, 0, 0, 'Up', null, null, null,
> > > > >> _binary'ceph/ceph.client.admin.keyring@10.237.11.2/sc1', 6789,
> > > > >> null, 2)
> > > > >>
> > > > >> In the management-server.log file:
> > > > >>
> > > > >> 2013-07-10 17:08:28,845 DEBUG [cloud.api.ApiServlet]
> > > > >> (catalina-exec-2:null) ===START===  192.168.0.100 -- GET
> > > > >> command=createStoragePool&zoneid=c116950e-e4ae-4f23-a7e7-74a75c4ee638&podId=a748b063-3a83-4175-a0e9-de39118fe5ce&clusterid=1f87eb09-324d-4d49-83c2-88d84d7a15df&name=ceph-rbd-pri-storage&url=rbd%3A%2F%2Fclient.admin%3A_etc%2Fceph%2Fceph.client.admin.keyring%4010.237.11.2%2Fsc1&tags=rbd&response=json&sessionkey=rDRfWpqeKfQKbKZtHr398ULV%2F8k%3D&_=1373447307839
> > > > >> 2013-07-10 17:08:28,862 DEBUG [cloud.storage.StorageManagerImpl]
> > > > >> (catalina-exec-2:null) createPool Params @ scheme - rbd
> > > > >> storageHost - null hostPath -
> > > > >> /ceph/ceph.client.admin.keyring@10.237.11.2/sc1 port - -1
> > > > >> 2013-07-10 17:08:28,918 DEBUG [cloud.storage.StorageManagerImpl]
> > > > >> (catalina-exec-2:null) In createPool Setting poolId - 217 uuid -
> > > > >> a226c9a1-da78-3f3a-b5ac-e18b925c9634 zoneId - 2 podId - 2
> > > > >> poolName - ceph-rbd-pri-storage
> > > > >> 2013-07-10 17:08:28,921 DEBUG [db.Transaction.Transaction]
> > > > >> (catalina-exec-2:null) Rolling back the transaction: Time = 3
> > > > >> Name = persist; called by
> > > > >> -Transaction.rollback:890-Transaction.removeUpTo:833-Transaction.close:657-TransactionContextBuilder.interceptException:63-ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept:133-StorageManagerImpl.createPool:1378-StorageManagerImpl.createPool:147-CreateStoragePoolCmd.execute:123-ApiDispatcher.dispatch:162-ApiServer.queueCommand:505-ApiServer.handleRequest:355-ApiServlet.processRequest:302
> > > > >> 2013-07-10 17:08:28,923 ERROR [cloud.api.ApiServer]
> > > > >> (catalina-exec-2:null) unhandled exception executing api command:
> > > > >> createStoragePool
> > > > >> com.cloud.utils.exception.CloudRuntimeException: DB Exception on:
> > > > >> com.mysql.jdbc.JDBC4PreparedStatement@4b2eb56: INSERT INTO
> > > > >> storage_pool (storage_pool.id, storage_pool.name,
> > > > >> storage_pool.uuid, storage_pool.pool_type, storage_pool.created,
> > > > >> storage_pool.update_time, storage_pool.data_center_id,
> > > > >> storage_pool.pod_id, storage_pool.available_bytes,
> > > > >> storage_pool.capacity_bytes, storage_pool.status,
> > > > >> storage_pool.scope, storage_pool.storage_provider_id,
> > > > >> storage_pool.host_address, storage_pool.path, storage_pool.port,
> > > > >> storage_pool.user_info, storage_pool.cluster_id) VALUES (217,
> > > > >> _binary'ceph-rbd-pri-storage',
> > > > >> _binary'a226c9a1-da78-3f3a-b5ac-e18b925c9634', 'RBD',
> > > > >> '2013-07-10 09:08:28', null, 2, 2, 0, 0, 'Up', null, null, null,
> > > > >> _binary'ceph/ceph.client.admin.keyring@10.237.11.2/sc1', 6789,
> > > > >> null, 2)
> > > > >>          at
> > > > >> com.cloud.utils.db.GenericDaoBase.persist(GenericDaoBase.java:1342)
> > > > >>          at
> > > > >> com.cloud.storage.dao.StoragePoolDaoImpl.persist(StoragePoolDaoImpl.java:232)
> > > > >>          at
> > > > >> com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProces
> > > > >>
> > >
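For contrast with the failing INSERT above, where the keyring path ended up in the secret slot, here is a sketch of the general rbd://user:secret@monitor/pool shape; the secret is a placeholder, and a real base64 secret containing '/' or '+' would need URL-encoding (as the %2F sequences in the log suggest):

```python
from urllib.parse import urlsplit

# Well-formed shape: the secret slot carries the base64 key, not a file path.
url = "rbd://client.admin:PLACEHOLDERSECRET@10.237.11.2/sc1"
parts = urlsplit(url)
# parts.username -> RADOS user, parts.password -> secret,
# parts.hostname -> monitor address, parts.path -> '/' + pool name
```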
> > > --
> > > Prasanna.,
> > >
> > > ------------------------
> > > Powered by BigRock.com
> > >
> > >
>
> --
> Prasanna.,
>
> ------------------------
> Powered by BigRock.com
>
>
