cloudstack-users mailing list archives

From Indra Pramana <in...@sg.or.id>
Subject Re: Problem in adding Ceph RBD as primary storage for CloudStack 4.1.0
Date Fri, 12 Jul 2013 09:12:25 GMT
Hi Wido,

Noted, can't wait for 4.2 to be released. :)

Dear Prasanna, Wido and all,

I just realised that while the system VMs are running, they are still not
accessible through the public IPs assigned to them. I have been waiting for
the SSVM to download the default CentOS template, but it still doesn't
appear in the template list.
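
For what it's worth, the download progress can also be watched from the
management server side by querying the database directly. This is only a
sketch, assuming the 4.1 schema, where (to my understanding) the
template_host_ref table tracks the per-secondary-storage download state:

mysql> SELECT t.name, r.download_state, r.download_pct
       FROM cloud.template_host_ref r
       JOIN cloud.vm_template t ON t.id = r.template_id;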

I tried to SSH into the SSVM via the link-local address from the KVM host,
and running the health check /usr/local/cloud/systemvm/ssvm-check.sh shows
that the VM cannot reach anywhere: it cannot reach the public DNS server (I
used Google's 8.8.8.8), cannot reach the management server, and cannot even
reach the public IP gateway.
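
For reference, this is how I get into the SSVM from the KVM host: the
system VMs listen for SSH on port 3922 on their link-local address, and the
key path below assumes the default agent installation:

ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@169.254.x.x   # the SSVM's link-local IP
/usr/local/cloud/systemvm/ssvm-check.sh                   # then run the health check inside the VM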

Is this due to a misconfiguration of the KVM network bridges? How can I see
the mapping between the NIC interfaces of the SSVM (eth0, eth1, eth2 and
eth3), the actual physical NIC interface on the KVM hosts (eth0), and the
network bridges (cloudbr0, cloudbr1)? Are there any logs I can check to
verify that the VLAN and network bridging are working? One way I can think
of to inspect the mapping is sketched below.
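
This assumes the standard libvirt and bridge-utils tools on the host, and
the instance name is just an example:

brctl show                        # lists cloudbr0/cloudbr1 with their enslaved
                                  # physical/VLAN interfaces and vnet taps
virsh list                        # find the SSVM's instance name, e.g. s-1-VM
virsh dumpxml s-1-VM | grep -A 3 '<interface'
                                  # each <interface> element shows the source
                                  # bridge for each of the VM's NICs, in order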

Appreciate any advice.

Thank you.



On Fri, Jul 12, 2013 at 4:19 PM, Wido den Hollander <wido@widodh.nl> wrote:

> On 07/12/2013 10:14 AM, Indra Pramana wrote:
>
>> Hi Prasanna,
>>
>> I managed to fix the problem, thanks for your advice to turn the agent
>> log level to debug:
>>
>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/KVM+agent+debug
>>
>> From the log, I found out that the agent on the KVM host tried to NFS
>> mount directly to 103.25.200.19:/mnt/vol1/sec-storage/template/tmpl/1/3,
>> which was not allowed by the NFS server, since its default configuration
>> only allows mounting the root of the NFS share (/mnt/vol1/sec-storage).
>>
>>
> Ah, that's odd!
>
> Btw, in 4.2 you'll be able to deploy SSVMs on RBD as well, so that
> limitation will be gone.
>
> Wido
>
>> After I changed the NFS server configuration to allow mounting of all
>> sub-directories and re-exported the NFS share, voila: the system was
>> able to download the template, and now both system VMs (CPVM and SSVM)
>> are running!
>>
>> Many thanks for your help! :)
>>
>> Cheers.
>>
>>
>>
>> On Fri, Jul 12, 2013 at 3:31 PM, Indra Pramana <indra@sg.or.id> wrote:
>>
>>     Hi Prasanna,
>>
>>     Good day to you, and thank you for your e-mail.
>>
>>     Yes, the cloudstack-agent service is running on both the KVM hosts.
>>     There was no "cloud" user created, though, when I installed the
>>     agent. I installed the agent as root.
>>
>>     root@hv-kvm-01:/home/indra# service cloudstack-agent status
>>       * cloud-agent is running
>>
>>     root@hv-kvm-01:/home/indra# su - cloud
>>     Unknown id: cloud
>>
>>     Please advise how I can resolve this problem; shall I create the
>>     Unix "cloud" user manually? Basically, I followed these instructions
>>     to prepare the KVM host and install the CloudStack agent:
>>
>>     http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.1.0/html/Installation_Guide/hypervisor-kvm-install-flow.html
>>
>>     together with these instructions from Wido on how to prepare libvirt
>>     with Ceph RBD storage pool support:
>>
>>     http://blog.widodh.nl/2013/06/a-quick-note-on-running-cloudstack-with-rbd-on-ubuntu-12-04/
>>
>>     I have also checked /var/log/cloud/agent/agent.log and I don't see
>>     any error messages, except this one, which shows up every time I
>>     restart the agent:
>>
>>     2013-07-12 15:22:47,454 ERROR [cloud.resource.ServerResourceBase]
>>     (main:null) Nics are not configured!
>>     2013-07-12 15:22:47,459 INFO  [cloud.resource.ServerResourceBase]
>>     (main:null) Designating private to be nic eth0.5
>>
>>     More logs can be found here: http://pastebin.com/yeNmCt7S
>>
>>     I have configured the network bridges on the NIC interface as per
>>     these instructions:
>>
>>     http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.1.0/html/Installation_Guide/hypervisor-kvm-install-flow.html#hypervisor-host-install-network
>>
>>     On the zone, I used an advanced network configuration with just one
>>     physical network for management, public and guest/private traffic. I
>>     didn't include storage traffic, which by default uses the management
>>     VLAN network.
>>
>>     Please advise if there's anything else I might have been missing.
>>
>>     Looking forward to your reply, thank you.
>>
>>     Cheers.
>>
>>
>>
>>     On Fri, Jul 12, 2013 at 2:56 PM, Prasanna Santhanam <tsp@apache.org> wrote:
>>
>>         Indeed, cloudstack will go through the allocation to start up the
>>         system VMs too. So that process is failing to recognize the volume
>>         (.qcow2) present on your NFS storage.
>>
>>         Can you check if your cloudstack agent service is running on the
>>         KVM host? It should have created the user cloud; run $ id cloud
>>         to check if the user exists.
>>
>>         Did you see what's happening in the agent logs? These are under
>>         /var/log/cloud/ on your host when the systemVMs are coming up. If
>>         the logs are not showing any useful information you can turn on
>>         debug level for more verbosity.
>>
>>         See here: https://cwiki.apache.org/confluence/x/FgPMAQ
>>
>>         On Fri, Jul 12, 2013 at 02:40:35PM +0800, Indra Pramana wrote:
>>          > Hi Prasanna,
>>          >
>>          > Good day to you, and thank you for your e-mail.
>>          >
>>          > Yes, when I export the NFS, I set the permissions so that a
>>          > normal user will be able to have read/write access to the
>>          > files (no_root_squash).
>>          >
>>          > I have tested this, and I have read/write access from my KVM
>>          > hosts using a normal user. BTW, there's no "cloud" user on the
>>          > hosts; I believe it's not created during the cloudstack-agent
>>          > installation?
>>          >
>>          > In any case, do you think the template issue and the storage
>>          > pool allocation issue might be related, or are they two
>>          > different problems altogether?
>>          >
>>          > Looking forward to your reply, thank you.
>>          >
>>          > Cheers.
>>          >
>>          >
>>          >
>>          > On Fri, Jul 12, 2013 at 2:26 PM, Prasanna Santhanam
>>          > <tsp@apache.org> wrote:
>>          >
>>          > > Can you access the file as user root? Or user cloud? The
>>          > > cloudstack agent on your KVM host runs as user cloud, and
>>          > > the NFS permissions might be preventing the volume (.qcow2)
>>          > > from being accessed.
>>          > >
>>          > > On Fri, Jul 12, 2013 at 02:16:41PM +0800, Indra Pramana wrote:
>>          > > > Hi Prasanna,
>>          > > >
>>          > > > Good day to you, and thank you for your e-mail.
>>          > > >
>>          > > > Yes, the file exists. I can access the file from the
>>          > > > management server and the two hypervisor hosts if I mount
>>          > > > it manually.
>>          > > >
>>          > > > [root@cs-nas-01 /mnt/vol1/sec-storage/template/tmpl/1/3]# ls -la
>>          > > > total 1418787
>>          > > > drwxr-xr-x  2 root  wheel          4 Jul 11 20:21 .
>>          > > > drwxr-xr-x  3 root  wheel          3 Jul 11 20:17 ..
>>          > > > -rw-r--r--  1 root  wheel  725811200 Jul 11 20:21 425b9e5a-fbc7-4637-a33a-fe9d0ed4fa98.qcow2
>>          > > > -rw-r--r--  1 root  wheel        295 Jul 11 20:21 template.properties
>>          > > > [root@cs-nas-01 /mnt/vol1/sec-storage/template/tmpl/1/3]# pwd
>>          > > > /mnt/vol1/sec-storage/template/tmpl/1/3
>>          > > >
>>          > > >
>>          > > > Any advice?
>>          > > >
>>          > > > Looking forward to your reply, thank you.
>>          > > >
>>          > > > Cheers.
>>          > > >
>>          > > >
>>          > > >
>>          > > > On Fri, Jul 12, 2013 at 2:07 PM, Prasanna Santhanam
>>          > > > <tsp@apache.org> wrote:
>>          > > >
>>          > > > > Can you check whether there is a file at:
>>          > > > > nfs://103.25.200.19/mnt/vol1/sec-storage/template/tmpl/1/3/
>>          > > > >
>>          > > > > On Fri, Jul 12, 2013 at 01:59:34PM +0800, Indra Pramana wrote:
>>          > > > > > Hi Prasanna,
>>          > > > > >
>>          > > > > > Thanks for your e-mail.
>>          > > > > >
>>          > > > > > I have tried restarting the management server, and the
>>          > > > > > problem still persists. I even tried to re-do the
>>          > > > > > installation and configuration again from scratch last
>>          > > > > > night, but the problem is still there.
>>          > > > > >
>>          > > > > > I also noted that at the beginning of the logs, I found
>>          > > > > > some error messages saying that the template cannot be
>>          > > > > > downloaded to the pool. See these logs:
>>          > > > > >
>>          > > > > > http://pastebin.com/BY1AVJ08
>>          > > > > >
>>          > > > > > It says it failed because it cannot get the volume from
>>          > > > > > the pool. Could it be related, i.e. did the absence of
>>          > > > > > the template cause the system VMs to fail to be created
>>          > > > > > and started?
>>          > > > > >
>>          > > > > > I have ensured that I downloaded the system VM template
>>          > > > > > using cloud-install-sys-tmplt and verified that the
>>          > > > > > template is already there on the secondary storage
>>          > > > > > server.
>>          > > > > >
>>          > > > > > Any advice is appreciated.
>>          > > > > >
>>          > > > > > Looking forward to your reply, thank you.
>>          > > > > >
>>          > > > > > Cheers.
>>          > > > > >
>>          > > > > >
>>          > > > > >
>>          > > > > > On Fri, Jul 12, 2013 at 1:21 PM, Prasanna Santhanam
>>          > > > > > <tsp@apache.org> wrote:
>>          > > > > >
>>          > > > > > > It looks like a previous attempt to start the
>>          > > > > > > systemVMs has failed, putting the NFS storage in the
>>          > > > > > > avoid set. Did you try restarting your management
>>          > > > > > > server?
>>          > > > > > >
>>          > > > > > > This line leads me to the above mentioned:
>>          > > > > > > 2013-07-12 13:10:48,236 DEBUG
>>          > > > > > > [storage.allocator.AbstractStoragePoolAllocator]
>>          > > > > > > (secstorage-1:null) StoragePool is in avoid set,
>>          > > > > > > skipping this pool
>>          > > > > > >
>>          > > > > > >
>>          > > > > > > On Fri, Jul 12, 2013 at 01:16:53PM +0800, Indra
>>          > > > > > > Pramana wrote:
>>          > > > > > > > Dear Wido and all,
>>          > > > > > > >
>>          > > > > > > > I have managed to get the hosts and the primary and
>>          > > > > > > > secondary storage running:
>>          > > > > > > >
>>          > > > > > > > - 2 KVM hypervisor hosts
>>          > > > > > > > - One RBD primary storage
>>          > > > > > > > - One NFS primary storage (for system VMs, since I
>>          > > > > > > >   understand that system VMs cannot use RBD)
>>          > > > > > > > - One NFS secondary storage
>>          > > > > > > >
>>          > > > > > > > However, now I am having a problem with the system
>>          > > > > > > > VMs: the CPVM and SSVM are unable to start.
>>          > > > > > > >
>>          > > > > > > > An excerpt from the management-server.log file is
>>          > > > > > > > here: http://pastebin.com/ENkpCALY
>>          > > > > > > >
>>          > > > > > > > It seems that the VMs could not be created because
>>          > > > > > > > no suitable StoragePools could be found.
>>          > > > > > > >
>>          > > > > > > > I understand that the system VMs will be using the
>>          > > > > > > > NFS primary storage instead of RBD, so I have
>>          > > > > > > > confirmed that I am able to mount the primary
>>          > > > > > > > storage via NFS and have read and write access,
>>          > > > > > > > from both the hypervisor and the management server.
>>          > > > > > > >
>>          > > > > > > > Any advice on how I can resolve the problem so that
>>          > > > > > > > both system VMs are created and started?
>>          > > > > > > >
>>          > > > > > > > Looking forward to your reply, thank you.
>>          > > > > > > >
>>          > > > > > > > Cheers.
>>          > > > > > > >
>>          > > > > > > >
>>          > > > > > > > On Fri, Jul 12, 2013 at 9:43 AM, Indra Pramana
>>          > > > > > > > <indra@sg.or.id> wrote:
>>          > > > > > > >
>>          > > > > > > > > Hi Wido,
>>          > > > > > > > >
>>          > > > > > > > > Thanks for the advice, I'm now able to add the
>>          > > > > > > > > RBD pool as primary storage.
>>          > > > > > > > >
>>          > > > > > > > > Many thanks! :)
>>          > > > > > > > >
>>          > > > > > > > > Cheers.
>>          > > > > > > > >
>>          > > > > > > > >
>>          > > > > > > > > On Thursday, July 11, 2013, Wido den Hollander wrote:
>>          > > > > > > > >
>>          > > > > > > > >> Hi,
>>          > > > > > > > >>
>>          > > > > > > > >> On 07/10/2013 03:42 PM, Chip Childers wrote:
>>          > > > > > > > >>
>>          > > > > > > > >>> Cc'ing Wido, our resident Ceph expert. ;-)
>>          > > > > > > > >>>
>>          > > > > > > > >>>
>>          > > > > > > > >> Hehe ;)
>>          > > > > > > > >>
>>          > > > > > > > >>  On Wed, Jul 10, 2013 at 05:45:25PM +0800, Indra Pramana wrote:
>>          > > > > > > > >>>
>>          > > > > > > > >>>> Dear all,
>>          > > > > > > > >>>>
>>          > > > > > > > >>>> I am installing CloudStack 4.1.0 (upgraded
>>          > > > > > > > >>>> from 4.0.2) and I also have a Ceph cluster
>>          > > > > > > > >>>> running. However, I am having issues adding
>>          > > > > > > > >>>> the RBD as primary storage. I tried to follow
>>          > > > > > > > >>>> the instructions here, but was unable to make
>>          > > > > > > > >>>> it work:
>>          > > > > > > > >>>>
>>          > > > > > > > >>>> http://ceph.com/docs/master/rbd/rbd-cloudstack/
>>          > > > > > > > >>>>
>>          > > > > > > > >>>> I have set up a pool on the Ceph cluster. The
>>          > > > > > > > >>>> status of the cluster is healthy. Since I am
>>          > > > > > > > >>>> using Ubuntu 12.04.2 LTS (Precise) for the
>>          > > > > > > > >>>> hypervisors, I also have compiled libvirt
>>          > > > > > > > >>>> manually to ensure that version 0.9.13 is
>>          > > > > > > > >>>> installed (previously it was 0.9.8).
>>          > > > > > > > >>>>
>>          > > > > > > > >>>>
>>          > > > > > > > >> You can also use the Ubuntu Cloud Archive; I
>>          > > > > > > > >> still need to get the docs updated for that.
>>          > > > > > > > >>
>>          > > > > > > > >> I described the process in a blogpost:
>>          > > > > > > > >> http://blog.widodh.nl/2013/06/a-quick-note-on-running-cloudstack-with-rbd-on-ubuntu-12-04/
>>          > > > > > > > >>
>>          > > > > > > > >>>> indra@hv-kvm-01:~/rbd$ ceph
>>          > > > > > > > >>>> ceph> health
>>          > > > > > > > >>>> HEALTH_OK
>>          > > > > > > > >>>>
>>          > > > > > > > >>>> indra@hv-kvm-01:~$ ceph osd lspools
>>          > > > > > > > >>>> 0 data,1 metadata,2 rbd,3 sc1,
>>          > > > > > > > >>>>
>>          > > > > > > > >>>> root@hv-kvm-01:/home/indra# libvirtd --version
>>          > > > > > > > >>>> libvirtd (libvirt) 0.9.13
>>          > > > > > > > >>>>
>>          > > > > > > > >>>> I tried to add Primary Storage into the
>>          > > > > > > > >>>> CloudStack zone which I have created:
>>          > > > > > > > >>>>
>>          > > > > > > > >>>> Add Primary Storage:
>>          > > > > > > > >>>>
>>          > > > > > > > >>>> Zone: my zone name
>>          > > > > > > > >>>> Pod: my pod name
>>          > > > > > > > >>>> Cluster: my cluster name
>>          > > > > > > > >>>> Name: ceph-rbd-pri-storage
>>          > > > > > > > >>>> Protocol: RBD
>>          > > > > > > > >>>> RADOS Monitor: my first Ceph monitor IP address
>>          > > > > > > > >>>> RADOS Pool: sc1 (the pool name on the Ceph cluster)
>>          > > > > > > > >>>> RADOS User: client.admin
>>          > > > > > > > >>>> RADOS Secret: /etc/ceph/ceph.client.admin.keyring
>>          > > > > > > > >>>> (keyring file location)
>>          > > > > > > > >>>>
>>          > > > > > > > >>>
>>          > > > > > > > >> This is your problem. That shouldn't be the
>>          > > > > > > > >> location of the file; it should be the secret,
>>          > > > > > > > >> which is a base64-encoded string.
>>          > > > > > > > >>
>>          > > > > > > > >> $ ceph auth list
>>          > > > > > > > >>
>>          > > > > > > > >> That should tell you what the secret is.
>>          > > > > > > > >>
>>          > > > > > > > >>  Storage Tags: rbd
>>          > > > > > > > >>
>>          > > > > > > > >> This is the error message when I tried to add
>>          > > > > > > > >> the primary storage by clicking OK:
>>          > > > > > > > >>
>>          > > > > > > > >> DB Exception on:
>>          > > > > > > > >> com.mysql.jdbc.JDBC4PreparedStatement@4b2eb56: INSERT INTO
>>          > > > > > > > >> storage_pool (storage_pool.id, storage_pool.name,
>>          > > > > > > > >> storage_pool.uuid, storage_pool.pool_type, storage_pool.created,
>>          > > > > > > > >> storage_pool.update_time, storage_pool.data_center_id,
>>          > > > > > > > >> storage_pool.pod_id, storage_pool.available_bytes,
>>          > > > > > > > >> storage_pool.capacity_bytes, storage_pool.status,
>>          > > > > > > > >> storage_pool.scope, storage_pool.storage_provider_id,
>>          > > > > > > > >> storage_pool.host_address, storage_pool.path, storage_pool.port,
>>          > > > > > > > >> storage_pool.user_info, storage_pool.cluster_id) VALUES (217,
>>          > > > > > > > >> _binary'ceph-rbd-pri-storage',
>>          > > > > > > > >> _binary'a226c9a1-da78-3f3a-b5ac-e18b925c9634', 'RBD', '2013-07-10
>>          > > > > > > > >> 09:08:28', null, 2, 2, 0, 0, 'Up', null, null, null,
>>          > > > > > > > >> _binary'ceph/ceph.client.admin.keyring@10.237.11.2/sc1',
>>          > > > > > > > >> 6789, null, 2)
>>          > > > > > > > >>
>>          > > > > > > > >> In the management-server.log file:
>>          > > > > > > > >>
>>          > > > > > > > >> 2013-07-10 17:08:28,845 DEBUG [cloud.api.ApiServlet]
>>          > > > > > > > >> (catalina-exec-2:null) ===START===  192.168.0.100 -- GET
>>          > > > > > > > >> command=createStoragePool&zoneid=c116950e-e4ae-4f23-a7e7-
>>          > > > > > > > >> 74a75c4ee638&podId=a748b063-3a83-4175-a0e9-de39118fe5ce&
>>          > > > > > > > >> clusterid=1f87eb09-324d-4d49-83c2-88d84d7a15df&name=ceph-
>>          > > > > > > > >> rbd-pri-storage&url=rbd%3A%2F%2Fclient.admin%3A_etc%2Fc
>>          > > > > > > > >> eph%2Fceph.client.admin.keyring%4010.237.11.2%2Fsc1&
>>          > > > > > > > >> tags=rbd&response=json&sessionkey=rDRfWpqeKfQKbKZtHr398ULV%2F8k%
>>          > > > > > > > >> 3D&_=1373447307839
>>          > > > > > > > >> 2013-07-10 17:08:28,862 DEBUG
>>          > > > > > > > >> [cloud.storage.StorageManagerImpl]
>>          > > > > > > > >> (catalina-exec-2:null) createPool Params @ scheme - rbd
>>          > > > > > > > >> storageHost - null hostPath -
>>          > > > > > > > >> /ceph/ceph.client.admin.keyring@10.237.11.2/sc1 port - -1
>>
>>          > > > > > > > >> 2013-07-10 17:08:28,918 DEBUG
>>          > > > > > > > >> [cloud.storage.StorageManagerImpl]
>>          > > > > > > > >> (catalina-exec-2:null) In createPool Setting poolId - 217
>>          > > > > > > > >> uuid - a226c9a1-da78-3f3a-b5ac-e18b925c9634
>>          > > > > > > > >> zoneId - 2 podId - 2 poolName - ceph-rbd-pri-storage
>>          > > > > > > > >> 2013-07-10 17:08:28,921 DEBUG [db.Transaction.Transaction]
>>          > > > > > > > >> (catalina-exec-2:null) Rolling back the transaction: Time = 3
>>          > > > > > > > >> Name = persist; called by
>>          > > > > > > > >> -Transaction.rollback:890-Transaction.removeUpTo:833-
>>          > > > > > > > >> Transaction.close:657-TransactionContextBuilder.interceptException:63-
>>          > > > > > > > >> ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept:133-
>>          > > > > > > > >> StorageManagerImpl.createPool:1378-StorageManagerImpl.createPool:147-
>>          > > > > > > > >> CreateStoragePoolCmd.execute:123-ApiDispatcher.dispatch:162-
>>          > > > > > > > >> ApiServer.queueCommand:505-ApiServer.handleRequest:355-
>>          > > > > > > > >> ApiServlet.processRequest:302
>>          > > > > > > > >> 2013-07-10 17:08:28,923 ERROR [cloud.api.ApiServer]
>>          > > > > > > > >> (catalina-exec-2:null) unhandled exception executing api command:
>>          > > > > > > > >> createStoragePool
>>          > > > > > > > >> com.cloud.utils.exception.CloudRuntimeException: DB Exception on:
>>          > > > > > > > >> com.mysql.jdbc.JDBC4PreparedStatement@4b2eb56: INSERT INTO
>>          > > > > > > > >> storage_pool (storage_pool.id, storage_pool.name,
>>          > > > > > > > >> storage_pool.uuid, storage_pool.pool_type, storage_pool.created,
>>          > > > > > > > >> storage_pool.update_time, storage_pool.data_center_id,
>>          > > > > > > > >> storage_pool.pod_id, storage_pool.available_bytes,
>>          > > > > > > > >> storage_pool.capacity_bytes, storage_pool.status,
>>          > > > > > > > >> storage_pool.scope, storage_pool.storage_provider_id,
>>          > > > > > > > >> storage_pool.host_address, storage_pool.path, storage_pool.port,
>>          > > > > > > > >> storage_pool.user_info, storage_pool.cluster_id) VALUES (217,
>>          > > > > > > > >> _binary'ceph-rbd-pri-storage',
>>          > > > > > > > >> _binary'a226c9a1-da78-3f3a-b5ac-e18b925c9634', 'RBD', '2013-07-10
>>          > > > > > > > >> 09:08:28', null, 2, 2, 0, 0, 'Up', null, null, null,
>>          > > > > > > > >> _binary'ceph/ceph.client.admin.keyring@10.237.11.2/sc1',
>>          > > > > > > > >> 6789, null, 2)
>>          > > > > > > > >>          at
>>          > > > > > > > >> com.cloud.utils.db.GenericDaoBase.persist(GenericDaoBase.java:1342)
>>          > > > > > > > >>          at
>>          > > > > > > > >> com.cloud.storage.dao.StoragePoolDaoImpl.persist(StoragePoolDaoImpl.java:232)
>>          > > > > > > > >>          at
>>          > > > > > > > >> com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProces
>>          > > > > > > > >>
>>          > > > > > > > >>
>>          > > > > > >
>>          > > > > > > --
>>          > > > > > > Prasanna.,
>>          > > > > > >
>>          > > > > > > ------------------------
>>          > > > > > > Powered by BigRock.com
>>          > > > > > >
>>          > > > > > >
>>          > > > >
>>          > > > > --
>>          > > > > Prasanna.,
>>          > > > >
>>          > > > > ------------------------
>>          > > > > Powered by BigRock.com
>>          > > > >
>>          > > > >
>>          > >
>>          > > --
>>          > > Prasanna.,
>>          > >
>>          > > ------------------------
>>          > > Powered by BigRock.com
>>          > >
>>          > >
>>
>>         --
>>         Prasanna.,
>>
>>         ------------------------
>>         Powered by BigRock.com
>>
>>
>>
>>
>
