cloudstack-dev mailing list archives

From Kimihiko Kitase <>
Subject RE: Problem in adding Ceph RBD storage to CloudStack
Date Mon, 22 Jul 2013 10:43:18 GMT
It seems the secondary storage VM could copy the template to primary storage successfully, but the
created VM doesn't point to this volume.
If we create the VM manually and add this volume as the boot volume, it works fine.
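For reference, one way to confirm where the created VM's boot disk actually points is to dump its libvirt definition on the KVM host. This is only a diagnostic sketch; "i-2-10-VM" is a hypothetical instance name, so substitute the real name shown by virsh list.

```shell
# List all guests on the KVM host, then inspect the disk definition
# of the suspect instance ("i-2-10-VM" is a placeholder name).
virsh list --all
virsh dumpxml i-2-10-VM | grep -B2 -A6 '<disk'
# A VM booting from RBD should show a network disk source such as
#   <source protocol='rbd' name='pool/image'>
# rather than a file-backed <source file='...'/>.
```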

So it seems CloudStack cannot configure the VM correctly in a Ceph RBD environment.

Any idea?


-----Original Message-----
From: Kimihiko Kitase [] 
Sent: Monday, July 22, 2013 7:11 PM
Subject: RE: Problem in adding Ceph RBD storage to CloudStack


I am in the project with Nakajima san.

We succeeded in adding RBD storage as primary storage.
But when we try to boot CentOS as a user instance, it fails during the system logger step of the boot process.
It works fine when we boot CentOS using NFS storage.
It also works fine when we boot CentOS using NFS storage and add an additional disk from the RBD storage.

Do you have any idea how to resolve this issue?


-----Original Message-----
From: Takuma Nakajima []
Sent: Saturday, July 20, 2013 12:23 PM
Subject: Re: Problem in adding Ceph RBD storage to CloudStack

I'm sorry, but I forgot to tell you that the environment has no internet connection.
Direct connections to the internet are not allowed because of the security policy.

> No, it works for me like a charm :)
> Could you set the Agent logging to DEBUG as well and show the output of
> that log? Maybe paste the log on pastebin.
> I'm interested in the XMLs the Agent is feeding to libvirt when adding
> the RBD pool.

I thought the new libvirt would overwrite the old one, but actually both libvirt builds (with RBD
and without RBD) were installed on the system. qemu was installed from a package, so it might
have depended on the libvirt installed from the package. After deleting both libvirt installations
(the one built from source and the packaged one) and then installing libvirt from an RPM package
with RBD support, the RBD storage was registered with CloudStack successfully.
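A quick way to double-check that only one libvirt remains on the host and that the installed binaries really carry RBD support (a hedged sketch; package names can differ per distribution):

```shell
# Verify which libvirt/qemu packages are installed from RPM;
# a source-built copy would not show up here.
rpm -qa | grep -E 'libvirt|qemu'
# When the rbd block driver is built in, qemu-img lists "rbd"
# among its supported formats.
qemu-img --help | grep -i 'supported formats'
# Confirm virsh reports the RPM's libvirt version, not the source build.
virsh --version
```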

> Why not 6.4?

Because there is no internet connection, the packages in the local mirror repository may be old.
I checked /etc/redhat-release and it shows that the version is 6.3.

In the current state, although the RBD storage was added, system VMs won't start, failing with an
error like:
"Unable to get vms
org.libvirt.LibvirtException: Domain not found: no domain with matching uuid 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'"
The UUID in the error message was neither in the management server database nor on the Ceph
storage node.
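To cross-check that the UUID really is stale, it may help to compare it against what libvirt itself knows on the hypervisor. A diagnostic sketch using standard virsh subcommands:

```shell
# Enumerate every domain libvirt knows about and print its UUID,
# then compare against the UUID in the exception message.
virsh list --all
# Skip the two header lines of "virsh list" output; column 2 is the name.
for dom in $(virsh list --all | awk 'NR>2 && $2 != "" {print $2}'); do
    echo "$dom: $(virsh domuuid "$dom")"
done
```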

I tried removing the host from CloudStack and cleaning up the computing node, but it cannot be
added to CloudStack again.
The agent log says it attempted to connect to localhost:8250, though the management server address
is set to in the global settings.
The management server log is here:
( is the address of the computing node)
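The agent connecting to localhost:8250 suggests its local configuration was reset when the host was removed. On a 4.1-era RPM install the agent usually reads /etc/cloudstack/agent/agent.properties; the path, service name, and placeholder address below are assumptions, so treat this as a sketch:

```shell
# Check where the agent thinks the management server is.
grep -E '^(host|port)=' /etc/cloudstack/agent/agent.properties
# If host=localhost, point it at the management server and restart the agent.
# "192.0.2.10" is a placeholder for the real management server address.
sed -i 's/^host=.*/host=192.0.2.10/' /etc/cloudstack/agent/agent.properties
service cloudstack-agent restart
```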

The computing node is now being rebuilt.

Takuma Nakajima

2013/7/19 David Nalley <>:
> On Thu, Jul 18, 2013 at 12:09 PM, Takuma Nakajima
> <> wrote:
>> Hi,
>> I'm building CloudStack 4.1 with Ceph RBD storage using RHEL 6.3,
>> but it fails when adding the RBD storage as primary storage.
>> Does anybody know about this problem?
> Why not 6.4?
