incubator-cloudstack-dev mailing list archives

From Wido den Hollander <w...@widodh.nl>
Subject Re: fail to add rbd primary storage
Date Thu, 27 Sep 2012 14:45:42 GMT
Hi,

As requested, could you try to define the RBD storage pool manually on 
the hypervisor first?

Create a file "secret.xml":

<secret ephemeral='no' private='no'>
   <uuid>7a91dc24-b072-43c4-98fb-4b2415322b0f</uuid>
   <usage type='ceph'>
     <name>admin</name>
   </usage>
</secret>

Then run:

$ virsh secret-define secret.xml
$ virsh secret-set-value 7a91dc24-b072-43c4-98fb-4b2415322b0f <key>

Where <key> is your cephx key.
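
If you don't have the key at hand, it can usually be read straight from 
the Ceph side, e.g. (assuming the standard client.admin user):

$ ceph auth get-key client.admin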

Now, create a file "rbd-pool.xml":

<pool type='rbd'>
   <name>mycephpool</name>
   <uuid>f959641f-f518-4505-9e85-17d994e2a398</uuid>
   <source>
     <host name='1.2.3.4' port='6789'/>
     <name>rbd</name>
     <auth username='admin' type='ceph'>
       <secret uuid='7a91dc24-b072-43c4-98fb-4b2415322b0f'/>
     </auth>
   </source>
</pool>

Obviously, replace 1.2.3.4 with the IP/hostname of your monitor.
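
By the way, as far as I know libvirt also accepts multiple <host> 
elements in the pool source, so if you have more than one monitor you 
can list them all:

   <source>
     <host name='1.2.3.4' port='6789'/>
     <host name='1.2.3.5' port='6789'/>
     ...
   </source>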

Then define the pool:

$ virsh pool-define rbd-pool.xml
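
If the define succeeds, you should be able to start and inspect the 
pool with:

$ virsh pool-start mycephpool
$ virsh pool-info mycephpool

If pool-start fails, the error libvirt prints there is usually more 
useful than what ends up in the management server log.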


Let me know how that works out. This is just to rule out a problem 
with your libvirt or Ceph cluster.

Wido

On 09/27/2012 08:52 AM, coudstacks wrote:
> Is this step documented?
> RADOS user = client.admin
> RADOS secret = the key corresponding to client.admin
> What else should I do on the ceph-nodes?
>
> 2012-09-27 21:03:52,128 WARN  [cloud.storage.StorageManagerImpl] (catalina-exec-24:null) Unable to establish a connection between Host[-6-Routing] and Pool[203|RBD]
> com.cloud.exception.StorageUnavailableException: Resource [StoragePool:203] is unreachable: Unable establish connection from storage head to storage pool 203 due to java.lang.NullPointerException
>          at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:563)
>          at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:57)
>          at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2066)
>          at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1027)
>          at com.cloud.agent.Agent.processRequest(Agent.java:518)
>          at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:831)
>          at com.cloud.utils.nio.Task.run(Task.java:83)
>          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>          at java.lang.Thread.run(Thread.java:679)
>          at com.cloud.storage.StorageManagerImpl.connectHostToSharedPool(StorageManagerImpl.java:1685)
>          at com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:1450)
>          at com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:215)
>          at com.cloud.api.commands.CreateStoragePoolCmd.execute(CreateStoragePoolCmd.java:120)
>          at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:138)
>          at com.cloud.api.ApiServer.queueCommand(ApiServer.java:543)
>          at com.cloud.api.ApiServer.handleRequest(ApiServer.java:422)
>          at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:304)
>          at com.cloud.api.ApiServlet.doGet(ApiServlet.java:63)
>          at javax.servlet.http.HttpServlet.service(HttpServlet.java:689)
>          at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
>          at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
>          at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>          at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>          at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>          at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
>          at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>          at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
>          at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>          at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
>          at org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
>          at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:721)
>          at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2268)
>          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>          at java.lang.Thread.run(Thread.java:679)
> 2012-09-27 21:03:52,137 WARN  [cloud.storage.StorageManagerImpl] (catalina-exec-24:null) No host can access storage pool Pool[203|RBD] on cluster 1
> 2012-09-27 21:03:52,140 WARN  [cloud.api.ApiDispatcher] (catalina-exec-24:null) class com.cloud.api.ServerApiException : Failed to add storage pool
>
> At 2012-07-06 23:11:47, "Wido den Hollander" <wido@widodh.nl> wrote:
>>
>>
>> On 07/05/2012 11:34 PM, Senner, Talin wrote:
>>> Awesomeness Wido.  +10.  I'd be happy to do any testing on my Ubuntu
>>> 12.04 cluster + Ceph 0.48...
>>
>> Testing is needed.
>>
>> Be aware: on Ubuntu 12.04 there is a bug with libvirt and secrets.
>>
>> If you run into problems, check if libvirtd is linked against libroken:
>>
>> $ ldd /usr/sbin/libvirtd
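>>
>> For example, grepping the ldd output should make it obvious:
>>
>> $ ldd /usr/sbin/libvirtd | grep roken
>>
>> (libroken is part of the Heimdal libraries; if it shows up you are
>> likely hitting the base64 bug.)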
>>
>> If you are running without cephx you will need a patched libvirt that
>> explicitly adds "auth_supported=none" to the Qemu args; this is libvirt
>> commit ccb94785007d33365d49dd566e194eb0a022148d.
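>>
>> On the Qemu command line that ends up looking roughly like this (image
>> name and monitor address are just placeholders):
>>
>> -drive file=rbd:rbd/myimage:auth_supported=none:mon_host=1.2.3.4\:6789,...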
>>
>> Wido
>>
>>>
>>> Talin
>>>
>>> On Thu, Jul 5, 2012 at 8:50 AM, Wido den Hollander <wido@widodh.nl> wrote:
>>>>>
>>>>> Other than these limitations, everything works. You can create instances
>>>>> and attach RBD disks. It also supports cephx authorization, so no
>>>>> problem there!
>>>>
>>>>
>>>> I found a bug in libvirt under Ubuntu 12.04. In short, the base64
>>>> encoding/decoding inside libvirt is broken due to a third party library.
>>>>
>>>> For more information:
>>>> https://www.redhat.com/archives/libvir-list/2012-July/msg00058.html
>>>>
>>>>
>>>>>
>>>>> What do you need to run this patch?
>>>>> - A Ceph cluster
>>>>> - libvirt with RBD storage pool support (>0.9.12)
>>>>
>>>>
>>>> I recommend running 0.9.13 (just got out) since it contains RBD support.
>>>> But there is a bug if you're not running with cephx; this just got fixed:
>>>> http://libvirt.org/git/?p=libvirt.git;a=commit;h=ccb94785007d33365d49dd566e194eb0a022148d
>>>>
>>>> In a couple of weeks libvirt 0.9.14 will be released and that will contain
>>>> everything you need and will probably fix the base64/secret problem as well.
>>>>
>>>>
>>>>> - Modified libvirt-java bindings (jar is in the patch)
>>>>
>>>>
>>>> Tomorrow there will be a release of libvirt-java 0.4.8 which will contain
>>>> everything you need. No more need for a homebrew version of the libvirt
>>>> Java bindings; we can use the upstream ones!
>>>>
>>>> http://www.libvirt.org/git/?p=libvirt-java.git;a=summary
>>>>
>>>>
>>>>> - Qemu with RBD support (>0.14)
>>>>> - An extra field "user_info" in the storage pool table, see the SQL
>>>>> change in the patch
>>>>>
>>>>> You can fetch the code on my Github account [3].
>>>>
>>>>
>>>> Not true anymore; I'm now pushing to the "rbd" feature branch at the Apache
>>>> CloudStack repository.
>>>>
>>>>
>>>>>
>>>>> Warning: I'll be rebasing against the master branch regularly, so be
>>>>> aware that git pull may not always work cleanly.
>>>>>
>>>>> I'd like to see this code reviewed while I'm working on the latest stuff
>>>>> and getting all the patches upstream in other projects (mainly the
>>>>> libvirt Java bindings).
>>>>
>>>>
>>>> As I said, the libvirt Java bindings have gone upstream and that should be
>>>> settled by tomorrow.
>>>>
>>>> Wido
>>
>

