Subject: Re: Problem in adding Ceph RBD storage to CloudStack
From: Satoshi Shimazaki
To: dev@cloudstack.apache.org
Date: Tue, 23 Jul 2013 04:12:40 +0900

Hi,

It does not exist, as shown below.

[root@rx200s7-07m ~]# rbd -p libvirt-pool info 5e5d9b40-270b-44af-9479-782175556c47
rbd: error opening image 5e5d9b40-270b-44af-9479-782175556c47: (2) No such file or directory
2013-07-23 04:05:59.903162 7ff4d6b74760 -1 librbd::ImageCtx: error finding header: (2) No such file or directory
[root@rx200s7-07m ~]#

> Can you try an install from an ISO to see if that works?

OK, I will try it tomorrow. (It is now 4 AM JST.)

Thanks,
Satoshi Shimazaki

2013/7/23 Wido den Hollander:

> Hi,
>
> On 07/22/2013 08:55 PM, Satoshi Shimazaki wrote:
>> Hi Wido,
>>
>> Thank you for your comment.
>>
>>> What I see is "No such file or directory", so that RBD image does not
>>> exist. It seems like a copy didn't succeed, but now CloudStack thinks
>>> that the image does exist.
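
[Editor's note: before querying a specific image as above, listing the pool shows what actually exists. A minimal sketch, using the pool name from the thread (the commands require a reachable Ceph cluster):]

```shell
# List all RBD images in the pool
rbd -p libvirt-pool ls

# Then inspect a specific image; the exit status is non-zero if it is missing
rbd -p libvirt-pool info 5e5d9b40-270b-44af-9479-782175556c47 \
  || echo "image is missing -- the copy from secondary storage likely failed"
```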
>>> Does "libvirt-pool" have a RBD image with the name
>>> 5e5d9b40-270b-44af-9479-782175556c47 ?
>>
>> No, it does not.
>>
>> There is no "5e5d9b40-270b-44af-9479-782175556c47", as shown below.
>
> Oh, I meant this:
>
> $ rbd -p libvirt-pool info 5e5d9b40-270b-44af-9479-782175556c47
>
>> virsh # pool-list
>> Name                                   State    Autostart
>> -----------------------------------------
>> 3900a5bf-3362-392b-8bd0-57b10ef47bb5   active   no
>> b39ca2cd-65ea-46d5-8a71-c3a4ef95028e   active   no
>> cd6520d6-bfc3-3537-9600-7f044e11ddb1   active   no
>>
>> virsh # vol-list 3900a5bf-3362-392b-8bd0-57b10ef47bb5
>> Name                                   Path
>> -----------------------------------------
>> 6ff9719f-3e4d-4ff5-ab67-154e30c936c2   libvirt-pool/6ff9719f-3e4d-4ff5-ab67-154e30c936c2
>> 8555f35f-3ed8-436b-895a-04e88e7327e0   libvirt-pool/8555f35f-3ed8-436b-895a-04e88e7327e0
>> cd3688ab-e37b-4866-9ea7-4051b670a323   libvirt-pool/cd3688ab-e37b-4866-9ea7-4051b670a323
>>
>>> What I see is "No such file or directory", so that RBD image does not
>>> exist. It seems like a copy didn't succeed, but now CloudStack thinks
>>> that the image does exist.
>>
>> I agree, but I don't understand why it occurred...
>
> I can't tell now. Can you try an install from an ISO to see if that works?
>
> And if that works, can you deploy a fresh template to see what the copy does?
>
> It will run qemu-img twice:
>
> 1. Secondary Storage -> RBD
> 2. RBD -> RBD
>
> In 4.2 the second step will be an RBD clone operation, btw.
>
> Wido
>
>> - VMs (root disk) on NFS: it works.
>> - VMs (root disk) on RBD: it doesn't work.
>> - Data disk on RBD (attached to VMs on NFS): it works.
>>
>> Are there any other points to be checked?
>>
>> Thanks,
>> Satoshi Shimazaki
>>
>> 2013/7/23 Wido den Hollander:
>>
>>> Hi,
>>>
>>> On 07/22/2013 07:56 PM, Satoshi Shimazaki wrote:
>>>> Hi Wido,
>>>>
>>>> I'm in the project with Kimi and Nakajima-san.
>>>>
>>>> [root@rx200s7-07m ~]# ceph -v
>>>> ceph version 0.61.4 (1669132fcfc27d0c0b5e5bb93ade59d147e23404)
>>>>
>>>> The same version is installed on all the hosts (KVM host and Ceph nodes).
>>>>
>>>> Here is the KVM Agent log: http://pastebin.com/5yG1uBuj
>>>> I had set the log level to "DEBUG" and failed to create 2 instances,
>>>> "RBDVM-shimazaki-1" and "RBDVM-shimazaki-2".
>>>
>>> What I see is "No such file or directory", so that RBD image does not
>>> exist.
>>>
>>> It seems like a copy didn't succeed, but now CloudStack thinks that the
>>> image does exist.
>>>
>>> Does "libvirt-pool" have a RBD image with the name
>>> 5e5d9b40-270b-44af-9479-782175556c47 ?
>>>
>>> Wido
>>>
>>>> Thanks,
>>>> Satoshi Shimazaki
>>>>
>>>> 2013/7/23 Kimihiko Kitase:
>>>>
>>>>> Hi Wido,
>>>>>
>>>>> Thanks for your comment.
>>>>>
>>>>> If we create a VM on the NFS primary storage and mount an additional
>>>>> disk on the RBD storage, it works fine.
>>>>> If we check the VM from virt-manager, there is no virtual disk. So we
>>>>> believe the problem should be the VM configuration...
>>>>>
>>>>> We will check the Ceph version tomorrow.
>>>>>
>>>>> Thanks,
>>>>> Kimi
>>>>>
>>>>> -----Original Message-----
>>>>> From: Wido den Hollander [mailto:wido@widodh.nl]
>>>>> Sent: Monday, July 22, 2013 11:43 PM
>>>>> To: dev@cloudstack.apache.org
>>>>> Subject: Re: Problem in adding Ceph RBD storage to CloudStack
>>>>>
>>>>> Hi,
>>>>>
>>>>> On 07/22/2013 02:25 PM, Kimihiko Kitase wrote:
>>>>>> Wido, Thank you very much.
>>>>>>
>>>>>> CloudStack: 4.1.0
>>>>>> QEMU: 1.5.50
>>>>>> Libvirt: 0.10.2
>>>>>
>>>>> What version of Ceph on the nodes?
>>>>>
>>>>> $ ceph -v
>>>>>
>>>>>> We will set "DEBUG" on the agent tomorrow. But the following is the
>>>>>> command CloudStack issued. We got this command on the KVM host.
>>>>>>
>>>>>> [root@rx200s7-07m ~]# ps -ef | grep 1517
>>>>>> root 16099 1 27 19:36 ? 00:00:12 /usr/libexec/qemu-kvm
>>>>>> -name i-2-1517-VM -S -M pc-i440fx-1.6 -enable-kvm -m 256
>>>>>> -smp 1,sockets=1,cores=1,threads=1
>>>>>> -uuid e67f1707-fe92-3426-978d-0441d5000d6a
>>>>>> -no-user-config -nodefaults
>>>>>> -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/i-2-1517-VM.monitor,server,nowait
>>>>>> -mon chardev=charmonitor,id=monitor,mode=control
>>>>>> -rtc base=utc -no-shutdown -boot dc
>>>>>> -drive file=rbd:libvirt-pool/cd3688ab-e37b-4866-9ea7-4051b670a323:id=libvirt:key=AQC7OuZReMndFxAAY/qUwLbvfod6EMvgVWU21g==:auth_supported=cephx\;none:mon_host=192.168.10.20\:6789,if=none,id=drive-virtio-disk0,format=raw,cache=none
>>>>>> -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
>>>>>> -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none
>>>>>> -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
>>>>>> -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=29
>>>>>> -device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:00:0a:b9:00:16,bus=pci.0,addr=0x3
>>>>>> -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0
>>>>>> -usb -device usb-tablet,id=input0 -vnc 0.0.0.0:3 -vga cirrus
>>>>>> -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
>>>>>
>>>>> The argument to Qemu seems just fine, so I think the problem is not in
>>>>> CloudStack.
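
[Editor's note: the `-drive file=rbd:...` argument above is what libvirt generates from a network-disk definition in the domain XML. For comparison, the corresponding disk element looks roughly like this -- a sketch reconstructed from the command line in the thread; the secret UUID is a hypothetical placeholder:]

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='rbd' name='libvirt-pool/cd3688ab-e37b-4866-9ea7-4051b670a323'>
    <host name='192.168.10.20' port='6789'/>
  </source>
  <auth username='libvirt'>
    <!-- UUID of a libvirt secret holding the cephx key; hypothetical value -->
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <target dev='vda' bus='virtio'/>
</disk>
```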
>>>>>
>>>>> Wido
>>>>>
>>>>>> Thanks,
>>>>>> Kimi
>>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Wido den Hollander [mailto:wido@widodh.nl]
>>>>>> Sent: Monday, July 22, 2013 7:47 PM
>>>>>> To: dev@cloudstack.apache.org
>>>>>> Subject: Re: Problem in adding Ceph RBD storage to CloudStack
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> On 07/22/2013 12:43 PM, Kimihiko Kitase wrote:
>>>>>>> It seems the secondary storage VM could copy the template to primary
>>>>>>> storage successfully, but the created VM doesn't point to this volume.
>>>>>>>
>>>>>>> If we create the VM manually and add this volume as the boot volume,
>>>>>>> it works fine.
>>>>>>
>>>>>> Which version of CloudStack are you using?
>>>>>>
>>>>>> What is the Qemu version running on your hypervisor, and what libvirt
>>>>>> version?
>>>>>>
>>>>>> If you set the logging level on the Agent to "DEBUG", does it show
>>>>>> deploying the VM with the correct XML parameters?
>>>>>>
>>>>>> I haven't seen the things you are reporting.
>>>>>>
>>>>>> Wido
>>>>>>
>>>>>>> So it seems CloudStack cannot configure the VM correctly in a Ceph
>>>>>>> RBD environment.
>>>>>>>
>>>>>>> Any idea?
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Kimi
>>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: Kimihiko Kitase [mailto:Kimihiko.Kitase@citrix.co.jp]
>>>>>>> Sent: Monday, July 22, 2013 7:11 PM
>>>>>>> To: dev@cloudstack.apache.org
>>>>>>> Subject: RE: Problem in adding Ceph RBD storage to CloudStack
>>>>>>>
>>>>>>> Hello,
>>>>>>>
>>>>>>> I am in the project with Nakajima-san.
>>>>>>>
>>>>>>> We succeeded in adding RBD storage to primary storage.
>>>>>>> But when we try to boot CentOS as a user instance, it fails during
>>>>>>> the system logger process.
>>>>>>>
>>>>>>> It works fine when we boot CentOS using NFS storage.
>>>>>>> It works fine when we boot CentOS using NFS storage and add an
>>>>>>> additional disk from RBD storage.
>>>>>>>
>>>>>>> Do you have any idea how to resolve this issue?
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Kimi
>>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: Takuma Nakajima [mailto:penguin.trance.2716@gmail.com]
>>>>>>> Sent: Saturday, July 20, 2013 12:23 PM
>>>>>>> To: dev@cloudstack.apache.org
>>>>>>> Subject: Re: Problem in adding Ceph RBD storage to CloudStack
>>>>>>>
>>>>>>> I'm sorry, but I forgot to tell you that the environment does not
>>>>>>> have an internet connection. It is not allowed to make a direct
>>>>>>> connection to the internet because of the security policy.
>>>>>>>
>>>>>>> Wido,
>>>>>>>
>>>>>>>> No, it works for me like a charm :)
>>>>>>>>
>>>>>>>> Could you set the Agent logging to DEBUG as well and show the
>>>>>>>> output of that log? Maybe paste the log on pastebin.
>>>>>>>>
>>>>>>>> I'm interested in the XMLs the Agent is feeding to libvirt when
>>>>>>>> adding the RBD pool.
>>>>>>>
>>>>>>> I thought the new libvirt overwrote the old one, but actually both
>>>>>>> libvirt builds (with RBD and without RBD) were installed on the
>>>>>>> system. qemu was installed from the package, so it might have had a
>>>>>>> dependency on the libvirt installed from the package. After deleting
>>>>>>> both libvirt installations (from source and from package) and then
>>>>>>> installing libvirt from an RPM package with RBD support, the RBD
>>>>>>> storage was registered with CloudStack successfully.
>>>>>>>
>>>>>>> David,
>>>>>>>
>>>>>>>> Why not 6.4?
>>>>>>>
>>>>>>> Because of the lack of an internet connection, the packages in the
>>>>>>> local mirror repository may be old.
>>>>>>> I checked /etc/redhat-release and it showed the version is 6.3.
>>>>>>>
>>>>>>> In the current state, although the RBD storage was installed, the
>>>>>>> system VMs won't start, with an error like "Unable to get vms
>>>>>>> org.libvirt.LibvirtException: Domain not found: no domain with
>>>>>>> matching uuid 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'", as in
>>>>>>> http://mail-archives.apache.org/mod_mbox/cloudstack-users/201303.mbox/%3CD2EE6B3265AD864EB3EA4F5C670D256F3546C1@EXMBX01L-CRP-03.webmdhealth.net%3E
>>>>>>> The uuid in the error message was not in the database of the
>>>>>>> management server, nor on the Ceph storage node.
>>>>>>>
>>>>>>> I tried removing the host from CloudStack and cleaning up the
>>>>>>> computing node, but it cannot be added to CloudStack again.
>>>>>>> The agent log says it attempted to connect to localhost:8250, though
>>>>>>> the management server address is set to 10.40.1.190 in global settings.
>>>>>>>
>>>>>>> The management server log is here: http://pastebin.com/muGz73c0
>>>>>>> (10.40.1.24 is the address of the computing node)
>>>>>>>
>>>>>>> Now the computing node is being rebuilt.
>>>>>>>
>>>>>>> Takuma Nakajima
>>>>>>>
>>>>>>> 2013/7/19 David Nalley:
>>>>>>>
>>>>>>>> On Thu, Jul 18, 2013 at 12:09 PM, Takuma Nakajima wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> I've been building a CloudStack 4.1 setup with Ceph RBD storage on
>>>>>>>>> RHEL 6.3 recently, but it fails when adding RBD storage to primary
>>>>>>>>> storage. Does anybody know about the problem?
>>>>>>>>
>>>>>>>> Why not 6.4?
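
[Editor's note: the two qemu-img steps Wido describes earlier in the thread (Secondary Storage -> RBD, then RBD -> RBD) can be sketched as follows. All paths, pool and image names here are hypothetical, and the commands require a live Ceph cluster:]

```shell
# Step 1: copy the template from secondary storage (qcow2) into the RBD pool
qemu-img convert -f qcow2 -O raw \
  /mnt/secondary/template/tmpl.qcow2 \
  rbd:libvirt-pool/template-image

# Step 2: full copy of the template image to the new root volume (pre-4.2)
qemu-img convert -f raw -O raw \
  rbd:libvirt-pool/template-image \
  rbd:libvirt-pool/new-root-volume

# In CloudStack 4.2 step 2 becomes a snapshot + clone instead of a full copy:
# rbd snap create libvirt-pool/template-image@cloudstack
# rbd snap protect libvirt-pool/template-image@cloudstack
# rbd clone libvirt-pool/template-image@cloudstack libvirt-pool/new-root-volume
```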