cloudstack-users mailing list archives

From Pierre-Luc Dion <pd...@cloudops.com>
Subject Re: cloudstack,swift
Date Mon, 01 Feb 2016 12:54:33 GMT
If you are using Swift as secondary storage, a few bugs such as
CLOUDSTACK-9248 still remain. Syed pushed some fixes that are now in 4.7+,
if I'm correct.
We have further fixes in progress related to CLOUDSTACK-9248; a PR should
reach the list as soon as we complete all tests.

But if it's backed by Ceph, as Wido says, why use the Swift API
instead of S3?
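For reference, registering an S3-backed image store goes through the same
addImageStore API call as the Swift attempt visible in the log below, just
with S3-style detail keys. A rough sketch of how that request query string
is built; the detail key names (accesskey/secretkey/bucket/endpoint) and all
values here are placeholder assumptions, not values from this thread, so
check the addImageStore documentation for your CloudStack version:

```python
# Sketch: building an addImageStore query for an S3-backed image store,
# mirroring the Swift call in the log below. All credentials/endpoints
# here are hypothetical placeholders.
from urllib.parse import urlencode

def add_image_store_query(name, provider, details):
    """Encode addImageStore parameters the way CloudStack expects:
    each detail becomes a details[N].key / details[N].value pair."""
    params = [("command", "addImageStore"),
              ("response", "json"),
              ("name", name),
              ("provider", provider)]
    for i, (key, value) in enumerate(details.items()):
        params.append((f"details[{i}].key", key))
        params.append((f"details[{i}].value", value))
    return urlencode(params)

query = add_image_store_query("Images", "S3", {
    "accesskey": "AKEXAMPLE",          # placeholder
    "secretkey": "SECRETEXAMPLE",      # placeholder
    "bucket": "cloudstack-secondary",  # placeholder
    "endpoint": "ceph-rgw.test.bst.ru:8080",
})
print(query)
```

The bracketed detail indices are percent-encoded on the wire
(details%5B0%5D.key=…), exactly as in the debug log further down.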


PL


On Fri, Jan 22, 2016 at 6:08 AM, Wido den Hollander <wido@widodh.nl> wrote:

> Why not use S3? The RADOS Gateway speaks S3 just fine.
>
> I've been using CloudStack with Ceph S3 without any problems.
>
> Wido
>
> On 21-01-16 23:06, ilya wrote:
> > You might find this relevant
> >
> > https://issues.apache.org/jira/browse/CLOUDSTACK-9248
> >
> >
> >
> > On 1/21/16 12:32 AM, ilya wrote:
> >> ++ Wido - perhaps he has seen it.
> >> ----
> >> Hi Yuriy,
> >>
> >> I'm going to switch to English, as I'm posting this thread to the
> >> "users" mailing list.
> >>
> >> Sorry, I don't have enough experience with Ceph + CloudStack.
> >>
> >> When you ask on mailing lists, please describe the problem clearly; as
> >> written, it is hard to tell what is failing.
> >>
> >> I'd think it's related to your configuration, since the error states
> >>> There is no secondary storage VM for secondary storage host Images
> >>
> >> Do you have Secondary Storage VM running?
> >>
> >> Regards
> >> ilya
> >>
> >> On 1/19/16 10:47 PM, Юрий Карпель wrote:
> >>> Greetings!
> >>>
> >>>
> >>> I have put together a test setup with CloudStack 4.7, KVM, and Ceph
> >>> (CentOS 7).
> >>>
> >>> The cluster is for testing only so far; in short, I set up two RGWs on
> >>> civetweb, plus an NFS server and haproxy for S3/Swift:
> >>>
> >>> [client.rgw.srv-rgw01]
> >>> rgw print continue = false
> >>> host = srv-rgw01
> >>> rgw frontends = civetweb port=8080
> >>> rgw_socket_path = /tmp/radosgw.sock
> >>>
> >>> [client.rgw.srv-rgw02]
> >>> rgw print continue = false
> >>> host = srv-rgw01
> >>> rgw frontends = civetweb port=8080
> >>> rgw_socket_path = /tmp/radosgw.sock
> >>>
> >>> Checking:
> >>> [ceph@ceph-adm ~]$ swift -A http://ceph-rgw.test.bst.ru:8080/auth/v1.0/
> >>> -U cloudstack:swift -K 'KBDbLt3DJ9hhMCVuPDfX1TwtLVywa2NVtO6ODBnu' list
> >>> images
> >>> [ceph@ceph-adm ~]$
> >>> [ceph@ceph-adm ~]$ swift upload images ceph.log
> >>> ceph.log
> >>> [ceph@ceph-adm ~]$ swift stat images
> >>>                       Account: v1
> >>>                     Container: images
> >>>                       Objects: 1
> >>>                         Bytes: 96355
> >>>                      Read ACL:
> >>>                     Write ACL:
> >>>                       Sync To:
> >>>                      Sync Key:
> >>>                 Accept-Ranges: bytes
> >>>              X-Storage-Policy: default-placement
> >>> X-Container-Bytes-Used-Actual: 98304
> >>>                   X-Timestamp: 1453193667.00000
> >>>                    X-Trans-Id:
> >>> tx000000000000000000208-00569f2d4c-4395-default
> >>>                  Content-Type: text/plain; charset=utf-8
> >>> [ceph@ceph-adm ~]$
> >>>
> >>>
> >>> Adding it to CloudStack:
> >>> Name: Images
> >>> Provider: Swift
> >>> URL: http://ceph-rgw.test.bst.ru:8080/auth/v1.0/
> >>> Account:cloudstack
> >>> Username: swift
> >>> Key:
> >>>
> >>> Log:
> >>> 2016-01-20 09:35:37,688 DEBUG [c.c.a.ApiServlet]
> >>> (catalina-exec-20:ctx-72c3e20f ctx-320404d7) (logid:9e2762af) ===END===
> >>>  192.168.7.29 -- GET
> >>>
> >>> command=addImageStore&response=json&name=Images&provider=Swift&url=http%3A%2F%2Fceph-rgw.test.bst.ru%3A8080%2Fauth%2Fv1.0%2F&details%5B0%5D.key=account&details%5B0%5D.value=cloudstack&details%5B1%5D.key=username&details%5B1%5D.value=swift&details%5B2%5D.key=key&details%5B2%5D.value=KBDbLt3DJ9hhMCVuPDfX1TwtLVywa2NVtO6ODBnu&_=1453271737545
> >>> 2016-01-20 09:35:43,684 DEBUG [c.c.h.d.HostDaoImpl]
> >>> (ClusteredAgentManager Timer:ctx-96868b17) (logid:73617e92) Resetting
> >>> hosts suitable for reconnect
> >>> 2016-01-20 09:35:43,689 DEBUG [c.c.h.d.HostDaoImpl]
> >>> (ClusteredAgentManager Timer:ctx-96868b17) (logid:73617e92) Completed
> >>> resetting hosts suitable for reconnect
> >>> 2016-01-20 09:35:43,689 DEBUG [c.c.h.d.HostDaoImpl]
> >>> (ClusteredAgentManager Timer:ctx-96868b17) (logid:73617e92) Acquiring
> >>> hosts for clusters already owned by this management server
> >>> 2016-01-20 09:35:43,690 DEBUG [c.c.h.d.HostDaoImpl]
> >>> (ClusteredAgentManager Timer:ctx-96868b17) (logid:73617e92) Completed
> >>> acquiring hosts for clusters already owned by this management server
> >>> 2016-01-20 09:35:43,690 DEBUG [c.c.h.d.HostDaoImpl]
> >>> (ClusteredAgentManager Timer:ctx-96868b17) (logid:73617e92) Acquiring
> >>> hosts for clusters not owned by any management server
> >>> 2016-01-20 09:35:43,692 DEBUG [c.c.h.d.HostDaoImpl]
> >>> (ClusteredAgentManager Timer:ctx-96868b17) (logid:73617e92) Completed
> >>> acquiring hosts for clusters not owned by any management server
> >>> 2016-01-20 09:35:47,073 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> >>> (AsyncJobMgr-Heartbeat-1:ctx-2c2a88d4) (logid:053e95d5) Begin cleanup
> >>> expired async-jobs
> >>> 2016-01-20 09:35:47,100 INFO  [o.a.c.f.j.i.AsyncJobManagerImpl]
> >>> (AsyncJobMgr-Heartbeat-1:ctx-2c2a88d4) (logid:053e95d5) End cleanup
> >>> expired async-jobs
> >>> 2016-01-20 09:35:47,906 INFO  [o.a.c.s.r.NfsSecondaryStorageResource]
> >>> (pool-56-thread-1:ctx-c962141a) (logid:e48a8f70) Determined host
> >>> rgw-lb01.cloud.bstelecom.ru corresponds to IP 10.30.15.2
> >>> 2016-01-20 09:35:47,906 DEBUG [o.a.c.s.r.NfsSecondaryStorageResource]
> >>> (pool-56-thread-1:ctx-c962141a) (logid:e48a8f70) Mounting device with
> >>> nfs-style path of 10.30.15.2:/nfs
> >>> 2016-01-20 09:35:47,906 DEBUG [o.a.c.s.r.NfsSecondaryStorageResource]
> >>> (pool-56-thread-1:ctx-c962141a) (logid:e48a8f70) making available
> >>> /var/cloudstack/mnt/secStorage/c6c692a0-265d-3109-93d4-f0f65f524d84 on
> >>> nfs://rgw-lb01.test.bst.ru/nfs
> >>> 2016-01-20 09:35:47,906 DEBUG [o.a.c.s.r.NfsSecondaryStorageResource]
> >>> (pool-56-thread-1:ctx-c962141a) (logid:e48a8f70) local folder for mount
> >>> will be /var/cloudstack/mnt/secStorage/c6c692a0-265d-3109-93d4-f0f65f524d84
> >>> 2016-01-20 09:35:47,909 DEBUG [o.a.c.s.r.NfsSecondaryStorageResource]
> >>> (pool-56-thread-1:ctx-c962141a) (logid:e48a8f70) Executing: sudo mount
> >>> 2016-01-20 09:35:47,940 DEBUG [o.a.c.s.r.NfsSecondaryStorageResource]
> >>> (pool-56-thread-1:ctx-c962141a) (logid:e48a8f70) Execution is successful.
> >>> 2016-01-20 09:35:47,945 DEBUG [o.a.c.s.r.NfsSecondaryStorageResource]
> >>> (pool-56-thread-1:ctx-c962141a) (logid:e48a8f70) Some device already
> >>> mounted at
> >>> /var/cloudstack/mnt/secStorage/c6c692a0-265d-3109-93d4-f0f65f524d84, no
> >>> need to mount nfs://rgw-lb01test.bst.ru/nfs
> >>> 2016-01-20 09:35:47,951 DEBUG [o.a.c.s.r.NfsSecondaryStorageResource]
> >>> (pool-56-thread-1:ctx-c962141a) (logid:e48a8f70) Faild to get
> >>> url: http://cloudstack.apt-get.eu/systemvm/4.6/systemvm64template-4.6.0-kvm.qcow2.bz2,
> >>> due to java.io.IOException: access denied
> >>>
> >>>
> >>> As a result, the systemvm templates were not downloaded:
> >>>  [c.c.s.StatsCollector] (StatsCollector-4:ctx-4e2b563c) (logid:0523bff9)
> >>> There is no secondary storage VM for secondary storage host Images
>
