From: Chiradeep Vittal <Chiradeep.Vittal@citrix.com>
To: CloudStack Developer List <cloudstack-dev@incubator.apache.org>
Date: Fri, 24 Aug 2012 15:10:43 -0700
Subject: Re: Secondary Storage S3 Provider

On 8/23/12 9:56 AM, "Greg Burd" wrote:

>Hello all.
>
>Apologies if any of the questions I ask are overly obvious, I'm just
>diving into the Java code and trying to find my way around. My goal is
>to build a layer which allows objects destined for secondary storage to
>reside in an S3-compatible service - specifically, Riak Cloud Storage
>(which we call "Riak CS"). Later on I'd like to plumb in a way to allow
>Riak CS to provide the S3 service itself to users of CloudStack
>deployments.
>
>So it seems that to do this I'll need to implement a
>`cloud.bridge.io.s3.S3ServiceBucketAdapter` (or something with a similar
>name) which implements the `S3BucketAdapter` API, correct? As far as I
>can tell, that class will essentially just call out using the S3 API to a
>specified S3 server (in my case, a running Riak CS cluster somewhere).

If you already have the S3 API coded up, there is the option of just
using CloudStack credentials.
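For what it's worth, here is a very rough sketch of the kind of adapter you
describe. The class and method names below are invented for illustration and
are not the actual cloud-bridge S3BucketAdapter interface; it just shows the
AWS SDK for Java pointed at an S3-compatible endpoint (e.g. a Riak CS
cluster) with a CloudStack-style access/secret key pair:

// Illustrative sketch only; names do not match the real S3BucketAdapter API.
import java.io.File;
import java.io.InputStream;

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;

public class RiakCSBucketAdapter {

    private final AmazonS3 s3;

    public RiakCSBucketAdapter(String accessKey, String secretKey, String endpoint) {
        // The access/secret key pair could simply be the user's CloudStack
        // API credentials, as suggested above.
        this.s3 = new AmazonS3Client(new BasicAWSCredentials(accessKey, secretKey));
        // Point the client at the Riak CS cluster instead of s3.amazonaws.com.
        this.s3.setEndpoint(endpoint);
    }

    public void saveObject(String bucket, String key, File data) {
        s3.putObject(bucket, key, data);
    }

    public InputStream loadObject(String bucket, String key) {
        return s3.getObject(bucket, key).getObjectContent();
    }

    public void deleteObject(String bucket, String key) {
        s3.deleteObject(bucket, key);
    }
}

The real adapter would of course have to implement whatever methods the
S3BucketAdapter interface actually declares; this only shows the shape of
the outbound S3 calls.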
There is ongoing work to expose an "auth service" where you can retrieve
the secret key:
http://wiki.cloudstack.org/display/RelOps/Regions+Functional+Spec#RegionsFunctionalSpec-AuthenticationService

> I assume that somewhere in the code there is the Amazon API for
>accessing S3, correct?

Right now, there isn't, although there is work going on in this area:
http://wiki.cloudstack.org/pages/viewpage.action?pageId=9601252

>Then I'm guessing I'll need to also implement or change something related
>to auth in `cloud.bridge.auth.s3`, correct? We have an authentication
>system built into Riak CS now, somehow these two will need to merge into
>one unified auth system.

I assume that this is signature-based auth using the API keys? (There is a
rough sketch of that signing scheme at the end of this mail.)

>Finally I'll need to integrate into the build tools/scripting so that
>deployment is automated. Something like `ant deploy-riak-cs` I'm
>guessing.
>
>So, I have a few questions:

I hope you have some answers. I expect the folks working on the features I
mentioned above will pipe up about the state of the project.

>1. Am I on the right track?
>2. Has anyone already started work on building an S3 secondary storage
>backend integration?
>3. What's the best way to integrate auth?
>4. What parts of this should be designed to be reusable to provide S3
>service itself at a later date?

I think that having a functional, scalable S3 implementation is good
enough, as long as the auth integrates with CloudStack compute. For
production, you'd want an API that provides usage information so that
users can be billed for their usage of the storage service alongside
their usage of the compute service.

>5. What needs to be done to provide the expected level of integration
>into the build/test tools?
>
>Anything else? Thanks for any and all help.
>
>best,
>
>@gregburd, Basho Technologies | http://basho.com | @basho
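
P.S. On the signature question above: roughly speaking, a request is signed
by sorting the query parameters, lower-casing them, HMAC-SHA1-ing the
resulting string with the user's secret key, and Base64-encoding the digest.
The sketch below is simplified for illustration and its canonicalization
details should be treated as an assumption, not the exact server-side code:

// Illustrative only: a simplified take on signing an API request with an
// api key / secret key pair.
import java.net.URLEncoder;
import java.util.Map;
import java.util.TreeMap;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

import org.apache.commons.codec.binary.Base64;

public class SignRequest {

    public static String sign(Map<String, String> params, String secretKey)
            throws Exception {
        // Sort parameters by name and build a lower-cased canonical query string.
        StringBuilder canonical = new StringBuilder();
        for (Map.Entry<String, String> e : new TreeMap<String, String>(params).entrySet()) {
            if (canonical.length() > 0) {
                canonical.append('&');
            }
            canonical.append(e.getKey().toLowerCase()).append('=')
                     .append(URLEncoder.encode(e.getValue(), "UTF-8").toLowerCase());
        }

        // HMAC-SHA1 the canonical string with the secret key, Base64-encode,
        // then URL-encode so it can be appended as the signature parameter.
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secretKey.getBytes("UTF-8"), "HmacSHA1"));
        byte[] digest = mac.doFinal(canonical.toString().getBytes("UTF-8"));
        return URLEncoder.encode(new String(Base64.encodeBase64(digest)), "UTF-8");
    }
}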