www-infrastructure-dev mailing list archives

From "Alan D. Cabrera" <l...@toolazydogs.com>
Subject Re: Brooklyn not in http://people.apache.org/committers-by-project.html
Date Tue, 27 Jan 2015 15:06:10 GMT

> On Jan 23, 2015, at 11:57 AM, Alan D. Cabrera <list@toolazydogs.com> wrote:
> 
> 
>> On Jan 23, 2015, at 9:56 AM, Bertrand Delacretaz <bdelacretaz@apache.org> wrote:
>> 
>> On Fri, Jan 23, 2015 at 5:00 PM, Alan D. Cabrera <list@toolazydogs.com> wrote:
>>> ...At the moment, we have untold documented and undocumented processes that fiddle with
>>> the disparate and inconsistent set of metadata that represents the Apache Software Foundation....
>> 
>> You're suggesting a REST API to replace that, IMO a set of structured
>> text files in a *single* svn folder combined with a validation script
>> that people can apply before committing would also do the job.
>> 
>> That's reasonably http-friendly and you'd just need to write the
>> validation script.
> 
> I did consider that. Keeping the consolidated, canonical, non-LDAP data in Subversion
> does make sense.  Having Subversion serve up the data directly poses problems.
> 
> At my work we have used Subversion heavily to manage our metadata for the same reasons
> you mention, with pre-commit hooks performing validation.  We are now in the process of
> ripping much of it out; some of the reasons apply to the domain we wish to solve here.
> 
> My thinking is:
> 
> The granularity of access is complex.  We have a complex matrix of R/O and R/W permissions
> for public, committer, PMC, ASF member, Board, and PPMC.  Trying to fit that into Subversion
> files and folders bends and rips your data in strange ways, as one denormalizes the data to
> get specific permissions to work correctly.  Since the data gets bent and ripped, the
> requisite pre-commit logic gets split apart and so the security logic gets obfuscated.
> 
> Invariably one needs to read/write to something else, e.g. LDAP, etc. and keep it consistent.
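As a concrete illustration of the validation-script idea above, here is a minimal sketch of a pre-commit check for structured metadata files in a single svn folder. The file layout, field names, and schema are assumptions for illustration only, not an existing ASF scheme:

```python
"""Hypothetical validator for per-project metadata files, e.g.
metadata/<project>.json, run by people before committing (or from a
pre-commit hook).  The required fields below are invented for the sketch."""
import json

REQUIRED_FIELDS = {"name", "pmc_chair", "committers"}  # assumed schema

def validate_metadata(text):
    """Return a list of problems found in one metadata file (empty list = valid)."""
    problems = []
    try:
        data = json.loads(text)
    except json.JSONDecodeError as e:
        return ["not valid JSON: %s" % e]
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        problems.append("missing fields: %s" % ", ".join(sorted(missing)))
    if not isinstance(data.get("committers", []), list):
        problems.append("'committers' must be a list of availids")
    return problems
```

A wrapper script would run this over every changed file and reject the commit on any non-empty result; note this is exactly where the denormalization problem described above bites, since each permission boundary tends to force the data, and therefore the validation logic, into separate files.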

I’m going to assume that there’s lazy consensus on this and that the following steps are
the way to go:

1. architect a general REST API that captures foundation data
2. move processes to the REST API
3. refactor the disparate data sources behind the REST API to where we want to be
I realize that some processes will necessarily need to work directly against the data files/LDAP
behind the REST API, but I think the management of those sources should be done through the
REST API so that all the interactions are performed in a standard, protected, auditable way.
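The mediation idea in that paragraph can be sketched as a single API object fronting the disparate backing stores, checking permissions and recording an audit trail for every interaction. The class, method names, and ACL shape below are assumptions for illustration, not an existing Apache service:

```python
"""Minimal sketch of the proposed mediation layer: all reads and writes to
the backing stores (svn files, LDAP, ...) pass through one object that
enforces the permission matrix and keeps an audit log.  Everything here is
illustrative: a real service would sit behind HTTP and real authentication."""
import datetime

class FoundationDataAPI:
    def __init__(self, backing_store, acl):
        self.store = backing_store  # dict standing in for svn/LDAP sources
        self.acl = acl              # {principal: set of writable keys} (assumed shape)
        self.audit_log = []         # every interaction is recorded here

    def _audit(self, principal, action, key):
        stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
        self.audit_log.append((stamp, principal, action, key))

    def read(self, principal, key):
        self._audit(principal, "read", key)
        return self.store.get(key)

    def write(self, principal, key, value):
        if key not in self.acl.get(principal, set()):
            self._audit(principal, "denied-write", key)
            raise PermissionError("%s may not write %s" % (principal, key))
        self._audit(principal, "write", key)
        self.store[key] = value
```

The point of the sketch is step 3 above: once every process talks to an object like this, the stores behind it can be refactored freely without touching the callers, and the audit log gives the standard, protected, auditable access path in one place instead of scattered pre-commit hooks.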

To that end I’m going to start collecting a catalog of processes that are in place.  Please
chime in if this is not the way to go or if someone else is already working on this.


Regards,
Alan


