gump-general mailing list archives

From Wade.Stebbi...@Lawson.com
Subject Re: [RT] Gump 3.0 - Database Model
Date Thu, 16 Dec 2004 20:18:07 GMT
Stefano:

  see my responses below.

wade


Stefano Mazzocchi <stefano@apache.org> wrote on 12/15/2004 09:40:28 PM:

> [...]
> 
> What I was thinking is that this (and other of your suggestions) adds a 
> "meta-metadata" layer and I'm not sure if I want to add this complexity 
> at this point (given that the model is complex enough already).
> 
> I agree that this meta-metadata layer will be very useful (for 
> annotation, grouping and further user interaction around the collected 
> data) but this is something we can add incrementally later on.

Yes.  This is a very easy thing to add on later, "over the top" so to
speak, as none of the inner workings depend on it.  It is purely a way
to organize projects for presentation purposes.  Meta-meta?  Sure, why
not call it that.



> [...]
> 
> Ok, this is again another meta-metadata layer but this is something that 
> I'm not sure I like. It smells of overdesign and at this point I want to 
> keep features that are just critical for having the system working. "the 
> simplest thing that can possibly work".

Understood.  It is probably something more useful within my environment,
which is based on several different build systems that feed this system.


> [...]
> 
> Keep in mind that we DO NOT WANT gump to build anything that anybody 
> would start use for their own stuff. It is critical, socially and 
> politically and for the security ecosystem that gump's artifact 
> repository is not used for anything else rather than distributed gumping 
> and fallback scenarios.
> 
> Consider it a cache, a repository of "precomputed calculations" rather 
> than anything else.
> 
> This is true for executables: for javadocs and docs, this is a different 
> story but we should not attack too many problems at the same time.

I see.  Our requirement for the Artifact Repository was broader, and
thus it is "overloaded": it serves the build system itself (more Gump
like) as well as internal (to the company) users for certain artifacts.
This notion of an Artifact Repository is not very well fleshed out here
at the moment; it is mostly design ideas.  We have some pieces in
place, mostly in a crude way.


> [...]
> > 
> > In fact, at present in my schema, for a single build table entry,
> > there can be:
> > 
> >  - any number of notes
> >  - any number of artifacts
> >  - any number of results
> 
> This is interesting. How can you have different numbers of results if 
> you have only one output signal for a given build?

Ah, that all depends on how 'result' is defined.  As a "Build Results"
system, in my case, it serves more than just to feed the build system
proper.  Thus, as an example:

 1. Building (e.g., compiling) --> one result
 2. Packaging --> 2nd result
 3. First-level automated testing (e.g., unit) --> 3rd result
 4. QA testing --> 4th result
...
 N. Overall

There is usually a fixed set of "result types" on a per-project
basis: some projects might not bother with "QA testing", for
example, and some might fold packaging into the build proper.
This is all very dynamic, of course, because a new type could be
added one day and then live on, while another type could be phased
out.  The presentation is set up to handle all of this.

The one output signal to which you refer is probably #1, else it
is #N.  In my case, N is calculated and the calculation is again
a per-project parameter.  This might seem like unnecessary
overdesign to Gump, but there are reasons why this is needed
here--actually, plenty.
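To make the idea concrete, here is a minimal sketch (in Python, with
purely illustrative names--none of this reflects an actual Gump or
Lawson schema) of a build entry carrying multiple typed results, plus
a per-project calculation of the "overall" result:

```python
from dataclasses import dataclass, field

@dataclass
class Result:
    kind: str          # e.g. "build", "package", "unit-test", "qa"
    passed: bool

@dataclass
class BuildEntry:
    project: str
    results: list[Result] = field(default_factory=list)

    def overall(self) -> bool:
        # Per-project policy: here simply "all recorded results passed";
        # another project could plug in a different calculation.
        return all(r.passed for r in self.results)

entry = BuildEntry("demo")
entry.results.append(Result("build", True))
entry.results.append(Result("package", True))
entry.results.append(Result("unit-test", False))
print(entry.overall())  # False: unit testing failed
```

The point of the sketch is only that the set of result kinds, and the
rule that combines them, both vary per project rather than being fixed
by the system.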

The main points being:

 - the build system produces artifacts, and in doing so it
   generates status about that activity--hence one type of
   build result;

 - there are things we learn about those artifacts after they
   are produced--hence more results.

A bit of background on me, hopefully not to bore anyone.  I am in a
business environment now, but came from years of cross-development
builds, embedded systems, etc.  To me, a build (proper) produces a
"stream of bytes", which most people call artifacts.  That stream of
bytes is further qualified as time goes on, usually in a series of
steps, and each step is what I have defined here as a result.  Many
complex systems add a new twist in that "build tooling" itself is
produced during the build process; I try to decompose this into
separate builds, where subsequent builds then become consumers of
another build's artifacts--baselined to some level of goodness, one
would hope.  Not that I'm saying anything really new here, except
about my perspective on things.
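As a rough illustration of that decomposition--again with purely
hypothetical names, not any real build system's API--a tooling build
can publish its output as a baselined (known-good, versioned)
artifact that later builds consume instead of rebuilding inline:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Artifact:
    name: str
    version: str   # the "baselined" level of goodness

@dataclass
class Build:
    project: str
    consumes: tuple[Artifact, ...] = ()

    def produces(self, name: str, version: str) -> Artifact:
        return Artifact(name, version)

# First build produces the tooling as an ordinary artifact...
tooling_build = Build("compiler-tools")
tool = tooling_build.produces("cross-gcc", "1.4.2-baseline")

# ...and subsequent builds consume it rather than building it inline.
app_build = Build("firmware", consumes=(tool,))
print(app_build.consumes[0].name)  # cross-gcc
```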

wade