avalon-dev mailing list archives

From Stephen McConnell <mcconn...@apache.org>
Subject Re: [RT] Standardizing Meta Info
Date Tue, 08 Jul 2003 15:58:16 GMT


Berin Loritsch wrote:

> Stephen McConnell wrote:
>
>>
>> The big answer ....
>>
>> Berin Loritsch wrote:
>>
>>> The purpose of this Random
>>> Thought (RT) is to standardize the persistence of the Meta 
>>> Information, or
>>> how it is stored in the JAR files.
>>
>>
>> Good timing - I have been working on exactly this for the last few days.
>
>
> Great!
>
>>> All three of the Avalon containers have different mechanisms for 
>>> doing this,
>>> and it would be really great to standardize on one. So far, Merlin 
>>> has the
>>> most expressive format, so we might want to use that format to 
>>> handle the
>>> integration. In fact, we might want to adapt that whole library (the 
>>> Merlin Meta library) to meet the needs of all the containers. 
>>
>> I would suggest we establish the following Avalon projects:
>>
>> avalon/info-spi ........... the immutable meta-info classes
>> avalon/info ............... builders, writers and validators
>> avalon/info-tools ......... generators and other meta-info tools
>
>
> Essentially what I was going to propose. I was simply going to name
> it "meta", "meta-spi", and "meta-tools", but the "info" based name
> also works.


Actually I think sticking with "meta" may be a better approach - it 
more clearly identifies what it is.

>
>> The significant point here is that I'm proposing the separation of 
>> the meta-info (descriptors) from the meta-data (directives). I 
>> imagine a future in which we introduce avalon/data-spi, avalon/data, 
>> and avalon/data-tools as the standard framework for writing 
>> deployment descriptions.
>
>
> No problems there, mate.
>
>> Leo did a good analysis of the current implementation of the 
>> avalon.meta tags. The majority of the issues raised have already been 
>> addressed, but there is still more hashing out to be done - more on 
>> that as we break out the details.
>
>
> We can hash out the details once we have the knowns taken care of.
>
>>> Also, a point of contention in the past is one of versioning in the 
>>> service
>>> and implementation types. After more thought on the matter, I am 
>>> inclined
>>> to agree it is useful information that can be used to validate contract
>>> compatibility with the component. It should not be absolutely required,
>>> but used if it exists.
>>
>>
>> There are two versioning aspects in the existing meta-info model - 
>> service versions and component implementation versions. Service 
>> versions are used and respected in the Merlin and Phoenix platforms. 
>> Component type versioning is provided in Merlin but is not used 
>> computationally - i.e. it is considered as management information. As 
>> far as the service versioning is concerned - the current 
>> implementation defaults to "1.0", which is a bad thing. Basically we 
>> need to default to null and make sure that we are clear about what 
>> null implies when doing a version check. My position is that null 
>> should result in an "undefined" version, in which case any service 
>> version would match. This would be consistent with Fortress but would 
>> need some adjustments in Merlin to sync with the semantics.
>
>
> Ok. How about adjusting the Version class so that -1 represents
> undefined? That way we can also have an isDefined() or isValid() type
> of check. The compareTo() and equals() methods would be adjusted so
> that an undefined version is always equal to any other version. Calls
> to getMajor(), getMinor(), and getMicro() would all return -1 if there
> is no valid version info.


Sounds reasonable - I haven't looked at the Version implementation in 
at least 6 months, so I'll need to refresh my memory a bit.
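
Just to make the idea concrete, here is a rough sketch of what that 
could look like (hypothetical code - not the actual framework Version 
class; the relaxed equals() contract is exactly as proposed above):

    // Sketch only: a Version where -1 means "undefined".
    public final class Version implements java.io.Serializable
    {
        static final long serialVersionUID = 1L;

        private final int m_major;
        private final int m_minor;
        private final int m_micro;

        public Version( int major, int minor, int micro )
        {
            m_major = major;
            m_minor = minor;
            m_micro = micro;
        }

        // false if no version was ever declared
        public boolean isDefined()
        {
            return m_major >= 0;
        }

        public int getMajor() { return m_major; }  // -1 when undefined
        public int getMinor() { return m_minor; }
        public int getMicro() { return m_micro; }

        // an undefined version matches anything - note this
        // deliberately relaxes the usual equals() contract
        public boolean equals( Object other )
        {
            if( !( other instanceof Version ) ) return false;
            Version v = (Version) other;
            if( !isDefined() || !v.isDefined() ) return true;
            return m_major == v.m_major
                && m_minor == v.m_minor
                && m_micro == v.m_micro;
        }
    }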

>
>>> Merlin and Phoenix both use the META-INF/MANIFEST.MF format to 
>>> identify the
>>> components. In the end this might be the best approach as it 
>>> identifies the types, rather than the services, as primary - which 
>>> seems to be more common.
>>> More thoughts on this would be welcome. 
>>
>>
>> The usage of manifest meta info in Phoenix and Merlin is limited to 
>> the support for the Sun Optional Extensions Specification. This is a 
>> different level of granularity than components. Phoenix discovers 
>> components based on the declarations in its assembly file whereas 
>> Merlin scans jar files for meta-info. The only role that manifest 
>> info plays is to facilitate the automation of classpath creation 
>> based on jar file dependency statements - but this is relatively 
>> independent of the meta question.
>
>
> Ok. So the manifest info is essentially a moot point, and the
> only critical thing is the assembly - as this determines how things get
> mapped together and which components are absolutely required.


Yep
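
For anyone following along - the manifest usage in question is just the 
standard optional package dependency attributes (illustrative values 
only):

    Extension-List: avalon-framework
    avalon-framework-Extension-Name: avalon-framework-api
    avalon-framework-Specification-Version: 4.1
    avalon-framework-Implementation-Version: 4.1.4

A container can walk these declarations to assemble a classpath from 
jar dependencies - which, as noted above, is independent of component 
meta info.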

>
>>> XML file format differences can be easily accounted for by using a 
>>> simple
>>> transformation to convert one XML format to another. Not to mention 
>>> that
>>> Merlin Meta does read Phoenix meta info natively. That is a good thing. 
>>
>>
>> The Merlin approach is to use the Type object as the transformation 
>> source. For example, when generating an XML source from javadoc tags, 
>> Merlin uses the javadoc tags to create a Type instance, then writes 
>> that instance out to a particular target format. Current input and 
>> output mappings look like:
>>
>>>
>>> The main question is how easy is it to extend Merlin Meta? If the 
>>> data model
>>> is too specific, then we can run into a problem. If the data model 
>>> is nice and
>>> generic, then all is well.
>>
>>
>> From experience this is not such a simple question. The flexibility 
>> in the current model is achieved through an ability to supplement a 
>> type description with attributes at just about any level. This is 
>> totally sufficient for even the most strange and bizarre extensions. 
>> The limitations concern the introduction of a new formalism - for 
>> example, lifecycle extensions required the addition of new state, 
>> which in turn requires explicit version management of the 
>> serializable implementation. I'm sure there is some additional 
>> smart stuff that can be incorporated to better handle multiple 
>> versions of serialized content, but I'm not so up-to-speed in this area.
>
>
> Hmm. Truth be told, the attributes work well for most things and I am
> happy with that. However, not all remoting (an assumption here) requires
> serialization of objects, merely manipulation of them. What is most 
> critical
> in this situation (as I learned from M$ tutorials) is that the version of
> the serialization mechanism is the same. The information can be read and
> interpreted as long as the serialization mechanism does not alter the
> order or format of the persistence. We can tackle that side of the
> equation later, in the containers.


I think it has to be tackled at the meta package layer - but I don't 
think it's a big deal - it seems to be just a question of getting the 
serialization identifiers in place (though it's not an area I'm very 
familiar with). Maybe someone else here knows a bit more about this.
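
Concretely, I'm thinking of nothing more exotic than declaring explicit 
identifiers on the serializable descriptor classes - a sketch, using 
the Type descriptor as an example:

    // Sketch: pin the stream identifier so that compatible changes
    // (e.g. adding lifecycle-extension state) don't invalidate
    // previously serialized descriptors.
    public class Type implements java.io.Serializable
    {
        // Without an explicit value the JVM computes one from the
        // class shape, so adding a field would break deserialization
        // of old content.
        static final long serialVersionUID = 1L;

        // ... descriptor state ...
    }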

>
> Again, we will attack things as we need to. No sense in trying to think
> of all possible uses when we haven't come across all possible use cases
> yet.
>
>> The very big plus about the current model is that it is serializable. 
>> This means that the meta-info model can be passed across the wire 
>> between management tools and deployment engines (containers). The 
>> same thing applies to meta-data. Combining serializable meta-info 
>> with serializable meta-data means we can arrive at a serializable 
>> meta-model. This has really big implications with respect to 
>> distributed management (just for reference - it's the serializable 
>> meta-data and meta-model management that I've been working on most of 
>> last week).
>
>
> In my mind, the separation of meta-info and meta-data is largely 
> artificial.
> The difference is what semantically we are trying to convey. We have a 
> set
> of information about components, so it is all meta information. Meta data
> would be data about data, for example "how many rows were retrieved 
> with a
> query?", "how many columns were received?", etc. Our concern is mainly 
> with
> the meta information.


I see the distinction as equivalent in importance to the notion of 
interface and implementation. The meta-info declares *criteria* - e.g. 
this component *needs* a context entry called "fred" that is castable to 
a "String" (i.e. think of meta-info as the contract). Meta-data 
describes solutions for achieving that *criteria* (i.e. think of 
meta-data as an implementation).

One of the reasons why Merlin is interoperable with Phoenix meta-info is 
because of this separation. The Phoenix <blockinfo> descriptor is pure 
meta-info (contract). Merlin applies a distinctly different meta-data 
model as compared to Phoenix - reflecting a different implementation 
strategy.
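
To put the contract/implementation split in concrete terms - a rough 
sketch (hypothetical classes, not the actual merlin-meta API):

    // meta-info: immutable criteria declared by a component type,
    // e.g. "I need a context entry 'fred' castable to String"
    final class EntryDescriptor
    {
        final String m_key;   // e.g. "fred"
        final String m_type;  // the entry must be castable to this

        EntryDescriptor( String key, String type )
        {
            m_key = key;
            m_type = type;
        }
    }

    // meta-data: one possible solution to that criteria, supplied
    // by a deployment directive rather than by the component itself
    final class EntryDirective
    {
        final String m_key;
        final String m_value; // a concrete value satisfying the contract

        EntryDirective( String key, String value )
        {
            m_key = key;
            m_value = value;
        }
    }

The descriptor travels with the component type; the directive belongs 
to a particular deployment profile.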

Anyway - I'm going to do an RT on the subject of meta info/data/model - 
current thinking and so on.

Cheers, Steve.

-- 

Stephen J. McConnell
mailto:mcconnell@apache.org
http://www.osm.net

Sent via James running under Merlin as an NT service.
http://avalon.apache.org/sandbox/merlin





