archiva-dev mailing list archives

From Brett Porter <>
Subject Re: [comments] metadata-updater consumer in 1.2
Date Fri, 15 Aug 2008 06:47:20 GMT
Generally in favour, though I agree with Deng and have a couple more comments below.

On 14/08/2008, at 8:58 PM, James William Dumay wrote:

> On 14/08/2008, at 6:58 PM, Maria Odea Ching wrote:
>>> How I propose this will work:
>>> 1. consume both POM artifact and metadata events
>>> 2. create a new maven-metadata.xml if it does not exist (only if
>>> the event was a POM event)
>>> 3. check if a metadata update is actually needed (i.e., if the
>>> artifact is already in the existing metadata, do nothing)
>>> 4. merge metadata given a new POM event or metadata event.
>> Hmm, what if the metadata is broken or corrupted (e.g. there are
>> versions specified in the metadata that don't actually exist in the
>> repo)? Will the metadata be checked for that and updated too?
> Removing missing versions is a little bit more difficult. If there
> was some differentiation between a managed repository used for
> deployment and one used as a proxy cache, this might be possible.

Isn't that what we have now by the filename differentiation?

> Metadata in the managed repository shows an incomplete view of one  
> or more remote repositories that may not have their versions on disk  
> yet - so in this case I believe we should keep those versions and  
> not remove them.

I thought we proxied metadata requests first then merged them so we  
had a full picture?
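
For what it's worth, the merge I picture is basically a union of the version lists, keeping anything the remote metadata mentions, per James's point. A rough sketch - the class and method names are made up, not our actual code:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Rough sketch of merging the version list from a locally generated
// maven-metadata.xml with one fetched from a remote repository.
// Versions only the remote side knows about are kept. Names here are
// illustrative, not Archiva's real API.
public class MetadataMerge {

    static List<String> mergeVersions(List<String> local, List<String> remote) {
        // LinkedHashSet keeps first-seen order and drops duplicates
        Set<String> merged = new LinkedHashSet<>(local);
        merged.addAll(remote);
        return new ArrayList<>(merged);
    }
}
```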

I'm a bit worried about losing this functionality (though TBH, I've  
had trouble getting it to work consistently in the current version  
anyway) - it's useful to be able to delete artifacts from the file  
system and have it clean up after itself.

>>> Advantages:
>>> * cuts out the lossy conversion from path ->
>>> Artifact/VersionedReference -> path
>>> * can figure out the correct location, groupId, artifactId and
>>> version of the metadata file to be updated or created by reading
>>> POM metadata.

Sounds good. Will it fall back to the directory structure if the POM  
is not present/invalid? (Think jpox 1.1.9).
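
To illustrate what I mean by falling back - something like this, recovering the coordinates from the default layout path when the POM can't be read (illustrative only, not real Archiva code):

```java
// Illustrative fallback for when the POM is missing or unparsable
// (the jpox 1.1.9 case): recover groupId/artifactId/version from the
// default repository layout path instead of the POM. A sketch of the
// idea, not Archiva's actual layout code.
public class GavFromPath {

    static String[] gavFromPath(String relativePath) {
        // e.g. "org/jpox/jpox/1.1.9/jpox-1.1.9.jar"
        String[] parts = relativePath.split("/");
        String version = parts[parts.length - 2];
        String artifactId = parts[parts.length - 3];
        StringBuilder groupId = new StringBuilder();
        for (int i = 0; i < parts.length - 3; i++) {
            if (i > 0) {
                groupId.append('.');
            }
            groupId.append(parts[i]);
        }
        return new String[] { groupId.toString(), artifactId, version };
    }
}
```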

>>> * You would be able to transform a local repository into a server
>>> repository

Not sure I understand this one?

>>> * This implementation will have unit tests.

tests FTW :)

>>> Disadvantages:
>>> * May not have the entire model available on disk. In this case, if
>>> the parent is needed for a new metadata file to be written, then we
>>> simply do nothing with this consumer event. Later scans will
>>> probably happen at a time when the parent becomes available.
>>> * Walking the POM inheritance tree may be a little slower, as you
>>> will need a read/parse every time you want to go up one level in
>>> the tree.
>>> Thoughts? Questions?

I don't really grok either of these - all the information for the  
metadata (GAV) is in the uninherited POM, right?
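
The only case I can think of where the uninherited POM isn't enough is a child POM that omits groupId or version and inherits them from its <parent> element - something like this toy model (not Maven's real model API):

```java
// Toy model (not Maven's model API) of the case where parent walking
// matters: a child POM omitting groupId or version and inheriting
// them from its <parent> element.
class Pom {
    String groupId;
    String artifactId;
    String version;
    Pom parent;

    String effectiveGroupId() {
        // fall back to the parent's coordinates when the child omits them
        return groupId != null ? groupId : parent.effectiveGroupId();
    }

    String effectiveVersion() {
        return version != null ? version : parent.effectiveVersion();
    }
}
```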

Here are the use cases I was thinking of testing explicitly, btw:
* make sure versions are deleted when the artifacts are
* make sure the metadata is recreated when it is deleted and the  
artifacts are still intact
* make sure it is updated when an artifact is added via the scan
* various update/create scenarios for proxying
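
For the first two cases, I'm picturing the version list being rebuilt from whatever the scan actually finds on disk, roughly like this (a sketch, not the real consumer - and for a proxy cache we'd still keep remote-only versions, per James's point above):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.TreeSet;

// Sketch of the first two use cases: if the version list is rebuilt
// from the version directories the scan actually finds, then deleting
// an artifact drops its version on the next scan, and deleted metadata
// can be recreated from intact artifacts. "versionDirs" stands in for
// the real scan results; this is not Archiva's consumer API.
public class MetadataFromScan {

    static List<String> versionsFromScan(Collection<String> versionDirs) {
        // TreeSet de-duplicates and sorts, matching metadata's ordered list
        return new ArrayList<>(new TreeSet<>(versionDirs));
    }
}
```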


>> Sounds good to me :)
> Cool :)
> James

Brett Porter
