archiva-dev mailing list archives

From James William Dumay <ja...@atlassian.com>
Subject Re: [comments] metadata-updater consumer in 1.2
Date Wed, 27 Aug 2008 03:23:48 GMT
Brett,

On 15/08/2008, at 4:47 PM, Brett Porter wrote:

> Generally in favour, though I agree with Deng and have a couple more  
> questions.
>
> On 14/08/2008, at 8:58 PM, James William Dumay wrote:
>
>>
>> On 14/08/2008, at 6:58 PM, Maria Odea Ching wrote:
>>>>
>>>> How I propose this will work:
>>>> 1. consume both POM artifact and metadata events
>>>> 2. create a new maven-metadata.xml if it does not exist (only if
>>>> the event was a POM event)
>>>> 3. check if a metadata update is actually needed (i.e., if the
>>>> artifact is already in the existing metadata, do nothing)
>>>> 4. merge metadata given a new POM event or metadata event.
>>>
>>>
>>> Hmm, what if the metadata is broken or corrupted (e.g. there are
>>> versions specified in the metadata that don't actually exist in the
>>> repo)? Will the metadata be checked for that and get updated too?
>>>
>>
>> Removing missing versions is a little bit more difficult. If there  
>> was some differentiation between a managed repository used for  
>> deployment or used as a proxy cache this might be possible.
>
> Isn't that what we have now by the filename differentiation?

Sure, but say we deleted all the maven-metadata files (whether they came  
from remote repositories or not) and then ran this consumer; we still  
wouldn't be able to tell which was which.

>
>
>>
>>
>> Metadata in the managed repository shows an incomplete view of one  
>> or more remote repositories that may not have their versions on  
>> disk yet - so in this case I believe we should keep those versions  
>> and not remove them.
>
> I thought we proxied metadata requests first then merged them so we  
> had a full picture?

We still do that; that will not change. What I meant was that a version  
may not exist on disk but exist on the remote repository, and the  
metadata should still reflect that.
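To make the rule concrete, here is a minimal sketch (not Archiva's actual merge code; `mergeVersions` and its callers are hypothetical names): the rebuilt metadata should be the union of versions found on disk and versions already recorded from remote metadata, so a remote-only version is never dropped.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the merge rule discussed above: versions known
// only from remote metadata are kept, not removed, when we rebuild the
// managed repository's maven-metadata.xml.
public class MetadataMergeSketch {
    static List<String> mergeVersions(List<String> onDisk, List<String> fromRemoteMetadata) {
        Set<String> merged = new LinkedHashSet<>(onDisk); // preserve insertion order
        merged.addAll(fromRemoteMetadata);                // keep remote-only versions
        return List.copyOf(merged);
    }

    public static void main(String[] args) {
        List<String> merged = mergeVersions(
                List.of("1.0", "1.1"),           // versions present on disk
                List.of("1.1", "1.2-SNAPSHOT")); // versions in remote metadata
        System.out.println(merged); // [1.0, 1.1, 1.2-SNAPSHOT]
    }
}
```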

>
>
> I'm a bit worried about losing this functionality (though TBH, I've  
> had trouble getting it to work consistently in the current version  
> anyway) - it's useful to be able to delete artifacts from the file  
> system and have it clean up after itself.

The current implementation is a bit touch and go. I'm not sure how this  
behaviour should work just yet; the safest bet would be to keep an  
artifact's metadata around as long as the artifact itself still exists.


>
>
>>
>>
>>>
>>>>
>>>>
>>>> Advantages:
>>>> * cuts out the lossy conversion from path -> Artifact/ 
>>>> VersionedReference ->
>>>> path
>>>> * can figure out the correct location, groupId, artifactId and  
>>>> version of
>>>> the metadata file to be updated or created by reading POM metadata.
>
> Sounds good. Will it fall back to the directory structure if the POM  
> is not present/invalid? (Think jpox 1.1.9).

Yeah, we can do that.
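For the record, roughly what I have in mind for that fallback (a hypothetical helper, not the actual consumer code): derive the GAV from the standard repository layout `.../group/path/artifactId/version/file` when the POM is missing or unparseable.

```java
// Hypothetical sketch of the directory-structure fallback: when the POM is
// missing or invalid (the jpox 1.1.9 case), derive groupId/artifactId/version
// from the standard repository layout group/path/artifactId/version/file.
public class GavFromPath {
    static String[] gavFromPath(String path) {
        String[] parts = path.split("/");
        if (parts.length < 4) {
            throw new IllegalArgumentException("not a standard layout path: " + path);
        }
        String version = parts[parts.length - 2];
        String artifactId = parts[parts.length - 3];
        // everything before artifactId is the groupId, dot-separated
        StringBuilder groupId = new StringBuilder();
        for (int i = 0; i < parts.length - 3; i++) {
            if (i > 0) groupId.append('.');
            groupId.append(parts[i]);
        }
        return new String[] { groupId.toString(), artifactId, version };
    }

    public static void main(String[] args) {
        String[] gav = gavFromPath("org/apache/archiva/archiva-core/1.2/archiva-core-1.2.pom");
        System.out.println(gav[0] + ":" + gav[1] + ":" + gav[2]); // org.apache.archiva:archiva-core:1.2
    }
}
```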

>
>
>>>>
>>>> * You would be able to transform a local repository into a server
>>>> repository
>
> Not sure I understand this one?

The on-disk format for remote and local repositories is slightly  
different. The consumer could potentially morph local repo metadata into  
server metadata (Wendy was musing about this on IRC not long ago).

>
>
>>>>
>>>> * This implementation will have unit tests.
>
> tests FTW :)

Tests ^_^

>
>
>>>>
>>>>
>>>> Disadvantages:
>>>> * May not have the entire model available on disk. In this case  
>>>> if the
>>>> parent is needed for a new metadata file to be written then we  
>>>> simply do
>>>> nothing with this consumer event. Later scans will probably  
>>>> happen at a
>>>> time
>>>> when the parent becomes available.
>>>> * Walking the POM inheritance tree may be a little slower as you  
>>>> will need
>>>> a
>>>> read/parse for every time you want to go up one level in the tree.
>>>>
>>>> Thoughts? Questions?
>
> I don't really grok either of these - all the information for the  
> metadata (GAV) is in the uninherited POM, right?
>
> Here are the use cases I was thinking of testing explicitly, btw:
> * make sure versions are deleted when the artifacts are
> * make sure the metadata is recreated when it is deleted and the  
> artifacts are still intact
> * make sure it is updated when an artifact is added via the scan
> * various update/create scenarios for proxying
>
> Cheers,
> Brett
>
>>>>
>>>
>>>
>>> Sounds good to me :)
>>
>> Cool :)
>>
>> James
>
> --
> Brett Porter
> brett@apache.org
> http://blogs.exist.com/bporter/
>

