archiva-dev mailing list archives

From Marc Lustig
Subject Re: MRM-1351: please advise
Date Tue, 02 Mar 2010 09:15:53 GMT

brettporter wrote:
> On 02/03/2010, at 1:44 AM, Marc Lustig wrote:
>> What we need is a process to automatically verify the integrity of an
>> artifact by uploading a local hashcode. Ideally, the verification takes
>> place in a single transaction (HTTP-request).
>> I suppose modifying the deploy:deploy goal is not practical, as it may
>> have
>> impacts for a wide range of Maven users, and the Maven leaders will
>> probably
>> veto against it.
> The main problem with this is that Wagon currently streams the upload, and
> calculates the checksum as it goes, to upload as a separate file
> afterwards (remembering that, I don't know why I thought the checksum went
> first). Sending the checksum in a header would require reading a file
> twice - not a big deal, but a fair change to the way it works right now.
> It's not unreasonable, but it's probably not that necessary given that
> other checks can find the problem. As we seem to have discovered on
> users@, the content length check is probably already triggering for you
> anyway. If it's an incorrectly uploaded checksum instead, that can be done
> without additional goals...
>> But what do you think about adding a subgoal "verify" to the deploy
>> plugin?
>> That way, the following call would deploy and verify an artifact, using -
>> well - not a single transaction, but at least a single mvn-call:
>> "deploy:deploy deploy:verify"
>> The deploy:verify subgoal could presume that Maven has been configured to
>> create hash-codes in the local repo. So what deploy:verify could
>> basically
>> do is simply uploading an ordinary artifact .md5 or .sha1 using DAV.
>> Archiva identifies the verification task based on the file-suffix.
>> What will Archiva do: 
>> - compares the uploaded hashcode with the one that has been created
>> - in case the hashes match: return HTTP 200 (OK)
>> - in case the hashes do NOT match: all Archiva-artifacts for the given
>> version (jar, pom, hashes, xml, etc.) will be deleted. returned is HTTP
>> 400
>> (?) to indicate that the hashes did not match
>> The deploy:verify subgoal then outputs corresponding messages.
>> Failed verification should result in a BUILD ERROR message, of course.
> Is there a reason this needs a separate goal? Couldn't Archiva do this
> same behaviour when the checksum is first uploaded as part of deploy?
> This could be used to drive an "atomic deployment" of an artifact - all
> deployments go to a temporary location until a checksum-verified POM
> arrives, and if everything is valid it gets automatically pushed into the
> repository.
> - Brett

Yes, the idea of committing a deployment as a single transaction ("atomic"),
including the checksum (hashcode), should certainly be the goal, IMO.
What occurred to me now is that instead of sending the checksum as an
additional file, we could simply add the checksum as an HTTP header to the
DAV request that sends the artifact.
That way the contract of the deploy process is not changed, and we avoid
compatibility issues with other repository managers.
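To make the idea concrete, here is a minimal sketch of the client side. The header name "X-Checksum-SHA1" is purely an assumption for illustration, not part of any existing Wagon or WebDAV contract:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Sketch of the client side of the proposal. "X-Checksum-SHA1" is a
// made-up header name, not an existing Wagon or Archiva convention.
public class ChecksumHeader {
    static final String HEADER = "X-Checksum-SHA1";

    // Hex-encode the SHA-1 of the artifact bytes, the same value Wagon
    // currently uploads afterwards as a separate .sha1 file.
    static String sha1Hex(byte[] artifact) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest(artifact)) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        byte[] artifact = "example artifact content".getBytes(StandardCharsets.UTF_8);
        // The DAV PUT for the artifact would carry this one extra header:
        System.out.println(HEADER + ": " + sha1Hex(artifact));
    }
}
```

As Brett noted, computing the hash up front means reading the file before (or while buffering) the streamed upload, but the wire format of the deploy itself stays unchanged.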

Should the checksum be created on the fly, or should it be read from the
hash files Maven already created in the local repo?
Should only one of md5 or sha1 be sent, or both checksums?

Regarding Archiva, only a minor change is needed. Instead of placing the
file in the managed repo unverified, the checksum needs to be read from the
HTTP header and compared with a freshly generated checksum based on the file
received. We will need to discuss the proper place to add the code.
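The server-side check could look roughly like this. This is only a sketch under the same assumed "X-Checksum-SHA1" header; where it would actually hook into Archiva's DAV servlet is exactly the open question:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Sketch of the proposed Archiva-side check, assuming the deploy request
// carried the client's hash in a hypothetical "X-Checksum-SHA1" header.
public class DeployVerifier {

    static String sha1Hex(byte[] received) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest(received)) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    // Returns the HTTP status Archiva would answer with: 200 when the
    // freshly computed hash matches the header, 400 otherwise. On 400 the
    // proposal is to also delete the files uploaded for that version.
    static int verify(byte[] received, String headerChecksum) throws Exception {
        return sha1Hex(received).equalsIgnoreCase(headerChecksum) ? 200 : 400;
    }

    public static void main(String[] args) throws Exception {
        byte[] body = "example artifact content".getBytes(StandardCharsets.UTF_8);
        System.out.println(verify(body, sha1Hex(body))); // matching hash -> 200
    }
}
```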

How does that plan sound to you?
