ant-dev mailing list archives

From "Stephen McConnell" <mcconn...@dpml.net>
Subject RE: auto download of antlibs
Date Mon, 07 May 2007 16:34:03 GMT
 

> -----Original Message-----
> From: Steve Loughran [mailto:stevel@apache.org] 
> Sent: Friday, 4 May 2007 5:27 PM
> To: Ant Developers List
> Subject: auto download of antlibs
> 
> 
> One thing I've been thinking of this week is how could we 
> work with Ivy for automatic antlib download.
> 
> No code right now, just some thoughts
> 
> 
> 1. add a -offline argument to say "we are offline". This will 
> set a property (ant.build.offline), and the <offline> test 
> will work. It is meant to tell things like Ivy that we are 
> offline. At some point we could add some way for Ant to guess 
> whether the net is there or not, if Java integrates with the 
> OS properly (there is an API call for this in J2ME, just not Java SE).

Sounds like we are mixing a characteristic of a network connection with a
policy decision.  When we talk about being offline we are normally
describing a situation in which a TCP/IP connection is unavailable.  When
developers discuss '-offline' as a policy, what they often mean is that
they want to assert a rule preventing the automatic downloading of
artifacts.  Recognizing this difference opens up a bunch of other
possibilities:

  a) I depend on artifacts and I want to modify the logic 
     used in the resolution of those artifacts

     - do I want to resolve artifacts over a remote network 
       connection?
     - are internet-resolvable connections ok?
     - am I really talking about shades of gray relative to 
       a collection of repositories - in effect, am I designing 
       an artifact retrieval policy?
     - am I talking about trust?
     - am I talking about artifact integrity?

  b) What is the state of my cache?

     - was the cached artifact established under the same policy 
       as the policy I'm currently asserting?
     - does my cache management system associate an existence 
       policy with the artifact?
     - is the cached object verifiable?
     - does my build policy imply anything about my caching policy?
     - is my cache sharable and, if it is, what am I asserting 
       in terms of policy?

  c) What is the relationship between build process, cache, and 
     shared repositories?

     - am I trusted?
     - how can clients validate me, my cache, my policy, my 
       artifacts?
     - does my build process trust my cache, given that interim
       dependent builds may be using policies that are not under
       my control? (e.g. Eric uses Antlib X, which has dependencies
       on jars X, Y, and Z)
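To make the distinction concrete, here is a minimal sketch separating
connectivity from retrieval policy.  The policy names and the mayDownload
method are invented for illustration - this is not an Ant or Ivy API:

```java
public class PolicyDemo {
    // Hypothetical policy levels - illustrative names, not a real API.
    enum RetrievalPolicy { OFFLINE, CACHE_ONLY, LOCAL_NETWORK, INTERNET }

    // Decide whether a resolver may hit a remote repository.  Note that
    // the decision is driven by policy, not by whether a TCP/IP
    // connection happens to be available.
    static boolean mayDownload(RetrievalPolicy policy, boolean cached) {
        switch (policy) {
            case OFFLINE:
            case CACHE_ONLY:
                return false;   // policy forbids the network outright
            default:
                return !cached; // go remote only on a cache miss
        }
    }

    public static void main(String[] args) {
        System.out.println(mayDownload(RetrievalPolicy.CACHE_ONLY, false));
        System.out.println(mayDownload(RetrievalPolicy.INTERNET, false));
        System.out.println(mayDownload(RetrievalPolicy.INTERNET, true));
    }
}
```

The point of the sketch is that "-offline" collapses several of these
policy levels into one flag.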

> 2. when we encounter an element (or even an attr) in an 
> unknown antlib xmlns, and we want to map that to a 
> projectcomponent, we hand off resolution to an antlib 
> resolver. We would have one built in (the failing resolver), 
> would default to the ivy one if it was present, and provide 
> some way to let people switch to a different one.

You can do this without mentioning Ivy so long as you have the mechanisms to
include URL protocol handlers.
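For readers unfamiliar with the mechanism: a JVM protocol handler is just a
URLStreamHandler.  Here is a minimal sketch of a handler for an
illustrative "local:" scheme - the canned response is invented for the
demo, and a real handler (DPML's included) would consult caches,
preferences, and content handlers instead:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;

public class LocalHandlerDemo {
    // A handler for the "local:" scheme.  The JVM calls openConnection
    // whenever a URL using this handler is dereferenced.
    static class LocalHandler extends URLStreamHandler {
        @Override
        protected URLConnection openConnection(URL u) {
            return new URLConnection(u) {
                public void connect() {}
                public InputStream getInputStream() {
                    // Canned content standing in for a resolved resource.
                    String body = "<project name=\"standard\"/>";
                    return new ByteArrayInputStream(body.getBytes());
                }
            };
        }
    }

    public static void main(String[] args) throws Exception {
        // Supplying the handler directly; a production setup would
        // register it JVM-wide via URL.setURLStreamHandlerFactory.
        URL url = new URL(null, "local:template:demo/standard",
                new LocalHandler());
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(url.openStream()))) {
            System.out.println(r.readLine());
        }
    }
}
```

Once such handlers are registered, any code that dereferences URLs -
including Ant - picks up the new schemes for free.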

Example of a working build.xml file:

<project default="install" xmlns:x="antlib:dpml.tools">
  <x:import uri="local:template:dpml/tools/standard"/>
</project>

The above build works if I put around five specialized jar files into my
.ant/lib directory and invoke:

      $ ant

Or, more typically, the mechanism I use on more than one hundred Ant-based
projects (without anything in .ant/lib):

      $ build

In both cases what I am doing is making Ant URL aware - as such,
"local:template:dpml/tools/standard" is resolved by a registered protocol
handler; the handler recognizes the content type and maps into place the
appropriate content handler, which in this case simply drags in a template
build file.  The template file contains the following statements:

<project name="standard" default="install"
    xmlns:x="antlib:dpml.tools" >

  <target name="init">
    <x:init/>
  </target>

  ...

</project>

The <x:init/> task establishes a project helper that deals with URIs for
things like antlib plugins (and a bunch of other protocol handlers that let
me deal with cached resources, resources on remote hosts, local preferences,
services based on independent virtual machines, deployment scenarios for
local or remote applications - basically most of the things you need in a
fully functional build environment).  In effect, it does the setup of the
machinery needed to override Ant behaviour when resolving tasks and data
types using the URL machinery bundled in the JVM.

> 3. an antlib resolver would do the mapping from antlib 
> package to artifacts (problem one), then download the 
> metadata for that artifact, pull it down and all its artifacts

Sounds like a protocol handler that captures sufficient information to
represent a classloader chain together with some information about the
deployment target.  One example that approaches this is the DPML part
definition, which encapsulates (a) generic info, (b) a deployment strategy,
and (c) a classloader chain definition.

 http://www.dpml.net/metro/parts/index.html

In the DPML model the deployment strategy is dynamic, and in our
environment we have several strategies that we use on a regular basis.  One
of these is an antlib strategy which simply identifies the path to the
antlib resource and the namespace URI that the resource should be bound to.


> 4. we would then <typedef> the lib with the classpath that is 
> set up by the resolver

Depending on a classpath is generally not sufficient, although recognition
of classpath definitions is a part of the problem. In effect, you need to
deal with the definition of a classloader chain.  Basically, a classpath
defines a classloader, and a classloader is an entry in a chain of
classloaders connected by parent relationships.  The chain (versus path)
abstraction is needed in order to isolate implementation classes from
service classes.  In the extreme scenario you will see examples involving a
public API, a management SPI, and an implementation classloader. In some
scenarios you will also see plugins (e.g. antlibs) being loaded as children
of an SPI or impl classloader (which results in chain extension).
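The chain idea can be sketched with the standard JDK classloader API.  The
empty classpaths below are placeholders to keep the demo self-contained -
in practice each link would carry the jars for its layer:

```java
import java.net.URL;
import java.net.URLClassLoader;

public class ChainDemo {
    public static void main(String[] args) {
        // Three-link chain: public API -> management SPI -> implementation.
        // Each classloader's parent is the previous link.
        ClassLoader api =
                new URLClassLoader(new URL[0], ClassLoader.getSystemClassLoader());
        ClassLoader spi = new URLClassLoader(new URL[0], api);
        ClassLoader impl = new URLClassLoader(new URL[0], spi);

        // Parent-first delegation means a class visible to 'api' is visible
        // to 'impl', but implementation classes never leak upward - which is
        // exactly the isolation a flat classpath cannot express.
        System.out.println(impl.getParent() == spi && spi.getParent() == api);
    }
}
```

A plugin (e.g. an antlib) loaded as a child of the SPI or impl link simply
extends the chain by one more parent relationship.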

> 
> we'd need a metadata tree mapping antlibs to well known 
> packages, but that is not too hard. JSON, perhaps.

It would be worth looking at the JSR 277 draft spec - in particular the
topics dealing with repositories, runtime module construction, and service
loading.

http://jcp.org/en/jsr/detail?id=277
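The service-loading part of that spec builds on machinery already in the
JDK.  As a minimal illustration, java.util.ServiceLoader (since Java 6)
discovers providers from META-INF/services metadata on the classpath - the
same discovery idea JSR 277 extends to module repositories:

```java
import java.nio.charset.spi.CharsetProvider;
import java.util.ServiceLoader;

public class ServiceDemo {
    public static void main(String[] args) {
        // Enumerate charset providers advertised via META-INF/services.
        ServiceLoader<CharsetProvider> providers =
                ServiceLoader.load(CharsetProvider.class);
        int count = 0;
        for (CharsetProvider p : providers) {
            count++; // each hit corresponds to a META-INF/services entry
        }
        // With no third-party jars on the classpath this is typically zero;
        // the point is the discovery mechanism, not the count.
        System.out.println(count >= 0);
    }
}
```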

Cheers, Steve.

--------------------------
Stephen McConnell
mailto:mcconnell@dpml.net
http://www.dpml.net
 

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org
For additional commands, e-mail: dev-help@ant.apache.org

