gump-general mailing list archives

From "Adam Jack" <>
Subject RE: [PATCH][] Going forward
Date Tue, 22 Apr 2003 17:48:47 GMT

	Nicola Ken Barozzi wrote, On 22/04/2003 16.23:
	> Sam Ruby wrote, On 22/04/2003 16.17:
	>> If the project definition were to say where this dependency could be
	>> downloaded (presuming it were redistributable, of course), couldn't a
	>> simple urllib.urlopen() be used to retrieve the file?  If you want to
	>> get more fancy, there is httplib...
	>   urllib.urlopen("log4j")
	> Does this work?

I've contemplated Ruper versus a simple HTTP GET a number of times. Clearly
the HTTP GET should be feasible, and I know some folks use the Ant <get> for
that purpose. That said, I think we need more than this to reach another
level of automation and manageability. I think dependencies need to be
managed by the dependency owners, and not by the dependee.
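For the simple HTTP GET that Sam describes, a minimal sketch might look like the following (written in modern Python for illustration; the function name and the idea that the URL comes from the project definition are assumptions on my part, not anything Gump actually ships):

```python
import urllib.request
from pathlib import Path

def download_dependency(url: str, dest_dir: str) -> Path:
    """Fetch a dependency jar with a plain HTTP GET, as the
    urllib.urlopen() suggestion above would do. The URL would be
    taken from the project definition."""
    # Name the local file after the last path segment of the URL.
    dest = Path(dest_dir) / url.rsplit("/", 1)[-1]
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        out.write(resp.read())
    return dest
```

This is exactly the "freeze to a specific instance" pattern: whatever version the URL points at is what you get, every time.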

It seems wrong for a manager of project X to "freeze" the dependence on Y by
using a "get" to a specific instance of the Y jar files. What I'd hate to see
is the stagnation associated with that; it seems very anti-gump. I believe
there needs to be some intelligence that negotiates between the dependees and
the dependencies, and is "environment smart". I think this code needs to
compare the local environment (perhaps with shared jar directories) against
remote repositories and follow some basic rules for determining which jar to
use and what counts as a "compatible jar", and apply that across a build tree
for a whole environment. I think that ought to be Ruper...

Without something to manage the "links" from one project to its remote
dependencies, I think we get an unmanageable (and stale) tangled web of
"gets" and screwed-up (inconsistent) local environments.


