archiva-dev mailing list archives

From Brett Porter <>
Subject Re: Proposal: concurrent remote-requests
Date Wed, 14 Oct 2009 23:02:30 GMT
On 15/10/2009, at 12:06 AM, Marc Lustig wrote:

> Hi all,
> we have configured about 25 remote-repos for our public-artifacts managed
> repo.
> In certain cases, black and white lists don't help and a request is proxied
> to all 25 remote-repos _sequentially_. Even though we have configured a
> short timeout of 5 secs, this takes 125 secs when the artifact doesn't
> exist in any remote-repo - per artifact!
> So I was wondering if it would make sense to send requests to all of the
> remote-repos _concurrently_.
> The first thread that finds the artifact could cause all the other threads
> to cancel their http-requests.
> The total request time would drop from 125 secs to merely 5 secs.
> A tremendous win, no?
> Has this been discussed before?
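
[The cancel-on-first-success pattern described above maps closely onto Java's ExecutorService.invokeAny, which returns the result of the first task to complete successfully and cancels the rest. A minimal offline sketch - the repository names, delays, and helper methods are all made up for illustration, not Archiva code:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConcurrentProxyDemo {

    // Simulated remote lookup: waits, then either "finds" the artifact
    // (returns the repo name) or fails (throws, like a 404 would).
    static Callable<String> remote(String name, long delayMs, boolean hasArtifact) {
        return () -> {
            Thread.sleep(delayMs);
            if (!hasArtifact) {
                throw new Exception(name + ": artifact not found");
            }
            return name;
        };
    }

    // Query all remotes concurrently; invokeAny returns the first
    // successful result and cancels the still-running requests.
    static String firstHit(List<Callable<String>> remotes) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(remotes.size());
        try {
            return pool.invokeAny(remotes);
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        String winner = firstHit(List.of(
                remote("central", 300, false),
                remote("mirror-b", 100, true),
                remote("mirror-c", 500, true)));
        System.out.println(winner); // the fastest repo that had the artifact
    }
}
```

Note that total latency becomes that of the fastest repo holding the artifact (or one timeout in the all-miss case), at the cost of opening connections to every remote per request.]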

I think this is a pretty unusual case. I don't quite understand why
you are hitting the timeout limit on the remote repos - if they are up,
they should be fast. Also, "first that finds" is a different rule from
the current one, which is first that appears in the list. I worry that
in this setup you're not entirely sure which repository the artifacts
are meant to come from, so maybe it points to another problem.

> Is there an argument against this strategy?

One argument: we intend to turn on streaming of the proxied download to
the client, and we couldn't do that if the requests were pooled like
this, unless we accepted the "first found" rule.

That said, this might speed up requests with a long list of proxies,
even if they are all functioning properly, so it might be reasonable as
an optional capability. One thing to consider would be doing a HEAD
request against all the remotes first to select where to download from,
then issuing the GET against the chosen one.
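
[The HEAD-first idea could look roughly like the following. To keep the sketch offline, the existence probe is abstracted as a Predicate; in a real implementation it would be an HTTP HEAD request checking for a 200, issued per remote (possibly concurrently). The class and method names are illustrative, not Archiva APIs:

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

public class HeadFirstSelection {

    // Probe the remotes in configured order with a cheap existence check
    // (an HTTP HEAD in practice) and return the first remote that has the
    // artifact path. Only that remote then receives the full GET.
    static Optional<String> selectRemote(List<String> remotes,
                                         String path,
                                         Predicate<String> headOk) {
        return remotes.stream()
                .filter(base -> headOk.test(base + "/" + path))
                .findFirst();
    }

    public static void main(String[] args) {
        List<String> remotes = List.of("https://repo-a.example",
                                       "https://repo-b.example");
        // Pretend only repo-b answers the HEAD probe with 200.
        Optional<String> chosen = selectRemote(remotes,
                "org/example/app/1.0/app-1.0.jar",
                url -> url.startsWith("https://repo-b"));
        System.out.println(chosen.orElse("not found in any remote"));
        // A real proxy would now stream a single GET from `chosen`.
    }
}
```

Because only one GET is ever issued, this preserves the ordered "first in the list" rule and stays compatible with streaming the download straight through to the client.]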

- Brett
