archiva-users mailing list archives

From "Stallard,David" <stall...@oclc.org>
Subject RE: 100% CPU in Archiva 1.3.5
Date Tue, 01 Nov 2011 14:42:49 GMT
We do have a fairly large number of Continuous Integration builds that
can trigger many times per day, each build uploading new snapshots.  It
sounds like that could be a problem based on your second point below.
However, that structure has been in place for well over a year and we've
only had this CPU problem for 2 weeks.  Maybe we just happened to cross
some threshold that has made it more problematic?

I'll look into reducing the number of scans and keeping them to off-peak
hours.
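
As a sketch of what that change might look like: in Archiva 1.x the per-repository scan schedule is a Quartz cron expression, settable through the web UI or in archiva.xml. The element name below is from memory and worth verifying against your version; this example would scan once daily at 2 AM instead of hourly:

```xml
<managedRepository>
  <id>snapshots</id>
  <!-- Quartz cron fields: sec min hour day-of-month month day-of-week -->
  <!-- run the repository scan once daily at 02:00 instead of every hour -->
  <refreshCronExpression>0 0 2 * * ?</refreshCronExpression>
</managedRepository>
```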

-----Original Message-----
From: Brett Porter [mailto:brett@porterclan.net] On Behalf Of Brett
Porter
Sent: Monday, October 31, 2011 6:34 PM
To: users@archiva.apache.org
Subject: Re: 100% CPU in Archiva 1.3.5

Top-replying with a few points:

- Artifact upload limits apply only to the web UI; Maven deployments can
push artifacts as large as needed.
- Individual artifacts of that size shouldn't be a big problem (it's a
one-off hit), but regularly updating snapshots of that size will cause
the load to build up.
- The scan time below is quite long, particularly for the purge. You
might want to push the scanning schedule out to an "off peak" time - the
purge doesn't need to run that often, and most operations are done
on-demand, with the scan just filling in any gaps.

HTH,
Brett

On 01/11/2011, at 2:15 AM, Stallard,David wrote:

> I'm not sure if this is useful, but here are the summaries of the most
> recent hourly scans...from archiva.log:
> 
> 
> .\ Scan of internal \.__________________________________________
>  Repository Dir    : <path removed>/internal
>  Repository Name   : Archiva Managed Internal Repository
>  Repository Layout : default
>  Known Consumers   : (7 configured)
>                      repository-purge (Total: 58857ms; Avg.: 1; Count: 58702)
>                      metadata-updater (Total: 419ms; Avg.: 209; Count: 2)
>                      auto-remove
>                      auto-rename
>                      update-db-artifact (Total: 98ms; Avg.: 49; Count: 2)
>                      create-missing-checksums (Total: 120ms; Avg.: 60; Count: 2)
>                      index-content (Total: 0ms; Avg.: 0; Count: 7)
>  Invalid Consumers : <none>
>  Duration          : 2 Minutes 56 Seconds 896 Milliseconds
>  When Gathered     : 10/31/11 11:02 AM
>  Total File Count  : 268305
>  Avg Time Per File :
> ______________________________________________________________
> 
> 
> .\ Scan of snapshots \.__________________________________________
>  Repository Dir    : <path removed>/snapshots
>  Repository Name   : Archiva Managed Snapshot Repository
>  Repository Layout : default
>  Known Consumers   : (7 configured)
>                      repository-purge (Total: 325200ms; Avg.: 8; Count: 39805)
>                      metadata-updater (Total: 5915ms; Avg.: 50; Count: 116)
>                      auto-remove
>                      auto-rename
>                      update-db-artifact (Total: 17211ms; Avg.: 148; Count: 116)
>                      create-missing-checksums (Total: 15559ms; Avg.: 134; Count: 116)
>                      index-content (Total: 34ms; Avg.: 0; Count: 475)
>  Invalid Consumers : <none>
>  Duration          : 7 Minutes 17 Seconds 416 Milliseconds
>  When Gathered     : 10/31/11 11:10 AM
>  Total File Count  : 166275
>  Avg Time Per File : 2 Milliseconds
> ______________________________________________________________
> 
> 
> 
> -----Original Message-----
> From: Stallard,David
> Sent: Monday, October 31, 2011 9:57 AM
> To: 'users@archiva.apache.org'
> Subject: RE: 100% CPU in Archiva 1.3.5
> 
> I need to correct my previous message...it turns out we do have
> artifacts larger than 40M even though that is the defined maximum; I'm
> not sure at this point how that is happening.
> 
> In our internal repository we have 40 artifacts which are over 100M in
> size, with the largest one being 366M.  In snapshots, we have 61
> artifacts that are >100M, where the largest is 342M.  I'm not sure how
> significant these sizes are in terms of the indexer, but wanted to
> accurately reflect what we're dealing with.
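
(For reference, counts like those quoted above can be reproduced with a quick `find` over the repository root. The demo below builds a throwaway directory so it is self-contained; in practice you would point REPO at the real managed repository path instead.)

```shell
# Sketch: counting oversized artifacts under a repository directory.
# A throwaway demo tree stands in for the real repository root here.
REPO=$(mktemp -d)
dd if=/dev/zero of="$REPO/big-artifact.jar" bs=1M count=101 2>/dev/null
dd if=/dev/zero of="$REPO/small-artifact.jar" bs=1K count=10 2>/dev/null

# count artifacts larger than 100M (find's M suffix = mebibytes)
count=$(find "$REPO" -type f -size +100M | wc -l | tr -d ' ')
echo "$count"  # prints 1 for this demo tree

# list files with sizes in MB, biggest first
find "$REPO" -type f -exec du -m {} + | sort -rn | head

rm -rf "$REPO"
```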
> 
> -----Original Message-----
> From: Stallard,David
> Sent: Monday, October 31, 2011 9:43 AM
> To: 'users@archiva.apache.org'
> Subject: RE: 100% CPU in Archiva 1.3.5
> 
> Brett Porter said:
>> It's not unexpected that indexing drives it to 100% CPU momentarily,
>> but causing it to become unavailable is unusual.
>> How big are the artifacts it is scanning?
> 
> The CPU was still at 100% on Monday morning, so having the weekend to
> index didn't seem to improve anything; the indexing queue was up to
> about 3500.  We got a report that downloads from Archiva are extremely
> slow, so I just bounced it.  CPU was immediately at 100% after the
> bounce, and the indexing queue is at 6.  I expect that queue to
> continually rise, based on what I've seen after previous bounces.
> 
> Our upload maximum size was 10M for the longest time, but we had to 
> raise it to 20M a while back and then recently we raised it to 40M.  
> But I would think that the overwhelming majority of our artifacts are 
> 10M or less.
> 
> Is there a way to increase the logging level?  Currently, the logs
> don't show any indication of what it is grinding away on.  After the
> startup stuff, there really isn't anything in archiva.log except for
> some Authorization Denied messages -- but these have been occurring
> for months and months, and I don't think they are related to the 100%
> CPU issue that just started up about a week ago.
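
(On the logging question quoted above: Archiva 1.3 configures its logging through a bundled log4j.xml, whose exact location varies by install. As a hypothetical fragment, raising the Archiva loggers to DEBUG might look like the following; the category name matches the pre-2.0 package naming, but verify it against your own config file.)

```xml
<!-- hypothetical log4j.xml fragment: raise Archiva's own loggers to DEBUG
     so repository scanning and indexing activity shows up in archiva.log -->
<category name="org.apache.maven.archiva">
  <priority value="DEBUG" />
</category>
```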
> 

--
Brett Porter
brett@apache.org
http://brettporter.wordpress.com/
http://au.linkedin.com/in/brettporter