archiva-dev mailing list archives

From Per Arnold Blaasmo <per-arnold.blaa...@atmel.com>
Subject Re: Running out of inodes in jcr folder
Date Mon, 12 Jan 2015 07:26:30 GMT
On 12. jan. 2015 06:56, Olivier Lamy wrote:
> On 8 January 2015 at 23:44, Per Arnold Blaasmo <per-arnold.blaasmo@atmel.com> wrote:
> 
>> Hi,
>>
>> After having Archiva up and running for a long time, my disk complains
>> about being out of disk space.
>> It turns out that it has run out of inodes.
>>
>> The jcr database in Archiva is the cause: it uses a great many inodes.
>> Why is that?
>>
> 
> Yup, Jackrabbit uses a lot of small files. Maybe there is an issue when
> deleting Archiva content (which is maybe not deleted in Jackrabbit).
> To fix that you can try shutting down your Archiva instance and deleting
> the Jackrabbit (jcr) directories/files, which can take a while as you
> probably have a huge repository.
> I added another storage backend (Cassandra), but I reckon ATM it's not
> very performant.
> 
> 
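
For reference, with a standard install the cleanup you describe amounts to
roughly the sketch below, run while Archiva is stopped (the data/jcr path
is an assumption, adjust it to the actual layout):

import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

// Recursively removes the Jackrabbit (jcr) directory so it gets rebuilt from
// a repository scan after the next start. The path below is a placeholder.
public class WipeJcrDirectory {
    public static void main(String[] args) throws IOException {
        Path jcr = Paths.get("/opt/archiva/data/jcr"); // hypothetical location

        Files.walkFileTree(jcr, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
                    throws IOException {
                Files.delete(file);
                return FileVisitResult.CONTINUE;
            }

            @Override
            public FileVisitResult postVisitDirectory(Path dir, IOException exc)
                    throws IOException {
                Files.delete(dir); // directory is empty once its children are gone
                return FileVisitResult.CONTINUE;
            }
        });
    }
}
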
Yes, we do delete files in the repository outside of Jackrabbit.
We are using Archiva with Ivy (with our own patches) and we have 3 repos
(release, stable and continuous). We use Archiva in a CI environment, and
there are lots of artifacts built that get stored in Archiva.

So we have a need to clean those out regularly. Archiva has a plugin to
clean out/purge snapshots, but we do not use snapshots. I have a plan to
make a patch for that plugin, but have not gotten around to that yet.
So we have set up cron jobs to clean out files over a certain age from
the file system in the different repos, roughly as sketched below.
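
A minimal sketch of that idea; the repository path and the 30-day cutoff
are just placeholders for what the real cron jobs use:

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.FileTime;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.stream.Stream;

// Deletes regular files older than a cutoff from one repository tree.
public class PurgeOldArtifacts {
    public static void main(String[] args) throws IOException {
        Path repo = Paths.get("/data/repositories/continuous"); // hypothetical path
        FileTime cutoff = FileTime.from(Instant.now().minus(30, ChronoUnit.DAYS));

        try (Stream<Path> files = Files.walk(repo)) {
            files.filter(Files::isRegularFile)
                 .filter(p -> {
                     try {
                         return Files.getLastModifiedTime(p).compareTo(cutoff) < 0;
                     } catch (IOException e) {
                         throw new UncheckedIOException(e);
                     }
                 })
                 .forEach(p -> {
                     try {
                         Files.delete(p);
                     } catch (IOException e) {
                         throw new UncheckedIOException(e);
                     }
                 });
        }
    }
}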

We depend on the repository scanning in Archiva to get it straight again.
But I guess the files that are deleted by the cron job still exist in the
database after a scan.
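
One thing I may look into is Jackrabbit's own data store garbage collection.
As far as I understand, it can be triggered through the Jackrabbit API; a
rough sketch, assuming Archiva is stopped so the embedded repository can be
opened directly, jackrabbit-core is on the classpath, and a data store is
actually configured in repository.xml (without one this particular GC has
nothing to clean). Paths and credentials are placeholders:

import javax.jcr.Session;
import javax.jcr.SimpleCredentials;

import org.apache.jackrabbit.api.management.DataStoreGarbageCollector;
import org.apache.jackrabbit.core.RepositoryImpl;
import org.apache.jackrabbit.core.SessionImpl;
import org.apache.jackrabbit.core.config.RepositoryConfig;

// Opens the repository standalone and runs the data store garbage collector,
// which removes binaries that are no longer referenced by any node.
public class RunJcrDataStoreGc {
    public static void main(String[] args) throws Exception {
        RepositoryConfig config = RepositoryConfig.create(
                "/opt/archiva/data/jcr/repository.xml", // hypothetical config path
                "/opt/archiva/data/jcr");               // hypothetical repository home
        RepositoryImpl repository = RepositoryImpl.create(config);
        Session session = repository.login(
                new SimpleCredentials("admin", "admin".toCharArray())); // placeholder login
        try {
            DataStoreGarbageCollector gc =
                    ((SessionImpl) session).createDataStoreGarbageCollector();
            try {
                gc.mark();  // scan workspaces for binaries that are still referenced
                gc.sweep(); // delete unreferenced data store records
            } finally {
                gc.close();
            }
        } finally {
            session.logout();
            repository.shutdown();
        }
    }
}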

> 
>> Does it do garbage collection to remove deleted content?
>> Can I trigger a garbage collection on the database?
>> How can I configure it to not use so many inodes?
>>
>> Regards
>> Per A.
>>

