zookeeper-dev mailing list archives

From Patrick Hunt <ph...@apache.org>
Subject Re: ACTION REQUIRED: disk space on jenkins master nearly full
Date Sat, 15 Jun 2019 17:35:28 GMT
On Sat, Jun 15, 2019 at 10:29 AM Enrico Olivelli <eolivelli@gmail.com> wrote:

> On Sat, 15 Jun 2019, 18:18 Patrick Hunt <phunt@apache.org> wrote:
>
> > Narrowing this down to just the ZK folks.
> >
> > We're currently discarding the builds after 90 days for both of the jobs.
> > Perhaps we can narrow down to 60? The PRs link to these builds, are they
> > valuable after that point (vs just retriggering the build if missing)?
> >
> > I also notice that "PreCommit-ZOOKEEPER-github-pr-build-maven" is saving
> > all artifacts, rather than a subset (e.g. the logs) as is being done by
> > "PreCommit-ZOOKEEPER-github-pr-build" job. Perhaps we can update that?
> > Enrico or Norbert any insight?
> >
>
> I had enabled archiving in order to track down some issue I can't recall.
> We should keep logs only in case of failure.
>
> I think that 30 days is enough, but I am okay with 60.
> We are now working at a faster pace, and a precommit run from more than a
> month ago is probably out of date.
>
>
Sounds like 30 for both is fine then.

Patrick
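
For reference, a minimal sketch of the retention setting being discussed, assuming a declarative multibranch Jenkinsfile. Only the 30-day figure comes from this thread; the stage and build command are illustrative:

```groovy
// Illustrative sketch: multibranch pipeline jobs ignore the job-level
// "Discard Old Builds" checkbox, so retention is declared in the
// Jenkinsfile itself via options { buildDiscarder(...) }.
pipeline {
    agent any
    options {
        // Keep builds for 30 days (the figure agreed in this thread) and
        // cap the build count as a safety net; the count here is a guess.
        buildDiscarder(logRotator(daysToKeepStr: '30', numToKeepStr: '25'))
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean verify'
            }
        }
    }
}
```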


>
> Enrico
>
>
> > Patrick
> >
> > On Fri, Jun 14, 2019 at 6:09 PM Chris Lambertus <cml@apache.org> wrote:
> >
> >> All,
> >>
> >> Thanks to those who have addressed this so far. The immediate storage
> >> issue has been resolved, but some builds still need to be fixed to ensure
> >> the build master does not run out of space again anytime soon.
> >>
> >> Here is the current list of builds storing over 40GB on the master:
> >>
> >> 597G    Packaging
> >> 204G    pulsar-master
> >> 199G    hadoop-multibranch
> >> 108G    Any23-trunk
> >> 93G     HBase Nightly
> >> 88G     PreCommit-ZOOKEEPER-github-pr-build
> >> 71G     stanbol-0.12
> >> 64G     Atlas-master-NoTests
> >> 50G     HBase-Find-Flaky-Tests
> >> 42G     PreCommit-ZOOKEEPER-github-pr-build-maven
> >>
> >>
> >> If you are unable to reduce the size of your retained builds, please let
> >> me know. I have added some additional project dev lists to the CC as I
> >> would like to hear back from everyone on this list as to the state of their
> >> stored builds.
> >>
> >> Thanks,
> >> Chris
> >>
> >>
> >>
> >>
> >> > On Jun 10, 2019, at 10:57 AM, Chris Lambertus <cml@apache.org> wrote:
> >> >
> >> > Hello,
> >> >
> >> > The jenkins master is nearly full.
> >> >
> >> > The workspaces listed below need significant size reduction within 24
> >> > hours or Infra will need to perform some manual pruning of old builds to
> >> > keep the jenkins system running. The Mesos “Packaging” job also needs to be
> >> > corrected to include the project name (mesos-packaging) please.
> >> >
> >> > It appears that the typical ‘Discard Old Builds’ checkbox in the job
> >> > configuration may not be working for multibranch pipeline jobs. Please
> >> > refer to these articles for information on discarding builds in multibranch
> >> > jobs:
> >> >
> >> > https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
> >> > https://issues.jenkins-ci.org/browse/JENKINS-35642
> >> > https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
> >> >
> >> >
> >> >
> >> > NB: I have not fully vetted the above information, I just notice that
> >> > many of these jobs have ‘Discard old builds’ checked, but it is clearly not
> >> > working.
> >> >
> >> >
> >> > If you are unable to reduce your disk usage beyond what is listed,
> >> > please let me know what the reasons are and we’ll see if we can find a
> >> > solution. If you believe you’ve configured your job properly and the space
> >> > usage is more than you expect, please comment here and we’ll take a look at
> >> > what might be going on.
> >> >
> >> > I cut this list off arbitrarily at 40GB workspaces and larger. There
> >> > are many which are between 20 and 30GB which also need to be addressed, but
> >> > these are the current top contributors to the disk space situation.
> >> >
> >> >
> >> > 594G    Packaging
> >> > 425G    pulsar-website-build
> >> > 274G    pulsar-master
> >> > 195G    hadoop-multibranch
> >> > 173G    HBase Nightly
> >> > 138G    HBase-Flaky-Tests
> >> > 119G    netbeans-release
> >> > 108G    Any23-trunk
> >> > 101G    netbeans-linux-experiment
> >> > 96G     Jackrabbit-Oak-Windows
> >> > 94G     HBase-Find-Flaky-Tests
> >> > 88G     PreCommit-ZOOKEEPER-github-pr-build
> >> > 74G     netbeans-windows
> >> > 71G     stanbol-0.12
> >> > 68G     Sling
> >> > 63G     Atlas-master-NoTests
> >> > 48G     FlexJS Framework (maven)
> >> > 45G     HBase-PreCommit-GitHub-PR
> >> > 42G     pulsar-pull-request
> >> > 40G     Atlas-1.0-NoTests
> >> >
> >> >
> >> >
> >> > Thanks,
> >> > Chris
> >> > ASF Infra
> >>
> >>
>
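
Enrico's point above, keeping logs only when a build fails rather than archiving all artifacts on every run, could be sketched in declarative pipeline syntax roughly as follows. The artifact path pattern is a guess, not the actual ZooKeeper job configuration:

```groovy
// Illustrative sketch: archive test logs only for failed runs, instead of
// archiving every artifact on every build.
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh 'mvn -B verify'
            }
        }
    }
    post {
        failure {
            // Only failed runs keep their logs; this path pattern is a
            // hypothetical example, not the real job's configuration.
            archiveArtifacts artifacts: '**/target/surefire-reports/*.txt',
                             allowEmptyArchive: true
        }
    }
}
```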
