From: Marco de Abreu
Date: Thu, 22 Nov 2018 21:09:07 +0100
Subject: Re: CI impaired
To: dev@mxnet.incubator.apache.org
Thanks everybody, I really appreciate it!

Today was a good day: there were no incidents and everything appears to be stable.

In the meantime I did a deep dive on why we had such a significant performance decrease in our compilation jobs - which then clogged up the queue and resulted in 1000 jobs waiting to be scheduled. The reason was the way we use ccache to speed up our compilation jobs. Usually this yields a huge performance improvement (CPU openblas, for example, goes from 30 minutes down to ~3 minutes, ARMv7 from 30 minutes down to ~1.5 minutes, etc.). Unfortunately, in this case ccache was our limiting factor.

Here's some background on how we operate our cache: we use EFS to share a distributed ccache across all of our unrestricted-prod-slaves. EFS is rated for almost unlimited scalability (it can be consumed by thousands of instances in parallel [1]) with a theoretical throughput of over 10 Gbps. One thing I didn't know when I designed this approach was how that throughput is granted. Similar to T2 CPU credits, EFS uses BurstCredits to allow throughput above the 50 MiB/s default baseline [2]. Due to the high load, we consumed all of our credits - here's a very interesting graph: [3].

To avoid similar incidents in the future, I have taken the following actions:

1. I switched EFS from burst mode to provisioned throughput at 300 MB/s (in the graph at [3] you can see how our IO immediately increases - and thus our CI gets faster - as soon as I added provisioned throughput).
2. I created internal follow-up tickets to add monitoring and automated actions. First, we should be notified if we are running low on credits so we can kick off an investigation. Second (nice to have), we could have a Lambda function which listens for that event and automatically switches the EFS volume from burst mode to provisioned throughput during high-load times. The required throughput could be retrieved via CloudWatch and then multiplied by a factor. EFS allows downgrading the throughput mode 24 hours after the last change (to reduce capacity once the load is over) and always allows upgrading the provisioned capacity (if the load goes even higher). I've been looking for a pre-made CloudFormation template to facilitate that, but so far I haven't been able to find one. A rough sketch of what such an automation could look like is at the very bottom of this mail.

I'm now running additional load tests on our test CI environment to detect other potential bottlenecks.

Thanks a lot for your support!

Best regards,
Marco

[1]: https://docs.aws.amazon.com/efs/latest/ug/performance.html
[2]: https://docs.aws.amazon.com/efs/latest/ug/performance.html#throughput-modes
[3]: https://i.imgur.com/nboQLOn.png

On Thu, Nov 22, 2018 at 1:40 AM Qing Lan wrote:

> Appreciated for your effort and help to make CI a better place!
>
> Qing
>
> On 11/21/18, 4:38 PM, "Lin Yuan" wrote:
>
> Thanks for your efforts, Marco!
>
> On Wed, Nov 21, 2018 at 4:02 PM Anirudh Subramanian <anirudh2290@gmail.com> wrote:
>
> > Thanks for the quick response and mitigation!
> >
> > On Wed, Nov 21, 2018 at 3:55 PM Marco de Abreu wrote:
> >
> > > Hello,
> > >
> > > today, CI had some issues and I had to cancel all jobs a few minutes ago. This was basically caused by the high load that is currently being put on our CI system due to the pre-release efforts for this Friday.
> > >
> > > It's really unfortunate that we just had outages of three core components within the last two days - sorry about that!
> > > To recap, we had the following outages (which are unrelated to the parallel refactor of the Jenkins pipeline):
> > > - (yesterday evening) The Jenkins master ran out of disk space and thus processed requests at reduced capacity.
> > > - (this morning) The Jenkins master got updated, which broke our autoscaling's upscaling capabilities.
> > > - (new, this evening) The Jenkins API was unresponsive: due to the high number of jobs and a bad API design in the Jenkins REST API, the time complexity of a simple create or delete request was quadratic, which resulted in all requests timing out (that was the current outage). This left our auto scaling unable to interface with the Jenkins master.
> > >
> > > I have now made improvements to our REST API calls which reduced the complexity from O(N^2) to O(1). The reason was an underlying redirect loop in the Jenkins createNode and deleteNode REST API, in combination with unrolling the entire slave and job graph (which got quite huge under extensive load) upon every single request. Since we had about 150 registered slaves and 1000 jobs in the queue, the duration of a single REST API call rose to up to 45 seconds (we execute up to a few hundred queries per auto scaling loop). This led to our auto scaling timing out.
> > >
> > > Everything should be back to normal now. I'm closely observing the situation and I'll let you know if I encounter any additional issues.
> > >
> > > Again, sorry for any caused inconveniences.
> > >
> > > Best regards,
> > > Marco
> > >
> > > On Wed, Nov 21, 2018 at 5:10 PM Gavin M Bell <gavin.max.bell@gmail.com> wrote:
> > >
> > > > Yes, let me add to the kudos, very nice work Marco.
> > > >
> > > > "I'm trying real hard to be the shepherd." -Jules Winnfield
> > > >
> > > > > On Nov 21, 2018, at 5:04 PM, Sunderland, Kellen wrote:
> > > > >
> > > > > Appreciate the big effort in bringing the CI back so quickly. Thanks Marco.
> > > > >
> > > > > On Nov 21, 2018 5:52 AM, Marco de Abreu <marco.g.abreu@googlemail.com.INVALID> wrote:
> > > > >
> > > > > Thanks Aaron! Just for the record, the new Jenkins jobs were unrelated to that incident.
> > > > >
> > > > > If somebody is interested in the details around the outage:
> > > > >
> > > > > Due to a required maintenance (the disk running full), we had to upgrade our Jenkins master because it was running on Ubuntu 17.04 (for an unknown reason, it used to be 16.04) and we needed to install some packages. Since support for Ubuntu 17.04 was stopped, this resulted in all package updates and installations failing because the repositories were taken offline. Due to the unavailable maintenance packages and other issues with the installed OpenJDK 8 version, we made the decision to upgrade the Jenkins master to Ubuntu 18.04 LTS in order to get back to a supported version with maintenance tools. During this upgrade, Jenkins was automatically updated by APT as part of the dist-upgrade process.
> > > > >
> > > > > In the latest version of Jenkins, some labels have been changed which we depend on for our auto scaling.
> > > > > To be more specific:
> > > > >
> > > > >> Waiting for next available executor on mxnetlinux-gpu
> > > > >
> > > > > has been changed to
> > > > >
> > > > >> Waiting for next available executor on ‘mxnetlinux-gpu’
> > > > >
> > > > > Notice the quote characters.
> > > > >
> > > > > Jenkins unfortunately does not offer a better way than to parse these messages - there's no standardized way to express queue items. Since our parser expected the above message without quote characters, this message was discarded.
> > > > >
> > > > > We support various queue reasons (5 of them, to be exact) that indicate resource starvation. If we run super low on capacity, the queue reason is different and we would still be able to scale up, but most of the cases would have printed the unsupported message. This resulted in reduced capacity (to be specific, the limit during that time was 1 slave per type).
> > > > >
> > > > > We have now fixed our autoscaling to automatically strip these characters and added that message to our test suite.
> > > > >
> > > > > Best regards,
> > > > > Marco
> > > > >
> > > > > On Wed, Nov 21, 2018 at 2:49 PM Aaron Markham <aaron.s.markham@gmail.com> wrote:
> > > > >
> > > > >> Marco, thanks for your hard work on this. I'm super excited about the new Jenkins jobs. This is going to be very helpful and improve sanity for our PRs and ourselves!
> > > > >>
> > > > >> Cheers,
> > > > >> Aaron
> > > > >>
> > > > >> On Wed, Nov 21, 2018, 05:37 Marco de Abreu wrote:
> > > > >>
> > > > >>> Hello,
> > > > >>>
> > > > >>> the CI is now back up and running. Auto scaling is working as expected and it passed our load tests.
> > > > >>>
> > > > >>> Please excuse the caused inconveniences.
> > > > >>>
> > > > >>> Best regards,
> > > > >>> Marco
> > > > >>>
> > > > >>> On Wed, Nov 21, 2018 at 5:24 AM Marco de Abreu <marco.g.abreu@googlemail.com> wrote:
> > > > >>>
> > > > >>>> Hello,
> > > > >>>>
> > > > >>>> I'd like to let you know that our CI was impaired and down for the last few hours. After getting the CI back up, I noticed that our auto scaling broke due to a silent update of Jenkins which broke our upscale-detection. Manual scaling is currently not possible and stopping the scaling won't help either because there are currently no p3 instances available, which means that all jobs will fail nonetheless. In a few hours, the auto scaling will have recycled all slaves through the down-scale mechanism and we will be out of capacity. This will lead to resource starvation and thus timeouts.
> > > > >>>>
> > > > >>>> Your PRs will be properly registered by Jenkins, but please expect the jobs to time out and thus fail your PRs.
> > > > >>>>
> > > > >>>> I will fix the auto scaling as soon as I'm awake again.
> > > > >>>>
> > > > >>>> Sorry for the caused inconveniences.
> > > > >>>>
> > > > >>>> Best regards,
> > > > >>>> Marco
> > > > >>>>
> > > > >>>> P.S. Sorry for the brief email and my lack of further fixes, but it's 5:30 AM now and I've been working for 17 hours.
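As mentioned above, here is a rough, untested sketch of what the Lambda automation from point 2 could look like. It only illustrates the idea; the trigger (a CloudWatch alarm on BurstCreditBalance), the environment variable name, the safety factor and the measurement window are assumptions made up for this example and not part of our current setup.

# Sketch only: switch the ccache EFS volume to provisioned throughput,
# sized from the recently observed throughput times a safety factor.
# Assumed to run as a Lambda triggered by a low-BurstCreditBalance alarm.
import datetime
import os

import boto3

EFS_ID = os.environ["CCACHE_EFS_ID"]   # hypothetical environment variable
SAFETY_FACTOR = 1.5                    # multiply observed throughput by a factor

efs = boto3.client("efs")
cloudwatch = boto3.client("cloudwatch")


def observed_throughput_mibps(window_minutes=30):
    """Estimate the recent average throughput from the MeteredIOBytes metric."""
    end = datetime.datetime.utcnow()
    start = end - datetime.timedelta(minutes=window_minutes)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EFS",
        MetricName="MeteredIOBytes",
        Dimensions=[{"Name": "FileSystemId", "Value": EFS_ID}],
        StartTime=start,
        EndTime=end,
        Period=window_minutes * 60,
        Statistics=["Sum"],
    )
    total_bytes = sum(point["Sum"] for point in stats["Datapoints"])
    return total_bytes / (window_minutes * 60) / (1024 * 1024)


def handler(event, context):
    """Switch the volume to provisioned throughput sized off recent usage."""
    # Never go below the 50 MiB/s baseline we effectively had in burst mode.
    target = max(50, int(observed_throughput_mibps() * SAFETY_FACTOR))
    efs.update_file_system(
        FileSystemId=EFS_ID,
        ThroughputMode="provisioned",
        ProvisionedThroughputInMibps=target,
    )
    return {"provisioned_mibps": target}

Downgrading back to burst mode once the load is over would be a second, scheduled step, since EFS only allows changing the throughput mode again 24 hours after the last change.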
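Similarly, for those curious about the queue-reason fix discussed in the quoted thread above: normalizing the Jenkins message so that both the old and the new format (with typographic quotes around the label) are accepted boils down to something like the sketch below. This is illustrative only - the function name is made up and the real parser supports several different queue reasons.

# Sketch of normalizing the Jenkins queue reason; name and scope simplified.
import re
from typing import Optional

_QUEUE_REASON = re.compile(
    r"Waiting for next available executor on [\u2018']?(?P<label>[\w.-]+)[\u2019']?$"
)


def parse_starved_label(queue_reason: str) -> Optional[str]:
    """Return the slave label a queue item is waiting for, or None if unknown."""
    match = _QUEUE_REASON.search(queue_reason.strip())
    return match.group("label") if match else None


# Both the pre- and post-update Jenkins messages should resolve to the same label.
assert parse_starved_label("Waiting for next available executor on mxnetlinux-gpu") == "mxnetlinux-gpu"
assert parse_starved_label("Waiting for next available executor on \u2018mxnetlinux-gpu\u2019") == "mxnetlinux-gpu"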