From: "Bogdan Kanivets (JIRA)"
To: dev@zookeeper.apache.org
Date: Thu, 19 Jul 2018 06:26:00 +0000 (UTC)
Subject: [jira] [Commented] (ZOOKEEPER-3046) testManyChildWatchersAutoReset is flaky

    [ https://issues.apache.org/jira/browse/ZOOKEEPER-3046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16548844#comment-16548844 ]

Bogdan Kanivets commented on ZOOKEEPER-3046:
--------------------------------------------

[~hanm] [~phunt]

I've done some digging into the performance of this test by adding more system and time logging.

I've tested using a fake [PR|https://github.com/apache/zookeeper/pull/538] to collect data in the Apache CI system. I'm using a fake 'testCreateManyChildren' that just creates 10k nodes (see the sketch below).

I've also tested locally by running all the tests in a loop and recording all logs until the disk is full.

My hypothesis: some other write-heavy test or build (maybe Hadoop tests) slows down the creation of the 10k children.
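For reference, a minimal sketch of what such a fake 'testCreateManyChildren' could look like, assuming the standard ZooKeeper Java client API; the connection string, paths, and timeouts are placeholders, and the actual code in the PR may differ:

{code:java}
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class CreateManyChildrenSketch {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // Connection string and session timeout are placeholders for a locally running server.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();

        zk.create("/manychildren", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

        long start = System.currentTimeMillis();
        for (int i = 1; i <= 10000; i++) {
            zk.create("/manychildren/child-" + i, new byte[0],
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            // Log elapsed time every 1k creates so slow runs show up in the build output.
            if (i % 1000 == 0) {
                System.out.println(i + " nodes created after "
                        + (System.currentTimeMillis() - start) + " ms");
            }
        }
        zk.close();
    }
}
{code}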
Here the fake test took 10s: [https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/1912/testReport/]

Here the fake test took 3m 20s: [https://builds.apache.org/job/PreCommit-ZOOKEEPER-github-pr-build/1907/testReport/]

# I don't think there is a deadlock, because I'm logging 'stat' after every 1K nodes created and it's pretty consistent.
** "Latency min/avg/max: 2/32/596" on slow
** "Latency min/avg/max: 0/0/25" on fast
# Locally, when the disk is getting full, I've seen this test slow down.
# I've tried logging vmstat, but it isn't very helpful.
** load average is a bit higher on slow: "load average: 12.54, 12.04, 10.95"
** "load average: 4.16, 4.56, 4.86" on fast
# Since iostat isn't available, I've logged the contents of /proc/diskstats. It isn't very meaningful because I'm logging at differently spaced timestamps right now, but one observation is that column 12 ("I/Os currently in progress") is usually 1 or 2 on slow and usually 0 on fast.
# Looking at column 9 of /proc/diskstats (writes completed) for the 'sda' device and calculating writes per ms using the LOG.info timestamps:
** 0.18 writes/ms on slow
** 9.1 writes/ms on fast
** warning - this calculation shouldn't be trusted - I've done it only on one sample

As a result I've tried to pinpoint some slow Apache Jenkins hosts, but wasn't successful; I don't see a pattern. I also tried looking at jobs that run at the same time, again with no luck.

Maybe the next steps can be:
# Trying to run this test with memory-mapped files
# Somehow pinning a host just for ZK builds
# Getting access to the box and doing better profiling
# I can try writing a better version of the /proc/diskstats stats collector (a rough sketch is included below)

> testManyChildWatchersAutoReset is flaky
> ---------------------------------------
>
>                 Key: ZOOKEEPER-3046
>                 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-3046
>             Project: ZooKeeper
>          Issue Type: Bug
>          Components: tests
>    Affects Versions: 3.5.3, 3.4.12
>            Reporter: Bogdan Kanivets
>            Assignee: Bogdan Kanivets
>            Priority: Minor
>              Labels: flaky, pull-request-available
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> According to the [dashboard|https://builds.apache.org/job/ZooKeeper-Find-Flaky-Tests/lastSuccessfulBuild/artifact/report.html], testManyChildWatchersAutoReset is flaky in 3.4 and 3.5:
> [ZooKeeper_branch34_java10|https://builds.apache.org/job/ZooKeeper_branch34_java10//13]
> [ZooKeeper_branch35_java9|https://builds.apache.org/job/ZooKeeper_branch35_java9/253]
> The test times out, and because of that ant doesn't capture any output.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
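As a rough starting point for the /proc/diskstats collector mentioned in the next steps, here is a minimal sketch. It assumes the standard /proc/diskstats layout (device name in the 3rd whitespace-separated column, writes completed in the 8th, I/Os currently in progress in the 12th); the device name and sampling interval are placeholders:

{code:java}
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

// Rough sampler: read /proc/diskstats twice and report writes/ms for one device.
public class DiskStatsSampler {
    public static void main(String[] args) throws Exception {
        String device = args.length > 0 ? args[0] : "sda";
        long intervalMs = 5000;

        long firstWrites = readWritesCompleted(device);
        long firstTime = System.currentTimeMillis();
        Thread.sleep(intervalMs);
        long secondWrites = readWritesCompleted(device);
        long secondTime = System.currentTimeMillis();

        double writesPerMs = (double) (secondWrites - firstWrites) / (secondTime - firstTime);
        System.out.printf("%s: %.3f writes/ms over %d ms%n",
                device, writesPerMs, secondTime - firstTime);
    }

    // In the standard /proc/diskstats layout, the 8th whitespace-separated column
    // is "writes completed" and the 12th is "I/Os currently in progress".
    private static long readWritesCompleted(String device) throws Exception {
        List<String> lines = Files.readAllLines(Paths.get("/proc/diskstats"));
        for (String line : lines) {
            String[] cols = line.trim().split("\\s+");
            if (cols.length > 11 && cols[2].equals(device)) {
                return Long.parseLong(cols[7]);   // writes completed since boot
            }
        }
        throw new IllegalStateException("device not found: " + device);
    }
}
{code}

Sampling at a fixed interval on the CI host would make the writes/ms figures above comparable across builds.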