From: Joe Witt
Date: Sat, 10 Dec 2016 00:04:53 -0500
Subject: Re: Content Repository Cleanup
To: dev@nifi.apache.org
Cc: Ricky Saltzer

Alan,

That retention percentage only applies to the archive of data, which kicks in once a given chunk of content is no longer reachable by active flowfiles in the flow. For it to grow to 100% would typically mean that you have data backlogged in the flow that accounts for that much space. If that is definitely not the case for you, then we need to dig deeper.

If you could take screenshots or share log files and stack dumps from around this time, those would all be helpful. If the screenshots and such are too sensitive, please just share as much as you can.

Thanks
Joe

On Fri, Dec 9, 2016 at 9:55 PM, Alan Jackoway wrote:
> One other note on this: when it came back up, there were tons of messages
> like this:
>
> 2016-12-09 18:36:36,244 INFO [main] o.a.n.c.repository.FileSystemRepository
> Found unknown file /path/to/content_repository/498/1481329796415-87538
> (1071114 bytes) in File System Repository; archiving file
>
> I haven't dug into what that means.
> Alan
>
> On Fri, Dec 9, 2016 at 9:53 PM, Alan Jackoway wrote:
>
>> Hello,
>>
>> We have a node on which the NiFi content repository keeps growing to use
>> 100% of the disk. It's a relatively high-volume process. It chewed through
>> more than 100 GB in the three hours between when we first saw it hit 100%
>> of the disk and when we just cleaned it up again.
>>
>> We are running NiFi 1.1 for this. Our nifi.properties looked like this:
>>
>> nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
>> nifi.content.claim.max.appendable.size=10 MB
>> nifi.content.claim.max.flow.files=100
>> nifi.content.repository.directory.default=./content_repository
>> nifi.content.repository.archive.max.retention.period=12 hours
>> nifi.content.repository.archive.max.usage.percentage=50%
>> nifi.content.repository.archive.enabled=true
>> nifi.content.repository.always.sync=false
>>
>> I just bumped the retention period down to 2 hours, but should the max
>> usage percentage protect us from using 100% of the disk?
>>
>> Unfortunately, we didn't get jstacks on either failure. If it hits 100%
>> again I will make sure to get them.
>>
>> Thanks,
>> Alan
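For anyone hitting the same symptom, one way to test Joe's explanation is to compare how much of the repository is archived versus still live. Below is a minimal sketch, not an official NiFi tool, assuming the FileSystemRepository layout in which archived claims are moved into per-section "archive" subdirectories; the repository path is taken from the nifi.properties quoted above.

#!/usr/bin/env python3
# Sketch: split content_repository usage into live vs. archived bytes.
# Assumes archived claims sit under per-section "archive" directories.
import os

REPO = "./content_repository"  # from nifi.content.repository.directory.default

archived = live = 0
for root, _dirs, files in os.walk(REPO):
    in_archive = "archive" in root.split(os.sep)
    for name in files:
        try:
            size = os.path.getsize(os.path.join(root, name))
        except OSError:
            continue  # a claim may be deleted while we walk
        if in_archive:
            archived += size
        else:
            live += size

total = max(archived + live, 1)
print(f"live: {live / 1e9:.2f} GB ({live / total:.0%})")
print(f"archived: {archived / 1e9:.2f} GB ({archived / total:.0%})")

If the live share dominates, the disk pressure is coming from data still reachable in the flow, which is the backlog case Joe describes; only the archived share is governed by the max.usage.percentage and max.retention.period settings.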