nifi-dev mailing list archives

From Elli Schwarz <>
Subject Re: Potential 1.11.X showstopper
Date Thu, 06 Feb 2020 18:59:10 GMT
 We seem to be experiencing the same problem. We recently upgraded several of our NiFi instances from
1.9.2 to 1.11.0, and now many of them are failing with "too many open files" errors. Nothing else
changed other than the upgrade, and our data volume is the same as before. The only workaround
we've come up with is to run a script that checks for this condition and restarts
NiFi. Any other ideas?
Thank you!
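For anyone in the same situation, a minimal sketch of such a watchdog script might look like the following. This is an assumption-laden illustration, not NiFi tooling: NIFI_HOME, the pid-file location (check NIFI_PID_DIR in your install), and the THRESHOLD value are all placeholders to adapt.

```shell
#!/bin/sh
# Hypothetical watchdog: restart NiFi when its open-descriptor count nears the
# 'nofile' ulimit. NIFI_HOME, the pid-file path, and THRESHOLD are assumptions;
# check NIFI_PID_DIR in your install for the real pid-file location.
NIFI_HOME=${NIFI_HOME:-/opt/nifi}
THRESHOLD=${THRESHOLD:-9000}   # e.g. 90% of a 10000 'nofile' limit

PID=$(cat "$NIFI_HOME/bin/nifi.pid" 2>/dev/null)
if [ -n "$PID" ] && [ -d "/proc/$PID/fd" ]; then
    # Count the process's open descriptors via Linux /proc.
    OPEN=$(ls "/proc/$PID/fd" | wc -l)
    if [ "$OPEN" -ge "$THRESHOLD" ]; then
        echo "$(date): $OPEN open files for pid $PID, restarting NiFi"
        "$NIFI_HOME/bin/nifi.sh" restart
    fi
fi
```

Run it from cron every minute or so; it does nothing unless the descriptor count crosses the threshold.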

    On Sunday, February 2, 2020, 9:11:34 AM EST, Mike Thomsen <> wrote:
 Without further details, this is what I did to see whether it was something
other than the usual issue of not having enough file handles available,
such as a legitimate case of code somewhere forgetting to close file
objects.

1. Set up an 8-core/32GB VM on AWS with the Amazon AMI.
2. Deployed 1.11.1 RC1.
3. Raised the JVM heap settings to 6/12GB.
4. Disabled flowfile archiving because I only allocated 8GB of storage.
5. Set up a flow that used two GenerateFlowFile instances to generate massive
amounts of garbage data using all available cores. (All queues were configured
to hold 250k flowfiles.)
6. Kicked it off and let it run for about 20 minutes.

No apparent problem with closing and releasing resources here.
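To repeat an experiment like this, one quick way to watch descriptor usage while the flow runs is to sample /proc on a Linux host. This sketch takes any pid (pass the NiFi JVM's pid, e.g. from `pgrep -f org.apache.nifi.NiFi`); the default of the current shell is only so the demo runs anywhere.

```shell
#!/bin/sh
# Sample a process's open-descriptor count over time via Linux /proc.
# Usage: sh fd-sample.sh <pid> [samples]
PID=${1:-$$}        # default to this shell so the sketch is runnable as-is
SAMPLES=${2:-3}
for _ in $(seq 1 "$SAMPLES"); do
    printf '%s %s\n' "$(date +%T)" "$(ls "/proc/$PID/fd" | wc -l)"
    sleep 1
done
```

A steadily climbing count during a constant-rate flow would point at a leak; a flat count matches the "no apparent problem" result above.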

On Sat, Feb 1, 2020 at 8:00 AM Joe Witt <> wrote:

> these are usually very easy to find.
> run lsof -p <pid> and share the results.
> thanks
> On Sat, Feb 1, 2020 at 7:56 AM Mike Thomsen <>
> wrote:
> >
> > No idea if this is valid or not. I asked for clarification to see if
> there
> > might be a specific processor or something that is triggering this.
> >