nifi-dev mailing list archives

From Bryan Bende <>
Subject Re: nifi-jms-processor not closing connection properly in "primary node only" execution mode
Date Fri, 26 May 2017 17:12:09 GMT

Sorry for the delayed response...

I think the issue is that currently the processor is started/scheduled
on all nodes, regardless of whether the node is primary or not, but
only the instance on the primary node is triggered to run.

So when the primary node changes from node1 to node2, nothing stops
the instance on node1, which means the @OnStopped method on node1
never executes, and the connection is left open.

I believe there is a JIRA somewhere to enhance the framework to detect
the change of primary nodes and then unschedule (stop) the processor
on the other nodes.

In the meantime, a change could be made to the JMS processor to lazily
initialize the connection and to use a method annotated with
@OnPrimaryNodeStateChange so that the processor could know if it is
the primary node or not.

If that method got called and the instance detected it was no longer
the primary node, then it could close the connection even though the
processor would still be considered scheduled.
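A minimal, self-contained sketch of that pattern (plain Java, no NiFi API on the classpath; the class and method names are hypothetical stand-ins, with the real NiFi annotations noted in comments, and a plain Object standing in for the JMS Connection):

```java
// Sketch: lazily open the connection on first trigger, and close it as
// soon as this node learns it is no longer primary, even though the
// processor itself stays "scheduled" on every node.
public class PrimaryNodeAwareConsumer {

    private Object connection;          // stand-in for a javax.jms.Connection
    private volatile boolean primary;

    // would be the onTrigger(...) method of a real NiFi processor
    public void onTrigger() {
        if (!primary) {
            return;                     // only the primary node consumes
        }
        if (connection == null) {
            connection = openConnection();  // lazy initialization
        }
        // ... consume messages using 'connection' ...
    }

    // would be annotated @OnPrimaryNodeStateChange in a real processor
    public void onPrimaryNodeStateChange(boolean isNowPrimary) {
        primary = isNowPrimary;
        if (!isNowPrimary && connection != null) {
            closeConnection(connection);    // releases the broker-side consumer
            connection = null;
        }
    }

    public boolean hasOpenConnection() {
        return connection != null;
    }

    private Object openConnection() {
        return new Object();            // real code: factory.createConnection()
    }

    private void closeConnection(Object c) {
        // real code: ((Connection) c).close();
    }
}
```

With this shape, a former primary node drops its broker-side consumer immediately on the state change instead of waiting for an @OnStopped that the framework never invokes.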

Feel free to create a JIRA to track this issue.



On Wed, May 24, 2017 at 2:33 AM, Dominik Benz <> wrote:
> Hi,
> we're using NiFi to ingest data from a JMS source with relatively high
> traffic (40-60 msg per sec; see also my previous post).
> Our cluster setup is:
> * Nifi 1.1.1
> * 3 Nifi nodes (nifi01, nifi02, nifi03)
> * JMS source is TIBCO
> We are consuming non-durably from a JMS topic; for this reason, we use
> the "primary node only" execution mode so that a message is not consumed
> several times (once per NiFi node). Screenshots of our configuration are
> attached. This works well in principle - until the primary node changes.
> Then we reproducibly see the following pattern (node names are exemplary):
> 1) nifi01 is the current primary node, holds a connection to TIBCO, and
> successfully consumes messages
> 2) primary node switches from nifi01 to nifi02
> 3) nifi02 opens a connection to TIBCO and consumes messages successfully
> 4) nifi01 keeps the connection to TIBCO open, but stops consuming
> The last point causes trouble: TIBCO then starts buffering messages
> for the consumer nifi01, which no longer consumes them -> the buffer load
> increases and puts TIBCO under high pressure. My questions are:
> a) are we missing anything in our approach / could we improve our
> configuration (screenshots below)?
> b) I would have assumed that NiFi internally stops a particular processor in
> "primary node only" mode when the primary node switches; having a look at the
> nifi-jms-connector code,
> it seems to me that the @OnStopped close() method should do the right thing.
> Is that correct?
> I'm grateful for any feedback / pointers on how to solve this issue.
> Many thanks,
>   Dominik
> nifi_jms_connectionfactory.png
> nifi_jms_properties.png
> nifi_jms_scheduling.png
