manifoldcf-dev mailing list archives

From "Aeham Abushwashi (JIRA)" <>
Subject [jira] [Commented] (CONNECTORS-1123) ZK node leak
Date Wed, 17 Dec 2014 17:40:13 GMT


Aeham Abushwashi commented on CONNECTORS-1123:

Hi Karl,

Do these persistent nodes get expired at some point or will they live in ZK forever? If I
ingest 100 million docs through manifold, will I end up with 100 million znodes?

Currently, the impact of the high node count on my environment is that I can no longer perform
certain admin operations that require enumerating and processing groups of nodes without tinkering
with the client and pushing certain configurable limits higher. In some cases, even the tinkering
doesn't help. For example, I have a utility that enumerates and deletes all nodes with a certain
prefix. I use it to clear the state of a previous manifold run before a new cluster is deployed.
The utility fails with errors when I run it against the cluster in its current state.
Perhaps this would become a non-issue with the hierarchical manifold znode structure?

My ZK ensemble broke down a few weeks ago and I had to reset it, along with some of the other
applications that use it. I don't know for certain whether this was due to what we're discussing,
but I did conclude that the size of the data in the ensemble at the time was partly to blame.

Some of these downsides of very high node counts, and others, are described here by one of the
ZK project committers.


> ZK node leak
> ------------
>                 Key: CONNECTORS-1123
>                 URL:
>             Project: ManifoldCF
>          Issue Type: Bug
>          Components: Framework core
>    Affects Versions: ManifoldCF 1.8
>         Environment: 4-node manifold cluster, 3-node ZK ensemble for coordination and
> configuration management
>            Reporter: Aeham Abushwashi
> Looking at the stats of the zookeeper cluster, I was struck by the very high node count
> reported by the ZK stat command, which shows just over 3.84 MILLION nodes. The
> number keeps rising as long as the manifold nodes are running. Stopping manifold does NOT
> reduce the number significantly, nor does restarting the ZK ensemble.
> The ZK ensemble was initialised around 20 days ago. Manifold has been running on and
> off on this cluster since that time.
> The flat nature of the manifold node structure in ZK (at least in the dev_1x branch)
> makes it difficult to identify node names, but after tweaking the jute.maxbuffer parameter
> on the client, I was able to get a list of all nodes. There's a huge number of nodes with
> the name pattern org.apache.manifoldcf.locks-<Output Connection>:<Hash>.
> I could see this node name pattern used in IncrementalIngester#documentDeleteMultiple
> and IncrementalIngester#documentRemoveMultiple. However, I'm not expecting any deletions in
> the tests I've been running recently - perhaps this is part of the duplicate deletion logic
> which came up in an email thread earlier today? Or maybe there's another code path I missed
> entirely which creates nodes with names like the above.
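As an aside, a quick way to make sense of a dump like the one described in the quoted report is to tally the lock nodes per output connection. The sketch below is an assumption-laden illustration (the connection names in the usage example are invented), matching the org.apache.manifoldcf.locks-<Output Connection>:<Hash> pattern quoted above:

```python
# Hypothetical sketch: tally lock znodes per output connection, given a flat
# child list whose entries follow the reported pattern
# org.apache.manifoldcf.locks-<Output Connection>:<Hash>.
import re
from collections import Counter

LOCK_RE = re.compile(r"^org\.apache\.manifoldcf\.locks-(?P<conn>[^:]+):(?P<hash>.+)$")

def count_locks_by_connection(children):
    """Return a Counter mapping each output connection name to its lock-node count."""
    counts = Counter()
    for name in children:
        m = LOCK_RE.match(name)
        if m:
            counts[m.group("conn")] += 1
    return counts
```

A per-connection breakdown like this would show quickly whether the leak is tied to one output connection or spread across all of them.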

This message was sent by Atlassian JIRA
