zookeeper-user mailing list archives

From Dan Benediktson <dbenedikt...@twitter.com.INVALID>
Subject Re: Cleaning up giant znode
Date Thu, 25 Aug 2016 20:21:23 GMT
Yes, IIRC, the packet limit is enforced differently on the server side and
the client side: the server imposes it per node that it sends, while the
client imposes it on the whole response message. Since your problem is not
a single big node but a long list of nodes, I expect that if you override
jute.maxbuffer on the client side to a large enough value (using the system
property when launching your client), you will be able to list the nodes
successfully.
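
A minimal sketch of that cleanup in Java, assuming a plain ZooKeeper
client; the connect string, the /big/parent path, and the 64 MB value are
placeholders, not from this thread:

    // Launch the JVM with -Djute.maxbuffer=67108864 (64 MB, illustrative);
    // the client reads the property when its classes initialize, so the
    // command-line flag is the reliable way to raise it.
    import org.apache.zookeeper.ZooKeeper;

    public class CleanupChildren {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> {});
            try {
                // One big listing, then one small delete request per child,
                // each delete well under the packet limit.
                for (String child : zk.getChildren("/big/parent", false)) {
                    zk.delete("/big/parent/" + child, -1); // -1 = any version
                }
            } finally {
                zk.close();
            }
        }
    }

If you use the stock zkCli.sh instead, the same override can be passed
through the CLIENT_JVMFLAGS environment variable.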

Dan

On Thu, Aug 25, 2016 at 1:19 PM, Galo Navarro <anglorvaroa@gmail.com> wrote:

> Have you tried raising jute.maxbuffer [1]? That might work around the
> listing issue and let you delete children individually.
>
> [1]: https://zookeeper.apache.org/doc/r3.4.8/zookeeperAdmin.html#Experimental+Options%2FFeatures
>
> Cheers,
> Galo
>
> On 25 August 2016 at 21:51, Jens Rantil <jens.rantil@tink.se> wrote:
>
> > Hi,
> >
> > A code mistake led us to create a large number of znodes with random
> > names under the same parent node in our ZooKeeper ensemble.
> > Unfortunately, I hit the packet limit when trying to list the nodes, so
> > I can't delete them.
> >
> > Does anyone have any idea how I could clean up these nodes?
> >
> > Thanks,
> > Jens
> > --
> >
> > Jens Rantil
> > Backend Developer @ Tink
> >
> > Tink AB, Wallingatan 5, 111 60 Stockholm, Sweden
> > For urgent matters you can reach me at +46-708-84 18 32.
> >
>
