zookeeper-user mailing list archives

From Sam Tunnicliffe <...@beobal.com>
Subject Re: counter with zookeeper
Date Thu, 02 Dec 2010 16:14:36 GMT
Hi Ted,

That's right, our key space is partitioned, so we have a slightly easier
problem than, say, Twitter generating unique tweet IDs. In that sort of
scenario, and if incrementing by exactly 1 weren't a requirement, I'd
definitely go with your solution. I'll update the README to make it a bit
clearer.

Cheers,
Sam

On 2 December 2010 15:59, Ted Dunning <ted.dunning@gmail.com> wrote:

> Shame on me for not reading more carefully and stating the obvious.
>
> In my (slight) defense, the README talks about making sure that the
> counter is incremented by exactly 1.  I took that statement and ran with
> it.  A slight elaboration there might have helped me realize that your
> implementation was considerably more sophisticated.
>
> On Thu, Dec 2, 2010 at 7:55 AM, David Rosenstrauch <darose@darose.net> wrote:
>
> > On 12/02/2010 10:47 AM, Ted Dunning wrote:
> >
> >> I would recommend that you increment the counter by 100 or 1000 and then
> >> increment a local counter over the implied range.  This will drive the
> >> amortized ZK overhead down to tens of microseconds, which should be good
> >> for almost any application.  Your final ids will still be almost entirely
> >> contiguous.  You could implement a fancier counter in ZK that remembers
> >> returned chunks for re-use to get perfect contiguity if you really wanted
> >> that.
> >>
> >
> > This is what our library does.  You request chunks of, say, 1000 IDs, and
> > then push back any remaining unused IDs in the chunk you took.
> >
> > DR
> >
>
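A minimal sketch of the chunk-reservation approach discussed above, assuming a
single counter znode whose numeric value is bumped with a version-checked
setData; the znode path, chunk size, and class name are illustrative only and
not taken from the thread:

import java.nio.charset.StandardCharsets;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

// Hands out ids from a locally reserved chunk, going to ZooKeeper only once
// per CHUNK allocations. Assumes the counter znode already exists and holds
// a decimal long (e.g. "0").
public class ChunkedIdAllocator {
    private static final String COUNTER_PATH = "/id-counter"; // hypothetical znode
    private static final int CHUNK = 1000;

    private final ZooKeeper zk;
    private long next;   // next id to hand out locally
    private long limit;  // exclusive upper bound of the reserved chunk

    public ChunkedIdAllocator(ZooKeeper zk) {
        this.zk = zk;
    }

    // Returns the next id, reserving a fresh chunk from ZooKeeper when the
    // local range is exhausted.
    public synchronized long nextId() throws KeeperException, InterruptedException {
        if (next >= limit) {
            reserveChunk();
        }
        return next++;
    }

    // Reserves CHUNK ids by reading the counter and writing back the new
    // value with the version we read; a BadVersionException means another
    // client won the race, so we re-read and retry.
    private void reserveChunk() throws KeeperException, InterruptedException {
        while (true) {
            Stat stat = new Stat();
            byte[] data = zk.getData(COUNTER_PATH, false, stat);
            long current = Long.parseLong(new String(data, StandardCharsets.UTF_8));
            long reservedTo = current + CHUNK;
            try {
                zk.setData(COUNTER_PATH,
                           Long.toString(reservedTo).getBytes(StandardCharsets.UTF_8),
                           stat.getVersion());
                next = current;
                limit = reservedTo;
                return;
            } catch (KeeperException.BadVersionException e) {
                // Lost the race for this chunk; loop and try again.
            }
        }
    }
}

The BadVersionException retry loop is the usual optimistic-concurrency pattern
for a ZooKeeper counter; with chunks of 1000, one round trip covers 1000 local
allocations, which is where the amortized overhead of tens of microseconds
mentioned above comes from.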
