zookeeper-user mailing list archives

From David Rosenstrauch <dar...@darose.net>
Subject Re: counter with zookeeper
Date Thu, 02 Dec 2010 17:21:24 GMT
I don't, for several reasons:

a) We request IDs frequently enough that old ones do eventually get 
used up.

b) Even with a heavily fragmented ID space, the largest we've seen the 
byte contents of that ZK node grow to is 5-6 KB.  So no real 
worries about either storage space or network I/O when reading/writing 
to/from the ZK node.
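ZooKeeper has no atomic-increment primitive, so a counter node like the one 
described is typically updated with an optimistic read-modify-write loop: read 
the node's data and version, write the new value back conditioned on that 
version, and retry if another client got there first. Here is a minimal sketch 
of that loop; `FakeZkNode` and `increment_counter` are hypothetical names, and 
the node is simulated in memory so the sketch runs without a ZK server (a real 
client would use get-data and a version-conditioned set-data, which raises a 
bad-version error on conflict):

```python
class FakeZkNode:
    """In-memory stand-in for a znode: byte payload plus a version number.
    A conditional set fails if the caller's expected version is stale,
    mimicking ZooKeeper's version-checked setData."""

    def __init__(self, data: bytes):
        self.data = data
        self.version = 0

    def get(self):
        return self.data, self.version

    def set(self, data: bytes, expected_version: int) -> bool:
        # A real client raises a BadVersion error; here we just report failure.
        if expected_version != self.version:
            return False
        self.data = data
        self.version += 1
        return True


def increment_counter(node: FakeZkNode, delta: int) -> int:
    """Optimistic read-modify-write: loop until our conditional set wins."""
    while True:
        raw, version = node.get()
        value = int(raw)
        if node.set(str(value + delta).encode(), version):
            return value + delta


node = FakeZkNode(b"0")
print(increment_counter(node, 1000))  # -> 1000: first chunk reserved
print(increment_counter(node, 1000))  # -> 2000: second chunk
```

Bumping by the chunk size (1000 here) rather than 1 is what amortizes the ZK 
round trip across many locally handed-out IDs.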


On 12/02/2010 10:58 AM, Claudio Martella wrote:
> I like Ted's idea too.
> David, how do you handle re-compaction of your fragmented ID space?
> On 12/2/10 4:55 PM, David Rosenstrauch wrote:
>> On 12/02/2010 10:47 AM, Ted Dunning wrote:
>>> I would recommend that you increment the counter by 100 or 1000 and then
>>> increment a local counter over the implied range.  This will drive the
>>> amortized ZK overhead down to tens of microseconds which should be
>>> good for
>>> almost any application. Your final ids will still be almost entirely
>>> contiguous.  You could implement a fancier counter in ZK that remembers
>>> returned chunks for re-use to get perfect contiguity if you really
>>> wanted
>>> that.
>> This is what our library does.  You request chunks of, say, 1000 IDs,
>> and then push back any remaining unused IDs in the chunk you took.
>> DR
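The scheme in the thread above (Ted's bump-by-N counter plus pushing back 
unused IDs, as David's library does) can be sketched roughly as follows. 
`Backend`, `IdAllocator`, and their methods are hypothetical names, and the 
backend is an in-memory stand-in for the ZK state (the counter plus a free 
list of returned ranges) so the sketch runs standalone:

```python
class Backend:
    """In-memory stand-in for the shared ZK state: a high-water counter
    plus a free list of (start, end) ranges pushed back by clients. In a
    real deployment both would live in a ZK node and be updated atomically."""

    def __init__(self):
        self.next_id = 0
        self.free_ranges = []  # list of (start, end_exclusive)

    def take_chunk(self, size):
        # Prefer a previously returned range, to keep the ID space compact.
        if self.free_ranges:
            return self.free_ranges.pop()
        start = self.next_id
        self.next_id += size
        return (start, start + size)

    def push_back(self, start, end):
        if start < end:
            self.free_ranges.append((start, end))


class IdAllocator:
    """Hands out IDs one at a time from a locally held chunk, touching the
    backend only once per chunk_size IDs (amortizing the ZK round trip)."""

    def __init__(self, backend, chunk_size=1000):
        self.backend = backend
        self.chunk_size = chunk_size
        self.cur = self.end = 0

    def next_id(self):
        if self.cur == self.end:
            self.cur, self.end = self.backend.take_chunk(self.chunk_size)
        nid = self.cur
        self.cur += 1
        return nid

    def close(self):
        # Return whatever we didn't use, as described in the thread.
        self.backend.push_back(self.cur, self.end)
        self.cur = self.end


backend = Backend()
a = IdAllocator(backend)
ids = [a.next_id() for _ in range(3)]  # -> [0, 1, 2]
a.close()                              # pushes back [3, 1000)
b = IdAllocator(backend)
print(b.next_id())                     # -> 3: reuses the returned range
```

Without the push-back step the IDs are still unique, just fragmented, which is 
the trade-off David describes accepting in practice.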
