zookeeper-user mailing list archives

From Mudit Verma <mudit.f2004...@gmail.com>
Subject Re: Write throughput is low !!
Date Thu, 14 Aug 2014 17:18:22 GMT
Thanks. 

We are building a dedicated locking infrastructure, something like Chubby, which should be able to
synchronise requests from on the order of 10,000 clients simultaneously accessing shared backend
storage. Before each access, a client must first acquire a lock.
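For reference, the lock-per-access pattern above is usually built on the classic ZooKeeper lock recipe: each contender creates a sequential (ephemeral) znode under a lock path and proceeds only when it holds the lowest sequence number. The sketch below is a hypothetical in-memory simulation so it runs without a live ensemble; `FakeZk`, `with_lock`, and the node names are illustrative stand-ins, not the real client API (in production one would use the real client or a recipe library such as Curator's InterProcessMutex).

```python
# Hypothetical in-memory sketch of the classic ZooKeeper lock recipe:
# each contender creates a sequential node and waits until it holds the
# lowest sequence number. `FakeZk` is a stand-in for illustration only;
# the real recipe additionally uses ephemeral nodes and watches.
import threading

class FakeZk:
    """Minimal stand-in for a ZooKeeper namespace: sequential node
    creation plus deletion notifications, all serialized by one lock
    (mirroring the single ordered commit log underneath)."""
    def __init__(self):
        self._cv = threading.Condition()
        self._seq = 0
        self._nodes = set()

    def create_sequential(self, prefix):
        # Like create(prefix, SEQUENTIAL): returns a unique ordered name.
        with self._cv:
            self._seq += 1
            name = f"{prefix}{self._seq:010d}"  # zero-padded => lexicographic order
            self._nodes.add(name)
            return name

    def delete(self, name):
        with self._cv:
            self._nodes.discard(name)
            self._cv.notify_all()  # wakes waiters, like a watch firing

    def wait_until_lowest(self, name, prefix):
        # Block until `name` is the lowest surviving lock node -- the
        # recipe's "watch the predecessor" step, collapsed into a wait.
        with self._cv:
            while any(n < name for n in self._nodes if n.startswith(prefix)):
                self._cv.wait()

def with_lock(zk, critical_section):
    me = zk.create_sequential("/locks/lock-")
    try:
        zk.wait_until_lowest(me, "/locks/lock-")
        critical_section()  # runs with mutual exclusion
    finally:
        zk.delete(me)       # releases the lock, waking the next waiter
```

One reason the real recipe copes with many contenders is that each client watches only its immediate predecessor, so a release wakes one waiter rather than the whole herd.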

Thanks
Mudit

On 14 Aug 2014, at 06:09 pm, Jordan Zimmerman <jordan@jordanzimmerman.com> wrote:

> Here is the standard graph of ZK tps:
> 
> http://zookeeper.apache.org/doc/r3.1.2/images/zkperfRW.jpg
> 
> If you are doing 100% writes with 5 ZK instances, 5K tps sounds right. You can get more
> tps if you switch to 3 instances. The real question, though, is why are you doing so many
> writes?
> 
> -Jordan
> 
> 
> From: Mudit Verma <mudit.f2004912@gmail.com>
> Reply: user@zookeeper.apache.org <user@zookeeper.apache.org>
> Date: August 14, 2014 at 10:50:53 AM
> To: user@zookeeper.apache.org <user@zookeeper.apache.org>
> Subject:  Write throughput is low !! 
> 
>> Hi All, 
>> 
>> We, at our company, have a modest Zookeeper lab setup with 5 servers. 
>> 
>> During our lab tests, we found that somehow we cannot go beyond 5K writes/sec in
>> total with roughly 100 clients (all the clients issue writes to different znodes in a tight
>> loop). Can someone point me to the official numbers? Is this a good number for ZooKeeper, or
>> can it scale much beyond that?
>> 
>> I understand that parallelization by writing to different znodes will not increase
>> the throughput, as there is only one log underneath.
>> 
>> PS: The lab is hosted on a high-speed network (1 Gbit/s), and network latencies are
>> far too low to account for the numbers we have.
>> 
>> 
>> Thanks 
>> Mudit
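The single-log point in the quoted message can be illustrated with a toy, hypothetical event simulation (not ZooKeeper code; the `simulate` function and its parameters are made up for this sketch): when every write must commit through one serialized log, achieved throughput is fixed by per-commit latency and does not improve with more clients or more znodes.

```python
# Hypothetical model of a single serialized commit log (as on a ZooKeeper
# leader): writes from all clients queue behind one another, so adding
# clients cannot raise throughput past 1/commit_latency.
import heapq

def simulate(n_clients, n_writes_each, commit_latency):
    pending = [(0.0, c) for c in range(n_clients)]  # (ready time, client id)
    heapq.heapify(pending)
    writes_left = [n_writes_each] * n_clients
    log_free_at = 0.0
    while pending:
        t, c = heapq.heappop(pending)
        start = max(t, log_free_at)    # wait for the log to become free
        done = start + commit_latency  # one commit at a time
        log_free_at = done
        writes_left[c] -= 1
        if writes_left[c]:
            heapq.heappush(pending, (done, c))
    total_writes = n_clients * n_writes_each
    return total_writes / log_free_at  # achieved writes/sec

# With a ~0.2 ms commit, 1 client and 100 clients both land near 5K tps:
print(simulate(1, 1000, 0.0002), simulate(100, 10, 0.0002))
```

Under this model, raising throughput means lowering per-commit cost (fewer quorum members, faster disks, batching), not adding clients, which matches Jordan's suggestion to try 3 instances instead of 5.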

