hadoop-zookeeper-user mailing list archives

From li li <liqiyuan...@gmail.com>
Subject the scale of the data in the node
Date Wed, 14 Apr 2010 02:22:09 GMT
Dear developer,
     We are doing research using ZooKeeper in our experiments, and I am currently studying ZooKeeper's performance.
    We need to know whether the size of the data stored in a znode influences the speed of write operations.
    For example, suppose we have 1 MB of data to write to a znode. Which of the following two cases is better? No. 1: we write the whole 1 MB of data in a single operation. No. 2: we break the 1 MB of data into several sections of 128 bytes each, and then write the sections using several clients. Do these two cases perform differently for writes? Which one is better?
    Thanks for reading; I'm looking forward to your reply.
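For scale, case No. 2 implies on the order of 8192 separate writes. A minimal sketch of the split (pure arithmetic only; the `write_chunk` name is a hypothetical stand-in for whatever ZooKeeper write call each client would issue):

```python
CHUNK_SIZE = 128                      # bytes per section, as in case No. 2
data = b"x" * (1024 * 1024)           # the 1 MB payload from the example

# Break the payload into 128-byte sections.
chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

print(len(chunks))                    # 1 MiB / 128 B = 8192 sections

def write_chunk(index, payload):
    """Hypothetical placeholder: in a real test, each client would write
    its section to a znode here (e.g. via a ZooKeeper client library)."""
    pass

for i, c in enumerate(chunks):
    write_chunk(i, c)
```

Each of those 8192 writes would carry its own request overhead, which is the trade-off the question is asking about.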
    With best wishes!

Lily
