hadoop-zookeeper-user mailing list archives

From Ted Dunning <ted.dunn...@gmail.com>
Subject Re: the scale of the data in the node
Date Wed, 14 Apr 2010 05:15:30 GMT
Writing a large amount of data in really small pieces is going to be slower
than writing it in larger pieces.

This might reverse at very large sizes.

But you should test this if you really need to know the correct answer.
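A minimal sketch of such a test, assuming the standard ZooKeeper Java client API
(the connect string, znode paths, and sizes below are only placeholders, and this
single-client, sequential version only illustrates the per-request overhead rather
than the multi-client setup described in the question):

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher.Event.KeeperState;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class WriteSizeBenchmark {
    public static void main(String[] args) throws Exception {
        // Wait until the session is actually connected before timing anything.
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> {
            if (event.getState() == KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();

        // Stay a little under the default ~1 MB znode data limit (jute.maxbuffer).
        byte[] large = new byte[1_000_000];
        byte[] chunk = new byte[128];
        int chunks = large.length / chunk.length;

        // Case 1: one large write.
        long t0 = System.nanoTime();
        zk.create("/bench-large", large, Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        long t1 = System.nanoTime();
        System.out.printf("1 x %d-byte write: %.1f ms%n", large.length, (t1 - t0) / 1e6);

        // Case 2: the same amount of data as many 128-byte znodes.
        zk.create("/bench-small", new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        long t2 = System.nanoTime();
        for (int i = 0; i < chunks; i++) {
            zk.create("/bench-small/chunk-" + i, chunk,
                      Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        }
        long t3 = System.nanoTime();
        System.out.printf("%d x 128-byte writes: %.1f ms%n", chunks, (t3 - t2) / 1e6);

        zk.close();
    }
}

The small-chunk case pays the fixed cost of a request round trip and a quorum
commit for every 128-byte piece, which is why it is expected to be slower.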

On Tue, Apr 13, 2010 at 7:22 PM, li li <liqiyuan312@gmail.com> wrote:

> Dear developer,
>     We are doing research using ZooKeeper in our experiments. At the moment I
> am studying ZooKeeper's performance.
>     We need to know whether the size of the data written to a znode influences
> the speed of write operations.
>     For example, if we have 1 MB of data to write to a znode, which of the two
> cases is better? Case 1: we write the 1 MB of data in a single operation. Case
> 2: we break the 1 MB of data into several sections of 128 bytes each and write
> them from several clients. Do these two cases perform differently for writes?
> Which one is better?
>     Thank you for reading; I'm looking forward to your reply.
>     With best wishes!
>
> Lily
>
