bookkeeper-user mailing list archives

From Maciej Smoleński <jezd...@gmail.com>
Subject Re: Low write bandwidth
Date Wed, 10 Jun 2015 16:26:25 GMT
I ran the test and measured some statistics for the TCP packets, see below.
The stats below cover an extended period: 4 seconds before the test and 2 seconds after.

I tested with 3000 requests, each 100 KB in size (quorum size is 2).
The performance is actually 310 entries/sec.
It was 250 entries/sec before, as there was some unnecessary logging (the
log4j config was not on the classpath) - detected later with a profiler.
The output shows: timestamp avgPacketSizeInBytes tcpPacketsNumber
1433952597 52.00 2
1433952598 40.46 0
1433952599 42.21 0
1433952600 46.00 0
1433952601 8238.27 3103
1433952602 8739.31 7799
1433952603 8441.14 8173
1433952604 8820.88 7761
1433952605 8814.23 8232
1433952606 8790.77 8257
1433952607 8602.83 8649
1433952608 8849.24 8156
1433952609 8710.08 9000
1433952610 8809.13 8839
1433952611 8089.91 418
1433952612 46.00 0
1433952613 46.00 0
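For reference, a per-second aggregation like the table above can be produced from raw (timestamp, packet-size) samples, e.g. extracted from a tcpdump capture. A minimal sketch (the sample values below are made up for illustration, not taken from the actual capture):

```python
from collections import defaultdict

def per_second_stats(samples):
    """Aggregate (timestamp, packet_size_bytes) samples into
    per-second (avg_packet_size, packet_count) rows."""
    buckets = defaultdict(list)
    for ts, size in samples:
        buckets[int(ts)].append(size)
    return {
        second: (sum(sizes) / len(sizes), len(sizes))
        for second, sizes in sorted(buckets.items())
    }

# Hypothetical samples: two packets within second 1433952601
samples = [(1433952601.1, 8000), (1433952601.7, 9000)]
print(per_second_stats(samples))  # {1433952601: (8500.0, 2)}
```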

On Wed, Jun 10, 2015 at 4:23 PM, Robin Dhamankar <robindh@apache.org> wrote:

> Sorry I missed that you also benchmarked this with ramfs. So you don't
> need the data to be durable, I presume?
>
> Can you measure how many TCP packets are being transmitted per entry? We
> can potentially get some gains by tuning those settings.
>
> Are you saying you have only one request outstanding at a time and the
> previous request has to be acknowledged before the next request can be sent?
>
> If that is the case, given that there is a durable write to the journal
> required before an add is acknowledged by the bookie, there isn't much more
> room to improve beyond the 250 requests per second you are currently getting.
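With a single outstanding request, throughput is bounded by the per-add latency rather than by bandwidth. A back-of-the-envelope check with the numbers from this thread (the ~4 ms figure is implied by the observed rate, not directly measured):

```python
entry_size_mb = 0.1          # 100 KB entries
throughput_entries = 250     # observed entries/s

# With one request in flight, each entry costs one full round trip
# (network + durable journal write):
latency_s = 1 / throughput_entries
print(latency_s * 1000)      # ~4.0 ms per add

# Resulting data rate, far below the ~400 MB/s link:
print(throughput_entries * entry_size_mb)  # ~25 MB/s
```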
> On Jun 10, 2015 7:00 AM, "Maciej Smoleński" <jezdnia@gmail.com> wrote:
>
>> Thank you for your comment.
>>
>> Unfortunately, these options will not help in my case.
>> In my case the BookKeeper client receives the next request only when the
>> previous request is confirmed.
>> It is also expected that there will be only a single stream of such
>> requests.
>>
>> I would like to understand how to achieve performance equal to the
>> network bandwidth.
>>
>>
>>
>> On Wed, Jun 10, 2015 at 2:27 PM, Flavio Junqueira <fpjunqueira@yahoo.com>
>> wrote:
>>
>>> BK currently isn't wired to stream bytes to a ledger, so writing
>>> large entries synchronously as you're doing is unlikely to get the best
>>> performance out of it. A couple of things you could try to get higher
>>> performance are to write asynchronously and to have multiple clients writing.
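The effect of writing asynchronously can be illustrated with a simple pipelining model: with W adds outstanding at once, throughput scales roughly as W/latency until the link saturates. This is a rough model using the thread's numbers, not a BookKeeper benchmark:

```python
def pipelined_throughput_mbs(window, latency_s, entry_mb, link_mbs):
    """Rough model: `window` adds in flight, each taking `latency_s`
    to be acknowledged; throughput is capped by link bandwidth."""
    return min(window / latency_s * entry_mb, link_mbs)

# ~4 ms per add, 100 KB entries, ~400 MB/s link (numbers from this thread)
for w in (1, 4, 16):
    print(w, pipelined_throughput_mbs(w, 0.004, 0.1, 400.0))
# window 1  -> ~25 MB/s (the synchronous case)
# window 16 -> link-limited
```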
>>>
>>> -Flavio
>>>
>>>
>>>
>>>
>>>   On Wednesday, June 10, 2015 12:08 PM, Maciej Smoleński <
>>> jezdnia@gmail.com> wrote:
>>>
>>>
>>>
>>> Hello,
>>>
>>> I'm testing BK performance when appending 100K entries synchronously
>>> from 1 thread (using one ledger).
>>> The performance I get is 250 entries/s.
>>>
>>> What performance should I expect ?
>>>
>>> My setup:
>>>
>>> Ledger:
>>> Ensemble size: 3
>>> Quorum size: 2
>>>
>>> 1 client machine and 3 server machines.
>>>
>>> Network:
>>> Each machine uses bonding: 4 x 1000 Mbps
>>> manually measured between client and server: 400 MB/s
>>>
>>> Disk:
>>> I tested two configurations:
>>> dedicated disks with ext3 (different for zookeeper, journal, data,
>>> index, log)
>>> dedicated ramfs partitions (different for zookeeper, journal, data,
>>> index, log)
>>>
>>> In both configurations the performance is the same: 250 entries/s
>>> (25 MB/s).
>>> I confirmed this with measured network bandwidth:
>>> - on client 50 MB/s
>>> - on server 17 MB/s
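The measured bandwidth figures are consistent with the replication settings: with a write quorum of 2 the client sends each 100 KB entry twice, and with an ensemble of 3 those writes are striped across the three bookies. A sanity check on the arithmetic (assuming even striping across the ensemble):

```python
entries_per_s = 250
entry_mb = 0.1       # 100 KB
write_quorum = 2
ensemble = 3

client_out = entries_per_s * entry_mb * write_quorum   # data leaving the client
per_server_in = client_out / ensemble                  # striped across bookies

print(client_out)                # 50.0 MB/s, matches the measured client rate
print(round(per_server_in, 1))   # 16.7 MB/s, close to the measured 17 MB/s
```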
>>>
>>> I ran Java with a profiler enabled on the BK client and BK server but
>>> didn't find anything unexpected (though I don't know BookKeeper internals).
>>>
>>> I tested it with two BookKeeper versions:
>>> - 4.3.0
>>> - 4.2.2
>>> The results were the same with both BookKeeper versions.
>>>
>>> What should be changed/checked to get better performance ?
>>>
>>> Kind regards,
>>> Maciej
>>>
