cassandra-user mailing list archives

From Ajay <>
Subject Re: Optimal Batch size (Unlogged) for Java driver
Date Mon, 02 Mar 2015 15:55:16 GMT
I have a column family with 15 columns: a timestamp, a timeuuid, a few text
fields, and the rest int fields. If I calculate the size of each column name
and its value, and divide 5 KB (the recommended max batch size) by that
total, I get 12. Is that correct? Am I missing something?
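The arithmetic above can be sketched as a rough client-side estimate: sum the UTF-8 bytes of each column name plus an assumed serialized size per value, then divide the 5 KB threshold by that row size. The class name, column names, and per-type byte sizes below are illustrative assumptions, not taken from the thread.

```java
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class RowSizeEstimate {
    // Rough, client-side estimate: UTF-8 bytes of each column name
    // plus an assumed serialized size for its value.
    static int estimateRowBytes(Map<String, Integer> columnValueSizes) {
        int total = 0;
        for (Map.Entry<String, Integer> e : columnValueSizes.entrySet()) {
            total += e.getKey().getBytes(StandardCharsets.UTF_8).length; // column name
            total += e.getValue();                                       // value size
        }
        return total;
    }

    public static void main(String[] args) {
        // Hypothetical 15-column row: timestamp (8 bytes), timeuuid (16 bytes),
        // 3 text fields (~50 bytes each), 10 int fields (4 bytes each).
        Map<String, Integer> row = new LinkedHashMap<>();
        row.put("created_at", 8);
        row.put("event_id", 16);
        for (int i = 1; i <= 3; i++) row.put("text_" + i, 50);
        for (int i = 1; i <= 10; i++) row.put("int_" + i, 4);

        int rowBytes = estimateRowBytes(row);
        int maxBatchBytes = 5 * 1024;                 // 5 KB warn threshold
        int rowsPerBatch = maxBatchBytes / rowBytes;  // integer division
        System.out.println(rowBytes + " bytes/row, " + rowsPerBatch + " rows/batch");
    }
}
```

Note this ignores per-cell overhead (timestamps, TTLs, protocol framing), so the real serialized batch will be somewhat larger than the estimate.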

On 02-Mar-2015 12:13 pm, "Ankush Goyal" <> wrote:

> Hi Ajay,
> I would suggest looking at the approximate size of the individual elements
> in the batch, and computing the max size (chunk size) from that.
> It's not really a straightforward calculation, so I would further suggest
> making that chunk size a runtime parameter that you can tweak and play
> around with until you reach a stable state.
> On Sunday, March 1, 2015 at 10:06:55 PM UTC-8, Ajay Garga wrote:
>> Hi,
>> I am looking for a way to compute the optimal batch size on the client
>> side, similar to the server-side issue linked below (this needs to be
>> generic, as we expose REST APIs for Cassandra and the column family and
>> data differ for each request).
>> <>
>> How do we compute (approximately, using ColumnDefinitions or ColumnMetadata)
>> the size of a row of a column family from the client side using the
>> Cassandra Java driver?
>> Thanks
>> Ajay
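The chunking approach suggested above (split statements into batches of a runtime-tunable size) can be sketched as follows. The class name, the `batch.chunk.size` system property, and the default of 12 are illustrative assumptions; in a real application each chunk would be added to an unlogged `BatchStatement` and executed with the driver session.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchChunker {
    // Split items into fixed-size chunks; chunkSize is meant to be a
    // runtime-tunable parameter (e.g. read from config or a system property).
    static <T> List<List<T>> chunk(List<T> items, int chunkSize) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < items.size(); i += chunkSize) {
            chunks.add(items.subList(i, Math.min(i + chunkSize, items.size())));
        }
        return chunks;
    }

    public static void main(String[] args) {
        // Hypothetical: 100 prepared statements, tunable chunk size defaulting to 12.
        int chunkSize = Integer.getInteger("batch.chunk.size", 12);
        List<Integer> stmts = new ArrayList<>();
        for (int i = 0; i < 100; i++) stmts.add(i);
        List<List<Integer>> batches = chunk(stmts, chunkSize);
        System.out.println(batches.size() + " unlogged batches");
        // Each chunk would then go into a BatchStatement of type UNLOGGED
        // and be executed against the session.
    }
}
```

Making the chunk size a system property (or any external config) lets you tweak it in production without redeploying, which is the "tweak until stable" loop Ankush describes.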
