cassandra-user mailing list archives

From adrien ruffie <>
Subject RE: losing data when saving data from java
Date Sat, 19 Oct 2019 06:17:17 GMT
Thanks Jeff 🙂

But if you save data too fast with the Cassandra repository, and Cassandra cannot keep up
and inserts more slowly, what is the behavior? Does Cassandra store the overflow in an
additional buffer? Can no data be lost on Cassandra's side?

Thanks a lot.

From: Jeff Jirsa <>
Sent: Saturday, October 19, 2019 00:41
To: cassandra <>
Subject: Re: losing data when saving data from java

There is no buffer in cassandra that is known to (or suspected to) lose acknowledged writes
if it's overwhelmed.

There may be a client bug where you send so many async writes that they overwhelm a bounded
queue, or otherwise get dropped or timeout, but those would be client bugs, and I'm not sure
this list can help you with them.
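To illustrate Jeff's point: if a client fires async writes without bounding how many are in flight, or without waiting for every acknowledgement, failures and drops go unnoticed. Below is a minimal JDK-only sketch of the safe pattern (bound concurrency with a semaphore, then await every future). No Cassandra driver is involved; `writeAsync` is a made-up stand-in for the real async write call.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedWrites {
    // Simulated async write; a real driver call would go here instead.
    static CompletableFuture<Void> writeAsync(AtomicInteger stored, ExecutorService pool) {
        return CompletableFuture.runAsync(stored::incrementAndGet, pool);
    }

    // Issue n writes with at most 32 in flight, then wait for every acknowledgement.
    static int writeAll(int n) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Semaphore inFlight = new Semaphore(32);
        AtomicInteger stored = new AtomicInteger();
        List<CompletableFuture<Void>> pending = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            inFlight.acquire();                        // back-pressure: wait for a free slot
            pending.add(writeAsync(stored, pool)
                    .whenComplete((v, err) -> inFlight.release()));
        }
        // Only declare success once every pending write has completed.
        CompletableFuture.allOf(pending.toArray(new CompletableFuture[0])).join();
        pool.shutdown();
        return stored.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(writeAll(10_000)); // prints 10000
    }
}
```

If the `acquire()` / `allOf(...).join()` steps are skipped, the process can exit with writes still queued or failed, which looks exactly like "lost" rows.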

On Fri, Oct 18, 2019 at 3:16 PM adrien ruffie <> wrote:
Hello all,

I have a Cassandra table into which I quickly insert Java entities,
about 15,000 entries per minute. But at the end of the process I only
have, for example, 199,921 entries instead of 312,212.
If I truncate the table and relaunch the process, I get 199,354 or
189,012 entries another time ... never the same number of saved entries ...

Several coworkers told me they had heard about a buffer which can sometimes
be overwhelmed, losing several entities queued for insertion ...
right?
Because I don't understand why these lost insertions happen ...
My Java code is very simple, like below:

myEntitiesList.forEach(myEntity -> {
  try {;
  } catch (Exception e) {
    // exception swallowed
  }
});
And the repository is a:
public interface MyEntityRepository extends ReactiveCassandraRepository<MyEntity, String> { }
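One likely cause of this symptom, assuming `save` here is the reactive repository's `save`: a `ReactiveCassandraRepository.save` returns a `Mono` that performs no I/O until something subscribes to it, so calling it inside `forEach` and discarding the result silently drops the write. Below is a JDK-only model of that cold-publisher behavior; `ColdMono`, `insertAll`, and the toy `save` are made-up stand-ins, not Spring or Reactor types.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class ColdMonoDemo {
    // Toy stand-in for a reactive Mono: the work runs only when subscribe() is called.
    record ColdMono(Runnable work) {
        void subscribe() {; }
    }

    static final AtomicInteger stored = new AtomicInteger();

    // Stand-in for repository.save(entity): builds the publisher, performs no I/O yet.
    static ColdMono save(String entity) {
        return new ColdMono(stored::incrementAndGet);
    }

    static int insertAll(List<String> entities, boolean subscribe) {
        stored.set(0);
        entities.forEach(e -> {
            ColdMono mono = save(e);         // looks like a write...
            if (subscribe) mono.subscribe(); // ...but nothing happens without this
        });
        return stored.get();
    }

    public static void main(String[] args) {
        List<String> entities = List.of("a", "b", "c");
        System.out.println(insertAll(entities, false)); // prints 0 — writes silently dropped
        System.out.println(insertAll(entities, true));  // prints 3 — every write executed
    }
}
```

With real Spring Data, the usual fix is to compose and await the pipeline instead of looping, e.g. `repository.saveAll(myEntitiesList).blockLast()` (or subscribe and count completions), so the process only finishes after every insert is acknowledged.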

Has anyone heard about this problem before?

Thank you very much, and best regards
