cassandra-user mailing list archives

From Jeff Jirsa <jji...@gmail.com>
Subject Re: loosing data during saving data from java
Date Fri, 18 Oct 2019 22:41:52 GMT
There is no buffer in Cassandra that is known (or suspected) to
lose acknowledged writes when it is overwhelmed.

There may be a client bug where you send so many async writes that they
overwhelm a bounded queue, or otherwise get dropped or time out, but those
would be client bugs, and I'm not sure this list can help you with them.
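The usual client-side cause is fire-and-forget submission: calling subscribe() on every save and never waiting, so failed or dropped writes go unnoticed and the loader exits before all writes complete. In Reactor terms the fix is to compose the publishers (e.g. Flux.flatMap with a concurrency limit, then block on completion). The same idea in plain JDK terms, with a hypothetical saveAsync standing in for the repository call, is sketched below:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class BoundedWrites {

    // Hypothetical stand-in for an async driver write
    // (what a reactive repository save would do under the hood).
    static CompletableFuture<Void> saveAsync(String entity, ExecutorService io) {
        return CompletableFuture.runAsync(() -> { /* network write happens here */ }, io);
    }

    // Insert n entities with at most 128 writes in flight, then wait for all acks.
    static int runLoad(int n) throws InterruptedException {
        List<String> entities = IntStream.range(0, n)
                .mapToObj(i -> "entity-" + i)
                .collect(Collectors.toList());

        ExecutorService io = Executors.newFixedThreadPool(8);
        Semaphore inFlight = new Semaphore(128);   // back-pressure: cap pending writes
        AtomicInteger acked = new AtomicInteger();

        List<CompletableFuture<Void>> pending = entities.stream().map(e -> {
            try {
                inFlight.acquire();                // block instead of flooding the queue
            } catch (InterruptedException ie) {
                throw new IllegalStateException(ie);
            }
            return saveAsync(e, io).whenComplete((v, err) -> {
                inFlight.release();
                if (err == null) acked.incrementAndGet();
                else err.printStackTrace();        // a real loader would retry here
            });
        }).collect(Collectors.toList());

        // Do not declare the load finished until every write has completed.
        CompletableFuture.allOf(pending.toArray(new CompletableFuture[0])).join();
        io.shutdown();
        return acked.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("acked=" + runLoad(10_000));
    }
}
```

The two points that matter are the semaphore (so pending writes can't grow without bound) and the final join (so the process can't exit, or truncate-and-retry, while writes are still outstanding).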



On Fri, Oct 18, 2019 at 3:16 PM adrien ruffie <adriennolarsen@hotmail.fr>
wrote:

> Hello all,
>
> I have a Cassandra table into which I quickly insert several Java
> entities, about 15,000 entries per minute. But when the process ends, I
> only have, for example, 199,921 entries instead of 312,212.
> If I truncate the table and relaunch the process, I sometimes get
> 199,354 or 189,012 entries ... the number saved is never the same ...
>
> Several coworkers told me they had heard about a buffer which can
> sometimes be overwhelmed, losing several entities queued for
> insertion ... right?
> Because I don't understand why these lost insertions happen ...
> And my Java code is very simple, like below:
>
> myEntitiesList.forEach(myEntity -> {
>   try {
>     myEntitiesRepository.save(myEntity).subscribe();
>   } catch (Exception e) {
>     e.printStackTrace();
>   }
> });
>
> And the repository is a:
> public interface MyEntityRepository extends
> ReactiveCassandraRepository<MyEntity, String> {
> }
>
>
> Has anyone already heard about this problem?
>
> Thank you very much, and best regards,
>
> Adrian
>
