cassandra-user mailing list archives

From DuyHai Doan <doanduy...@gmail.com>
Subject Re: Is there a way to do Read and Set at Cassandra level?
Date Sat, 05 Nov 2016 11:54:02 GMT
"But then don't I need to evict for every batch of writes?"

Yes, that's why I think an in-memory distributed data structure is a good
fit for your scenario. Using a log-structured merge tree like C*'s for this
use case is not the most efficient choice.
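To make the suggestion concrete, here is a minimal sketch of keeping the last successfully written value in a cache so the next write can derive from it without a read from Cassandra. The cache is a plain dict standing in for a distributed store such as Redis or Hazelcast, and `write_to_cassandra` and `compute_next` are hypothetical placeholders, not real driver calls:

```python
# Sketch: cache the last successfully written value per key so the
# subsequent write can be computed without a Cassandra read.
# The dict stands in for a distributed cache (e.g. Redis/Hazelcast).

last_value_cache = {}

def write_to_cassandra(key, value):
    # Placeholder for the real durable write, e.g. session.execute(...)
    pass

def compute_next(previous, payload):
    # Hypothetical computation folding the previous value into the new one
    return (previous or 0) + payload

def write(key, payload):
    previous = last_value_cache.get(key)   # read from cache, not from C*
    new_value = compute_next(previous, payload)
    write_to_cassandra(key, new_value)     # durable write
    last_value_cache[key] = new_value      # update cache only on success
    return new_value
```

The trade-off, as discussed above, is that the cache entry must be kept in sync with every batch of writes, which is the eviction concern raised in the quoted mail.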

On Sat, Nov 5, 2016 at 11:52 AM, Kant Kodali <kant@peernova.com> wrote:

> But then don't I need to evict for every batch of writes? I thought a
> cache would make sense when reads/writes > 1, per se. What do you think?
>
> On Sat, Nov 5, 2016 at 3:33 AM, DuyHai Doan <doanduyhai@gmail.com> wrote:
>
>> "I have a requirement where I need to know last value that is written
>> successfully so I could read that value and do some computation and include
>> it in the subsequent write"
>>
>> Maybe keeping the last written value in a distributed cache is cheaper
>> than doing a read before write in Cassandra?
>>
>> On Sat, Nov 5, 2016 at 11:24 AM, Kant Kodali <kant@peernova.com> wrote:
>>
>>> I have a requirement where I need to know the last value that was
>>> written successfully so I can read that value, do some computation, and
>>> include it in the subsequent write. For now we are doing a read before
>>> write, which significantly degrades performance. Lightweight transactions
>>> are more of a compare-and-set than a read-and-set. The first thing I
>>> tried was to eliminate this need in the application, but it looks like it
>>> is a strong requirement for us, so I am wondering if there is any way to
>>> optimize it. I know batching could help, in the sense that I can do one
>>> read per batch so that the writes in the batch don't take a
>>> read-performance hit, but I wonder if there are any clever ideas or
>>> tricks I can use?
>>>
>>
>>
>
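The "one read per batch" idea from the original mail can be sketched as follows: read the last value once, then derive each write in the batch locally, issuing only writes to Cassandra. All names here are illustrative, and `read_last_from_cassandra` is a stub for the single read-before-batch:

```python
# Sketch: amortize the read-before-write over a whole batch.
# One read per batch; every write in the batch is derived locally.

def read_last_from_cassandra(key):
    # Placeholder for the single SELECT before the batch starts
    return 0

def write_batch(key, payloads, write_fn):
    value = read_last_from_cassandra(key)  # one read for the whole batch
    for p in payloads:
        value = value + p                  # derive each write from the last
        write_fn(key, value)               # write-only path, no per-write read
    return value
```

This only stays correct if no other writer touches the key between the read and the last write of the batch, which is exactly why the thread lands on a distributed cache (or true compare-and-set) for concurrent writers.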
