cassandra-user mailing list archives

From Dorian Hoxha <>
Subject Re: Way to write to dc1 but keep data only in dc2
Date Mon, 03 Oct 2016 15:29:01 GMT
Thanks for the explanation Eric.

I would think of it as something like this:
The keyspace lives on dc1 + dc2, with an option so that no long-term data
stays in dc1. So you write to dc1 (to the right nodes), those nodes write to
the commit log/memtable, and once they push the data to dc2 for inter-dc
replication, dc1 deletes its local copy. Meanwhile dc2 doesn't push data
back to dc1 for replication.
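A minimal sketch of that forward-and-delete idea (all names are illustrative; plain Python dicts stand in for the dc1 buffer and the dc2 replica, not for real Cassandra behavior):

```python
# Sketch: dc1 acts as a short-lived buffer that forwards writes to dc2
# and then drops its local copy. Hypothetical stand-ins, not Cassandra.

class ForwardingBuffer:
    """Stands in for dc1: accepts writes, then flushes them to dc2."""

    def __init__(self, remote):
        self.local = {}        # commit-log/memtable stand-in
        self.remote = remote   # dc2 replica stand-in

    def write(self, key, value):
        # Acknowledge after the local write, as in the proposal.
        self.local[key] = value

    def flush(self):
        # "Inter-dc replication": push everything to dc2, then delete
        # the local data so dc1 keeps nothing long-term.
        self.remote.update(self.local)
        self.local.clear()

dc2 = {}
dc1 = ForwardingBuffer(dc2)
dc1.write("user:1", "dorian")
dc1.flush()
print(dc2)        # {'user:1': 'dorian'}
print(dc1.local)  # {}
```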

On Mon, Oct 3, 2016 at 5:25 PM, Eric Stevens <> wrote:

> It sounds like you're trying to avoid the latency of waiting for a write
> confirmation to a remote data center?
> App ==> DC1 ==high-latency==> DC2
> If you need the write to be confirmed before you consider the write
> successful in your application (definitely recommended unless you're ok
> with losing data and the app having no idea), you're not going to solve the
> fundamental physics problem of having to wait for a round-trip between
> _something_ and DC2.  DC1 can't acknowledge the write until it's in the
> memtables and commitlog of a node that owns that data, so under the hood
> it's doing basically the same thing your app would have to do.  In fact,
> putting DC1 in the middle just introduces a (possibly trivial but
> definitely not zero) amount of additional latency over:
> App ==high-latency==> DC2
> The only exception would be if you had an expectation that latency between
> DC1 and DC2 would be lower than latency between App and DC2, which I admit
> is not impossible.
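Eric's round-trip argument can be put in numbers (the millisecond figures below are made-up placeholders; only the inequality matters):

```python
# Acknowledged-write latency for the two topologies being compared.
# All latency figures are hypothetical.

app_to_dc1 = 2     # ms, local hop
dc1_to_dc2 = 80    # ms, the unavoidable inter-dc hop
app_to_dc2 = 80    # ms, direct hop to the remote dc

# App ==> DC1 ==high-latency==> DC2: the ack still has to cross the
# high-latency link, plus the extra app<->dc1 hop.
via_dc1 = 2 * (app_to_dc1 + dc1_to_dc2)

# App ==high-latency==> DC2: one round trip.
direct = 2 * app_to_dc2

print(via_dc1, direct)  # 164 160
# Proxying through DC1 only wins if dc1<->dc2 latency is lower than
# app<->dc2 latency by more than the app<->dc1 hop.
```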
> On Fri, Sep 30, 2016 at 1:49 PM Dorian Hoxha <>
> wrote:
>> Thanks Edward. Looks like what I really wanted (to use some kind of
>> quorum write, for example) is not possible.
>> Note that the queue is ordered, but I just need the writes to eventually
>> happen, with more consistency than ANY (2 nodes or more).
>> On Fri, Sep 30, 2016 at 12:25 AM, Edward Capriolo <>
>> wrote:
>>> You can do something like this, though your use of terminology like
>>> "queue" really does not apply.
>>> You can setup your keyspace with replication in only one data center.
>>> CREATE KEYSPACE NTSkeyspace WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'dc2' : 3 };
>>> This will make the NTSkeyspace live in only one data center. You can
>>> always write to any Cassandra node, since the coordinator will
>>> transparently proxy the writes to the proper place. You can configure
>>> your client to ONLY bind to specific hosts or data centers (DC1).
>>> You can use a write consistency level like ANY. If you use a consistency
>>> level like ONE, it will cause the write to block anyway, waiting for
>>> completion in the other datacenter.
>>> Since you mentioned the words "like a queue", I would suggest an
>>> alternative: write the data to a distributed commit log like Kafka.
>>> At that point you can decouple the two systems, either through
>>> producer/consumer or through a tool like Kafka's MirrorMaker.
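Edward's decoupling suggestion, sketched with a stdlib queue standing in for the Kafka topic (the producer plays the app in dc1, the consumer thread plays whatever applies the writes in dc2; all names are illustrative):

```python
import queue
import threading

# A stdlib queue stands in for the Kafka topic; the consumer thread
# stands in for the process that applies writes in dc2.
log = queue.Queue()
applied_in_dc2 = []

def consumer():
    while True:
        record = log.get()
        if record is None:   # sentinel: shut down
            break
        applied_in_dc2.append(record)
        log.task_done()

t = threading.Thread(target=consumer)
t.start()

# The app in dc1 "acknowledges" as soon as the record is enqueued,
# without waiting on the dc2 round trip.
for i in range(3):
    log.put(("key%d" % i, "value%d" % i))

log.put(None)
t.join()
print(applied_in_dc2)
```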
>>> On Thu, Sep 29, 2016 at 5:24 PM, Dorian Hoxha <>
>>> wrote:
>>>> I have dc1 and dc2.
>>>> I want to keep a keyspace only on dc2.
>>>> But I only have my app on dc1.
>>>> And I want to write to dc1 (lower latency), which will not keep data
>>>> locally but just push it to dc2.
>>>> While reads will only go to dc2.
>>>> Since my app is mostly writes, it ~will be faster, without having to
>>>> deploy the app to dc2 or write directly to dc2 with higher latency.
>>>> dc1 would act like a queue or something and just push data + delete
>>>> locally.
>>>> Does this make sense?
>>>> Thank You
