cassandra-user mailing list archives

From Bryan Cheng <br...@blockcypher.com>
Subject Re: How to prevent queries being routed to new DC?
Date Thu, 03 Sep 2015 19:25:01 GMT
Hey Tom,

I'd recommend you enable tracing and do a few queries in a controlled
environment to verify that queries are being routed to your new nodes.
Provided you have followed the procedure outlined above (specifically, have
set auto_bootstrap to false on the new nodes), rebuild has not been run,
the application is not connecting to the new DC, and all your queries are
run at LOCAL_* consistency levels, I do not believe those queries should be
routed to the new DC.
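
A quick way to check this from cqlsh is to trace a single read and look at
which hosts appear in the trace output (they should all be in the old DC).
A minimal sketch; the keyspace, table, and key are placeholders:

```sql
cqlsh> CONSISTENCY LOCAL_QUORUM;
cqlsh> TRACING ON;
cqlsh> SELECT * FROM my_keyspace.my_table WHERE id = 42;
-- The trace printed after the result lists every node that participated
-- in the request; any address from the new DC indicates cross-DC routing.
cqlsh> TRACING OFF;
```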

On Thu, Sep 3, 2015 at 12:14 PM, Tom van den Berge <
tom.vandenberge@gmail.com> wrote:

> Hi Bryan,
>
> It does not generate any errors. A query for a specific row simply does
> not return the row if it is sent to a node in the new DC. This makes sense,
> because the node is still empty.
>
> On Thu, Sep 3, 2015 at 9:03 PM, Bryan Cheng <bryan@blockcypher.com> wrote:
>
>> This all seems fine so far. Are you able to see what errors are being
>> returned?
>>
>> We had a similar issue where one of our secondary, less-used keyspaces
>> was on a replication strategy that was not DC-aware, which was causing
>> errors about being unable to satisfy LOCAL_ONE and LOCAL_QUORUM
>> consistency levels.
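>>
>> For what it's worth, the fix for such a keyspace is to move it to the
>> DC-aware NetworkTopologyStrategy; a sketch, where the keyspace name is a
>> placeholder and the DC names must match what the snitch reports:

```sql
-- Placeholder keyspace; DC names must match the snitch's DC names.
ALTER KEYSPACE my_keyspace
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'DC1': 1,
    'DC2': 1
  };
```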
>>
>>
>> On Thu, Sep 3, 2015 at 11:53 AM, Tom van den Berge <
>> tom.vandenberge@gmail.com> wrote:
>>
>>> Hi Bryan,
>>>
>>> I'm using the PropertyFileSnitch, and it contains entries for all nodes
>>> in the old DC, and all nodes in the new DC. The replication factor for both
>>> DCs is 1.
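>>>
>>> (A PropertyFileSnitch layout like this lives in
>>> conf/cassandra-topology.properties on every node; an illustrative sketch
>>> with placeholder addresses and DC/rack names:)

```properties
# conf/cassandra-topology.properties (placeholder IPs and names)
# Old DC
10.0.0.1=DC1:RAC1
10.0.0.2=DC1:RAC1
# New DC (same physical location, logically separate)
10.0.1.1=DC2:RAC1
10.0.1.2=DC2:RAC1
# Fallback for nodes not listed above
default=DC1:RAC1
```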
>>>
>>> With the first approach I described, the new nodes join the cluster, and
>>> show up correctly under the new DC, so all seems to be fine.
>>> With the second approach (join_ring=false), they don't show up at all,
>>> which is also what I expected.
>>>
>>>
>>> On Thu, Sep 3, 2015 at 8:44 PM, Bryan Cheng <bryan@blockcypher.com>
>>> wrote:
>>>
>>>> Hey Tom,
>>>>
>>>> What's your replication strategy look like? When your new nodes join
>>>> the ring, can you verify that they show up under a new DC and not as part
>>>> of the old?
>>>>
>>>> --Bryan
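>>>>
>>>> (nodetool status groups nodes by data center, which makes DC membership
>>>> easy to verify; illustrative output shape with placeholder names and
>>>> addresses:)

```shell
$ nodetool status
# Output is grouped under "Datacenter: <name>" headings, e.g.:
# Datacenter: DC1
#   UN  10.0.0.1  ...
# Datacenter: DC2
#   UN  10.0.1.1  ...
```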
>>>>
>>>> On Thu, Sep 3, 2015 at 11:27 AM, Tom van den Berge <
>>>> tom.vandenberge@gmail.com> wrote:
>>>>
>>>>> I want to start using vnodes in my cluster. To do so, I've set up a
>>>>> new data center with the same number of nodes as the existing one, as
>>>>> described in
>>>>> http://docs.datastax.com/en/cassandra/2.0/cassandra/configuration/configVnodesProduction_t.html.
>>>>> The new DC is in the same physical location as the old one.
>>>>>
>>>>> The problem I'm running into is that as soon as the nodes in the new
>>>>> data center are started, the application that is using the nodes in the
>>>>> old data center is frequently getting error messages because queries
>>>>> don't return the expected data. I'm pretty sure this is because somehow
>>>>> these queries are routed to the new, empty data center. The application
>>>>> is not connecting to the nodes in the new DC.
>>>>>
>>>>> I've tried two different things to prevent this:
>>>>>
>>>>> 1) Ensure that all queries use either LOCAL_ONE or LOCAL_QUORUM
>>>>> consistency. Nevertheless, I'm still seeing failed queries.
>>>>> 2) Start the new nodes with -Dcassandra.join_ring=false, to prevent
>>>>> them from participating in the cluster. Although they don't show up in
>>>>> nodetool ring, I'm still seeing failed queries.
>>>>>
>>>>> If I understand it correctly, both measures should prevent queries
>>>>> from ending up in the new DC, but somehow they don't in my situation.
>>>>>
>>>>> How is it possible that queries are routed to the new, empty data
>>>>> center? And more importantly, how can I prevent it?
>>>>>
>>>>> Thanks,
>>>>> Tom
>>>>>
>>>>
>>>>
>>>
>>
>
