incubator-cassandra-user mailing list archives

From Vasileios Vlachos <vasileiosvlac...@gmail.com>
Subject Re: Adding datacenter for move to vnodes
Date Fri, 07 Feb 2014 13:18:21 GMT
Thanks for your input.

Yes, you can mix vnode-enabled and vnode-disabled nodes. What you described
is exactly what happened: we had a node which was responsible for 90%+ of
the load. What is the actual result of this, though?

Say you have 6 nodes with 300 GB each. You decommission N1 and bring it
back in with vnodes. Is that going to stream back 90%+ of the 300 GB x 6
straight away, or will it just eventually end up holding 90%+ of all the
data stored in your cluster? If the latter is what actually happens, this
process should be safe on a live cluster as well, given that you are going
to upgrade the other 5 nodes straight after...
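
For concreteness, the per-node cycle I have in mind is roughly the
following (just a sketch; the service commands and the data path are
assumptions that depend on your packaging):

nodetool decommission                  # stream this node's ranges to the rest of the ring
sudo service cassandra stop
# edit cassandra.yaml: comment out initial_token, set num_tokens: 256
sudo rm -rf /var/lib/cassandra/data/*  # wipe the old data (path is an assumption)
sudo service cassandra start           # node rejoins and bootstraps its vnode ranges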

Any thoughts?

Thanks,

Bill
On 7 Feb 2014 12:58, "Alain RODRIGUEZ" <arodrime@gmail.com> wrote:

> @Bill
>
> Another DC for this migration is the least impactful way to do it. You set
> up a new cluster and switch to it when it's ready. No performance or
> downtime issues.
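>
> To make the switch concrete, a rough sketch of the replication change,
> assuming a keyspace named "ks" and DCs named "DC1" (old) and "DC2" (new);
> all of these names are placeholders:
>
> ALTER KEYSPACE ks WITH replication =
>   {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};
>
> Once the new DC has pulled its data, you repoint clients at it and drop
> "DC1" from the replication settings.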
>
> Decommissioning a node is quite a heavy operation, since it will hand part
> of its data to all the remaining nodes, increasing network traffic, disk
> load and data size on all of them.
>
> Another option is "cassandra-shuffle", but AFAIK it never worked properly,
> and people recommend switching via a new cluster instead.
>
> @Andrey & Bill
>
> I think you can mix vnodes with physical nodes; yet you might end up with
> a node holding 99% of the data, since it will take care of a lot of ranges
> (256?) while the other nodes each take care of only 1. That might not be
> an issue on a dev or demo cluster, but it certainly will be in a
> production environment.
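>
> To make the imbalance concrete, here are the two cassandra.yaml styles
> side by side (a sketch; 256 is just the usual vnode default):
>
> # vnode-enabled node: owns ~256 small token ranges
> num_tokens: 256
>
> # classic node: owns exactly one range
> initial_token: <some token>
>
> A single vnode node with 256 tokens next to five single-token nodes would
> own roughly 256 of the ring's 261 ranges.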
>
> 2014-02-07 0:28 GMT+01:00 Andrey Ilinykh <ailinykh@gmail.com>:
>
>> My understanding is that you can't mix vnodes and regular nodes in the
>> same DC. Is that correct?
>>
>>
>>
>> On Thu, Feb 6, 2014 at 2:16 PM, Vasileios Vlachos <
>> vasileiosvlachos@gmail.com> wrote:
>>
>>> Hello,
>>>
>>> My question is: why would you need another DC to migrate to vnodes? How
>>> about decommissioning each node in turn, changing cassandra.yaml
>>> accordingly, deleting the data, and bringing the node back into the
>>> cluster to bootstrap from the others?
>>>
>>> We did that recently with our demo cluster. Is that wrong in any way?
>>> The only thing to take into consideration is disk space, I think. We are
>>> not using Amazon, but I am not sure how that would be different for this
>>> particular issue.
>>>
>>> Thanks,
>>>
>>> Bill
>>> On 6 Feb 2014 16:34, "Alain RODRIGUEZ" <arodrime@gmail.com> wrote:
>>>
>>>> Glad it helps.
>>>>
>>>> Good luck with this.
>>>>
>>>> Cheers,
>>>>
>>>> Alain
>>>>
>>>>
>>>> 2014-02-06 17:30 GMT+01:00 Katriel Traum <katriel@google.com>:
>>>>
>>>>> Thank you Alain! That was exactly what I was looking for. I was
>>>>> worried I'd have to do a rolling restart to change the snitch.
>>>>>
>>>>> Katriel
>>>>>
>>>>>
>>>>>
>>>>> On Thu, Feb 6, 2014 at 1:10 PM, Alain RODRIGUEZ <arodrime@gmail.com> wrote:
>>>>>
>>>>>> Hi, we did this exact same operation here too, with no issue.
>>>>>>
>>>>>> Contrary to Paulo, we did not modify our snitch.
>>>>>>
>>>>>> We simply added a "dc_suffix" property in the
>>>>>> cassandra-rackdc.properties conf file for the nodes in the new cluster:
>>>>>>
>>>>>> # Add a suffix to a datacenter name. Used by the Ec2Snitch and
>>>>>> # Ec2MultiRegionSnitch to append a string to the EC2 region name.
>>>>>>
>>>>>> dc_suffix=-xl
>>>>>>
>>>>>> So our new cluster's DC is basically: eu-west-xl
>>>>>>
>>>>>> I think this is less risky, and at least it is easier to do.
>>>>>>
>>>>>> Hope this helps.
>>>>>>
>>>>>>
>>>>>> 2014-02-02 11:42 GMT+01:00 Paulo Ricardo Motta Gomes <
>>>>>> paulo.motta@chaordicsystems.com>:
>>>>>>
>>>>>>> We had a similar situation and what we did was first migrate the 1.1
>>>>>>> cluster to GossipingPropertyFileSnitch, making sure that for each node
>>>>>>> we specified the correct availability zone as the rack in
>>>>>>> the cassandra-rackdc.properties. In this way, the
>>>>>>> GossipingPropertyFileSnitch is equivalent to the EC2MultiRegionSnitch,
>>>>>>> so the data location does not change and no repair is needed
>>>>>>> afterwards. So, if your nodes are located in the us-east-1e AZ, your
>>>>>>> cassandra-rackdc.properties should look like:
>>>>>>>
>>>>>>> dc=us-east
>>>>>>> rack=1e
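>>>>>>>
>>>>>>> The snitch itself is switched via the endpoint_snitch setting in
>>>>>>> cassandra.yaml, applied with a rolling restart:
>>>>>>>
>>>>>>> endpoint_snitch: GossipingPropertyFileSnitch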
>>>>>>>
>>>>>>> After this step is complete on all nodes, then you can add a new
>>>>>>> datacenter, specifying a different dc and rack in the
>>>>>>> cassandra-rackdc.properties of the new DC. Make sure you upgrade your
>>>>>>> initial datacenter to 1.2 before adding a new datacenter with vnodes
>>>>>>> enabled (of course).
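>>>>>>>
>>>>>>> Once the new DC is up and your keyspaces replicate to it, each new
>>>>>>> node pulls its data from the original DC with, for example:
>>>>>>>
>>>>>>> nodetool rebuild us-east
>>>>>>>
>>>>>>> (the source DC name here is just the example from above).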
>>>>>>>
>>>>>>> Cheers
>>>>>>>
>>>>>>>
>>>>>>>> On Sun, Feb 2, 2014 at 6:37 AM, Katriel Traum <katriel@google.com> wrote:
>>>>>>>
>>>>>>>> Hello list.
>>>>>>>>
>>>>>>>> I'm upgrading a 1.1 cassandra cluster to 1.2(.13).
>>>>>>>> I've read here and in other places that the best way to migrate to
>>>>>>>> vnodes is to add a new DC with the same number of nodes, and run
>>>>>>>> rebuild on each of them.
>>>>>>>> However, I'm faced with the fact that I'm using the EC2MultiRegion
>>>>>>>> snitch, which automagically creates the DC and RACK.
>>>>>>>>
>>>>>>>> Any ideas how I can go about adding a new DC with this kind of
>>>>>>>> setup? I need these new machines to be in the same EC2 Region as the
>>>>>>>> current ones, so adding to a new Region is not an option.
>>>>>>>>
>>>>>>>> TIA,
>>>>>>>> Katriel
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Paulo Motta
>>>>>>>
>>>>>>> Chaordic | Platform
>>>>>>> www.chaordic.com.br
>>>>>>> +55 48 3232.3200
>>>>>>> +55 83 9690-1314
