From: Tamar Fraenkel <tamar@tok-media.com>
Date: Thu, 15 Mar 2012 12:06:44 +0200
Subject: Re: Failure in Cassandra Ring Running on ESXi (Ubuntu 11.10)
To: user@cassandra.apache.org

Yes, I am using the DataStax Community edition.
Re-installing would be a hassle... Any idea how to just fix the daemon issue?

Thanks

*Tamar Fraenkel*
Senior Software Engineer, TOK Media

tamar@tok-media.com
Tel: +972 2 6409736
Mob: +972 54 8356490
Fax: +972 2 5612956


On Thu, Mar 15, 2012 at 11:56 AM, aaron morton <aaron@thelastpickle.com> wrote:

>> I have a problem though. I installed cassandra following DataStax, and
>> cassandra is not a daemon
>
> Are you using Cassandra from DataStax Community?
>
> This will give you a nice install:
> http://wiki.apache.org/cassandra/DebianPackaging
>
> Cheers
>
> -----------------
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
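For reference, the Debian packaging route that wiki page describes boils down to roughly the following on Ubuntu; this is only a sketch, so check the wiki for the exact repository line for your release series and the GPG keys that have to be imported first:

    # add the Apache Cassandra apt repository (1.0.x series)
    echo "deb http://www.apache.org/dist/cassandra/debian 10x main" | \
        sudo tee /etc/apt/sources.list.d/cassandra.list
    sudo apt-get update
    sudo apt-get install cassandra

    # the package installs /etc/init.d/cassandra and registers it for boot,
    # so the usual service management applies from here on
    sudo service cassandra start
    sudo service cassandra stop

The package keeps its configuration under /etc/cassandra, so an existing cassandra.yaml can be copied across before the first start.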
> On 15/03/2012, at 10:38 PM, Tamar Fraenkel wrote:
>
> Thanks for your prompt response.
> I have a problem though. I installed cassandra following DataStax, and
> cassandra is not a daemon, i.e. I have to manually start it, and I don't
> have a script /etc/init.d/cassandra.
>
> For this reason, I need to restart it manually after rebooting my VM, and
> it is not part of the rc...
>
> Any good pointers on how to set up cassandra as a daemon?
>
> Thanks,
>
> *Tamar Fraenkel*
> Senior Software Engineer, TOK Media
>
> tamar@tok-media.com
> Tel: +972 2 6409736
> Mob: +972 54 8356490
> Fax: +972 2 5612956
>
>
> On Thu, Mar 15, 2012 at 11:34 AM, aaron morton <aaron@thelastpickle.com> wrote:
>
>>> 1. How can I prevent this? I guess my setup is limited, and this may
>>> happen, but is there a way to improve things.
>>
>> Not really, you need more memory on the box.
>>
>>> 2. Assuming that I will run out of memory from time to time, how do I
>>> set up a monit \ god task to restart cassandra in case it does.
>>
>> Super simple, to the point of not being very good:
>> /etc/monit/conf.d/cassandra.monitrc
>> The monit docs are pretty good.
>>
>> check process cassandra with pidfile /var/run/cassandra.pid
>>   start program = "/etc/init.d/cassandra start"
>>   stop program = "/etc/init.d/cassandra stop"
>>
>> You will also need to prevent the init.d script from starting it at boot; I
>> used update-rc.d.
>>
>> I'm not an ops guy; google is your friend; the monit docs are good.
>>
>> Cheers
>>
>> -----------------
>> Aaron Morton
>> Freelance Developer
>> @aaronmorton
>> http://www.thelastpickle.com
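Making that concrete, the boot-time handoff to monit could look something like this (a sketch; paths and arguments are the Ubuntu defaults and worth double-checking locally):

    # stop init from starting Cassandra at boot; monit will own it instead
    sudo update-rc.d -f cassandra remove

    # pick up the new /etc/monit/conf.d/cassandra.monitrc
    sudo monit reload
    sudo monit start cassandra
    sudo monit summary

Since the trigger here is the kernel OOM killer, monit can also restart the process pre-emptively with a rule along the lines of "if totalmem > 700 MB for 3 cycles then restart" inside the check block; the exact syntax is in the monit manual for your version.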
>> On 15/03/2012, at 8:44 PM, Tamar Fraenkel wrote:
>>
>> I added a third node to the cluster. Sure enough, this morning I came in and
>> only one node is up; in the other two the cassandra process is not running.
>>
>> In the cassandra log there is nothing, but in /var/log/syslog I see,
>> on one node:
>>
>> Mar 15 07:50:51 Cassandra3 kernel: [58566.666906] Out of memory: Kill process 2840 (java) score 383 or sacrifice child
>> Mar 15 07:50:51 Cassandra3 kernel: [58566.667066] Killed process 2840 (java) total-vm:956792kB, anon-rss:689752kB, file-rss:21680kB
>>
>> and on the other:
>>
>> Mar 14 18:36:02 Cassandra2 kernel: [16262.267300] Out of memory: Kill process 2611 (java) score 409 or sacrifice child
>> Mar 14 18:36:02 Cassandra2 kernel: [16262.267325] Killed process 2611 (java) total-vm:968040kB, anon-rss:748644kB, file-rss:18436kB
>>
>> Two questions:
>> 1. How can I prevent this? I guess my setup is limited, and this may
>> happen, but is there a way to improve things.
>> 2. Assuming that I will run out of memory from time to time, how do I
>> set up a monit \ god task to restart cassandra in case it does.
>>
>> Thanks,
>>
>> *Tamar Fraenkel*
>> Senior Software Engineer, TOK Media
>>
>> tamar@tok-media.com
>> Tel: +972 2 6409736
>> Mob: +972 54 8356490
>> Fax: +972 2 5612956
>>
>>
>> On Tue, Mar 13, 2012 at 11:12 AM, aaron morton <aaron@thelastpickle.com> wrote:
>>
>>> If you are on Ubuntu it may be this:
>>> http://wiki.apache.org/cassandra/FAQ#ubuntu_hangs
>>>
>>> otherwise I would look for GC problems.
>>>
>>> Cheers
>>>
>>> -----------------
>>> Aaron Morton
>>> Freelance Developer
>>> @aaronmorton
>>> http://www.thelastpickle.com
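The kernel OOM kills above mean the Java process outgrew the 1 GB guest, so a first concrete step is to pin the heap explicitly in cassandra-env.sh (conf/cassandra-env.sh in a tarball install, /etc/cassandra/cassandra-env.sh with the Debian package) instead of letting it auto-size. A sketch, with values that are only illustrative for a 1 GB VM:

    # cassandra-env.sh: set both together, per the comments in the file
    MAX_HEAP_SIZE="512M"
    HEAP_NEWSIZE="100M"

With the heap capped, the process has a predictable ceiling and the guest keeps some headroom for the OS page cache and off-heap overhead; restart Cassandra after changing the file.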
>>> On 13/03/2012, at 7:53 PM, Tamar Fraenkel wrote:
>>>
>>> Done it. Now it generally runs OK, till one of the nodes gets stuck
>>> at 100% CPU and I need to reboot it.
>>>
>>> The last lines in the system.log just before are:
>>>
>>> INFO [OptionalTasks:1] 2012-03-13 07:36:43,850 MeteredFlusher.java (line 62) flushing high-traffic column family CFS(Keyspace='tok', ColumnFamily='tk_vertical_tag_story_indx') (estimated 35417890 bytes)
>>> INFO [OptionalTasks:1] 2012-03-13 07:36:43,869 ColumnFamilyStore.java (line 704) Enqueuing flush of Memtable-tk_vertical_tag_story_indx@2002820169(1620316/35417890 serialized/live bytes, 30572 ops)
>>> INFO [FlushWriter:76] 2012-03-13 07:36:43,869 Memtable.java (line 246) Writing Memtable-tk_vertical_tag_story_indx@2002820169(1620316/35417890 serialized/live bytes, 30572 ops)
>>> INFO [FlushWriter:76] 2012-03-13 07:36:44,015 Memtable.java (line 283) Completed flushing /opt/cassandra/data/tok/tk_vertical_tag_story_indx-hc-191-Data.db (2134123 bytes)
>>> INFO [OptionalTasks:1] 2012-03-13 07:37:37,886 MeteredFlusher.java (line 62) flushing high-traffic column family CFS(Keyspace='tok', ColumnFamily='tk_vertical_tag_story_indx') (estimated 34389135 bytes)
>>> INFO [OptionalTasks:1] 2012-03-13 07:37:37,887 ColumnFamilyStore.java (line 704) Enqueuing flush of Memtable-tk_vertical_tag_story_indx@1869953681(1573252/34389135 serialized/live bytes, 29684 ops)
>>> INFO [FlushWriter:76] 2012-03-13 07:37:37,887 Memtable.java (line 246) Writing Memtable-tk_vertical_tag_story_indx@1869953681(1573252/34389135 serialized/live bytes, 29684 ops)
>>> INFO [FlushWrit
>>>
>>> Any idea?
>>> I am considering adding a third node, so that a replication factor of 2
>>> won't stall my system when one node goes down. Does it make sense?
>>>
>>> Thanks
>>>
>>> *Tamar Fraenkel*
>>> Senior Software Engineer, TOK Media
>>>
>>> tamar@tok-media.com
>>> Tel: +972 2 6409736
>>> Mob: +972 54 8356490
>>> Fax: +972 2 5612956
>>>
>>>
>>> On Tue, Mar 6, 2012 at 7:51 PM, aaron morton <aaron@thelastpickle.com> wrote:
>>>
>>>> Reduce these settings for the CF:
>>>> row_cache (disable it)
>>>> key_cache (disable it)
>>>>
>>>> Increase these settings for the CF:
>>>> bloom_filter_fp_chance
>>>>
>>>> Reduce these settings in cassandra.yaml:
>>>> flush_largest_memtables_at
>>>> memtable_flush_queue_size
>>>> sliced_buffer_size_in_kb
>>>> in_memory_compaction_limit_in_mb
>>>> concurrent_compactors
>>>>
>>>> Increase these settings:
>>>> index_interval
>>>>
>>>> While it obviously depends on load, I would not be surprised if you had
>>>> a lot of trouble running cassandra with that setup.
>>>>
>>>> Cheers
>>>> A
>>>>
>>>> -----------------
>>>> Aaron Morton
>>>> Freelance Developer
>>>> @aaronmorton
>>>> http://www.thelastpickle.com
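To illustrate the cassandra.yaml part of that list for a memory-starved node, the changes might look like the following; the numbers are examples only, not values recommended anywhere in this thread, and the per-column-family knobs (row/key cache sizes, bloom_filter_fp_chance) are changed through cassandra-cli rather than in this file:

    # cassandra.yaml, illustrative values for a ~1 GB node
    flush_largest_memtables_at: 0.50        # flush sooner under heap pressure (default 0.75)
    memtable_flush_queue_size: 2            # fewer memtables waiting in memory (default 4)
    sliced_buffer_size_in_kb: 32            # smaller slice read buffers (default 64)
    in_memory_compaction_limit_in_mb: 16    # push large rows to disk-based compaction (default 64)
    concurrent_compactors: 1                # single compaction thread
    index_interval: 512                     # sparser index sampling, less heap (default 128)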
>>>> On 6/03/2012, at 11:02 PM, Tamar Fraenkel wrote:
>>>>
>>>> Aaron, thanks for your response. I was afraid this is the issue.
>>>> Can you give me some direction regarding the fine tuning of my VMs? I
>>>> would like to explore that option some more.
>>>> Thanks!
>>>>
>>>> *Tamar Fraenkel*
>>>> Senior Software Engineer, TOK Media
>>>>
>>>> tamar@tok-media.com
>>>> Tel: +972 2 6409736
>>>> Mob: +972 54 8356490
>>>> Fax: +972 2 5612956
>>>>
>>>>
>>>> On Tue, Mar 6, 2012 at 11:58 AM, aaron morton <aaron@thelastpickle.com> wrote:
>>>>
>>>>> You do not have enough memory allocated to the JVM and are suffering
>>>>> from excessive GC as a result.
>>>>>
>>>>> There are some tuning things you can try, but 480MB is not enough. 1GB
>>>>> would be a better start, 2 better than that.
>>>>>
>>>>> Consider using https://github.com/pcmanus/ccm for testing multiple
>>>>> instances on a single server rather than a VM.
>>>>>
>>>>> Cheers
>>>>>
>>>>> -----------------
>>>>> Aaron Morton
>>>>> Freelance Developer
>>>>> @aaronmorton
>>>>> http://www.thelastpickle.com
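For what it's worth, ccm builds a whole test cluster on loopback addresses in a handful of commands; from memory it is roughly the following, so treat it as a sketch and go by the README in that repository (on some platforms the extra loopback aliases 127.0.0.2 and 127.0.0.3 have to be created by hand):

    ccm create test -v 1.0.7    # fetch and build that Cassandra version
    ccm populate -n 3           # define node1..node3 on 127.0.0.1-127.0.0.3
    ccm start                   # start all three nodes
    ccm node1 ring              # nodetool ring against node1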
>>>>> On 6/03/2012, at 10:21 PM, Tamar Fraenkel wrote:
>>>>>
>>>>> I have some more info: after a couple of hours of running, the problematic
>>>>> node was back at 100% CPU and I had to reboot it. The last lines from the log show
>>>>> it did GC:
>>>>>
>>>>> INFO [ScheduledTasks:1] 2012-03-06 10:28:00,880 GCInspector.java (line 122) GC for Copy: 203 ms for 1 collections, 185983456 used; max is 513802240
>>>>> INFO [ScheduledTasks:1] 2012-03-06 10:28:50,595 GCInspector.java (line 122) GC for Copy: 3927 ms for 1 collections, 156572576 used; max is 513802240
>>>>> INFO [ScheduledTasks:1] 2012-03-06 10:28:55,434 StatusLogger.java (line 50) Pool Name                    Active   Pending   Blocked
>>>>> INFO [ScheduledTasks:1] 2012-03-06 10:29:03,298 StatusLogger.java (line 65) ReadStage                         2         2         0
>>>>> INFO [ScheduledTasks:1] 2012-03-06 10:29:03,499 StatusLogger.java (line 65) RequestResponseStage              0         0         0
>>>>> INFO [ScheduledTasks:1] 2012-03-06 10:29:03,500 StatusLogger.java (line 65) ReadRepairStage                   0         0         0
>>>>> INFO [ScheduledTasks:1] 2012-03-06 10:29:03,500 StatusLogger.java (line 65) MutationStage                     0         0         0
>>>>> INFO [ScheduledTasks:1] 2012-03-06 10:29:03,500 StatusLogger.java (line 65) ReplicateOnWriteStage             0         0         0
>>>>> INFO [ScheduledTasks:1] 2012-03-06 10:29:03,500 StatusLogger.java (line 65) GossipStage                       0         0         0
>>>>> INFO [ScheduledTasks:1] 2012-03-06 10:29:03,501 StatusLogger.java (line 65) AntiEntropyStage                  0         0         0
>>>>> INFO [ScheduledTasks:1] 2012-03-06 10:29:03,501 StatusLogger.java (line 65) MigrationStage                    0         0         0
>>>>> INFO [ScheduledTasks:1] 2012-03-06 10:29:03,501 StatusLogger.java (line 65) StreamStage                       0         0         0
>>>>> INFO [ScheduledTasks:1] 2012-03-06 10:29:03,501 StatusLogger.java (line 65) MemtablePostFlusher               0         0         0
>>>>> INFO [ScheduledTasks:1] 2012-03-06 10:29:03,502 StatusLogger.java (line 65) FlushWriter                       0         0         0
>>>>> INFO [ScheduledTasks:1] 2012-03-06 10:29:03,502 StatusLogger.java (line 65) MiscStage                         0         0         0
>>>>> INFO [ScheduledTasks:1] 2012-03-06 10:29:03,502 StatusLogger.java (line 65) InternalResponseStage             0         0         0
>>>>> INFO [ScheduledTasks:1] 2012-03-06 10:29:03,502 StatusLogger.java (line 65) HintedHandoff                     0         0         0
>>>>> INFO [ScheduledTasks:1] 2012-03-06 10:29:03,553 StatusLogger.java (line 69) CompactionManager               n/a         0
>>>>>
>>>>> Thanks,
>>>>>
>>>>> *Tamar Fraenkel*
>>>>> Senior Software Engineer, TOK Media
>>>>>
>>>>> tamar@tok-media.com
>>>>> Tel: +972 2 6409736
>>>>> Mob: +972 54 8356490
>>>>> Fax: +972 2 5612956
>>>>>
>>>>>
>>>>> On Tue, Mar 6, 2012 at 9:12 AM, Tamar Fraenkel <tamar@tok-media.com> wrote:
>>>>>
>>>>>> Works..
>>>>>>
>>>>>> But during the night my setup encountered a problem.
>>>>>> I have two VMs in my cluster (running on VMware ESXi).
>>>>>> Each VM has 1GB of memory and two 16 GB virtual disks.
>>>>>> They are running on a small server with 4 CPUs (2.66 GHz) and 4 GB of
>>>>>> memory (together with two other VMs).
>>>>>> I put the cassandra data on the second disk of each machine.
>>>>>> The VMs are running Ubuntu 11.10 and cassandra 1.0.7.
>>>>>>
>>>>>> I left them running overnight, and this morning when I came in:
>>>>>> on one node cassandra was down, and the last thing in the system.log is:
>>>>>>
>>>>>> INFO [CompactionExecutor:150] 2012-03-06 00:55:04,821 CompactionTask.java (line 113) Compacting [SSTableReader(path='/opt/cassandra/data/tok/tk_vertical_tag_story_indx-hc-1243-Data.db'), SSTableReader(path='/opt/cassandra/data/tok/tk_vertical_tag_story_indx-hc-1245-Data.db'), SSTableReader(path='/opt/cassandra/data/tok/tk_vertical_tag_story_indx-hc-1242-Data.db'), SSTableReader(path='/opt/cassandra/data/tok/tk_vertical_tag_story_indx-hc-1244-Data.db')]
>>>>>> INFO [CompactionExecutor:150] 2012-03-06 00:55:07,919 CompactionTask.java (line 221) Compacted to [/opt/cassandra/data/tok/tk_vertical_tag_story_indx-hc-1246-Data.db,]. 32,424,771 to 26,447,685 (~81% of original) bytes for 58,938 keys at 8.144165MB/s. Time: 3,097ms.
>>>>>>
>>>>>> The other node was using all of its CPU and I had to restart it.
>>>>>> After that, I can see that the last lines in its system.log say that
>>>>>> the other node is down...
>>>>>>
>>>>>> INFO [FlushWriter:142] 2012-03-06 00:55:02,418 Memtable.java (line 246) Writing Memtable-tk_vertical_tag_story_indx@1365852701(1122169/25154556 serialized/live bytes, 21173 ops)
>>>>>> INFO [FlushWriter:142] 2012-03-06 00:55:02,742 Memtable.java (line 283) Completed flushing /opt/cassandra/data/tok/tk_vertical_tag_story_indx-hc-1244-Data.db (2075930 bytes)
>>>>>> INFO [GossipTasks:1] 2012-03-06 08:02:18,584 Gossiper.java (line 818) InetAddress /10.0.0.31 is now dead.
>>>>>>
>>>>>> How can I trace why that happened?
>>>>>> Also, I brought cassandra up on both nodes. They both spent a long time
>>>>>> reading commit logs, but now they seem to run.
>>>>>> Any idea how to debug or improve my setup?
>>>>>> Thanks,
>>>>>> Tamar
>>>>>>
>>>>>> *Tamar Fraenkel*
>>>>>> Senior Software Engineer, TOK Media
>>>>>>
>>>>>> tamar@tok-media.com
>>>>>> Tel: +972 2 6409736
>>>>>> Mob: +972 54 8356490
>>>>>> Fax: +972 2 5612956
>>>>>>
>>>>>>
>>>>>> On Mon, Mar 5, 2012 at 7:30 PM, aaron morton <aaron@thelastpickle.com> wrote:
>>>>>>
>>>>>>> Create nodes that do not share seeds, and give the clusters
>>>>>>> different names as a safety measure.
>>>>>>>
>>>>>>> Cheers
>>>>>>>
>>>>>>> -----------------
>>>>>>> Aaron Morton
>>>>>>> Freelance Developer
>>>>>>> @aaronmorton
>>>>>>> http://www.thelastpickle.com
>>>>>>>
>>>>>>> On 6/03/2012, at 12:04 AM, Tamar Fraenkel wrote:
>>>>>>>
>>>>>>> I want two separate clusters.
>>>>>>>
>>>>>>> *Tamar Fraenkel*
>>>>>>> Senior Software Engineer, TOK Media
>>>>>>>
>>>>>>> tamar@tok-media.com
>>>>>>> Tel: +972 2 6409736
>>>>>>> Mob: +972 54 8356490
>>>>>>> Fax: +972 2 5612956
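Spelled out, "different names and no shared seeds" comes down to a couple of lines in each ring's cassandra.yaml (the cluster names below are made up for illustration; in 1.0 the seed list actually lives under seed_provider):

    # ring A (existing nodes 10.0.0.19 / 10.0.0.28)
    cluster_name: 'RingA'
    seeds: "10.0.0.19"

    # ring B (new nodes 10.0.0.31 / 10.0.0.11)
    cluster_name: 'RingB'
    seeds: "10.0.0.31"

With distinct cluster names, a node that is accidentally pointed at the other ring's seed should refuse to join it instead of quietly merging the two rings, which is the safety measure Aaron mentions.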
>>>>>>> On Mon, Mar 5, 2012 at 12:48 PM, aaron morton <aaron@thelastpickle.com> wrote:
>>>>>>>
>>>>>>>> Do you want to create two separate clusters or a single cluster
>>>>>>>> with two data centres?
>>>>>>>>
>>>>>>>> If it's the latter, token selection is discussed here:
>>>>>>>> http://www.datastax.com/docs/1.0/install/cluster_init#token-gen-cassandra
>>>>>>>>
>>>>>>>>> Moreover all tokens must be unique (even across datacenters),
>>>>>>>>> although - from pure curiosity - I wonder what is the rationale behind this.
>>>>>>>>
>>>>>>>> Otherwise data is not evenly distributed.
>>>>>>>>
>>>>>>>>> By the way, can someone enlighten me about the first line in the
>>>>>>>>> output of the nodetool. Obviously it contains a token, but nothing else. It
>>>>>>>>> seems like a formatting glitch, but maybe it has a role.
>>>>>>>>
>>>>>>>> It's the exclusive lower bound token for the first node in the
>>>>>>>> ring. This also happens to be the token for the last node in the ring.
>>>>>>>>
>>>>>>>> In your setup:
>>>>>>>> 10.0.0.19 "owns" (85070591730234615865843651857942052864 + 1) to 0
>>>>>>>> 10.0.0.28 "owns" (0 + 1) to 85070591730234615865843651857942052864
>>>>>>>>
>>>>>>>> (This does not imply the primary replica; it is just used to map keys to nodes.)
>>>>>>>>
>>>>>>>> -----------------
>>>>>>>> Aaron Morton
>>>>>>>> Freelance Developer
>>>>>>>> @aaronmorton
>>>>>>>> http://www.thelastpickle.com
>>>>>>>>
>>>>>>>> On 5/03/2012, at 11:38 PM, Hontvári József Levente wrote:
>>>>>>>>
>>>>>>>> You have to use PropertyFileSnitch and NetworkTopologyStrategy to
>>>>>>>> create a multi-datacenter setup with two circles. You can start reading
>>>>>>>> from this page:
>>>>>>>> http://www.datastax.com/docs/1.0/cluster_architecture/replication#about-replica-placement-strategy
>>>>>>>>
>>>>>>>> Moreover all tokens must be unique (even across datacenters),
>>>>>>>> although - from pure curiosity - I wonder what is the rationale behind this.
>>>>>>>>
>>>>>>>> By the way, can someone enlighten me about the first line in the
>>>>>>>> output of the nodetool. Obviously it contains a token, but nothing else. It
>>>>>>>> seems like a formatting glitch, but maybe it has a role.
>>>>>>>>
>>>>>>>> On 2012.03.05. 11:06, Tamar Fraenkel wrote:
>>>>>>>>
>>>>>>>> Hi!
>>>>>>>> I have a Cassandra cluster with two nodes:
>>>>>>>>
>>>>>>>> nodetool ring -h localhost
>>>>>>>> Address    DC           Rack   Status  State   Load       Owns     Token
>>>>>>>>                                                                    85070591730234615865843651857942052864
>>>>>>>> 10.0.0.19  datacenter1  rack1  Up      Normal  488.74 KB  50.00%   0
>>>>>>>> 10.0.0.28  datacenter1  rack1  Up      Normal  504.63 KB  50.00%   85070591730234615865843651857942052864
>>>>>>>>
>>>>>>>> I want to create a second ring with the same name but two
>>>>>>>> different nodes.
>>>>>>>> Using tokengentool I get the same tokens, as they are derived from
>>>>>>>> the number of nodes in a ring.
>>>>>>>>
>>>>>>>> My question is this:
>>>>>>>> Let's say I create two new VMs, with IPs 10.0.0.31 and 10.0.0.11.
>>>>>>>>
>>>>>>>> *In 10.0.0.31 cassandra.yaml I will set*
>>>>>>>> initial_token: 0
>>>>>>>> seeds: "10.0.0.31"
>>>>>>>> listen_address: 10.0.0.31
>>>>>>>> rpc_address: 0.0.0.0
>>>>>>>>
>>>>>>>> *In 10.0.0.11 cassandra.yaml I will set*
>>>>>>>> initial_token: 85070591730234615865843651857942052864
>>>>>>>> seeds: "10.0.0.31"
>>>>>>>> listen_address: 10.0.0.11
>>>>>>>> rpc_address: 0.0.0.0
>>>>>>>>
>>>>>>>> *Would the rings be separate?*
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>>
>>>>>>>> *Tamar Fraenkel*
>>>>>>>> Senior Software Engineer, TOK Media
>>>>>>>>
>>>>>>>> tamar@tok-media.com
>>>>>>>> Tel: +972 2 6409736
>>>>>>>> Mob: +972 54 8356490
>>>>>>>> Fax: +972 2 5612956
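As an aside on tokengentool: for the RandomPartitioner it just spaces tokens evenly around the 2^127 ring, token(i) = i * (2^127 / N) for i = 0 .. N-1, which is why every two-node ring gets 0 and 85070591730234615865843651857942052864 (that is 2^127 / 2) regardless of which cluster it belongs to. Identical tokens in two genuinely separate clusters are fine; tokens only need to be unique within one cluster. A quick way to reproduce the numbers (a sketch, using the Python 2 that ships with Ubuntu 11.10):

    python -c 'n = 2; print "\n".join(str(i * (2**127 / n)) for i in range(n))'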