incubator-cassandra-user mailing list archives

From Apoorva Gaurav <apoorva.gau...@myntra.com>
Subject Re: Dead node appearing in datastax driver
Date Tue, 01 Apr 2014 12:44:23 GMT
Did that and I actually see a significant reduction in write latency.


On Tue, Apr 1, 2014 at 5:35 PM, Sylvain Lebresne <sylvain@datastax.com> wrote:

> On Tue, Apr 1, 2014 at 1:49 PM, Apoorva Gaurav <apoorva.gaurav@myntra.com> wrote:
>
>> Hello Sylvain,
>>
>> Queried system.peers on three live nodes and host4 is appearing on two of
>> these.
>>
>
> That's why the driver thinks they are still there. You're most probably
> running into https://issues.apache.org/jira/browse/CASSANDRA-6053 since
> you are on C* 2.0.4. As said, this is relatively harmless, but you should
> think about upgrading to 2.0.6 to fix it for the future (you could manually
> remove the bad entries from system.peers in the meantime if you want; they
> are really just leftovers that shouldn't be there).
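>
> For reference, a minimal cqlsh sketch of that manual cleanup; the
> address below is only a placeholder for whatever IP host4 had, and it
> has to be run on every node whose system.peers still lists it:
>
>     -- see which peers this node still knows about
>     SELECT peer FROM system.peers;
>     -- if host4's address shows up, drop the stale row
>     DELETE FROM system.peers WHERE peer = '10.0.0.4';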
>
> --
> Sylvain
>
>
>>
>> On Tue, Apr 1, 2014 at 5:06 PM, Sylvain Lebresne <sylvain@datastax.com> wrote:
>>
>>> On Tue, Apr 1, 2014 at 12:50 PM, Apoorva Gaurav <apoorva.gaurav@myntra.com> wrote:
>>>
>>>> Hello All,
>>>>
>>>> We had a 4-node Cassandra 2.0.4 cluster (let's call them host1, host2,
>>>> host3 and host4), out of which we've removed one node (host4) using the
>>>> nodetool removenode command. Now using nodetool status or nodetool ring we
>>>> no longer see host4. It's also not appearing in DataStax OpsCenter. But it's
>>>> intermittently appearing in Metadata.getAllHosts() while connecting using
>>>> DataStax driver 1.0.4.
>>>>
>>>> Couple of questions:
>>>> -How is it appearing?
>>>>
>>>
>>> Not sure. Can you try querying the peers system table on each of your
>>> nodes (with cqlsh: SELECT * FROM system.peers) and see if the host4 is
>>> still mentioned somewhere?
>>>
>>>
>>>> -Can this have an impact on the read/write performance of the client?
>>>>
>>>
>>> No. If the host doesn't exist, the driver might try to reconnect to it
>>> at times, but since it won't be able to, it won't try to use it for reads
>>> and writes. That does mean you might have a reconnection task running with
>>> some regularity, but 1) it's not on the read/write path of queries and 2)
>>> provided you've left the default reconnection policy, this will happen once
>>> every 10 minutes and will be cheap enough to consume a completely negligible
>>> amount of resources. That doesn't mean I'm not interested in tracking down
>>> why that happens in the first place, though.
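>>>
>>> For what it's worth, the reconnection behaviour is also tunable on the
>>> builder if it ever bothers you. A minimal sketch, assuming driver 1.0.x
>>> (the contact point is a placeholder):
>>>
>>>     import com.datastax.driver.core.Cluster;
>>>     import com.datastax.driver.core.policies.ConstantReconnectionPolicy;
>>>
>>>     Cluster cluster = Cluster.builder()
>>>             .addContactPoint("127.0.0.1")
>>>             // probe unreachable hosts at a fixed 10-minute interval
>>>             // instead of the default exponential backoff
>>>             .withReconnectionPolicy(
>>>                     new ConstantReconnectionPolicy(10 * 60 * 1000L))
>>>             .build();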
>>>
>>> --
>>> Sylvain
>>>
>>>
>>>
>>>>
>>>> The code which we are using to connect is:
>>>>
>>>>     public void connect() {
>>>>         PoolingOptions poolingOptions = new PoolingOptions();
>>>>         cluster = Cluster.builder()
>>>>                 .addContactPoints(inetAddresses.toArray(new String[]{}))
>>>>                 .withLoadBalancingPolicy(new RoundRobinPolicy())
>>>>                 .withPoolingOptions(poolingOptions)
>>>>                 .withPort(port)
>>>>                 .withCredentials(username, password)
>>>>                 .build();
>>>>         Metadata metadata = cluster.getMetadata();
>>>>         System.out.printf("Connected to cluster: %s\n", metadata.getClusterName());
>>>>         // Iterate over every host the driver currently knows about;
>>>>         // this is where the removed host4 shows up intermittently.
>>>>         for (Host host : metadata.getAllHosts()) {
>>>>             System.out.printf("Datacenter: %s; Host: %s; Rack: %s\n",
>>>>                     host.getDatacenter(), host.getAddress(), host.getRack());
>>>>         }
>>>>     }
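>>>>
>>>> (Side note: connect() never shuts the Cluster down; with driver 1.0.x,
>>>> cluster.shutdown() should be called on application exit so the
>>>> background reconnection task doesn't linger.)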
>>>>
>>>>
>>>>
>>>> --
>>>> Thanks & Regards,
>>>> Apoorva
>>>>
>>>
>>>
>>
>>
>> --
>> Thanks & Regards,
>> Apoorva
>>
>
>


-- 
Thanks & Regards,
Apoorva
