incubator-cassandra-user mailing list archives

From aaron morton <aa...@thelastpickle.com>
Subject Re: balancing issue with Random partitioner
Date Tue, 13 Sep 2011 00:18:17 GMT
Try a repair on 100.5; it will then request the data from the existing nodes.

You will then need to run cleanup on the existing three nodes once the repair has completed.
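
Something like this (assuming nodetool can reach each node on the default JMX port):

  nodetool -h 192.168.100.5 repair
  nodetool -h 10.181.138.167 cleanup

and then the same cleanup on 192.168.100.6 and 10.181.137.37.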

Cheers

-----------------
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 13/09/2011, at 9:32 AM, David McNelis wrote:

> Auto-bootstrapping is turned on and the node was started several hours ago. Since
> the node already shows up as part of the ring, I would imagine that nodetool join
> wouldn't do anything. Is there a command to jumpstart bootstrapping?
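> (i.e. cassandra.yaml has:
> 
>   auto_bootstrap: true
> 
> which, as far as I know, only takes effect on a node's very first startup.)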
> 
> On Mon, Sep 12, 2011 at 4:22 PM, Jonathan Ellis <jbellis@gmail.com> wrote:
> Looks kind of like the 4th node was added to the cluster w/o bootstrapping.
> 
> On Mon, Sep 12, 2011 at 3:59 PM, David McNelis
> <dmcnelis@agentisenergy.com> wrote:
> > We are running the DataStax 0.8 RPM distro. We have a situation where we
> > have 4 nodes and each owns 25% of the keys. However, the last node in the
> > ring does not seem to be getting much of a load at all.
> > We are using the random partitioner, and we have a total of about 20k keys
> > that are sequential...
> > Our nodetool ring output is currently:
> > Address         DC          Rack   Status State   Load       Owns    Token
> >                                                                      127605887595351923798765477786913079296
> > 10.181.138.167  datacenter1 rack1  Up     Normal  99.37 GB   25.00%  0
> > 192.168.100.6   datacenter1 rack1  Up     Normal  106.25 GB  25.00%  42535295865117307932921825928971026432
> > 10.181.137.37   datacenter1 rack1  Up     Normal  77.7 GB    25.00%  85070591730234615865843651857942052863
> > 192.168.100.5   datacenter1 rack1  Up     Normal  494.67 KB  25.00%  127605887595351923798765477786913079296
> >
> > Nothing is running according to netstats on .37 or .5.
> > I understand that the nature of the beast would cause the load to differ
> > between the nodes, but I wouldn't expect it to be so drastic. We had the
> > token for .37 set to 85070591730234615865843651857942052864, and I
> > decremented and moved it to try to kickstart some streaming, on the thought
> > that something may have failed, but that didn't yield any appreciable
> > results.
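> > For the record, the move was along the lines of:
> >
> >   nodetool -h 10.181.137.37 move 85070591730234615865843651857942052863
> >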
> > Are we seeing completely abnormal behavior? Should I consider making the
> > token for the fourth node considerably smaller? We calculated the nodes'
> > tokens using the standard python script.
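> > For reference, the standard script just spaces num_nodes tokens evenly
> > across the RandomPartitioner's 0..2**127 range, along these lines:
> >
> >   # prints the four initial tokens for a 4-node ring
> >   num_nodes = 4
> >   for i in range(num_nodes):
> >       print(i * (2 ** 127) // num_nodes)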
> > --
> > David McNelis
> > Lead Software Engineer
> > Agentis Energy
> > www.agentisenergy.com
> > o: 630.359.6395
> > c: 219.384.5143
> > A Smart Grid technology company focused on helping consumers of energy
> > control an often under-managed resource.
> >
> >
> 
> 
> 
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder of DataStax, the source for professional Cassandra support
> http://www.datastax.com
> 
> 
> 
> -- 
> David McNelis
> Lead Software Engineer
> Agentis Energy
> www.agentisenergy.com
> o: 630.359.6395
> c: 219.384.5143
> 
> A Smart Grid technology company focused on helping consumers of energy control
> an often under-managed resource.
> 
> 

