incubator-cassandra-user mailing list archives

From Kirill A. Korinskiy <catap+cassan...@catap.ru>
Subject Re: Cassandra cluster setup [WAS Re: usage]
Date Tue, 29 Sep 2009 07:57:13 GMT
At Mon, 28 Sep 2009 18:07:03 -0500,
Michael Greene <michael.greene@gmail.com> wrote:
> 
> Which partitioner are you using?

org.apache.cassandra.dht.RandomPartitioner
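(For context: RandomPartitioner places a key by MD5-hashing it and treating the digest as a big non-negative integer token. A minimal Python sketch of that mapping, assuming the 0..2^127 token space of the MD5-based partitioner; the function name is mine, the real code lives in org.apache.cassandra.dht.RandomPartitioner:)

```python
import hashlib


def random_partitioner_token(key: str) -> int:
    """Sketch of how RandomPartitioner derives a token from a key:
    MD5 the key, read the 16-byte digest as a signed big integer,
    and take its absolute value (mirroring BigInteger(md5).abs())."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return abs(int.from_bytes(digest, byteorder="big", signed=True))
```

Because MD5 spreads keys uniformly over the token space, key distribution across nodes is determined almost entirely by how evenly the node tokens divide the ring.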

> What are the tokens for the nodes or what does bin/nodeprobe ring
> output?

Starting Token                                 Ending Token                                 Size Address        Ring
155352990208345788173011248201827004586        20283731607556365006538603533864462732          2 67.218.100.94  |<--|
20283731607556365006538603533864462732         21836380263552532517023620922568993549          2 67.218.100.93  |   ^
21836380263552532517023620922568993549         27825817942936339100357854565774016889          2 67.218.100.92  v   |
27825817942936339100357854565774016889         56705206375993081178630024509360001548          2 67.218.100.116 |   ^
56705206375993081178630024509360001548         154786577292345968165033245777473466277         2 67.218.100.115 v   |
154786577292345968165033245777473466277        155352990208345788173011248201827004586         2 67.218.100.117 |-->|


> Are you setting the initial token or allowing Cassandra to
> choose one?

No, InitialToken is empty and Cassandra chose the initial token itself.

> What do your keys look like?  Are they well-distributed?
> 

So: I tried inserting keys of the form md5(X), where X runs over the sequence 1..100000, and all the keys ended up on 67.218.100.116 and 67.218.100.115.
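(A quick sanity check one can run against the ring output above: compute what fraction of the token space each node actually owns, assuming RandomPartitioner's 0..2^127 MD5 token range, where each node owns the interval from the previous node's token up to its own. This is my own sketch, not Cassandra code:)

```python
RING = 2**127  # assumed RandomPartitioner token space (MD5-based tokens)

# (node token, address) pairs copied from the nodeprobe ring output;
# a node's token is the "Ending Token" of its row.
nodes = sorted([
    (20283731607556365006538603533864462732,  "67.218.100.94"),
    (21836380263552532517023620922568993549,  "67.218.100.93"),
    (27825817942936339100357854565774016889,  "67.218.100.92"),
    (56705206375993081178630024509360001548,  "67.218.100.116"),
    (154786577292345968165033245777473466277, "67.218.100.115"),
    (155352990208345788173011248201827004586, "67.218.100.117"),
])

prev = nodes[-1][0] - RING  # wrap the last token around the ring
for token, addr in nodes:
    share = (token - prev) / RING
    print(f"{addr:<15} owns {share:6.1%} of the token space")
    prev = token
```

If I did the arithmetic right, 67.218.100.115 and 67.218.100.116 together own roughly three quarters of the ring, which would explain why the keys pile up there: auto-chosen tokens can split the ring very unevenly.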

> Michael
> 
> On Mon, Sep 28, 2009 at 5:34 PM, Kirill A. Korinskiy
> <catap+cassandra@catap.ru> wrote:
> > At Mon, 28 Sep 2009 17:27:35 -0500,
> > Michael Greene <michael.greene@gmail.com> wrote:
> >>
> >> This is a new thread continued from the Facebook-usage thread.
> >>
> >
> > sure
> >
> >> Cassandra automatically shards your data based on the Partitioner you
> >> have setup in storage-conf.xml.  The copies are controlled by the
> >> ReplicationFactor setting in the same configuration file.  If all your
> >> nodes are in the same data center, then the default
> >> ReplicaPlacementStrategy of RackUnawareStrategy should be fine.
> >>
> >
> > OK. But in my test all my data goes to just two nodes.
> >
> > --
> > wbr, Kirill
> >

-- 
wbr, Kirill
