OK, but what if the key universe has no defined end? For example, if one of
the key's fields is a date, there is no maximum date.
On Tue, Nov 10, 2009 at 6:57 PM, Jonathan Ellis wrote:
> no.
>
> for randompartitioner, you use integers from 0 to 2**127, but for OPP
> you use strings from your key universe.
>
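As a quick sketch of the "integers from 0 to 2**127" spacing for three nodes (this is not from the thread; the class and method names `TokenSpacing` / `tokenForNode` are mine, and the arithmetic assumes the token space is the closed range [0, 2**127]):

```java
import java.math.BigInteger;

// Illustrative helper: with RandomPartitioner, tokens are integers in
// [0, 2**127], so for n nodes you can space them evenly at i * 2**127 / n.
public class TokenSpacing {
    static final BigInteger RANGE = BigInteger.ONE.shiftLeft(127); // 2**127

    static BigInteger tokenForNode(int i, int n) {
        // i-th of n evenly spaced tokens
        return RANGE.multiply(BigInteger.valueOf(i)).divide(BigInteger.valueOf(n));
    }

    public static void main(String[] args) {
        int n = 3;
        for (int i = 0; i < n; i++) {
            System.out.println("node " + i + " InitialToken: " + tokenForNode(i, n));
        }
    }
}
```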
> On Tue, Nov 10, 2009 at 10:37 AM, Richard grossman
> wrote:
> > If I understand correctly: if I transform my String key (built as ::) into
> > some long value, and I have 3 servers, then I set
> > the first server's InitialToken: 0
> > second: Long.MAX_VALUE / 2
> > third: Long.MAX_VALUE
> >
> > Is that correct? Or is there something better?
> >
> > thanks
> >
> > On Tue, Nov 10, 2009 at 6:01 PM, Jonathan Ellis wrote:
> >>
> >> for OPP, tokens are equivalent to keys so pick keys evenly spaced apart
> >>
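One illustrative way to pick "evenly spaced" string tokens, assuming (hypothetically; the thread never says what the key universe is) that keys start with lowercase ASCII letters. `OppTokens` and `evenTokens` are my names, not anything from Cassandra:

```java
// Sketch: with OrderPreservingPartitioner, tokens are keys themselves.
// If keys start with lowercase ASCII letters, n evenly spaced tokens can
// be chosen by splitting the 'a'..'z' range of the first character.
public class OppTokens {
    static String[] evenTokens(int n) {
        String[] toks = new String[n];
        for (int i = 0; i < n; i++) {
            // i-th of n tokens, evenly spaced over 'a'..'z'
            toks[i] = String.valueOf((char) ('a' + (i * 26) / n));
        }
        return toks;
    }

    public static void main(String[] args) {
        for (String t : evenTokens(3)) System.out.println(t);
    }
}
```

Adjust the split to your real key distribution; if keys are not uniform over their first character, evenly spaced characters will not give evenly balanced load.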
> >> On Tue, Nov 10, 2009 at 9:55 AM, Richard grossman
> >> wrote:
> >> > hi
> >> >
> >> > I've understood this, but I don't know what to put in InitialToken:
> >> > is it "1" or "a" or something else?
> >> > As I said in a previous post, my keys are built like ::
> >> > Is there any link?
> >> >
> >> > Thanks,
> >> >
> >> > On Tue, Nov 10, 2009 at 5:48 PM, Jonathan Ellis wrote:
> >> >>
> >> >> if you're not specifying InitialToken, every time you wipe your
> >> >> installation it will generate new tokens. for a small number of
> >> >> machines you'll definitely see some random tokens better balanced
> >> >> than others.
> >> >>
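For reference, in the 0.4-era storage-conf.xml the token is pinned per node via the InitialToken element (the value below is a placeholder, not a real token):

```xml
<!-- per-node setting in storage-conf.xml; leave empty to get a random token -->
<InitialToken>your_token_for_this_node</InitialToken>
```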
> >> >> On Tue, Nov 10, 2009 at 9:38 AM, Richard grossman <richiesgr@gmail.com>
> >> >> wrote:
> >> >> > Hi
> >> >> >
> >> >> > I've built 0.4.2 from the tag in SVN.
> >> >> > I've set up exactly the same cluster, with the same configuration as
> >> >> > 0.4.1, and I've deleted all the data on all servers.
> >> >> >
> >> >> > Now when I send data to the first server, it is no longer
> >> >> > distributed across the other servers as it was previously.
> >> >> > I've configured the replication factor to 1.
> >> >> >
> >> >> > here is my storage-conf.xml (the XML element tags were stripped when
> >> >> > pasting; only the values and surviving attributes are shown):
> >> >> >
> >> >> > BeeCluster
> >> >> >
> >> >> > Name="channelShowLink" FlushPeriodInMinutes="15"/>
> >> >> > Name="channelShow" FlushPeriodInMinutes="15"/>
> >> >> > Name="userAction" FlushPeriodInMinutes="15"/>
> >> >> > Name="headends" FlushPeriodInMinutes="15"/>
> >> >> > Name="similarity" FlushPeriodInMinutes="500"/>
> >> >> >
> >> >> > org.apache.cassandra.dht.OrderPreservingPartitioner
> >> >> > org.apache.cassandra.locator.EndPointSnitch
> >> >> > org.apache.cassandra.locator.RackUnawareStrategy
> >> >> > 1
> >> >> >
> >> >> > /home/beecloud/cassandrapart/commitlog
> >> >> > /home/beecloud/cassandrapart/data
> >> >> > /home/beecloud/cassandrapart/callouts
> >> >> > /home/beecloud/cassandrapart/bootstrap
> >> >> > /home/beecloud/cassandrapart/staging
> >> >> >
> >> >> > 192.168.249.200
> >> >> > 192.168.249.222
> >> >> > 192.168.249.95
> >> >> >
> >> >> > 50000
> >> >> > 128
> >> >> > 192.168.249.200
> >> >> > 7000
> >> >> > 7001
> >> >> > 0.0.0.0
> >> >> > 9160
> >> >> > false
> >> >> >
> >> >> > 64
> >> >> > 32
> >> >> > 8
> >> >> > 64
> >> >> > 64
> >> >> > 0.1
> >> >> > 8
> >> >> > 32
> >> >> > periodic
> >> >> > 1000
> >> >> > 864000
> >> >> > 1
> >> >> > 1
> >> >> > 256
> >> >> >
> >> >> >
> >> >> >
> >> >> > Is anyone else seeing the same problem?
> >> >> >
> >> >> > Thanks
> >> >> >
> >> >
> >> >
> >
> >
>