cassandra-user mailing list archives

From Masood Mortazavi <masoodmortaz...@gmail.com>
Subject Re: 3 node installation
Date Thu, 25 Feb 2010 18:03:37 GMT
All nodes always agree on the ring.
In fact,
   "nodeprobe -host <name> ring"
is probably one of the most reliable commands, and "nodeprobe" one of the most
reliable tools, in Cassandra, as far as I can tell.
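For example, running it against each node in turn and comparing the output is a
quick check (the host names here are just placeholders for A, B and C):

   nodeprobe -host hostA ring
   nodeprobe -host hostB ring
   nodeprobe -host hostC ring

All three show the same tokens and endpoints in my setup.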

These are good suggestions. Thanks.

(I don't know whether it is worth describing this in a JIRA issue as a bug. I
would be willing to do so if you would like me to.)
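For the debug output Jonathan asks about below: assuming the stock
conf/log4j.properties that ships with the 0.5 series, it should just be a matter
of switching the root logger to DEBUG and restarting the node, i.e.

   log4j.rootLogger=DEBUG,stdout,R

and then watching what A and B log around the get_slice call.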

On Thu, Feb 25, 2010 at 6:19 AM, Jonathan Ellis <jbellis@gmail.com> wrote:

> Then it sounds like a bug.
>
> Do A and B agree on nodeprobe ring output?
>
> Can you turn on debug logging and paste what A and B log after "get_slice"?
>
> On Thu, Feb 25, 2010 at 1:35 AM, Masood Mortazavi
> <masoodmortazavi@gmail.com> wrote:
> > Besides what I just said below, I should have also added that in the
> > scenario discussed here:
> >
> > While RackUnawareStrategy is used ...
> >
> > Node B, which seems to have a copy of all data at all times, has an IP
> > address whose 3rd octet is different from the IP addresses of both nodes A
> > and C, which have the same third octet.
> >
> > A, B and C are all set as "Seed" in the "seeds" section.
> >
> > Bootstrap is set true for all of them.
> >
> > In storage-conf.xml, the only thing that differs for the three nodes is
> > their own interfaces.
> > As just noted, the replication factor is 2.
> > That's it.
> > On Wed, Feb 24, 2010 at 11:18 PM, Masood Mortazavi
> > <masoodmortazavi@gmail.com> wrote:
> >>
> >> Yes.
> >> Identical with replication factor of 2.
> >> m.
> >>
> >> On Wed, Feb 24, 2010 at 8:33 PM, Jonathan Ellis <jbellis@gmail.com> wrote:
> >>>
> >>> Is the configuration identical on all nodes?  Specifically, is
> >>> ReplicationFactor set to 2 on all nodes?
> >>>
> >>> On Wed, Feb 24, 2010 at 10:07 PM, Masood Mortazavi
> >>> <masoodmortazavi@gmail.com> wrote:
> >>> > I wonder if anyone can provide an explanation for the following
> >>> > behavior
> >>> > observed in a three-node cluster:
> >>> >
> >>> > 1. In a three-node (A, B and C) installation, I use the cli, connected
> >>> > to node A, to set 10 data items.
> >>> >
> >>> > 2. On the cli connected to node A, I do a get, and can see all 10 data
> >>> > items.
> >>> >
> >>> > 3. I take node C down, I do step 2, and only see some of the 10 data
> >>> > items.
> >>> > Some of the data items are unavailable as follows:
> >>> > cassandra> get Keyspace1.Standard1['test6']
> >>> > Exception null
> >>> > UnavailableException()
> >>> >         at org.apache.cassandra.service.Cassandra$get_slice_result.read(Cassandra.java:3274)
> >>> >         at org.apache.cassandra.service.Cassandra$Client.recv_get_slice(Cassandra.java:296)
> >>> >         at org.apache.cassandra.service.Cassandra$Client.get_slice(Cassandra.java:270)
> >>> >         at org.apache.cassandra.cli.CliClient.doSlice(CliClient.java:241)
> >>> >         at org.apache.cassandra.cli.CliClient.executeGet(CliClient.java:300)
> >>> >         at org.apache.cassandra.cli.CliClient.executeCLIStmt(CliClient.java:57)
> >>> >         at org.apache.cassandra.cli.CliMain.processCLIStmt(CliMain.java:131)
> >>> >         at org.apache.cassandra.cli.CliMain.main(CliMain.java:172)
> >>> >
> >>> > 4. Following step 3, with no change other than connecting the same cli
> >>> > instance to the other remaining node, meaning node B (which is the node
> >>> > with the largest memory, by the way, although I don't think it matters
> >>> > here), I can see all 10 test data items.
> >>> >
> >>> > The replication factor is 2.
> >>> >
> >>> >
> >>> >
> >>
> >
> >
>
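For completeness, the setup described above maps onto storage-conf.xml roughly
as sketched below (0.5-era layout; the IP addresses are placeholders chosen only
to match the octet pattern I described, and ListenAddress/ThriftAddress are the
only values that differ between the three nodes):

   <ReplicaPlacementStrategy>org.apache.cassandra.locator.RackUnawareStrategy</ReplicaPlacementStrategy>
   <ReplicationFactor>2</ReplicationFactor>
   <Seeds>
       <Seed>10.0.1.1</Seed>  <!-- A -->
       <Seed>10.0.2.1</Seed>  <!-- B -->
       <Seed>10.0.1.2</Seed>  <!-- C -->
   </Seeds>
   <AutoBootstrap>true</AutoBootstrap>  <!-- "bootstrap set true" above; assuming the AutoBootstrap element -->
   <ListenAddress>10.0.1.1</ListenAddress>   <!-- this node's own interface -->
   <ThriftAddress>10.0.1.1</ThriftAddress>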
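The test data itself was written and read with the stock cli, along these lines
(the column name and value are placeholders for what I actually inserted):

   bin/cassandra-cli --host <node A> --port 9160
   cassandra> set Keyspace1.Standard1['test6']['c1'] = 'v1'
   cassandra> get Keyspace1.Standard1['test6']

and the same get is then repeated after reconnecting the cli to node B.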
