incubator-cassandra-user mailing list archives

From Prasanna Rajaperumal <praja...@cisco.com>
Subject Re: A Simple scenario, Help needed
Date Fri, 01 Apr 2011 08:09:02 GMT
Hi,

I figured out the problem.
I had set replication_factor=1 in cassandra.yaml.
Changing it to 2 ensured the entire keyspace is stored on each node (each node
holds its own half of the ring as well as the other's half).
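
For anyone on a 0.7-era cluster where the keyspace already exists, the replication factor can also be changed per keyspace from cassandra-cli (the keyspace name below is just an example, not from this thread); a repair is then needed so existing data actually gets copied to the new replicas:

```
update keyspace MyKeyspace with replication_factor = 2;
```

followed by running `nodetool -host <node> repair` on each node.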

For others looking for an explanation of Replication Factor and Consistency Level:
http://permalink.gmane.org/gmane.comp.db.hector.user/392
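
The interaction between the two settings is just arithmetic: a request needs a certain number of live replicas, determined by the consistency level and the replication factor. A minimal sketch of that rule (a hypothetical helper, not part of Hector or Cassandra):

```java
// Hypothetical helper, not part of Hector or Cassandra: computes how many
// replicas must be alive for a request at a given consistency level to succeed.
public class ConsistencyMath {

    // Replicas that must respond for the request to succeed.
    static int requiredReplicas(int replicationFactor, String level) {
        if (level.equals("ONE")) {
            return 1;
        } else if (level.equals("QUORUM")) {
            return replicationFactor / 2 + 1;
        } else if (level.equals("ALL")) {
            return replicationFactor;
        }
        throw new IllegalArgumentException("unknown level: " + level);
    }

    public static void main(String[] args) {
        // RF=1, CL=ONE: the single node owning a key must be up, which is
        // why taking down the "wrong" node fails even at CL.ONE.
        System.out.println(requiredReplicas(1, "ONE"));    // 1 of 1 replicas
        // RF=2, CL=ONE: either node can serve the request.
        System.out.println(requiredReplicas(2, "ONE"));    // 1 of 2 replicas
        System.out.println(requiredReplicas(3, "QUORUM")); // 2 of 3 replicas
    }
}
```

With RF=1 and CL.ONE, the one replica for a key lives on exactly one node; if that node is down, "not enough replicas" is the expected error even though the other node is up.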

Thanks
Prasanna


On Mar 31, 2011, at 6:49 PM, Prasanna Rajaperumal wrote:

> Hi All,
> 
> I am trying out a very simple scenario and I don't seem to get it working. It would be great if someone could point out what I am missing here.
> 
> I have set up a 2-node cluster. cassandra.yaml is the default and identical on both nodes, other than the seed: entry pointing at the other node, and I have set the Thrift RPC address and listen_address to publicly available hostnames. The replication factor is set to 1.
> 
> I have a client (using Hector) to do some basic operations like write, read, delete.
> 
>         CassandraHostConfigurator config = new CassandraHostConfigurator("arti-dev-logger-2:9160,arti-dev-logger-1:9160");
>         config.setAutoDiscoverHosts(true);
>         Cluster cluster = HFactory.createCluster("dev_cluster", config);
>         Keyspace artiKeyspace = HFactory.createKeyspace(this.getArti_persistence_cassandra_keyspace(), cluster, new ConsistencyLevelPolicy() {
>             @Override
>             public HConsistencyLevel get(OperationType op) {
>                 return HConsistencyLevel.ONE;
>             }
>             @Override
>             public HConsistencyLevel get(OperationType op, String cfName) {
>                 return HConsistencyLevel.ONE;
>             }
>         });
> 
> Nodetool shows the ring fine.
> 
> [root@arti-dev-logger-1 bin]# ./nodetool -host arti-dev-logger-1 ring
> Address         Status State   Load            Owns    Token
>                                                        140881507882391765636814029248607183802
> 171.71.189.47   Up     Normal  54.3 KB         60.79%  74161420796139335783812688622390550898
> 171.71.189.48   Up     Normal  66.96 KB        39.21%  140881507882391765636814029248607183802
> 
> [root@arti-dev-logger-1 bin]# ./nodetool -host arti-dev-logger-2 ring
> Address         Status State   Load            Owns    Token
>                                                        140881507882391765636814029248607183802
> 171.71.189.47   Up     Normal  54.3 KB         60.79%  74161420796139335783812688622390550898
> 171.71.189.48   Up     Normal  66.96 KB        39.21%  140881507882391765636814029248607183802

> 
> I observe that if I take arti-dev-logger-1 down and run my test against the cluster, the test succeeds.
> If I bring arti-dev-logger-1 back up and take down arti-dev-logger-2, my test complains:

> 
> com.cisco.step.arti.persistence.CassandraException: : May not be enough replicas present to handle consistency level.
> 
> I imagine I am doing something very fundamental wrong here. I have not attached a test case, hoping that an experienced person looking at this might be able to figure out what is going on right away.
> 
> Thanks
> Prasanna
> 

Prasanna Rajaperumal
prajaper@cisco.com



