incubator-cassandra-user mailing list archives

From: ruslan usifov <ruslan.usi...@gmail.com>
Subject: NPE in cassandra0.7 (from trunk) while bootstrap
Date: Fri, 22 Oct 2010 10:11:26 GMT
I am trying out Cassandra 0.7 (built from trunk) and it looks better than the
0.6 branch, but when I try to add a new node with auto_bootstrap: true I get an
NPE (192.168.0.37 is the initial node with data on it, 192.168.0.220 is the
bootstrapping node):

DEBUG 14:00:58,931 Checking to see if compaction of Schema would be useful
DEBUG 14:00:58,948 Checking to see if compaction of IndexInfo would be useful
 INFO 14:00:58,929 Upgrading to 0.7. Purging hints if there are any. Old hints will be snapshotted.
 INFO 14:00:58,954 Cassandra version: 0.7.0-beta2-SNAPSHOT
 INFO 14:00:58,954 Thrift API version: 19.2.0
 INFO 14:00:58,961 Loading persisted ring state
 INFO 14:00:58,962 Starting up server gossip
 INFO 14:00:58,968 switching in a fresh Memtable for LocationInfo at CommitLogContext(file='/data/cassandra/0.7/commitlog/CommitLog-1287741658826.log', position=700)
 INFO 14:00:58,969 Enqueuing flush of Memtable-LocationInfo@14222419(227 bytes, 4 operations)
 INFO 14:00:58,970 Writing Memtable-LocationInfo@14222419(227 bytes, 4 operations)
 INFO 14:00:59,089 Completed flushing /data/cassandra/0.7/data/system/LocationInfo-e-1-Data.db
DEBUG 14:00:59,093 Checking to see if compaction of LocationInfo would be useful
DEBUG 14:00:59,094 discard completed log segments for CommitLogContext(file='/data/cassandra/0.7/commitlog/CommitLog-1287741658826.log', position=700), column family 0.
DEBUG 14:00:59,095 Marking replay position 700 on commit log CommitLogSegment(/data/cassandra/0.7/commitlog/CommitLog-1287741658826.log)
DEBUG 14:00:59,116 attempting to connect to /192.168.0.37
ERROR 14:00:59,118 Exception encountered during startup.
java.lang.NullPointerException
        at org.apache.cassandra.db.SystemTable.isBootstrapped(SystemTable.java:308)
        at org.apache.cassandra.service.StorageService.initServer(StorageService.java:437)
        at org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:159)
        at org.apache.cassandra.thrift.CassandraDaemon.setup(CassandraDaemon.java:55)
        at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:215)
        at org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:134)
Exception encountered during startup.
java.lang.NullPointerException
        at org.apache.cassandra.db.SystemTable.isBootstrapped(SystemTable.java:308)
        at org.apache.cassandra.service.StorageService.initServer(StorageService.java:437)
        at org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:159)
        at org.apache.cassandra.thrift.CassandraDaemon.setup(CassandraDaemon.java:55)
        at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:215)
        at org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:134)



Is this a bug, or am I doing something wrong?
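
My guess from the trace: SystemTable.isBootstrapped() seems to read a bootstrap
flag that has never been persisted on a brand-new node, so the lookup comes back
null and gets dereferenced. The kind of null check I would expect is roughly the
sketch below (made-up names, just to show what I mean -- not the real SystemTable
code):

// Sketch only: made-up names, not the real org.apache.cassandra.db.SystemTable code.
// It just shows treating "nothing persisted yet" as "not bootstrapped"
// instead of dereferencing a null read.
public class BootstrapCheckSketch
{
    // Stand-in for whatever the system table read returns; on a node that
    // has never bootstrapped, nothing has been written yet, so this can be null.
    static Boolean persistedBootstrapFlag = null;

    public static boolean isBootstrapped()
    {
        return persistedBootstrapFlag != null && persistedBootstrapFlag;
    }

    public static void main(String[] args)
    {
        System.out.println("fresh node bootstrapped? " + isBootstrapped()); // prints false
    }
}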






PS: here is my cassandra.yaml:

# Cassandra storage config YAML

cluster_name: 'Test Cluster'

initial_token:

auto_bootstrap: true

hinted_handoff_enabled: true

authenticator: org.apache.cassandra.auth.AllowAllAuthenticator

authority: org.apache.cassandra.auth.AllowAllAuthority

partitioner: org.apache.cassandra.dht.RandomPartitioner

# directories where Cassandra should store data on disk.
data_file_directories:
    - /data/cassandra/0.7/data

# commit log
commitlog_directory: /data/cassandra/0.7/commitlog

# saved caches
saved_caches_directory: /data/cassandra/0.7/saved_caches

# Size to allow commitlog to grow to before creating a new segment
commitlog_rotation_threshold_in_mb: 128

commitlog_sync: periodic

commitlog_sync_period_in_ms: 10000

seeds:
    - 192.168.0.37

disk_access_mode: auto

concurrent_reads: 8
concurrent_writes: 32

memtable_flush_writers: 1

# TCP port, for commands and data
storage_port: 7000
listen_address: 192.168.0.220

rpc_address: 192.168.0.220
rpc_port: 9160

# enable or disable keepalive on rpc connections
rpc_keepalive: true

binary_memtable_throughput_in_mb: 256

# Add column indexes to a row after its contents reach this size.
# Increase if your column values are large, or if you have a very large
# number of columns.  The competing causes are, Cassandra has to
# deserialize this much of the row to read a single column, so you want
# it to be small - at least if you do many partial-row reads - but all
# the index data is read for each access, so you don't want to generate
# that wastefully either.
column_index_size_in_kb: 64

# Size limit for rows being compacted in memory.  Larger rows will spill
# over to disk and use a slower two-pass compaction process.  A message
# will be logged specifying the row key.
in_memory_compaction_limit_in_mb: 64

# Time to wait for a reply from other nodes before failing the command
rpc_timeout_in_ms: 10000

# phi value that must be reached for a host to be marked down.
# most users should never need to adjust this.
# phi_convict_threshold: 8

# endpoint_snitch -- Set this to a class that implements
# IEndpointSnitch, which will let Cassandra know enough
# about your network topology to route requests efficiently.
# Out of the box, Cassandra provides
#  - org.apache.cassandra.locator.SimpleSnitch:
#    Treats Strategy order as proximity. This improves cache locality
#    when disabling read repair, which can further improve throughput.
#  - org.apache.cassandra.locator.RackInferringSnitch:
#    Proximity is determined by rack and data center, which are
#    assumed to correspond to the 3rd and 2nd octet of each node's
#    IP address, respectively
#  - org.apache.cassandra.locator.PropertyFileSnitch:
#    Proximity is determined by rack and data center, which are
#    explicitly configured in cassandra-rack.properties.
endpoint_snitch: org.apache.cassandra.locator.SimpleSnitch

# dynamic_snitch -- This boolean controls whether the above snitch is
# wrapped with a dynamic snitch, which will monitor read latencies
# and avoid reading from hosts that have slowed (due to compaction,
# for instance)
dynamic_snitch: true
# controls how often to perform the more expensive part of host score
# calculation
dynamic_snitch_update_interval_in_ms: 100
# controls how often to reset all host scores, allowing a bad host to
# possibly recover
dynamic_snitch_reset_interval_in_ms: 600000
# if set greater than zero and read_repair_chance is < 1.0, this will allow
# 'pinning' of replicas to hosts in order to increase cache capacity.
# The badness threshold will control how much worse the pinned host has to be
# before the dynamic snitch will prefer other replicas over it.  This is
# expressed as a double which represents a percentage.
dynamic_snitch_badness_threshold: 0.0

# request_scheduler -- Set this to a class that implements
# RequestScheduler, which will schedule incoming client requests
# according to the specific policy. This is useful for multi-tenancy
# with a single Cassandra cluster.
# NOTE: This is specifically for requests from the client and does
# not affect inter node communication.
# org.apache.cassandra.scheduler.NoScheduler - No scheduling takes place
# org.apache.cassandra.scheduler.RoundRobinScheduler - Round robin of
# client requests to a node with a separate queue for each
# request_scheduler_id. The scheduler is further customized by
# request_scheduler_options as described below.
request_scheduler: org.apache.cassandra.scheduler.NoScheduler

# Scheduler Options vary based on the type of scheduler
# NoScheduler - Has no options
# RoundRobin
#  - throttle_limit -- The throttle_limit is the number of in-flight
#                      requests per client.  Requests beyond
#                      that limit are queued up until
#                      running requests can complete.
#                      The value of 80 here is twice the number of
#                      concurrent_reads + concurrent_writes.
#  - default_weight -- default_weight is optional and allows for
#                      overriding the default which is 1.
#  - weights -- Weights are optional and will default to 1 or the
#               overridden default_weight. The weight translates into how
#               many requests are handled during each turn of the
#               RoundRobin, based on the scheduler id.
#
# request_scheduler_options:
#    throttle_limit: 80
#    default_weight: 5
#    weights:
#      Keyspace1: 1
#      Keyspace2: 5

# request_scheduler_id -- An identifier based on which to perform
# the request scheduling. Currently the only valid option is keyspace.
# request_scheduler_id: keyspace

# The Index Interval determines how large the sampling of row keys
#  is for a given SSTable. The larger the sampling, the more effective
#  the index is at the cost of space.
index_interval: 128
