incubator-cassandra-user mailing list archives

From Aaron Morton <>
Subject Re: Starting up cassandra 0.7
Date Wed, 17 Nov 2010 19:28:24 GMT
That does look like a bug


On 17 Nov 2010, at 03:55 PM, Aaron Morton <> wrote:

It certainly looks suspect. I've had a look at the code around SSTableImport and SSTableExport,
and the isDeleted value for the column is based on IColumn.isMarkedForDelete, read when the data
was exported. I'll try to have a look tonight, or someone who is still up in the States may
help. The current 0.7 branch has the same code.

Are there any other errors in the log? 

The code sample you've got there uses the TimestampClocks that were once part of the 0.7
development but have been removed. So your data sample may not be usable going forward.
Can you jump to beta3?


On 17 Nov 2010, at 02:03 PM, CassUser CassUser <> wrote:

Looking at this more closely, I noticed the following in the SSTableImport class:

            if (col.isDeleted) {
                cfamily.addColumn(path, hexToBytes(col.value), new TimestampClock(col.timestamp));
            } else {
                cfamily.addTombstone(path, hexToBytes(col.value), new TimestampClock(col.timestamp));
            }
This appears to be backwards.
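
If it is backwards, the fix would simply swap the two branches: live columns go through addColumn, deleted ones become tombstones. A minimal, self-contained sketch of the intended semantics (using a hypothetical stand-in method rather than the real Cassandra types, since the actual fix would live inside SSTableImport):

```java
// Hypothetical stand-in for the import branch. The point is only the
// corrected condition: a deleted column should produce a tombstone,
// a live column a regular column -- the inverse of the snippet above.
class ImportBranchSketch {
    static String operationFor(boolean isDeleted) {
        if (isDeleted) {
            return "addTombstone"; // deleted column -> tombstone
        } else {
            return "addColumn";    // live column -> regular column
        }
    }

    public static void main(String[] args) {
        System.out.println(operationFor(false)); // live column
        System.out.println(operationFor(true));  // deleted column
    }
}
```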

On Tue, Nov 16, 2010 at 4:03 PM, CassUser CassUser <> wrote:
Looked at how DatabaseDescriptor loads the yaml file.  Using that approach solves the
problem with the column_families mapping exception.

The problem we are currently running into is that a known dataset is not being loaded into
our test instance correctly.

1.  Create a temp directory to host the Cassandra test instance
2.  Create keyspace directories
3.  Update the yaml file, and copy it to the test location
4.  Add the schema from yaml using DatabaseDescriptor.readTablesFromYaml() & DatabaseDescriptor.
5.  Use SSTableImport.importJson to set up a known dataset.  We have JSON files converted
to 0.7-compliant (byte[] rows)
6.  Start the Cassandra instance using the EmbeddedCassandraService class

Everything appears to work from the log messages.  I get the message "Sampling index for
..", and finally "Listening for thrift clients...".  When I use a client to query the data
in the test instance, I notice keys with no columns/values stored (via a key-range slice).  Using
an open-ended column slice for a known key I get zero results.

I'm currently running Cassandra 0.7 beta2.  Are the steps I've outlined above supposed to
work in 0.7?  We did something similar in 0.6.4 without any problems.  Are there known bugs
I can look into?


On Tue, Nov 16, 2010 at 12:24 PM, Aaron Morton <> wrote:
I've not used the embedded service. 

The code in o.a.c.service.EmbeddedCassandraService says it will read the yaml file. If the
cluster does not have a schema stored, I think it will load the one from the yaml.

Have you tried starting it up with an empty system data dir? Does it pick up the schema from
the yaml?


On 17 Nov 2010, at 09:17 AM, CassUser CassUser <> wrote:

Loading yaml file like so:
        FileInputStream yamlInputStream = new FileInputStream(
        Constructor constructor = new Constructor(Config.class);
        Yaml yaml = new Yaml(new Loader(constructor));
        Config conf = (Config) yaml.load(yamlInputStream);

Fails on the last line.

Although if I have the CFs defined like this:
- column_families:
  - !!org.apache.cassandra.config.RawColumnFamily
    column_metadata: []
    column_type: null
    comment: null
    compare_subcolumns_with: null
    compare_with: BytesType
    default_validation_class: null
    gc_grace_seconds: 864000
    keys_cached: 200000.0
    max_compaction_threshold: 32
    min_compaction_threshold: 4
    name: Similarity
    preload_row_cache: false
    read_repair_chance: 1.0
    rows_cached: 0.0

it appears to start up.  Any idea what's going on here?

On Tue, Nov 16, 2010 at 11:58 AM, CassUser CassUser <> wrote:

This is embedded for testing Cassandra 0.7 beta2, using EmbeddedCassandraService.

and manually adding schema programmatically using:
        for (KSMetaData table : DatabaseDescriptor.readTablesFromYaml()) {
            for (CFMetaData cfm : table.cfMetaData().values()) {

Is this the correct way to start up a test server, with the schema loaded?

On Tue, Nov 16, 2010 at 11:41 AM, Aaron Morton <> wrote:

AFAIK the ArrayStoreException is similar to a type mismatch. Is it possible you have something
mixed up in your classpath, or in your source code if you built from source?

It looks like the column family info was deserialised into a o.a.c.config.RawColumnFamily,
but when that object was added to the RawColumnFamily[] array on o.a.c.config.RawKeyspace
it was the wrong type.
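
For reference, an ArrayStoreException is thrown at runtime when an object of the wrong type is stored into an array slot, even though the assignment compiles. A minimal stand-alone illustration (nothing to do with the Cassandra classes, just the mechanism):

```java
// ArrayStoreException fires when the element's actual type is
// incompatible with the array's runtime component type, even though
// the assignment compiles (an Object[] variable may refer to a String[]).
class ArrayStoreDemo {
    static boolean triggersArrayStore() {
        Object[] slots = new String[1]; // really a String[] underneath
        try {
            slots[0] = Integer.valueOf(42); // Integer into a String[] slot
            return false;
        } catch (ArrayStoreException e) {
            return true; // runtime type check rejected the store
        }
    }

    public static void main(String[] args) {
        System.out.println(triggersArrayStore()); // prints "true"
    }
}
```

This is the same shape of failure SnakeYAML hits in `ArrayList.toArray`: the deserialised element's class does not match the array's component class, typically because the two classes came from different classloaders or mismatched jars.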

Have you tried a clean build? There are some things in the call stack which look custom;
are you starting from the command line, or is this embedded for testing?

Hope that helps.

On 17 Nov 2010, at 08:09 AM, CassUser CassUser <> wrote:

Here is the yaml:

# Cassandra YAML generated from previous config
# Configuration wiki:
authenticator: org.apache.cassandra.auth.AllowAllAuthenticator

auto_bootstrap: false
binary_memtable_throughput_in_mb: 256
cluster_name: Test Cluster
column_index_size_in_kb: 64
commitlog_rotation_threshold_in_mb: 128
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
compaction_thread_priority: 1
concurrent_reads: 8
concurrent_writes: 32
disk_access_mode: auto
dynamic_snitch: false
endpoint_snitch: org.apache.cassandra.locator.SimpleSnitch
request_scheduler: org.apache.cassandra.scheduler.RoundRobinScheduler
request_scheduler_id: keyspace
hinted_handoff_enabled: true
in_memory_compaction_limit_in_mb: 128
index_interval: 128
keyspaces:
    - name: myKeyspace
      replica_placement_strategy: org.apache.cassandra.locator.SimpleStrategy
      replication_factor: 1
      column_families:
        - name: Standard1
          rows_cached: 100
          keys_cached: 1
          compare_with: UTF8Type
memtable_flush_after_mins: 60
memtable_operations_in_millions: 0.3
memtable_throughput_in_mb: 64
partitioner: org.apache.cassandra.dht.RandomPartitioner
phi_convict_threshold: 8
rpc_keepalive: true
rpc_port: 9160
rpc_timeout_in_ms: 10000
sliced_buffer_size_in_kb: 64
snapshot_before_compaction: false
storage_port: 7000
thrift_framed_transport_size_in_mb: 15
thrift_max_message_length_in_mb: 16

and here is the exception I'm receiving:

Caused by: Can't construct a java object for tag:yaml.org,2002:org.apache.cassandra.config.Config;
exception=Cannot create property=keyspaces for JavaBean=org.apache.cassandra.config.Config@12e43f1;
Cannot create property=column_families for JavaBean=org.apache.cassandra.config.RawKeyspace@1a8bd74;
 in "<reader>", line 3, column 1:
    authenticator: org.apache.cassan ...

        at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(
        at org.yaml.snakeyaml.constructor.BaseConstructor.constructObject(
        at org.yaml.snakeyaml.constructor.BaseConstructor.constructDocument(
        at org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(
        at org.yaml.snakeyaml.Loader.load(
        at org.yaml.snakeyaml.Yaml.load(
        at com.atsid.cassandra.testutils.CassandraTestRunner.updateYamlConfig(CassandraTestRunner.java:134)
        at com.atsid.cassandra.testutils.CassandraTestRunner.init(
        at com.atsid.cassandra.testutils.CassandraTestRunner.main(
        ... 6 more
Caused by: org.yaml.snakeyaml.error.YAMLException: Cannot create property=keyspaces for JavaBean=org.apache.cassandra.config.Config@12e43f1;
Cannot create property=column_families for JavaBean=org.apache.cassandra.config.RawKeyspace@1a8bd74;
        at org.yaml.snakeyaml.constructor.Constructor$ConstructMapping.constructJavaBean2ndStep(
        at org.yaml.snakeyaml.constructor.Constructor$ConstructMapping.construct(
        at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(
        ... 14 more
Caused by: org.yaml.snakeyaml.error.YAMLException: Cannot create property=column_families for JavaBean=org.apache.cassandra.config.RawKeyspace@1a8bd74; null
        at org.yaml.snakeyaml.constructor.Constructor$ConstructMapping.constructJavaBean2ndStep(
        at org.yaml.snakeyaml.constructor.Constructor$ConstructMapping.construct(
        at org.yaml.snakeyaml.constructor.BaseConstructor.constructObject(
        at org.yaml.snakeyaml.constructor.BaseConstructor.constructSequenceStep2(
        at org.yaml.snakeyaml.constructor.BaseConstructor.constructSequence(
        at org.yaml.snakeyaml.constructor.Constructor$ConstructSequence.construct(
        at org.yaml.snakeyaml.constructor.BaseConstructor.constructObject(
        at org.yaml.snakeyaml.constructor.Constructor$ConstructMapping.constructJavaBean2ndStep(
        ... 16 more
Caused by: java.lang.ArrayStoreException
        at java.lang.System.arraycopy(Native Method)
        at java.util.Arrays.copyOf(
        at java.util.ArrayList.toArray(
        at org.yaml.snakeyaml.constructor.Constructor$ConstructMapping.constructJavaBean2ndStep(
        ... 23 more
