cassandra-commits mailing list archives

From "vinoth (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (CASSANDRA-9651) Not able to view the data if the wrong timestamp gets updated
Date Thu, 25 Jun 2015 12:01:04 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-9651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

vinoth updated CASSANDRA-9651:
------------------------------
    Description: 
We have been using the DataStax distribution of Apache Cassandra 2.1.2.

This is the DESCRIBE output for our column family:

CREATE TABLE pidb.customer_health1 (
    domain text,
    sme text,
    starttime timestamp,
    device text,
    fair counter,
    good counter,
    poor counter,
    PRIMARY KEY ((domain, sme), starttime, device)
) WITH CLUSTERING ORDER BY (starttime ASC, device ASC)
    AND bloom_filter_fp_chance = 0.01
    AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
    AND comment = ''
    AND compaction = {'min_threshold': '4', 'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32'}
    AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99.0PERCENTILE';

We have data in this column family, and we are using a Perl client to write to Cassandra.

Everything was fine until today, when something went wrong during an UPDATE and we can no longer retrieve the data from the column family.

While debugging we found that the client wrote a wrong timestamp: the column expects milliseconds since the epoch, but for some reason the value was written in microseconds.

Cassandra did not report any error on the write, but when we use SELECT to read the data back, it fails with the following error:

xyz@cqlsh:pidb> select * from customer_health1;
Traceback (most recent call last):
  File "/usr/local/cassandra/bin/cqlsh", line 960, in perform_simple_statement
    rows = self.session.execute(statement, trace=self.tracing_enabled)
  File "/usr/local/dsc-cassandra-2.1.2/bin/../lib/cassandra-driver-internal-only-2.1.2.zip/cassandra-driver-2.1.2/cassandra/cluster.py", line 1294, in execute
    result = future.result(timeout)
  File "/usr/local/dsc-cassandra-2.1.2/bin/../lib/cassandra-driver-internal-only-2.1.2.zip/cassandra-driver-2.1.2/cassandra/cluster.py", line 2788, in result
    raise self._final_exception
ValueError: year is out of range 

A single wrong update now makes the entire table unreadable.
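The failure can be reproduced outside Cassandra with plain Python; this is a minimal sketch of what cqlsh (via the bundled Python driver) does when decoding the column, and the timestamp values below are illustrative, not the actual stored data:

```python
# Sketch of the decoding failure: Cassandra stores a CQL timestamp as
# milliseconds since the epoch, and the Python driver converts it to a
# datetime. A value that was accidentally written in microseconds is
# ~1000x too large, so the computed year blows past datetime.MAXYEAR
# (9999) and the conversion fails with "year is out of range".
from datetime import datetime, timezone

millis = 1435233664000     # 2015-06-25 UTC, correctly stored in milliseconds
micros = 1435233664000000  # same instant, mistakenly written in microseconds

# The correct value decodes fine:
print(datetime.fromtimestamp(millis / 1000.0, tz=timezone.utc))

# Interpreting the microsecond value as milliseconds puts the year
# tens of thousands of years in the future, so decoding raises:
try:
    datetime.fromtimestamp(micros / 1000.0, tz=timezone.utc)
except (ValueError, OverflowError, OSError) as exc:
    print(exc)
```

This also shows why the whole SELECT fails rather than just one row: cqlsh aborts on the first row whose timestamp cannot be converted.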

  was:
Same as the description above, except the first sentence read "We have huge data in this Column Family".


> Not able to view the data if the wrong timestamp gets updated
> -------------------------------------------------------------
>
>                 Key: CASSANDRA-9651
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-9651
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: vinoth
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
