cassandra-commits mailing list archives

From "Sylvain Lebresne (JIRA)" <>
Subject [jira] Updated: (CASSANDRA-1130) Cassandra throws Exceptions at startup when using TTL in SuperColumns
Date Fri, 28 May 2010 16:03:39 GMT


Sylvain Lebresne updated CASSANDRA-1130:

    Attachment: 0001-Allow-for-multiple-mark-on-a-file.patch

Attached file should fix the problem.

The problem is not related to TTL per se but to row iteration, so the
ticket title can be a bit misleading.
Citing IRC:
  "a ColumnGroupReader marks the file when it is created, then calls reset() when getting the
   next block. But with the way row iteration works, a new ColumnGroupReader is created (and
   marks the file) before the previous one has retrieved its block.
   (This is because computeNext() creates the next SSTableSliceIterator before getReduced()
   has retrieved the actual columns of the previous one.)"
The patch allows each ColumnGroupReader to have its own mark on the file.
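To make the race concrete, here is a minimal toy sketch (not Cassandra's actual code; all class and method names are hypothetical) of how two readers sharing a single mark on one file clobber each other, which is the failure mode described above:

```python
class SharedMarkFile:
    """Toy file with a single shared mark, mimicking the pre-patch behavior."""
    def __init__(self, data):
        self.data = data
        self.pos = 0
        self.mark_pos = 0

    def mark(self):
        # Only ONE mark exists for the whole file.
        self.mark_pos = self.pos

    def reset(self):
        self.pos = self.mark_pos

    def read(self, n):
        out = self.data[self.pos:self.pos + n]
        self.pos += n
        return out


class GroupReader:
    """Marks the file at creation; reset()s before reading its block,
    like the ColumnGroupReader behavior described in the IRC quote."""
    def __init__(self, f, start, length):
        self.f = f
        self.length = length
        f.pos = start
        f.mark()           # clobbers any earlier reader's mark

    def read_block(self):
        self.f.reset()     # may now jump to the WRONG position
        return self.f.read(self.length)


f = SharedMarkFile(b"AAAABBBB")
r1 = GroupReader(f, 0, 4)   # wants the "AAAA" block
r2 = GroupReader(f, 4, 4)   # created before r1 reads -> overwrites the mark
print(r1.read_block())      # b'BBBB' -- r1 reads r2's block, not its own
```

The fix corresponds to storing the mark per reader (e.g. each GroupReader remembering its own start offset instead of calling the shared mark()), so the order in which readers are created no longer matters.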

By the way, I was unable to reproduce this previously because it is the cache preloading
that triggered the error, and I did not use one in my first tests.

Thanks Jignesh for helping find this one.

> Cassandra throws Exceptions at startup when using TTL in SuperColumns
> ---------------------------------------------------------------------
>                 Key: CASSANDRA-1130
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 0.7
>            Reporter: Jignesh Dhruv
>            Assignee: Sylvain Lebresne
>             Fix For: 0.7
>         Attachments: 0001-Allow-for-multiple-mark-on-a-file.patch
> Hello,
> I am trying to use TTL (timeToLive) feature in SuperColumns.
> My usecase is:
> - I have a SuperColumn and 3 subcolumns.
> - I try to expire data after 60 seconds.
> While Cassandra is up and running, I am successfully able to push and read data without
> any problems. Data compaction and all occurs fine. After inserting say about 100000 records,
> I stop Cassandra while data is still coming.
> On startup Cassandra throws an exception and won't start up (this happens about 1 in
> every 3 times). The exception varies:
> - EOFException while reading data
> - negative value encountered exception
> - Heap Space Exception
> Cassandra simply won't start up.
> Again, I get this problem only when I use TTL with SuperColumns. There are no issues
> with using TTL with regular Columns.
> I tried to diagnose the problem, and it seems to happen on startup when Cassandra sees a
> Column that is marked deleted and tries to read its data. It's off by some bytes, hence
> exceptions like:
> Caused by: Corrupt (negative) value length encountered
>         at org.apache.cassandra.utils.FBUtilities.readByteArray(
>         at org.apache.cassandra.db.ColumnSerializer.deserialize(
>         at org.apache.cassandra.db.SuperColumnSerializer.deserialize(
>         at org.apache.cassandra.db.SuperColumnSerializer.deserialize(
>         at org.apache.cassandra.db.filter.SSTableSliceIterator$ColumnGroupReader.getNextBlock(
>         at org.apache.cassandra.db.filter.SSTableSliceIterator$ColumnGroupReader.pollColumn(
>         ... 18 more
> Let me know if you need more information.
> Thanks,
> Jignesh

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
