cassandra-commits mailing list archives

From "Jignesh Dhruv (JIRA)" <>
Subject [jira] Commented: (CASSANDRA-1130) Cassandra throws Exceptions at startup when using TTL in SuperColumns
Date Wed, 26 May 2010 17:30:41 GMT


Jignesh Dhruv commented on CASSANDRA-1130:

I checked out the latest source code this morning and I am still able to reproduce it.

My use case is:
- Start Cassandra.
- Keep adding SuperColumns with 3 subcolumns in each SuperColumn; each subcolumn expires in 35 seconds.
- Let Cassandra run until you see log statements like "Deleted files".
- Stop Cassandra and start it again; it will throw the exceptions I am talking about.

Also, I believe an ExpiringColumn carries more data than a DeletedColumn. Correct?
In my testing I found that each DeletedColumn record had a length similar to an
ExpiringColumn's, and once a complete DeletedColumn record had been read there were extra
bytes left at the end of the record, which seems to be causing this issue.

When an ExpiringColumn is converted to a DeletedColumn, is it an in-place replacement, or
is the old record reused with just the EXPIRING_MASK changed to DELETED_MASK?
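To make the question concrete, here is a minimal sketch of why the two layouts would differ in length. This is not Cassandra's actual serializer; the field names, order, and flag values are assumptions, chosen only to illustrate that an expiring record carries two extra ints that a deleted record does not:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class ColumnLayoutSketch {
    // Hypothetical on-disk layout: name length, name, flags byte,
    // [ttl + localExpirationTime only if expiring], timestamp, value length, value.
    static byte[] serialize(byte[] name, byte flags, boolean expiring,
                            long timestamp, byte[] value) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeShort(name.length);
        out.write(name);
        out.writeByte(flags);
        if (expiring) {
            out.writeInt(35);          // ttl in seconds (assumed field)
            out.writeInt(1234567890);  // localExpirationTime (assumed field)
        }
        out.writeLong(timestamp);
        out.writeInt(value.length);
        out.write(value);
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] name = "col".getBytes("UTF-8");
        byte[] value = new byte[4];
        byte[] expiringRec = serialize(name, (byte) 0x01, true, 1L, value);
        byte[] deletedRec  = serialize(name, (byte) 0x02, false, 1L, value);
        // The expiring layout carries two extra ints (8 bytes); flipping only
        // the flag byte would leave those bytes behind and shift every later read.
        System.out.println(expiringRec.length - deletedRec.length);
    }
}
```

If startup deserialization assumes the deleted layout but the record on disk still has the expiring layout, the reader ends up 8 bytes short of the next record boundary, which would be consistent with negative value lengths and EOFExceptions.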

I will try to produce a JUnit test case, but one still needs to stop Cassandra when some
files are being deleted. At that point you will see the error I am talking about.


> Cassandra throws Exceptions at startup when using TTL in SuperColumns
> ---------------------------------------------------------------------
>                 Key: CASSANDRA-1130
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 0.7
>            Reporter: Jignesh Dhruv
>            Assignee: Sylvain Lebresne
>             Fix For: 0.7
> Hello,
> I am trying to use TTL (timeToLive) feature in SuperColumns.
> My use case is:
> - I have a SuperColumn and 3 subcolumns.
> - I try to expire data after 60 seconds.
> While Cassandra is up and running, I am able to push and read data without any problems,
> and data compaction works fine. After inserting about 100000 records, I stop Cassandra
> while data is still coming in.
> On startup Cassandra throws an exception and won't start up. (This happens 1 in every
> 3 times.) The exception varies:
> - EOFException while reading data
> - negative value encountered exception
> - Heap Space Exception
> Cassandra simply won't start up.
> Again, I get this problem only when I use TTL with SuperColumns. There are no issues with
> using TTL with regular Columns.
> I tried to diagnose the problem; it seems to happen on startup when Cassandra encounters
> a Column that is marked Deleted and tries to read its data. The read is off by some bytes,
> hence all these exceptions:
> Caused by: Corrupt (negative) value length encountered
>         at org.apache.cassandra.utils.FBUtilities.readByteArray(
>         at org.apache.cassandra.db.ColumnSerializer.deserialize(
>         at org.apache.cassandra.db.SuperColumnSerializer.deserialize(
>         at org.apache.cassandra.db.SuperColumnSerializer.deserialize(
>         at org.apache.cassandra.db.filter.SSTableSliceIterator$ColumnGroupReader.getNextBlock(
>         at org.apache.cassandra.db.filter.SSTableSliceIterator$ColumnGroupReader.pollColumn(
>         ... 18 more
> Let me know if you need more information.
> Thanks,
> Jignesh

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
