cassandra-commits mailing list archives

From "Hau Phan (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (CASSANDRA-12215) NullPointerException during Compaction
Date Wed, 20 Jul 2016 20:14:20 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-12215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386470#comment-15386470 ]

Hau Phan edited comment on CASSANDRA-12215 at 7/20/16 8:13 PM:
---------------------------------------------------------------

One thing I noticed: when gc_grace_seconds is set to 8640000 (100 days), the tables are readable and compaction runs without error.
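
(For reference, a minimal cqlsh sketch of that change; the keyspace and table names below are placeholders, not the actual schema from this report:)
{code}
-- hypothetical names ks.tbl; the point is only the gc_grace_seconds setting
ALTER TABLE ks.tbl WITH gc_grace_seconds = 8640000;  -- 100 days

-- confirm the new value
SELECT gc_grace_seconds FROM system_schema.tables
WHERE keyspace_name = 'ks' AND table_name = 'tbl';
{code}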

With tracing enabled, a select * from the table shows 4 tombstones:
{code}
Read 124 live and 4 tombstone cells [SharedPool-Worker-2] | 2016-07-20 14:09:17.236000 | 127.0.0.1 |           1849
{code}
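
(The trace above can be reproduced in cqlsh roughly like this; ks.tbl is again a placeholder:)
{code}
TRACING ON;
SELECT * FROM ks.tbl;
-- the trace session printed after the result set contains lines such as
-- "Read N live and M tombstone cells"
TRACING OFF;
{code}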

I have run the steps below (a rough cqlsh/shell sketch of them follows the list):
- COPY table TO 'file'
- DROP table
- rm -rf'd the table directory on the file system
- CREATE table (with same schema)
- COPY table FROM 'file'
- Ran the select * again, and the tombstones still exist.
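
(Sketch of those steps; ks.tbl, the schema, and the data directory path are placeholders, since the real ones are not shown in this report:)
{code}
COPY ks.tbl TO 'tbl.csv';
DROP TABLE ks.tbl;
-- from the shell, remove the table directory, e.g.:
--   rm -rf <data_directory>/ks/tbl-<table-id>/
CREATE TABLE ks.tbl (id int PRIMARY KEY, val text);  -- hypothetical schema, same as before the DROP
COPY ks.tbl FROM 'tbl.csv';
SELECT * FROM ks.tbl;  -- with tracing on, the 4 tombstones still show up
{code}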

Expected behavior would be a clean table with no tombstones, yet the 4 tombstones exist. 
Reviewing the 'dump' file shows no tombstones.  






was (Author: nothau):
One thing I noticed: when gc_grace_seconds is set to 8640000 (100 days), the tables are readable and compaction runs without error.

With tracing enabled, a select * from the table shows 4 tombstones:
{code}
Read 124 live and 4 tombstone cells [SharedPool-Worker-2] | 2016-07-20 14:09:17.236000 | 127.0.0.1 |           1849
{code}

I have run a COPY TO, truncated the table, ran a COPY FROM, then ran the select * again, and the tombstones still exist.

Reviewing the 'dump' file, I'm not seeing anything marked as a tombstone and the data looks fine. My understanding is that compaction should remove tombstone records once they are past the gc_grace_seconds limit. Also, if the 'dump' file doesn't show tombstones, where is Cassandra retrieving that info?





> NullPointerException during Compaction
> --------------------------------------
>
>                 Key: CASSANDRA-12215
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12215
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Compaction
>         Environment: Cassandra 3.0.8, cqlsh 5.0.1
>            Reporter: Hau Phan
>             Fix For: 3.0.x
>
>
> Running 3.0.8 on a single standalone node with cqlsh 5.0.1; the keyspace has RF = 1 and class SimpleStrategy.
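> (The keyspace settings described above correspond to a definition along these lines; the keyspace name is a placeholder:)
> {code}
> CREATE KEYSPACE ks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
> {code}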
> Attempting to run a 'select * from <table>' results in this error:
> {code}
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation failed - received 0 responses and 1 failures" info={'failures': 1, 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> {code}
> Cassandra system.log prints this:
> {code}
> ERROR [CompactionExecutor:5] 2016-07-15 13:42:13,219 CassandraDaemon.java:201 - Exception in thread Thread[CompactionExecutor:5,1,main]
> java.lang.NullPointerException: null
> 	at org.apache.cassandra.db.transform.UnfilteredRows.isEmpty(UnfilteredRows.java:58) ~[apache-cassandra-3.0.8.jar:3.0.8]
> 	at org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:64) ~[apache-cassandra-3.0.8.jar:3.0.8]
> 	at org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:24) ~[apache-cassandra-3.0.8.jar:3.0.8]
> 	at org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:96) ~[apache-cassandra-3.0.8.jar:3.0.8]
> 	at org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226) ~[apache-cassandra-3.0.8.jar:3.0.8]
> 	at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177) ~[apache-cassandra-3.0.8.jar:3.0.8]
> 	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-3.0.8.jar:3.0.8]
> 	at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78) ~[apache-cassandra-3.0.8.jar:3.0.8]
> 	at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60) ~[apache-cassandra-3.0.8.jar:3.0.8]
> 	at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:263) ~[apache-cassandra-3.0.8.jar:3.0.8]
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_65]
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_65]
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_65]
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_65]
> 	at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
> {code}
> Running sstabledump -d shows a few rows with the column value of "<tombstone>", which tells me compaction doesn't seem to be working correctly.
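> (For reference, sstabledump -d is invoked against an SSTable data file; the path below is a placeholder:)
> {code}
> sstabledump -d <path-to-sstable>-Data.db
> {code}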
> {code}
> # nodetool compactionstats 
> pending tasks: 1
> {code}
> Attempting to run a compaction gets:
> {code}
> # nodetool compact <table> <cf>
> error: null
> -- StackTrace --
> java.lang.NullPointerException
> 	at org.apache.cassandra.db.transform.UnfilteredRows.isEmpty(UnfilteredRows.java:58)
> 	at org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:64)
> 	at org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:24)
> 	at org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:96)
> 	at org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
> 	at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
> 	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> 	at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
> 	at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
> 	at org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:606)
> 	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
> {code}
> Since the table is pretty small, I can do a copy to, truncate the table, and copy from, and the table is fine. But my concern is that if compaction fails to remove those rows and the table eventually grows very large in a production environment, the copy, truncate, and copy approach will no longer be an option.



