cassandra-commits mailing list archives

From "Chris Lohfink (JIRA)" <j...@apache.org>
Subject [jira] [Assigned] (CASSANDRA-12418) sstabledump JSON fails after row tombstone
Date Tue, 09 Aug 2016 11:26:20 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Lohfink reassigned CASSANDRA-12418:
-----------------------------------------

    Assignee: Chris Lohfink

> sstabledump JSON fails after row tombstone
> ------------------------------------------
>
>                 Key: CASSANDRA-12418
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12418
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Tools
>            Reporter: Keith Wansbrough
>            Assignee: Chris Lohfink
>
> sstabledump fails in JSON generation on an sstable containing a row deletion, using Cassandra 3.10-SNAPSHOT accf7a4724e244d6f1ba921cb11d2554dbb54a76 from 2016-07-26.
> There are two exceptions displayed:
> * Fatal error parsing partition: aye org.codehaus.jackson.JsonGenerationException: Can not start an object, expecting field name
> * org.codehaus.jackson.JsonGenerationException: Current context not an ARRAY but OBJECT
> Steps to reproduce:
> {code}
> cqlsh> create KEYSPACE foo WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
> cqlsh> create TABLE foo.bar (id text, str text, primary key (id));
> cqlsh> insert into foo.bar (id, str) values ('aye', 'alpha');
> cqlsh> insert into foo.bar (id, str) values ('bee', 'beta');
> cqlsh> delete from foo.bar where id = 'bee';
> cqlsh> insert into foo.bar (id, str) values ('bee', 'beth');
> cqlsh> select * from foo.bar;
>  id  | str
> -----+-------
>  bee |  beth
>  aye | alpha
> (2 rows)
> cqlsh> 
> {code}
> Now find the sstable:
> {code}
> $ cassandra/bin/nodetool flush
> $ cassandra/bin/sstableutil foo bar
> [..]
> Listing files...
> [..]
> /home/kw217/cassandra/data/data/foo/bar-407c56f05e1a11e6835def64bf5c656e/mb-1-big-Data.db
> [..]
> {code}
> Now check with sstabledump -d. This works just fine.
> {code}
> $ cassandra/tools/bin/sstabledump -d /home/kw217/cassandra/data/data/foo/bar-407c56f05e1a11e6835def64bf5c656e/mb-1-big-Data.db
> [bee]@0 deletedAt=1470737827008101, localDeletion=1470737827
> [bee]@0 Row[info=[ts=1470737832405510] ]:  | [str=beth ts=1470737832405510]
> [aye]@31 Row[info=[ts=1470737784401778] ]:  | [str=alpha ts=1470737784401778]
> {code}
> Now run sstabledump. This should work as well, but it fails as follows:
> {code}
> $ cassandra/tools/bin/sstabledump /home/kw217/cassandra/data/data/foo/bar-407c56f05e1a11e6835def64bf5c656e/mb-1-big-Data.db
> ERROR 10:26:07 Fatal error parsing partition: aye
> org.codehaus.jackson.JsonGenerationException: Can not start an object, expecting field name
> 	at org.codehaus.jackson.impl.JsonGeneratorBase._reportError(JsonGeneratorBase.java:480) ~[jackson-core-asl-1.9.2.jar:1.9.2]
> 	at org.codehaus.jackson.impl.WriterBasedGenerator._verifyValueWrite(WriterBasedGenerator.java:836) ~[jackson-core-asl-1.9.2.jar:1.9.2]
> 	at org.codehaus.jackson.impl.WriterBasedGenerator.writeStartObject(WriterBasedGenerator.java:273) ~[jackson-core-asl-1.9.2.jar:1.9.2]
> 	at org.apache.cassandra.tools.JsonTransformer.serializePartition(JsonTransformer.java:181) ~[main/:na]
> 	at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184) ~[na:1.8.0_77]
> 	at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175) ~[na:1.8.0_77]
> 	at java.util.Iterator.forEachRemaining(Iterator.java:116) ~[na:1.8.0_77]
> 	at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801) ~[na:1.8.0_77]
> 	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) ~[na:1.8.0_77]
> 	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[na:1.8.0_77]
> 	at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) ~[na:1.8.0_77]
> 	at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174) ~[na:1.8.0_77]
> 	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[na:1.8.0_77]
> 	at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418) ~[na:1.8.0_77]
> 	at org.apache.cassandra.tools.JsonTransformer.toJson(JsonTransformer.java:99) ~[main/:na]
> 	at org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:237) ~[main/:na]
> [
>   {
>     "partition" : {
>       "key" : [ "bee" ],
>       "position" : 0,
>       "deletion_info" : { "marked_deleted" : "2016-08-09T10:17:07.008101Z", "local_delete_time"
: "2016-08-09T10:17:07Z" }
>     }
>   }
> ]org.codehaus.jackson.JsonGenerationException: Current context not an ARRAY but OBJECT
> 	at org.codehaus.jackson.impl.JsonGeneratorBase._reportError(JsonGeneratorBase.java:480)
> 	at org.codehaus.jackson.impl.WriterBasedGenerator.writeEndArray(WriterBasedGenerator.java:257)
> 	at org.apache.cassandra.tools.JsonTransformer.toJson(JsonTransformer.java:100)
> 	at org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:237)
> {code}
> If possible, could this be fixed in the 3.0.x stream as well as on trunk?
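The two exceptions above are consistent with how a streaming JSON generator tracks write context: inside an OBJECT context a new value (such as a nested object) may only follow a field name, and an array may only be closed while the current context is an ARRAY. A minimal, stdlib-only sketch of that state machine (hypothetical; not Jackson's or Cassandra's actual code) reproduces both messages when a partition's object is left open before the next partition starts:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of a streaming JSON generator's context tracking.
// It is NOT Jackson's implementation; it only mimics the two error
// messages from the sstabledump stack traces above.
public class Main {
    enum Ctx { ROOT, ARRAY, OBJECT }

    private final Deque<Ctx> stack = new ArrayDeque<>();
    private boolean fieldNamePending = false;

    Main() { stack.push(Ctx.ROOT); }

    void writeStartArray()  { verifyValueWrite(); stack.push(Ctx.ARRAY); }
    void writeStartObject() { verifyValueWrite(); stack.push(Ctx.OBJECT); }

    void writeEndArray() {
        if (stack.peek() != Ctx.ARRAY)
            throw new IllegalStateException(
                "Current context not an ARRAY but " + stack.peek());
        stack.pop();
    }

    // Inside an object, a value is only legal after a field name.
    private void verifyValueWrite() {
        if (stack.peek() == Ctx.OBJECT && !fieldNamePending)
            throw new IllegalStateException(
                "Can not start an object, expecting field name");
        fieldNamePending = false;
    }

    public static void main(String[] args) {
        Main g = new Main();
        g.writeStartArray();      // top-level [ ... ] of sstabledump output
        g.writeStartObject();     // "bee" partition
        // Suppose serializing the partition deletion leaves this object open.
        try {
            g.writeStartObject(); // next partition: "aye"
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
        try {
            g.writeEndArray();    // final attempt to close the top-level array
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Under this (assumed) model, the behavior would match serializePartition failing to close the "bee" partition's object after writing deletion_info for the partition-level deletion, which also fits the truncated JSON actually printed above.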



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
