cassandra-commits mailing list archives

From "Sylvain Lebresne (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-6428) Inconsistency in CQL native protocol
Date Tue, 03 Dec 2013 14:41:44 GMT


Sylvain Lebresne commented on CASSANDRA-6428:

As said in CASSANDRA-5428, the use of a short in the native protocol is, to a large extent,
an oversight. We'll fix it in the next iteration of the binary protocol, but we can't
break every existing driver over it in the current version (for hopefully obvious reasons).

Collections are not meant to be large, because that just doesn't work well from a language
point of view. A CQL collection is, from an API point of view, just one CQL column in one CQL
row. This is not the right place for large things, where by "large" I mean something that is
not meant to be queried in its entirety. CQL provides the notion of clustering columns, which
allow you to keep rows sorted and to query ranges of them. That is the right place for large things.

Now there is some wiggle room in what "large" (again, in the sense of "always fetched entirely")
means, and that depends on the use case. So maybe a collection would actually be a great fit
for you and the 64k limit is just a little too low. Sorry if that's the case; we'll lift that
limitation at some point, but again, we just can't break everyone by changing the collection
format in the current version of the protocol.
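To make the limit concrete, here is a minimal Python sketch of the current set layout, where the
item count is an unsigned short ([items count: short], then a length-prefixed body per item). The
`encode_set` helper is purely illustrative, not driver code; it just shows how a count of 65536
wraps to 0 while all the item bytes are still written:

```python
import struct

def encode_set(items):
    """Illustrative encoding of a set of byte strings in the current layout:
    [items count: unsigned short], then per item [item size: unsigned short][bytes].
    The count is taken modulo 2**16, mimicking the wrap-around at 65536 items."""
    parts = [struct.pack(">H", len(items) % 0x10000)]
    for item in items:
        parts.append(struct.pack(">H", len(item)) + item)
    return b"".join(parts)

# 65536 one-byte elements: the count field wraps to 0 even though the data is all there.
blob = encode_set([b"x"] * 65536)
count = struct.unpack(">H", blob[:2])[0]
print(count)      # 0
print(len(blob))  # 2 + 65536 * (2 + 1) = 196610
```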

For alternatives, you can split into two tables. Or maybe, if the problem is just that the current
limit is slightly too low, you have an easy way to distribute your set values into 5-10 separate
set columns. Otherwise, you can always do it the way you would have in Thrift, and use a table like:
{noformat}
CREATE TABLE t (
  pk text,
  name text,
  value blob,
  PRIMARY KEY (pk, name)
)
{noformat}
where "name" would hold both the names of your "static" properties (so 'val1', 'val2', etc.)
and your set elements (maybe with a prefix character to make sure they don't conflict with
the other properties), and "value" would hold the values of the "static" properties and just
an empty blob for the set elements. It does mean you need to handle the encoding/decoding of
your property values client side, but it's not *that* hard either.
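As a rough sketch of that client-side mapping (every name here, including the `s:` prefix and
both helper functions, is made up for illustration; this is not a Cassandra API):

```python
# Sketch of client-side encoding for a generic (pk, name, value) table.
SET_PREFIX = "s:"  # keeps set-element names from colliding with property names

def to_rows(pk, properties, set_elements):
    """Flatten static properties and set elements into (pk, name, value) rows."""
    rows = [(pk, name, value) for name, value in properties.items()]
    rows += [(pk, SET_PREFIX + element, b"") for element in set_elements]
    return rows

def from_rows(rows):
    """Rebuild the properties dict and the set from fetched rows."""
    properties, elements = {}, set()
    for _pk, name, value in rows:
        if name.startswith(SET_PREFIX):
            elements.add(name[len(SET_PREFIX):])
        else:
            properties[name] = value
    return properties, elements

rows = to_rows("user1", {"val1": b"a", "val2": b"b"}, {"e1", "e2"})
props, elems = from_rows(rows)
```

Since "name" is a clustering column, the set elements stay sorted on disk and can be queried
in ranges, which is exactly what the collection encoding can't give you today.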

And I'm sure there are other ideas, but I don't know the details of your use case (and to be
honest, JIRA is not the right place to discuss data modeling questions).

> Inconsistency in CQL native protocol
> ------------------------------------
>                 Key: CASSANDRA-6428
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Jan Chochol
> We are trying to use Cassandra CQL3 collections (sets and maps) for denormalizing data.
> The problem is when the size of these collections goes above some limit. We found that the
> current limitation is 64k - 1 (65535) items in a collection.
> We found that there is an inconsistency in the CQL binary protocol (all currently available versions).

> In the protocol (for a set) there are these fields:
> {noformat}
> [value size: int] [items count: short] [items] ...
> {noformat}
> One example in our case (collection with 65536 elements):
> {noformat}
> 00 21 ff ee 00 00 00 20 30 30 30 30 35 63 38 69 65 33 67 37 73 61 ...
> {noformat}
> So the decoded {{value size}} is 2228206 bytes and the {{items count}} is 0.
> This is wrong - you cannot have a collection with 0 items occupying more than 2 MB.
> I understand that an unsigned short cannot hold more than 65535, but I do not understand
> why there is such a limitation in the protocol, when all the data is currently sent.
> In this case we have several possibilities:
> * ignore the {{items count}} field and read all bytes specified in {{value size}}
> ** there is the problem that we cannot be sure this behaviour will be kept in future
> versions of Cassandra, as it is quite strange
> * refactor our code to use only small collections (this seems quite odd, as Cassandra
> has no problems with wide rows)
> * do not use collections, and fall back to plain wide rows
> * wait for a change in the protocol removing the unnecessary limitation
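For what it's worth, the six header bytes from the example above decode exactly as the reporter
describes; a quick Python check (`>iH` is a big-endian 4-byte int followed by an unsigned short):

```python
import struct

# First six bytes of the example: [value size: int][items count: short]
header = bytes.fromhex("0021ffee0000")
value_size, items_count = struct.unpack(">iH", header)
print(value_size)   # 2228206
print(items_count)  # 0
```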

This message was sent by Atlassian JIRA
