cassandra-commits mailing list archives

From jmcken...@apache.org
Subject cassandra git commit: Added compatibility table and test for primitive types
Date Mon, 23 May 2016 20:14:38 GMT
Repository: cassandra
Updated Branches:
  refs/heads/trunk b8f4ae004 -> 93b3aa8a4


Added compatibility table and test for primitive types

Patch by Giampaolo Trapasso; reviewed by Alex Petrov for CASSANDRA-11114


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/93b3aa8a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/93b3aa8a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/93b3aa8a

Branch: refs/heads/trunk
Commit: 93b3aa8a469f76a49e4d0975d0b6ad6e85432a47
Parents: b8f4ae0
Author: Alex Petrov <oleksandr.petrov@gmail.com>
Authored: Fri May 20 18:46:37 2016 +0200
Committer: Josh McKenzie <jmckenzie@apache.org>
Committed: Mon May 23 16:12:25 2016 -0400

----------------------------------------------------------------------
 doc/cql3/CQL.textile                            | 24 +++++-
 .../apache/cassandra/config/CFMetaDataTest.java | 87 ++++++++++++++++++--
 2 files changed, 101 insertions(+), 10 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/cassandra/blob/93b3aa8a/doc/cql3/CQL.textile
----------------------------------------------------------------------
diff --git a/doc/cql3/CQL.textile b/doc/cql3/CQL.textile
index 4afdb4a..171bf77 100644
--- a/doc/cql3/CQL.textile
+++ b/doc/cql3/CQL.textile
@@ -417,11 +417,33 @@ p.
 The @ALTER@ statement is used to manipulate table definitions. It allows for adding new columns, dropping existing ones, changing the type of existing columns, or updating the table options. As with table creation, @ALTER COLUMNFAMILY@ is allowed as an alias for @ALTER TABLE@.
 
 The @<tablename>@ is the table name optionally preceded by the keyspace name. The @<instruction>@ defines the alteration to perform:
-* @ALTER@: Update the type of a given defined column. Note that the type of the "clustering columns":#createTablepartitionClustering cannot be modified as it induces the on-disk ordering of rows. Columns on which a "secondary index":#createIndexStmt is defined have the same restriction. Other columns are free from those restrictions (no validation of existing data is performed), but it is usually a bad idea to change the type to a non-compatible one, unless no data have been inserted for that column yet, as this could confuse CQL drivers/tools.
+* @ALTER@: Update the type of a given defined column. Note that the type of the "clustering columns":#createTablepartitionClustering can be modified only in very limited cases, as it determines the on-disk ordering of rows. Columns on which a "secondary index":#createIndexStmt is defined have the same restriction. To change the type of any other column, the column must already exist in the table definition and its current type must be compatible with the new type. No validation of existing data is performed. The compatibility table is given below.
 * @ADD@: Adds a new column to the table. The @<identifier>@ for the new column must not conflict with an existing column. Moreover, columns cannot be added to tables defined with the @COMPACT STORAGE@ option.
 * @DROP@: Removes a column from the table. Dropped columns will immediately become unavailable in the queries and will not be included in compacted sstables in the future. If a column is readded, queries won't return values written before the column was last dropped. It is assumed that timestamps represent actual time, so if this is not your case, you should NOT readd previously dropped columns. Columns can't be dropped from tables defined with the @COMPACT STORAGE@ option.
 * @WITH@: Allows updating the options of the table. The "supported @<option>@":#createTableOptions (and syntax) are the same as for the @CREATE TABLE@ statement except that @COMPACT STORAGE@ is not supported. Note that setting any @compaction@ sub-options has the effect of erasing all previous @compaction@ options, so you need to re-specify all the sub-options if you want to keep them. The same note applies to the set of @compression@ sub-options.
 
+h4. CQL type compatibility
+
+CQL data types may be converted only as shown in the following table.
+
+|_. Existing type|_. Can be altered to|
+|timestamp|bigint|
+|ascii, bigint, boolean, date, decimal, double, float, inet, int, smallint, text, time, timestamp, timeuuid, tinyint, uuid, varchar, varint|blob|
+|int|date|
+|ascii, varchar|text|
+|bigint|time|
+|bigint|timestamp|
+|timeuuid|uuid|
+|ascii, text|varchar|
+|bigint, int, timestamp|varint|
+
+Clustering columns have stricter requirements; only the conversions below are allowed.
+
+|_. Existing type|_. Can be altered to|
+|ascii, text, varchar|blob|
+|ascii, varchar|text|
+|ascii, text|varchar|
+
 h3(#dropTableStmt). DROP TABLE
 
 __Syntax:__
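As a usage illustration of the documentation change in the hunk above, the allowed and rejected alterations might look like this in a CQL session (the table and column names here are hypothetical, not part of the patch):

```cql
-- Hypothetical table for illustration
CREATE TABLE users (id uuid PRIMARY KEY, created timestamp, name ascii);

-- Allowed per the compatibility table: timestamp -> bigint
ALTER TABLE users ALTER created TYPE bigint;

-- Allowed: ascii -> text (ascii -> blob would also be allowed)
ALTER TABLE users ALTER name TYPE text;

-- Rejected: int is not listed as alterable to bigint, so a statement
-- such as ALTER TABLE users ALTER some_int_column TYPE bigint fails.
```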

http://git-wip-us.apache.org/repos/asf/cassandra/blob/93b3aa8a/test/unit/org/apache/cassandra/config/CFMetaDataTest.java
----------------------------------------------------------------------
diff --git a/test/unit/org/apache/cassandra/config/CFMetaDataTest.java b/test/unit/org/apache/cassandra/config/CFMetaDataTest.java
index 188f72f..6bfe5c0 100644
--- a/test/unit/org/apache/cassandra/config/CFMetaDataTest.java
+++ b/test/unit/org/apache/cassandra/config/CFMetaDataTest.java
@@ -26,18 +26,11 @@ import org.apache.cassandra.cql3.UntypedResultSet;
 import org.apache.cassandra.db.ColumnFamilyStore;
 import org.apache.cassandra.db.Keyspace;
 import org.apache.cassandra.db.Mutation;
-import org.apache.cassandra.db.marshal.AsciiType;
-import org.apache.cassandra.db.marshal.Int32Type;
-import org.apache.cassandra.db.marshal.UTF8Type;
+import org.apache.cassandra.db.marshal.*;
 import org.apache.cassandra.db.partitions.PartitionUpdate;
 import org.apache.cassandra.db.rows.UnfilteredRowIterators;
 import org.apache.cassandra.exceptions.ConfigurationException;
-import org.apache.cassandra.schema.CompressionParams;
-import org.apache.cassandra.schema.KeyspaceMetadata;
-import org.apache.cassandra.schema.KeyspaceParams;
-import org.apache.cassandra.schema.SchemaKeyspace;
-import org.apache.cassandra.schema.TableParams;
-import org.apache.cassandra.schema.Types;
+import org.apache.cassandra.schema.*;
 import org.apache.cassandra.thrift.CfDef;
 import org.apache.cassandra.thrift.ColumnDef;
 import org.apache.cassandra.thrift.IndexType;
@@ -193,4 +186,80 @@ public class CFMetaDataTest
         assertFalse(CFMetaData.isNameValid("@"));
         assertFalse(CFMetaData.isNameValid("!"));
     }
+
+    private static Set<String> primitiveTypes = new HashSet<String>(Arrays.asList(new String[] { "ascii", "bigint", "blob", "boolean", "date",
+                                                                                                 "decimal", "double", "float", "inet", "int",
+                                                                                                 "smallint", "text", "time", "timestamp",
+                                                                                                 "timeuuid", "tinyint", "uuid", "varchar",
+                                                                                                 "varint" }));
+
+    @Test
+    public void typeCompatibilityTest() throws Throwable
+    {
+        Map<String, Set<String>> compatibilityMap = new HashMap<>();
+        compatibilityMap.put("bigint", new HashSet<>(Arrays.asList(new String[] {"timestamp"})));
+        compatibilityMap.put("blob", new HashSet<>(Arrays.asList(new String[] {"ascii", "bigint", "boolean", "date", "decimal", "double",
+                                                                               "float", "inet", "int", "smallint", "text", "time", "timestamp",
+                                                                               "timeuuid", "tinyint", "uuid", "varchar", "varint"})));
+        compatibilityMap.put("date", new HashSet<>(Arrays.asList(new String[] {"int"})));
+        compatibilityMap.put("time", new HashSet<>(Arrays.asList(new String[] {"bigint"})));
+        compatibilityMap.put("text", new HashSet<>(Arrays.asList(new String[] {"ascii", "varchar"})));
+        compatibilityMap.put("timestamp", new HashSet<>(Arrays.asList(new String[] {"bigint"})));
+        compatibilityMap.put("varchar", new HashSet<>(Arrays.asList(new String[] {"ascii", "text"})));
+        compatibilityMap.put("varint", new HashSet<>(Arrays.asList(new String[] {"bigint", "int", "timestamp"})));
+        compatibilityMap.put("uuid", new HashSet<>(Arrays.asList(new String[] {"timeuuid"})));
+
+        for (String sourceTypeString: primitiveTypes)
+        {
+            AbstractType sourceType = CQLTypeParser.parse("KEYSPACE", sourceTypeString, Types.none());
+            for (String destinationTypeString: primitiveTypes)
+            {
+                AbstractType destinationType = CQLTypeParser.parse("KEYSPACE", destinationTypeString, Types.none());
+
+                if (compatibilityMap.get(destinationTypeString) != null &&
+                    compatibilityMap.get(destinationTypeString).contains(sourceTypeString) ||
+                    sourceTypeString.equals(destinationTypeString))
+                {
+                    assertTrue(sourceTypeString + " should be compatible with " + destinationTypeString,
+                               destinationType.isValueCompatibleWith(sourceType));
+                }
+                else
+                {
+                    assertFalse(sourceTypeString + " should not be compatible with " + destinationTypeString,
+                                destinationType.isValueCompatibleWith(sourceType));
+                }
+            }
+        }
+    }
+
+    @Test
+    public void clusteringColumnTypeCompatibilityTest() throws Throwable
+    {
+        Map<String, Set<String>> compatibilityMap = new HashMap<>();
+        compatibilityMap.put("blob", new HashSet<>(Arrays.asList(new String[] {"ascii", "text", "varchar"})));
+        compatibilityMap.put("text", new HashSet<>(Arrays.asList(new String[] {"ascii", "varchar"})));
+        compatibilityMap.put("varchar", new HashSet<>(Arrays.asList(new String[] {"ascii", "text"})));
+
+        for (String sourceTypeString: primitiveTypes)
+        {
+            AbstractType sourceType = CQLTypeParser.parse("KEYSPACE", sourceTypeString, Types.none());
+            for (String destinationTypeString: primitiveTypes)
+            {
+                AbstractType destinationType = CQLTypeParser.parse("KEYSPACE", destinationTypeString, Types.none());
+
+                if (compatibilityMap.get(destinationTypeString) != null &&
+                    compatibilityMap.get(destinationTypeString).contains(sourceTypeString) ||
+                    sourceTypeString.equals(destinationTypeString))
+                {
+                    assertTrue(sourceTypeString + " should be compatible with " + destinationTypeString,
+                               destinationType.isCompatibleWith(sourceType));
+                }
+                else
+                {
+                    assertFalse(sourceTypeString + " should not be compatible with " + destinationTypeString,
+                                destinationType.isCompatibleWith(sourceType));
+                }
+            }
+        }
+    }
 }
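The compatibility check exercised by typeCompatibilityTest above can be sketched as a standalone model, without the Cassandra type system: a type may be altered to itself, or to a destination whose map entry lists it as an accepted source. The map contents are copied from the patch; the class name is made up for illustration.

```java
import java.util.*;

// Minimal standalone model of the value-compatibility lookup used by the
// test; the real test delegates to AbstractType.isValueCompatibleWith.
public class TypeCompatModel
{
    // destination type -> set of source types it accepts
    static final Map<String, Set<String>> COMPAT = new HashMap<>();
    static
    {
        COMPAT.put("bigint", new HashSet<>(Arrays.asList("timestamp")));
        COMPAT.put("blob", new HashSet<>(Arrays.asList("ascii", "bigint", "boolean", "date", "decimal", "double",
                                                       "float", "inet", "int", "smallint", "text", "time",
                                                       "timestamp", "timeuuid", "tinyint", "uuid", "varchar", "varint")));
        COMPAT.put("date", new HashSet<>(Arrays.asList("int")));
        COMPAT.put("time", new HashSet<>(Arrays.asList("bigint")));
        COMPAT.put("text", new HashSet<>(Arrays.asList("ascii", "varchar")));
        COMPAT.put("timestamp", new HashSet<>(Arrays.asList("bigint")));
        COMPAT.put("varchar", new HashSet<>(Arrays.asList("ascii", "text")));
        COMPAT.put("varint", new HashSet<>(Arrays.asList("bigint", "int", "timestamp")));
        COMPAT.put("uuid", new HashSet<>(Arrays.asList("timeuuid")));
    }

    static boolean mayAlter(String source, String destination)
    {
        // Identity is trivially compatible; otherwise the destination's
        // entry must list the source type.
        return source.equals(destination)
            || COMPAT.getOrDefault(destination, Collections.emptySet()).contains(source);
    }

    public static void main(String[] args)
    {
        System.out.println(mayAlter("timestamp", "bigint")); // true
        System.out.println(mayAlter("ascii", "blob"));       // true
        System.out.println(mayAlter("int", "bigint"));       // false
    }
}
```

The test's nested loop then asserts exactly this predicate for every ordered pair drawn from the 19 primitive type names.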

