cassandra-commits mailing list archives

From s...@apache.org
Subject [6/6] cassandra git commit: Merge branch 'cassandra-3.11' into trunk
Date Thu, 03 Aug 2017 10:56:57 GMT
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/457733bd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/457733bd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/457733bd

Branch: refs/heads/trunk
Commit: 457733bd54c4a6c0ffd75aa6774c634b7623f116
Parents: 8174fa5 61a47af
Author: Stefan Podkowinski <stefan.podkowinski@1und1.de>
Authored: Thu Aug 3 12:52:41 2017 +0200
Committer: Stefan Podkowinski <stefan.podkowinski@1und1.de>
Committed: Thu Aug 3 12:53:40 2017 +0200

----------------------------------------------------------------------
 CHANGES.txt                                     |   1 +
 NEWS.txt                                        |   5 +
 .../cassandra/auth/CassandraAuthorizer.java     | 123 ++-----------------
 .../cassandra/auth/CassandraRoleManager.java    | 120 +-----------------
 .../cassandra/auth/PasswordAuthenticator.java   |  39 +-----
 .../cassandra/schema/SchemaConstants.java       |   5 +-
 .../apache/cassandra/service/ClientState.java   |  13 +-
 .../apache/cassandra/service/StartupChecks.java |  36 +++++-
 .../cassandra/service/LegacyAuthFailTest.java   |  89 ++++++++++++++
 9 files changed, 149 insertions(+), 282 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/cassandra/blob/457733bd/CHANGES.txt
----------------------------------------------------------------------
diff --cc CHANGES.txt
index a528629,3b0524b..c03cfc3
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,109 -1,3 +1,110 @@@
 +4.0
++ * Support for migrating legacy users to roles has been dropped (CASSANDRA-13371)
 + * Introduce error metrics for repair (CASSANDRA-13387)
 + * Refactoring to primitive functional interfaces in AuthCache (CASSANDRA-13732)
 + * Update metrics to 3.1.5 (CASSANDRA-13648)
 + * batch_size_warn_threshold_in_kb can now be set at runtime (CASSANDRA-13699)
 + * Avoid always rebuilding secondary indexes at startup (CASSANDRA-13725)
 + * Upgrade JMH from 1.13 to 1.19 (CASSANDRA-13727)
 + * Upgrade SLF4J from 1.7.7 to 1.7.25 (CASSANDRA-12996)
 + * Default for start_native_transport now true if not set in config (CASSANDRA-13656)
 + * Don't add localhost to the graph when calculating where to stream from (CASSANDRA-13583)
 + * Allow skipping equality-restricted clustering columns in ORDER BY clause (CASSANDRA-10271)
 + * Use common nowInSec for validation compactions (CASSANDRA-13671)
 + * Improve handling of IR prepare failures (CASSANDRA-13672)
 + * Send IR coordinator messages synchronously (CASSANDRA-13673)
 + * Flush system.repair table before IR finalize promise (CASSANDRA-13660)
 + * Fix column filter creation for wildcard queries (CASSANDRA-13650)
 + * Add 'nodetool getbatchlogreplaythrottle' and 'nodetool setbatchlogreplaythrottle' (CASSANDRA-13614)
 + * Fix race condition in PendingRepairManager (CASSANDRA-13659)
 + * Allow noop incremental repair state transitions (CASSANDRA-13658)
 + * Run repair with down replicas (CASSANDRA-10446)
 + * Added started & completed repair metrics (CASSANDRA-13598)
 + * Improve secondary index (re)build failure and concurrency handling (CASSANDRA-10130)
 + * Improve calculation of available disk space for compaction (CASSANDRA-13068)
 + * Change the accessibility of RowCacheSerializer for third party row cache plugins (CASSANDRA-13579)
 + * Allow sub-range repairs for a preview of repaired data (CASSANDRA-13570)
 + * NPE in IR cleanup when columnfamily has no sstables (CASSANDRA-13585)
 + * Fix Randomness of stress values (CASSANDRA-12744)
 + * Allow selecting Map values and Set elements (CASSANDRA-7396)
 + * Fast and garbage-free Streaming Histogram (CASSANDRA-13444)
 + * Update repairTime for keyspaces on completion (CASSANDRA-13539)
 + * Add configurable upper bound for validation executor threads (CASSANDRA-13521)
 + * Bring back maxHintTTL property (CASSANDRA-12982)
 + * Add testing guidelines (CASSANDRA-13497)
 + * Add more repair metrics (CASSANDRA-13531)
 + * RangeStreamer should be smarter when picking endpoints for streaming (CASSANDRA-4650)
 + * Avoid rewrapping an exception thrown for cache load functions (CASSANDRA-13367)
 + * Log time elapsed for each incremental repair phase (CASSANDRA-13498)
 + * Add multiple table operation support to cassandra-stress (CASSANDRA-8780)
 + * Fix incorrect cqlsh results when selecting same columns multiple times (CASSANDRA-13262)
 + * Fix WriteResponseHandlerTest is sensitive to test execution order (CASSANDRA-13421)
 + * Improve incremental repair logging (CASSANDRA-13468)
 + * Start compaction when incremental repair finishes (CASSANDRA-13454)
 + * Add repair streaming preview (CASSANDRA-13257)
 + * Cleanup isIncremental/repairedAt usage (CASSANDRA-13430)
 + * Change protocol to allow sending key space independent of query string (CASSANDRA-10145)
 + * Make gc_log and gc_warn settable at runtime (CASSANDRA-12661)
 + * Take number of files in L0 in account when estimating remaining compaction tasks (CASSANDRA-13354)
 + * Skip building views during base table streams on range movements (CASSANDRA-13065)
 + * Improve error messages for +/- operations on maps and tuples (CASSANDRA-13197)
 + * Remove deprecated repair JMX APIs (CASSANDRA-11530)
 + * Fix version check to enable streaming keep-alive (CASSANDRA-12929)
 + * Make it possible to monitor an ideal consistency level separate from actual consistency level (CASSANDRA-13289)
 + * Outbound TCP connections ignore internode authenticator (CASSANDRA-13324)
 + * Upgrade junit from 4.6 to 4.12 (CASSANDRA-13360)
 + * Cleanup ParentRepairSession after repairs (CASSANDRA-13359)
 + * Upgrade snappy-java to 1.1.2.6 (CASSANDRA-13336)
 + * Incremental repair not streaming correct sstables (CASSANDRA-13328)
 + * Upgrade the jna version to 4.3.0 (CASSANDRA-13300)
 + * Add the currentTimestamp, currentDate, currentTime and currentTimeUUID functions (CASSANDRA-13132)
 + * Remove config option index_interval (CASSANDRA-10671)
 + * Reduce lock contention for collection types and serializers (CASSANDRA-13271)
 + * Make it possible to override MessagingService.Verb ids (CASSANDRA-13283)
 + * Avoid synchronized on prepareForRepair in ActiveRepairService (CASSANDRA-9292)
 + * Adds the ability to use uncompressed chunks in compressed files (CASSANDRA-10520)
 + * Don't flush sstables when streaming for incremental repair (CASSANDRA-13226)
 + * Remove unused method (CASSANDRA-13227)
 + * Fix minor bugs related to #9143 (CASSANDRA-13217)
 + * Output warning if user increases RF (CASSANDRA-13079)
 + * Remove pre-3.0 streaming compatibility code for 4.0 (CASSANDRA-13081)
 + * Add support for + and - operations on dates (CASSANDRA-11936)
 + * Fix consistency of incrementally repaired data (CASSANDRA-9143)
 + * Increase commitlog version (CASSANDRA-13161)
 + * Make TableMetadata immutable, optimize Schema (CASSANDRA-9425)
 + * Refactor ColumnCondition (CASSANDRA-12981)
 + * Parallelize streaming of different keyspaces (CASSANDRA-4663)
 + * Improved compactions metrics (CASSANDRA-13015)
 + * Speed-up start-up sequence by avoiding un-needed flushes (CASSANDRA-13031)
 + * Use Caffeine (W-TinyLFU) for on-heap caches (CASSANDRA-10855)
 + * Thrift removal (CASSANDRA-11115)
 + * Remove pre-3.0 compatibility code for 4.0 (CASSANDRA-12716)
 + * Add column definition kind to dropped columns in schema (CASSANDRA-12705)
 + * Add (automate) Nodetool Documentation (CASSANDRA-12672)
 + * Update bundled cqlsh python driver to 3.7.0 (CASSANDRA-12736)
 + * Reject invalid replication settings when creating or altering a keyspace (CASSANDRA-12681)
 + * Clean up the SSTableReader#getScanner API wrt removal of RateLimiter (CASSANDRA-12422)
 + * Use new token allocation for non bootstrap case as well (CASSANDRA-13080)
 + * Avoid byte-array copy when key cache is disabled (CASSANDRA-13084)
 + * Require forceful decommission if number of nodes is less than replication factor (CASSANDRA-12510)
 + * Allow IN restrictions on column families with collections (CASSANDRA-12654)
 + * Log message size in trace message in OutboundTcpConnection (CASSANDRA-13028)
 + * Add timeUnit Days for cassandra-stress (CASSANDRA-13029)
 + * Add mutation size and batch metrics (CASSANDRA-12649)
 + * Add method to get size of endpoints to TokenMetadata (CASSANDRA-12999)
 + * Expose time spent waiting in thread pool queue (CASSANDRA-8398)
 + * Conditionally update index built status to avoid unnecessary flushes (CASSANDRA-12969)
 + * cqlsh auto completion: refactor definition of compaction strategy options (CASSANDRA-12946)
 + * Add support for arithmetic operators (CASSANDRA-11935)
 + * Add histogram for delay to deliver hints (CASSANDRA-13234)
 + * Fix cqlsh automatic protocol downgrade regression (CASSANDRA-13307)
 + * Changing `max_hint_window_in_ms` at runtime (CASSANDRA-11720)
 + * Trivial format error in StorageProxy (CASSANDRA-13551)
 + * Nodetool repair can hang forever if we lose the notification for the repair completing/failing (CASSANDRA-13480)
 + * Anticompaction can cause noisy log messages (CASSANDRA-13684)
 +
 +
  3.11.1
   * "ignore" option is ignored in sstableloader (CASSANDRA-13721)
   * Deadlock in AbstractCommitLogSegmentManager (CASSANDRA-13652)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/457733bd/NEWS.txt
----------------------------------------------------------------------
diff --cc NEWS.txt
index f933571,10f631f..4d30631
--- a/NEWS.txt
+++ b/NEWS.txt
@@@ -29,32 -18,8 +29,37 @@@ New feature
  
  Upgrading
  ---------
 -    - Nothing specific to this version but please see previous upgrading sections,
 -      especially if you are upgrading from 2.2.
++    - Support for the legacy auth tables in the system_auth keyspace (users,
++      permissions, credentials) and the migration code have been removed. The
++      migration of these legacy auth tables must have been completed before the
++      upgrade to 4.0 and the legacy tables must have been removed. See the
++      'Upgrading' section for version 2.2 for migration instructions.
 +    - Cassandra 4.0 removed support for the deprecated Thrift interface. Among
 +      other things, this implies the removal of all yaml options related to
 +      Thrift ('start_rpc', 'rpc_port', ...).
 +    - Cassandra 4.0 removed support for any pre-3.0 format. This means you
 +      cannot upgrade from a 2.x version to 4.0 directly; you have to upgrade to
 +      a 3.0.x/3.x version first (and run upgradesstables). In particular,
 +      Cassandra 4.0 cannot load or read pre-3.0 sstables in any way: you
 +      will need to upgrade those sstables on 3.0.x/3.x first.
 +    - Upgrades from 3.0.x or 3.x are supported since 3.0.13 or 3.11.0; earlier
 +      versions will cause issues during rolling upgrades (CASSANDRA-13274).
 +    - Cassandra will no longer allow invalid keyspace replication options, such
 +      as invalid datacenter names for NetworkTopologyStrategy. Operators MUST
 +      add new nodes to a datacenter before they can set ALTER or CREATE
 +      keyspace replication policies using that datacenter. Existing keyspaces
 +      will continue to operate, but CREATE and ALTER will validate that all
 +      datacenters specified exist in the cluster.
 +    - Cassandra 4.0 fixes a problem with incremental repair which caused repaired
 +      data to be inconsistent between nodes. The fix changes the behavior of both
 +      full and incremental repairs. For full repairs, data is no longer marked
 +      repaired. For incremental repairs, anticompaction is run at the beginning
 +      of the repair, instead of at the end. If incremental repair was being used
 +      prior to upgrading, a full repair should be run after upgrading to resolve
 +      any inconsistencies.
 +    - Config option index_interval has been removed (it was deprecated since 2.0)
 +    - Deprecated repair JMX APIs are removed.
 +    - The version of snappy-java has been upgraded to 1.1.2.6
  
  3.11.0
  ======

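Per the diffstat above, StartupChecks.java gains ~36 lines alongside these NEWS entries, suggesting the node now refuses to start while legacy auth tables remain. A minimal standalone sketch of that kind of guard (class and method names are hypothetical, not Cassandra's actual StartupChecks code; the table names come from the NEWS entry above):

```java
import java.util.List;
import java.util.Set;

// Hypothetical sketch of a pre-4.0-upgrade guard: fail fast if any legacy
// system_auth table is still present. In a real node the set of existing
// tables would come from the schema; here it is just a parameter.
public class LegacyAuthCheck {
    static final List<String> LEGACY_TABLES = List.of("users", "credentials", "permissions");

    static void checkLegacyAuthTables(Set<String> existingTables) {
        for (String table : LEGACY_TABLES)
            if (existingTables.contains(table))
                throw new IllegalStateException(
                    "Legacy auth table system_auth." + table +
                    " found; complete migration on 3.x and drop it before upgrading to 4.0");
    }

    public static void main(String[] args) {
        // 4.0-style schema: only the role-based tables remain, so this passes
        checkLegacyAuthTables(Set.of("roles", "role_permissions", "resource_role_permissons_index"));
        System.out.println("no legacy auth tables found");
    }
}
```

The point of doing this at startup rather than lazily is that a half-migrated cluster fails loudly and immediately, instead of silently losing permissions once the fallback read path (removed in the diffs below) is gone.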
http://git-wip-us.apache.org/repos/asf/cassandra/blob/457733bd/src/java/org/apache/cassandra/auth/CassandraAuthorizer.java
----------------------------------------------------------------------
diff --cc src/java/org/apache/cassandra/auth/CassandraAuthorizer.java
index 6509798,7f44eef..e95a1fd
--- a/src/java/org/apache/cassandra/auth/CassandraAuthorizer.java
+++ b/src/java/org/apache/cassandra/auth/CassandraAuthorizer.java
@@@ -18,9 -18,9 +18,7 @@@
  package org.apache.cassandra.auth;
  
  import java.util.*;
--import java.util.concurrent.TimeUnit;
  
--import com.google.common.base.Predicate;
  import com.google.common.collect.ImmutableSet;
  import com.google.common.collect.Iterables;
  import com.google.common.collect.Lists;
@@@ -28,10 -28,10 +26,8 @@@ import org.apache.commons.lang3.StringU
  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;
  
--import org.apache.cassandra.concurrent.ScheduledExecutors;
  import org.apache.cassandra.config.DatabaseDescriptor;
- import org.apache.cassandra.schema.Schema;
 -import org.apache.cassandra.config.Schema;
 -import org.apache.cassandra.config.SchemaConstants;
 +import org.apache.cassandra.schema.SchemaConstants;
  import org.apache.cassandra.cql3.*;
  import org.apache.cassandra.cql3.statements.BatchStatement;
  import org.apache.cassandra.cql3.statements.ModificationStatement;
@@@ -39,8 -39,8 +35,6 @@@ import org.apache.cassandra.cql3.statem
  import org.apache.cassandra.db.ConsistencyLevel;
  import org.apache.cassandra.db.marshal.UTF8Type;
  import org.apache.cassandra.exceptions.*;
--import org.apache.cassandra.serializers.SetSerializer;
--import org.apache.cassandra.serializers.UTF8Serializer;
  import org.apache.cassandra.service.ClientState;
  
  import org.apache.cassandra.cql3.QueryOptions;
@@@ -63,12 -63,12 +57,7 @@@ public class CassandraAuthorizer implem
      private static final String RESOURCE = "resource";
      private static final String PERMISSIONS = "permissions";
  
--    // used during upgrades to perform authz on mixed clusters
--    public static final String USERNAME = "username";
--    public static final String USER_PERMISSIONS = "permissions";
--
      private SelectStatement authorizeRoleStatement;
--    private SelectStatement legacyAuthorizeRoleStatement;
  
      public CassandraAuthorizer()
      {
@@@ -204,19 -215,19 +193,7 @@@
                                                               Lists.newArrayList(ByteBufferUtil.bytes(role.getRoleName()),
                                                                                  ByteBufferUtil.bytes(resource.getName())));
  
--        SelectStatement statement;
--        // If it exists, read from the legacy user permissions table to handle the case where the cluster
--        // is being upgraded and so is running with mixed versions of the authz schema
-         if (Schema.instance.getTableMetadata(SchemaConstants.AUTH_KEYSPACE_NAME, USER_PERMISSIONS) == null)
 -        if (Schema.instance.getCFMetaData(SchemaConstants.AUTH_KEYSPACE_NAME, USER_PERMISSIONS) == null)
--            statement = authorizeRoleStatement;
--        else
--        {
--            // If the permissions table was initialised only after the statement got prepared, re-prepare (CASSANDRA-12813)
--            if (legacyAuthorizeRoleStatement == null)
--                legacyAuthorizeRoleStatement = prepare(USERNAME, USER_PERMISSIONS);
--            statement = legacyAuthorizeRoleStatement;
--        }
--        ResultMessage.Rows rows = statement.execute(QueryState.forInternalCalls(), options, System.nanoTime());
++        ResultMessage.Rows rows = authorizeRoleStatement.execute(QueryState.forInternalCalls(), options, System.nanoTime());
          UntypedResultSet result = UntypedResultSet.create(rows.result);
  
          if (!result.isEmpty() && result.one().has(PERMISSIONS))
@@@ -292,11 -303,11 +269,7 @@@
      throws RequestExecutionException
      {
          Set<PermissionDetails> details = new HashSet<>();
--        // If it exists, try the legacy user permissions table first. This is to handle the case
--        // where the cluster is being upgraded and so is running with mixed versions of the perms table
-         boolean useLegacyTable = Schema.instance.getTableMetadata(SchemaConstants.AUTH_KEYSPACE_NAME, USER_PERMISSIONS) != null;
 -        boolean useLegacyTable = Schema.instance.getCFMetaData(SchemaConstants.AUTH_KEYSPACE_NAME, USER_PERMISSIONS) != null;
--        String entityColumnName = useLegacyTable ? USERNAME : ROLE;
--        for (UntypedResultSet.Row row : process(buildListQuery(resource, role, useLegacyTable)))
++        for (UntypedResultSet.Row row : process(buildListQuery(resource, role)))
          {
              if (row.has(PERMISSIONS))
              {
@@@ -304,7 -315,7 +277,7 @@@
                  {
                      Permission permission = Permission.valueOf(p);
                      if (permissions.contains(permission))
--                        details.add(new PermissionDetails(row.getString(entityColumnName),
++                        details.add(new PermissionDetails(row.getString(ROLE),
                                                            Resources.fromName(row.getString(RESOURCE)),
                                                            permission));
                  }
@@@ -313,11 -324,11 +286,9 @@@
          return details;
      }
  
--    private String buildListQuery(IResource resource, RoleResource grantee, boolean useLegacyTable)
++    private String buildListQuery(IResource resource, RoleResource grantee)
      {
--        String tableName = useLegacyTable ? USER_PERMISSIONS : AuthKeyspace.ROLE_PERMISSIONS;
--        String entityName = useLegacyTable ? USERNAME : ROLE;
--        List<String> vars = Lists.newArrayList(SchemaConstants.AUTH_KEYSPACE_NAME, tableName);
++        List<String> vars = Lists.newArrayList(SchemaConstants.AUTH_KEYSPACE_NAME, AuthKeyspace.ROLE_PERMISSIONS);
          List<String> conditions = new ArrayList<>();
  
          if (resource != null)
@@@ -328,11 -339,11 +299,11 @@@
  
          if (grantee != null)
          {
--            conditions.add(entityName + " = '%s'");
++            conditions.add(ROLE + " = '%s'");
              vars.add(escape(grantee.getRoleName()));
          }
  
--        String query = "SELECT " + entityName + ", resource, permissions FROM %s.%s";
++        String query = "SELECT " + ROLE + ", resource, permissions FROM %s.%s";
  
          if (!conditions.isEmpty())
              query += " WHERE " + StringUtils.join(conditions, " AND ");
@@@ -356,21 -367,21 +327,6 @@@
      public void setup()
      {
          authorizeRoleStatement = prepare(ROLE, AuthKeyspace.ROLE_PERMISSIONS);
--
--        // If old user permissions table exists, migrate the legacy authz data to the new table
--        // The delay is to give the node a chance to see its peers before attempting the conversion
-         if (Schema.instance.getTableMetadata(SchemaConstants.AUTH_KEYSPACE_NAME, "permissions") != null)
 -        if (Schema.instance.getCFMetaData(SchemaConstants.AUTH_KEYSPACE_NAME, "permissions") != null)
--        {
--            legacyAuthorizeRoleStatement = prepare(USERNAME, USER_PERMISSIONS);
--
--            ScheduledExecutors.optionalTasks.schedule(new Runnable()
--            {
--                public void run()
--                {
--                    convertLegacyData();
--                }
--            }, AuthKeyspace.SUPERUSER_SETUP_DELAY, TimeUnit.MILLISECONDS);
--        }
      }
  
      private SelectStatement prepare(String entityname, String permissionsTable)
@@@ -382,71 -393,71 +338,6 @@@
          return (SelectStatement) QueryProcessor.getStatement(query, ClientState.forInternalCalls()).statement;
      }
  
--    /**
--     * Copy legacy authz data from the system_auth.permissions table to the new system_auth.role_permissions table and
--     * also insert entries into the reverse lookup table.
--     * In theory, we could simply rename the existing table as the schema is structurally the same, but this would
--     * break mixed clusters during a rolling upgrade.
--     * This setup is not performed if AllowAllAuthenticator is configured (see Auth#setup).
--     */
--    private void convertLegacyData()
--    {
--        try
--        {
-             if (Schema.instance.getTableMetadata("system_auth", "permissions") != null)
 -            if (Schema.instance.getCFMetaData("system_auth", "permissions") != null)
--            {
--                logger.info("Converting legacy permissions data");
--                CQLStatement insertStatement =
--                    QueryProcessor.getStatement(String.format("INSERT INTO %s.%s (role, resource, permissions) " +
--                                                              "VALUES (?, ?, ?)",
--                                                              SchemaConstants.AUTH_KEYSPACE_NAME,
--                                                              AuthKeyspace.ROLE_PERMISSIONS),
--                                                ClientState.forInternalCalls()).statement;
--                CQLStatement indexStatement =
--                    QueryProcessor.getStatement(String.format("INSERT INTO %s.%s (resource, role) VALUES (?,?)",
--                                                              SchemaConstants.AUTH_KEYSPACE_NAME,
--                                                              AuthKeyspace.RESOURCE_ROLE_INDEX),
--                                                ClientState.forInternalCalls()).statement;
--
--                UntypedResultSet permissions = process("SELECT * FROM system_auth.permissions");
--                for (UntypedResultSet.Row row : permissions)
--                {
--                    final IResource resource = Resources.fromName(row.getString("resource"));
--                    Predicate<String> isApplicable = new Predicate<String>()
--                    {
--                        public boolean apply(String s)
--                        {
--                            return resource.applicablePermissions().contains(Permission.valueOf(s));
--                        }
--                    };
--                    SetSerializer<String> serializer = SetSerializer.getInstance(UTF8Serializer.instance, UTF8Type.instance);
--                    Set<String> originalPerms = serializer.deserialize(row.getBytes("permissions"));
--                    Set<String> filteredPerms = ImmutableSet.copyOf(Iterables.filter(originalPerms, isApplicable));
--                    insertStatement.execute(QueryState.forInternalCalls(),
--                                            QueryOptions.forInternalCalls(ConsistencyLevel.ONE,
--                                                                          Lists.newArrayList(row.getBytes("username"),
--                                                                                             row.getBytes("resource"),
--                                                                                             serializer.serialize(filteredPerms))),
--                                            System.nanoTime());
--
--                    indexStatement.execute(QueryState.forInternalCalls(),
--                                           QueryOptions.forInternalCalls(ConsistencyLevel.ONE,
--                                                                         Lists.newArrayList(row.getBytes("resource"),
--                                                                                            row.getBytes("username"))),
--                                           System.nanoTime());
--
--                }
--                logger.info("Completed conversion of legacy permissions");
--            }
--        }
--        catch (Exception e)
--        {
--            logger.info("Unable to complete conversion of legacy permissions data (perhaps not enough nodes are upgraded yet). " +
--                        "Conversion should not be considered complete");
--            logger.trace("Conversion error", e);
--        }
--    }
--
      // We only worry about one character ('). Make sure it's properly escaped.
      private String escape(String name)
      {

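The escape() helper retained at the end of this hunk guards names interpolated into CQL query strings; per its comment, only the single quote needs handling. A minimal standalone sketch of the assumed behavior (CQL-style quote doubling; this is not the in-tree implementation, which the imports suggest goes through commons-lang StringUtils):

```java
// Standalone sketch: CQL escapes a single quote inside a string literal by
// doubling it, which is the one character escape() worries about.
public class EscapeSketch {
    static String escape(String name) {
        return name.replace("'", "''");
    }

    public static void main(String[] args) {
        // "o'brien" becomes "o''brien", safe to splice into a CQL literal
        System.out.println(escape("o'brien"));
    }
}
```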
http://git-wip-us.apache.org/repos/asf/cassandra/blob/457733bd/src/java/org/apache/cassandra/auth/CassandraRoleManager.java
----------------------------------------------------------------------
diff --cc src/java/org/apache/cassandra/auth/CassandraRoleManager.java
index 433290e,777ed05..7333310
--- a/src/java/org/apache/cassandra/auth/CassandraRoleManager.java
+++ b/src/java/org/apache/cassandra/auth/CassandraRoleManager.java
@@@ -32,8 -32,8 +32,7 @@@ import org.slf4j.LoggerFactory
  import org.apache.cassandra.concurrent.ScheduledExecutors;
  import org.apache.cassandra.config.Config;
  import org.apache.cassandra.config.DatabaseDescriptor;
- import org.apache.cassandra.schema.Schema;
 -import org.apache.cassandra.config.Schema;
 -import org.apache.cassandra.config.SchemaConstants;
 +import org.apache.cassandra.schema.SchemaConstants;
  import org.apache.cassandra.cql3.*;
  import org.apache.cassandra.cql3.statements.SelectStatement;
  import org.apache.cassandra.db.ConsistencyLevel;
@@@ -102,20 -103,20 +101,6 @@@ public class CassandraRoleManager imple
          }
      };
  
--    public static final String LEGACY_USERS_TABLE = "users";
--    // Transform a row in the legacy system_auth.users table to a Role instance,
--    // used to fallback to previous schema on a mixed cluster during an upgrade
--    private static final Function<UntypedResultSet.Row, Role> LEGACY_ROW_TO_ROLE = new Function<UntypedResultSet.Row, Role>()
--    {
--        public Role apply(UntypedResultSet.Row row)
--        {
--            return new Role(row.getString("name"),
--                            row.getBoolean("super"),
--                            true,
--                            Collections.<String>emptySet());
--        }
--    };
--
      // 2 ** GENSALT_LOG2_ROUNDS rounds of hashing will be performed.
      private static final String GENSALT_LOG2_ROUNDS_PROPERTY = Config.PROPERTY_PREFIX + "auth_bcrypt_gensalt_log2_rounds";
      private static final int GENSALT_LOG2_ROUNDS = getGensaltLogRounds();
@@@ -134,7 -135,7 +119,6 @@@
      private static final Role NULL_ROLE = new Role(null, false, false, Collections.<String>emptySet());
  
      private SelectStatement loadRoleStatement;
--    private SelectStatement legacySelectUserStatement;
  
      private final Set<Option> supportedOptions;
      private final Set<Option> alterableOptions;
@@@ -157,27 -158,27 +141,10 @@@
          loadRoleStatement = (SelectStatement) prepare("SELECT * from %s.%s WHERE role = ?",
                                                        SchemaConstants.AUTH_KEYSPACE_NAME,
                                                        AuthKeyspace.ROLES);
--        // If the old users table exists, we may need to migrate the legacy authn
--        // data to the new table. We also need to prepare a statement to read from
--        // it, so we can continue to use the old tables while the cluster is upgraded.
--        // Otherwise, we may need to create a default superuser role to enable others
--        // to be added.
-         if (Schema.instance.getTableMetadata(SchemaConstants.AUTH_KEYSPACE_NAME, "users") != null)
 -        if (Schema.instance.getCFMetaData(SchemaConstants.AUTH_KEYSPACE_NAME, "users") != null)
--        {
--            legacySelectUserStatement = prepareLegacySelectUserStatement();
--
--            scheduleSetupTask(() -> {
--                convertLegacyData();
--                return null;
--            });
--        }
--        else
--        {
--            scheduleSetupTask(() -> {
--                setupDefaultRole();
--                return null;
--            });
--        }
++        scheduleSetupTask(() -> {
++            setupDefaultRole();
++            return null;
++        });
      }
  
      public Set<Option> supportedOptions()
@@@ -392,65 -403,65 +359,6 @@@
          }, AuthKeyspace.SUPERUSER_SETUP_DELAY, TimeUnit.MILLISECONDS);
      }
  
--    /*
--     * Copy legacy auth data from the system_auth.users & system_auth.credentials tables to
--     * the new system_auth.roles table. This setup is not performed if AllowAllAuthenticator
--     * is configured (see Auth#setup).
--     */
--    private void convertLegacyData() throws Exception
--    {
--        try
--        {
--            // read old data at QUORUM as it may contain the data for the default superuser
-             if (Schema.instance.getTableMetadata("system_auth", "users") != null)
 -            if (Schema.instance.getCFMetaData("system_auth", "users") != null)
--            {
--                logger.info("Converting legacy users");
--                UntypedResultSet users = QueryProcessor.process("SELECT * FROM system_auth.users",
--                                                                ConsistencyLevel.QUORUM);
--                for (UntypedResultSet.Row row : users)
--                {
--                    RoleOptions options = new RoleOptions();
--                    options.setOption(Option.SUPERUSER, row.getBoolean("super"));
--                    options.setOption(Option.LOGIN, true);
--                    createRole(null, RoleResource.role(row.getString("name")), options);
--                }
--                logger.info("Completed conversion of legacy users");
--            }
--
-             if (Schema.instance.getTableMetadata("system_auth", "credentials") != null)
 -            if (Schema.instance.getCFMetaData("system_auth", "credentials") != null)
--            {
--                logger.info("Migrating legacy credentials data to new system table");
--                UntypedResultSet credentials = QueryProcessor.process("SELECT * FROM system_auth.credentials",
--                                                                      ConsistencyLevel.QUORUM);
--                for (UntypedResultSet.Row row : credentials)
--                {
--                    // Write the password directly into the table to avoid doubly encrypting it
--                    QueryProcessor.process(String.format("UPDATE %s.%s SET salted_hash = '%s' WHERE role = '%s'",
--                                                         SchemaConstants.AUTH_KEYSPACE_NAME,
--                                                         AuthKeyspace.ROLES,
--                                                         row.getString("salted_hash"),
--                                                         row.getString("username")),
--                                           consistencyForRole(row.getString("username")));
--                }
--                logger.info("Completed conversion of legacy credentials");
--            }
--        }
--        catch (Exception e)
--        {
--            logger.info("Unable to complete conversion of legacy auth data (perhaps not enough nodes are upgraded yet). " +
--                        "Conversion should not be considered complete");
--            logger.trace("Conversion error", e);
--            throw e;
--        }
--    }
--
--    private SelectStatement prepareLegacySelectUserStatement()
--    {
--        return (SelectStatement) prepare("SELECT * FROM %s.%s WHERE name = ?",
--                                         SchemaConstants.AUTH_KEYSPACE_NAME,
--                                         LEGACY_USERS_TABLE);
--    }
--
      private CQLStatement prepare(String template, String keyspace, String table)
      {
          try
@@@ -487,31 -498,38 +395,15 @@@
       */
      private Role getRole(String name)
      {
-         // If it exists, try the legacy users table in case the cluster
-         // is in the process of being upgraded and so is running with mixed
-         // versions of the authn schema.
-         if (Schema.instance.getTableMetadata(SchemaConstants.AUTH_KEYSPACE_NAME, "users") == null)
-             return getRoleFromTable(name, loadRoleStatement, ROW_TO_ROLE);
-         else
 -        try
--        {
-             if (legacySelectUserStatement == null)
-                 legacySelectUserStatement = prepareLegacySelectUserStatement();
-             return getRoleFromTable(name, legacySelectUserStatement, LEGACY_ROW_TO_ROLE);
 -            // If it exists, try the legacy users table in case the cluster
 -            // is in the process of being upgraded and so is running with mixed
 -            // versions of the authn schema.
 -            if (Schema.instance.getCFMetaData(SchemaConstants.AUTH_KEYSPACE_NAME, "users") == null)
 -                return getRoleFromTable(name, loadRoleStatement, ROW_TO_ROLE);
 -            else
 -            {
 -                if (legacySelectUserStatement == null)
 -                    legacySelectUserStatement = prepareLegacySelectUserStatement();
 -                return getRoleFromTable(name, legacySelectUserStatement, LEGACY_ROW_TO_ROLE);
 -            }
 -        }
 -        catch (RequestExecutionException | RequestValidationException e)
 -        {
 -            throw new RuntimeException(e);
--        }
--    }
--
--    private Role getRoleFromTable(String name, SelectStatement statement, Function<UntypedResultSet.Row, Role> function)
--    throws RequestExecutionException, RequestValidationException
--    {
          ResultMessage.Rows rows =
--            statement.execute(QueryState.forInternalCalls(),
++            loadRoleStatement.execute(QueryState.forInternalCalls(),
                                QueryOptions.forInternalCalls(consistencyForRole(name),
                                                              Collections.singletonList(ByteBufferUtil.bytes(name))),
                                System.nanoTime());
          if (rows.result.isEmpty())
              return NULL_ROLE;
  
--        return function.apply(UntypedResultSet.create(rows.result).one());
++        return ROW_TO_ROLE.apply(UntypedResultSet.create(rows.result).one());
      }
  
      /*

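With the legacy users table gone, `getRole()` collapses to a single prepared-statement lookup plus one row-to-`Role` mapping. A minimal sketch of that shape, where a `Map` stands in for the `system_auth.roles` query and `Role` is a hypothetical value type (none of these names are Cassandra's actual API):

```java
import java.util.Collections;
import java.util.Map;
import java.util.function.Function;

// Stand-in sketch: one lookup path, one row-to-Role mapping function,
// and a NULL_ROLE sentinel for the empty-result case.
public class GetRoleSketch
{
    static final class Role
    {
        final String name;
        final boolean canLogin;
        Role(String name, boolean canLogin) { this.name = name; this.canLogin = canLogin; }
    }

    static final Role NULL_ROLE = new Role("", false);

    // ROW_TO_ROLE analogue: maps a "row" (here just the stored flag) to a Role.
    static final Function<Boolean, Role> ROW_TO_ROLE = canLogin -> new Role("found", canLogin);

    // Replaces the loadRoleStatement.execute(...) call against system_auth.roles.
    static final Map<String, Boolean> ROLES_TABLE = Collections.singletonMap("cassandra", true);

    static Role getRole(String name)
    {
        Boolean row = ROLES_TABLE.get(name); // statement execution analogue
        if (row == null)                     // rows.result.isEmpty()
            return NULL_ROLE;
        return ROW_TO_ROLE.apply(row);
    }
}
```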
http://git-wip-us.apache.org/repos/asf/cassandra/blob/457733bd/src/java/org/apache/cassandra/auth/PasswordAuthenticator.java
----------------------------------------------------------------------
diff --cc src/java/org/apache/cassandra/auth/PasswordAuthenticator.java
index 126c04d,4b667ae..b82b0ad
--- a/src/java/org/apache/cassandra/auth/PasswordAuthenticator.java
+++ b/src/java/org/apache/cassandra/auth/PasswordAuthenticator.java
@@@ -29,8 -31,8 +29,7 @@@ import org.slf4j.Logger
  import org.slf4j.LoggerFactory;
  
  import org.apache.cassandra.config.DatabaseDescriptor;
- import org.apache.cassandra.schema.Schema;
 -import org.apache.cassandra.config.Schema;
 -import org.apache.cassandra.config.SchemaConstants;
 +import org.apache.cassandra.schema.SchemaConstants;
  import org.apache.cassandra.cql3.QueryOptions;
  import org.apache.cassandra.cql3.QueryProcessor;
  import org.apache.cassandra.cql3.UntypedResultSet;
@@@ -68,9 -71,9 +67,6 @@@ public class PasswordAuthenticator impl
      private static final byte NUL = 0;
      private SelectStatement authenticateStatement;
  
--    public static final String LEGACY_CREDENTIALS_TABLE = "credentials";
--    private SelectStatement legacyAuthenticateStatement;
--
      private CredentialsCache cache;
  
      // No anonymous access.
@@@ -81,55 -84,81 +77,34 @@@
  
      private AuthenticatedUser authenticate(String username, String password) throws AuthenticationException
      {
 -        try
 -        {
 -            String hash = cache.get(username);
 -            if (!BCrypt.checkpw(password, hash))
 -                throw new AuthenticationException(String.format("Provided username %s and/or password are incorrect", username));
 -
 -            return new AuthenticatedUser(username);
 -        }
 -        catch (ExecutionException | UncheckedExecutionException e)
 -        {
 -            // the credentials were somehow invalid - either a non-existent role, or one without a defined password
 -            if (e.getCause() instanceof NoSuchCredentialsException)
 -                throw new AuthenticationException(String.format("Provided username %s and/or password are incorrect", username));
 -
 -            // an unanticipated exception occured whilst querying the credentials table
 -            if (e.getCause() instanceof RequestExecutionException)
 -            {
 -                logger.trace("Error performing internal authentication", e);
 -                throw new AuthenticationException(String.format("Error during authentication of user %s : %s", username, e.getMessage()));
 -            }
 -
 -            throw new RuntimeException(e);
 -        }
 -    }
 -
 -    private String queryHashedPassword(String username) throws NoSuchCredentialsException
 -    {
 -        try
 -        {
 -            SelectStatement authenticationStatement = authenticationStatement();
 -
 -            ResultMessage.Rows rows =
 -                authenticationStatement.execute(QueryState.forInternalCalls(),
 -                                                QueryOptions.forInternalCalls(consistencyForRole(username),
 -                                                                              Lists.newArrayList(ByteBufferUtil.bytes(username))),
 -                                                System.nanoTime());
 -
 -            // If either a non-existent role name was supplied, or no credentials
 -            // were found for that role we don't want to cache the result so we throw
 -            // a specific, but unchecked, exception to keep LoadingCache happy.
 -            if (rows.result.isEmpty())
 -                throw new NoSuchCredentialsException();
 +        String hash = cache.get(username);
 +        if (!BCrypt.checkpw(password, hash))
 +            throw new AuthenticationException(String.format("Provided username %s and/or password are incorrect", username));
  
 -            UntypedResultSet result = UntypedResultSet.create(rows.result);
 -            if (!result.one().has(SALTED_HASH))
 -                throw new NoSuchCredentialsException();
 -
 -            return result.one().getString(SALTED_HASH);
 -        }
 -        catch (RequestExecutionException e)
 -        {
 -            logger.trace("Error performing internal authentication", e);
 -            throw e;
 -        }
 +        return new AuthenticatedUser(username);
      }
  
 -    /**
 -     * If the legacy users table exists try to verify credentials there. This is to handle the case
 -     * where the cluster is being upgraded and so is running with mixed versions of the authn tables
 -     */
 -    private SelectStatement authenticationStatement()
 +    private String queryHashedPassword(String username)
      {
-         SelectStatement authenticationStatement = authenticationStatement();
- 
 -        if (Schema.instance.getCFMetaData(SchemaConstants.AUTH_KEYSPACE_NAME, LEGACY_CREDENTIALS_TABLE) == null)
 -            return authenticateStatement;
 -        else
 -        {
 -            // the statement got prepared, we to try preparing it again.
 -            // If the credentials was initialised only after statement got prepared, re-prepare (CASSANDRA-12813).
 -            if (legacyAuthenticateStatement == null)
 -                prepareLegacyAuthenticateStatement();
 -            return legacyAuthenticateStatement;
 -        }
 +        ResultMessage.Rows rows =
-         authenticationStatement.execute(QueryState.forInternalCalls(),
++        authenticateStatement.execute(QueryState.forInternalCalls(),
 +                                        QueryOptions.forInternalCalls(consistencyForRole(username),
 +                                                                      Lists.newArrayList(ByteBufferUtil.bytes(username))),
 +                                        System.nanoTime());
 +
 +        // If either a non-existent role name was supplied, or no credentials
 +        // were found for that role we don't want to cache the result so we throw
 +        // a specific, but unchecked, exception to keep LoadingCache happy.
 +        if (rows.result.isEmpty())
 +            throw new AuthenticationException(String.format("Provided username %s and/or password are incorrect", username));
 +
 +        UntypedResultSet result = UntypedResultSet.create(rows.result);
 +        if (!result.one().has(SALTED_HASH))
 +            throw new AuthenticationException(String.format("Provided username %s and/or password are incorrect", username));
 +
 +        return result.one().getString(SALTED_HASH);
      }
  
-     /**
-      * If the legacy users table exists try to verify credentials there. This is to handle the case
-      * where the cluster is being upgraded and so is running with mixed versions of the authn tables
-      */
-     private SelectStatement authenticationStatement()
-     {
-         if (Schema.instance.getTableMetadata(SchemaConstants.AUTH_KEYSPACE_NAME, LEGACY_CREDENTIALS_TABLE) == null)
-             return authenticateStatement;
-         else
-         {
-             // the statement got prepared, we to try preparing it again.
-             // If the credentials was initialised only after statement got prepared, re-prepare (CASSANDRA-12813).
-             if (legacyAuthenticateStatement == null)
-                 prepareLegacyAuthenticateStatement();
-             return legacyAuthenticateStatement;
-         }
-     }
- 
--
      public Set<DataResource> protectedResources()
      {
          // Also protected by CassandraRoleManager, but the duplication doesn't hurt and is more explicit
@@@ -148,21 -177,21 +123,9 @@@
                                       AuthKeyspace.ROLES);
          authenticateStatement = prepare(query);
  
-         if (Schema.instance.getTableMetadata(SchemaConstants.AUTH_KEYSPACE_NAME, LEGACY_CREDENTIALS_TABLE) != null)
 -        if (Schema.instance.getCFMetaData(SchemaConstants.AUTH_KEYSPACE_NAME, LEGACY_CREDENTIALS_TABLE) != null)
--            prepareLegacyAuthenticateStatement();
--
          cache = new CredentialsCache(this);
      }
  
--    private void prepareLegacyAuthenticateStatement()
--    {
--        String query = String.format("SELECT %s from %s.%s WHERE username = ?",
--                                     SALTED_HASH,
--                                     SchemaConstants.AUTH_KEYSPACE_NAME,
--                                     LEGACY_CREDENTIALS_TABLE);
--        legacyAuthenticateStatement = prepare(query);
--    }
--
      public AuthenticatedUser legacyAuthenticate(Map<String, String> credentials) throws AuthenticationException
      {
          String username = credentials.get(USERNAME_KEY);

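The merged `authenticate()` no longer routes failures through `NoSuchCredentialsException` and the legacy credentials fallback; a missing or malformed row now surfaces directly as an authentication failure. A minimal sketch of that control flow, where a `HashMap` stands in for the roles table and a plain string comparison stands in for `BCrypt.checkpw` (nothing here is Cassandra's API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the simplified authenticate() flow after this merge:
// single lookup, single check, failure thrown directly.
public class AuthenticateSketch
{
    static final Map<String, String> SALTED_HASHES = new HashMap<>();
    static
    {
        SALTED_HASHES.put("cassandra", "hash-of-cassandra");
    }

    static String authenticate(String username, String password)
    {
        String hash = SALTED_HASHES.get(username);               // queryHashedPassword() analogue
        if (hash == null || !hash.equals("hash-of-" + password)) // BCrypt.checkpw stand-in
            throw new IllegalArgumentException(
                String.format("Provided username %s and/or password are incorrect", username));
        return username;                                         // AuthenticatedUser stand-in
    }
}
```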
http://git-wip-us.apache.org/repos/asf/cassandra/blob/457733bd/src/java/org/apache/cassandra/schema/SchemaConstants.java
----------------------------------------------------------------------
diff --cc src/java/org/apache/cassandra/schema/SchemaConstants.java
index 5340f69,0000000..818c371
mode 100644,000000..100644
--- a/src/java/org/apache/cassandra/schema/SchemaConstants.java
+++ b/src/java/org/apache/cassandra/schema/SchemaConstants.java
@@@ -1,82 -1,0 +1,83 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +
 +package org.apache.cassandra.schema;
 +
 +import java.security.MessageDigest;
 +import java.security.NoSuchAlgorithmException;
- import java.util.Set;
- import java.util.UUID;
++import java.util.*;
 +import java.util.regex.Pattern;
 +
 +import com.google.common.collect.ImmutableSet;
 +
 +public final class SchemaConstants
 +{
 +    public static final Pattern PATTERN_WORD_CHARS = Pattern.compile("\\w+");
 +
 +    public static final String SYSTEM_KEYSPACE_NAME = "system";
 +    public static final String SCHEMA_KEYSPACE_NAME = "system_schema";
 +
 +    public static final String TRACE_KEYSPACE_NAME = "system_traces";
 +    public static final String AUTH_KEYSPACE_NAME = "system_auth";
 +    public static final String DISTRIBUTED_KEYSPACE_NAME = "system_distributed";
 +
 +    /* system keyspace names (the ones with LocalStrategy replication strategy) */
 +    public static final Set<String> SYSTEM_KEYSPACE_NAMES = ImmutableSet.of(SYSTEM_KEYSPACE_NAME, SCHEMA_KEYSPACE_NAME);
 +
 +    /* replicate system keyspace names (the ones with a "true" replication strategy) */
 +    public static final Set<String> REPLICATED_SYSTEM_KEYSPACE_NAMES = ImmutableSet.of(TRACE_KEYSPACE_NAME,
 +                                                                                       AUTH_KEYSPACE_NAME,
 +                                                                                       DISTRIBUTED_KEYSPACE_NAME);
 +    /**
 +     * longest permissible KS or CF name.  Our main concern is that filename not be more than 255 characters;
 +     * the filename will contain both the KS and CF names. Since non-schema-name components only take up
 +     * ~64 characters, we could allow longer names than this, but on Windows, the entire path should be not greater than
 +     * 255 characters, so a lower limit here helps avoid problems.  See CASSANDRA-4110.
 +     */
 +    public static final int NAME_LENGTH = 48;
 +
 +    // 59adb24e-f3cd-3e02-97f0-5b395827453f
 +    public static final UUID emptyVersion;
 +
++    public static final List<String> LEGACY_AUTH_TABLES = Arrays.asList("credentials", "users", "permissions");
++
 +    public static boolean isValidName(String name)
 +    {
 +        return name != null && !name.isEmpty() && name.length() <= NAME_LENGTH && PATTERN_WORD_CHARS.matcher(name).matches();
 +    }
 +
 +    static
 +    {
 +        try
 +        {
 +            emptyVersion = UUID.nameUUIDFromBytes(MessageDigest.getInstance("MD5").digest());
 +        }
 +        catch (NoSuchAlgorithmException e)
 +        {
 +            throw new AssertionError();
 +        }
 +    }
 +
 +    /**
 +     * @return whether or not the keyspace is a really system one (w/ LocalStrategy, unmodifiable, hardcoded)
 +     */
 +    public static boolean isSystemKeyspace(String keyspaceName)
 +    {
 +        return SYSTEM_KEYSPACE_NAMES.contains(keyspaceName.toLowerCase());
 +    }
 +}

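`SchemaConstants.isValidName()` above combines three rules: non-null/non-empty, the 48-character `NAME_LENGTH` cap, and the `\w+` word-character pattern. A self-contained copy of just that rule (logic taken verbatim from the file above, wrapped in an illustrative class name):

```java
import java.util.regex.Pattern;

// Self-contained copy of the isValidName() rule from SchemaConstants:
// non-null, non-empty, at most 48 characters, word characters only.
public class NameCheckSketch
{
    static final Pattern PATTERN_WORD_CHARS = Pattern.compile("\\w+");
    static final int NAME_LENGTH = 48;

    static boolean isValidName(String name)
    {
        return name != null && !name.isEmpty() && name.length() <= NAME_LENGTH
               && PATTERN_WORD_CHARS.matcher(name).matches();
    }
}
```

So "system_auth" passes (underscore is a word character), while a hyphenated name like "my-ks" or anything longer than 48 characters is rejected.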
http://git-wip-us.apache.org/repos/asf/cassandra/blob/457733bd/src/java/org/apache/cassandra/service/ClientState.java
----------------------------------------------------------------------
diff --cc src/java/org/apache/cassandra/service/ClientState.java
index dfddccd,5f01702..80fcd33
--- a/src/java/org/apache/cassandra/service/ClientState.java
+++ b/src/java/org/apache/cassandra/service/ClientState.java
@@@ -56,7 -56,7 +56,6 @@@ public class ClientStat
      private static final Set<IResource> READABLE_SYSTEM_RESOURCES = new HashSet<>();
      private static final Set<IResource> PROTECTED_AUTH_RESOURCES = new HashSet<>();
      private static final Set<String> ALTERABLE_SYSTEM_KEYSPACES = new HashSet<>();
--    private static final Set<IResource> DROPPABLE_SYSTEM_TABLES = new HashSet<>();
      static
      {
          // We want these system cfs to be always readable to authenticated users since many tools rely on them
@@@ -74,14 -74,14 +73,9 @@@
              PROTECTED_AUTH_RESOURCES.addAll(DatabaseDescriptor.getRoleManager().protectedResources());
          }
  
--        // allow users with sufficient privileges to alter KS level options on AUTH_KS and
--        // TRACING_KS, and also to drop legacy tables (users, credentials, permissions) from
--        // AUTH_KS
++        // allow users with sufficient privileges to alter KS level options on AUTH_KS and TRACING_KS
          ALTERABLE_SYSTEM_KEYSPACES.add(SchemaConstants.AUTH_KEYSPACE_NAME);
          ALTERABLE_SYSTEM_KEYSPACES.add(SchemaConstants.TRACE_KEYSPACE_NAME);
--        DROPPABLE_SYSTEM_TABLES.add(DataResource.table(SchemaConstants.AUTH_KEYSPACE_NAME, PasswordAuthenticator.LEGACY_CREDENTIALS_TABLE));
--        DROPPABLE_SYSTEM_TABLES.add(DataResource.table(SchemaConstants.AUTH_KEYSPACE_NAME, CassandraRoleManager.LEGACY_USERS_TABLE));
--        DROPPABLE_SYSTEM_TABLES.add(DataResource.table(SchemaConstants.AUTH_KEYSPACE_NAME, CassandraAuthorizer.USER_PERMISSIONS));
      }
  
      // Current user for the session
@@@ -399,11 -369,11 +393,10 @@@
              throw new UnauthorizedException(keyspace + " keyspace is not user-modifiable.");
  
          // allow users with sufficient privileges to alter KS level options on AUTH_KS and
--        // TRACING_KS, and also to drop legacy tables (users, credentials, permissions) from
--        // AUTH_KS
++        // TRACING_KS, but not to drop any tables
          if (ALTERABLE_SYSTEM_KEYSPACES.contains(resource.getKeyspace().toLowerCase())
             && ((perm == Permission.ALTER && !resource.isKeyspaceLevel())
--               || (perm == Permission.DROP && !DROPPABLE_SYSTEM_TABLES.contains(resource))))
++               || perm == Permission.DROP))
          {
              throw new UnauthorizedException(String.format("Cannot %s %s", perm, resource));
          }

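With `DROPPABLE_SYSTEM_TABLES` removed, the permission rule in `ClientState` tightens to: on the alterable system keyspaces, any `DROP` is rejected and `ALTER` is rejected unless it targets the keyspace level itself. A sketch of that predicate, with strings standing in for the `Permission` enum and `DataResource` (illustrative only):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch of the tightened rule: table-level ALTER and any DROP are
// rejected on the alterable system keyspaces; keyspace-level ALTER passes.
public class AlterDropRuleSketch
{
    static final Set<String> ALTERABLE_SYSTEM_KEYSPACES =
        new HashSet<>(Arrays.asList("system_auth", "system_traces"));

    static boolean rejected(String keyspace, String perm, boolean keyspaceLevel)
    {
        return ALTERABLE_SYSTEM_KEYSPACES.contains(keyspace.toLowerCase())
               && (("ALTER".equals(perm) && !keyspaceLevel) || "DROP".equals(perm));
    }
}
```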
http://git-wip-us.apache.org/repos/asf/cassandra/blob/457733bd/src/java/org/apache/cassandra/service/StartupChecks.java
----------------------------------------------------------------------
diff --cc src/java/org/apache/cassandra/service/StartupChecks.java
index 60acc1d,372f5f9..2774dee
--- a/src/java/org/apache/cassandra/service/StartupChecks.java
+++ b/src/java/org/apache/cassandra/service/StartupChecks.java
@@@ -23,18 -23,21 +23,22 @@@ import java.io.IOException
  import java.nio.file.*;
  import java.nio.file.attribute.BasicFileAttributes;
  import java.util.*;
+ import java.util.stream.Collectors;
  
+ import com.google.common.annotations.VisibleForTesting;
  import com.google.common.base.Joiner;
  import com.google.common.collect.ImmutableList;
 -import com.google.common.collect.ImmutableSet;
  import com.google.common.collect.Iterables;
  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;
  
 -import org.apache.cassandra.config.CFMetaData;
++import org.apache.cassandra.cql3.QueryProcessor;
++import org.apache.cassandra.cql3.UntypedResultSet;
 +import org.apache.cassandra.schema.TableMetadata;
  import org.apache.cassandra.config.Config;
  import org.apache.cassandra.config.DatabaseDescriptor;
 -import org.apache.cassandra.config.Schema;
 -import org.apache.cassandra.config.SchemaConstants;
 +import org.apache.cassandra.schema.Schema;
 +import org.apache.cassandra.schema.SchemaConstants;
  import org.apache.cassandra.db.ColumnFamilyStore;
  import org.apache.cassandra.db.Directories;
  import org.apache.cassandra.db.SystemKeyspace;
@@@ -378,7 -383,7 +383,7 @@@ public class StartupCheck
              }
              catch (ConfigurationException e)
              {
--                throw new StartupException(100, "Fatal exception during initialization", e);
++                throw new StartupException(StartupException.ERR_WRONG_CONFIG, "Fatal exception during initialization", e);
              }
          }
      };
@@@ -426,4 -431,28 +431,31 @@@
              }
          }
      };
+ 
 -    public static final StartupCheck checkLegacyAuthTables = () -> checkLegacyAuthTablesMessage().ifPresent(logger::warn);
 -
 -    static final Set<String> LEGACY_AUTH_TABLES = ImmutableSet.of("credentials", "users", "permissions");
++    public static final StartupCheck checkLegacyAuthTables = () ->
++    {
++        Optional<String> errMsg = checkLegacyAuthTablesMessage();
++        if (errMsg.isPresent())
++            throw new StartupException(StartupException.ERR_WRONG_CONFIG, errMsg.get());
++    };
+ 
+     @VisibleForTesting
+     static Optional<String> checkLegacyAuthTablesMessage()
+     {
 -        List<String> existing = new ArrayList<>(LEGACY_AUTH_TABLES).stream().filter((legacyAuthTable) ->
++        List<String> existing = new ArrayList<>(SchemaConstants.LEGACY_AUTH_TABLES).stream().filter((legacyAuthTable) ->
+             {
+                 UntypedResultSet result = QueryProcessor.executeOnceInternal(String.format("SELECT table_name FROM %s.%s WHERE keyspace_name='%s' AND table_name='%s'",
+                                                                                            SchemaConstants.SCHEMA_KEYSPACE_NAME,
+                                                                                            "tables",
+                                                                                            SchemaConstants.AUTH_KEYSPACE_NAME,
+                                                                                            legacyAuthTable));
+                 return result != null && !result.isEmpty();
+             }).collect(Collectors.toList());
+ 
+         if (!existing.isEmpty())
+             return Optional.of(String.format("Legacy auth tables %s in keyspace %s still exist and have not been properly migrated.",
+                         Joiner.on(", ").join(existing), SchemaConstants.AUTH_KEYSPACE_NAME));
+         else
+             return Optional.empty();
+     };
  }

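The new startup check above filters the fixed legacy table list down to the names still present in the schema, then folds them into a single `Optional` error message. A sketch of that shape in plain stream/`Optional` terms, where the `Set` parameter stands in for the `system_schema.tables` query:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch of checkLegacyAuthTablesMessage(): filter, then report or stay empty.
public class LegacyCheckSketch
{
    static final List<String> LEGACY_AUTH_TABLES =
        Arrays.asList("credentials", "users", "permissions");

    static Optional<String> checkLegacyAuthTablesMessage(Set<String> existingTables)
    {
        List<String> existing = LEGACY_AUTH_TABLES.stream()
                                                  .filter(existingTables::contains)
                                                  .collect(Collectors.toList());
        if (existing.isEmpty())
            return Optional.empty();
        return Optional.of(String.format(
            "Legacy auth tables %s in keyspace %s still exist and have not been properly migrated.",
            String.join(", ", existing), "system_auth"));
    }
}
```

Returning `Optional.empty()` on the clean path is what lets the caller decide severity: the 3.11 branch logged the message as a warning, while trunk now turns a present value into a fatal `StartupException`.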
http://git-wip-us.apache.org/repos/asf/cassandra/blob/457733bd/test/unit/org/apache/cassandra/service/LegacyAuthFailTest.java
----------------------------------------------------------------------
diff --cc test/unit/org/apache/cassandra/service/LegacyAuthFailTest.java
index 0000000,1e93f31..1d79ecc
mode 000000,100644..100644
--- a/test/unit/org/apache/cassandra/service/LegacyAuthFailTest.java
+++ b/test/unit/org/apache/cassandra/service/LegacyAuthFailTest.java
@@@ -1,0 -1,89 +1,89 @@@
+ /*
+  * Licensed to the Apache Software Foundation (ASF) under one
+  * or more contributor license agreements.  See the NOTICE file
+  * distributed with this work for additional information
+  * regarding copyright ownership.  The ASF licenses this file
+  * to you under the Apache License, Version 2.0 (the
+  * "License"); you may not use this file except in compliance
+  * with the License.  You may obtain a copy of the License at
+  *
+  *     http://www.apache.org/licenses/LICENSE-2.0
+  *
+  * Unless required by applicable law or agreed to in writing, software
+  * distributed under the License is distributed on an "AS IS" BASIS,
+  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  * See the License for the specific language governing permissions and
+  * limitations under the License.
+  */
+ 
+ package org.apache.cassandra.service;
+ 
+ import java.util.ArrayList;
+ import java.util.List;
+ import java.util.Optional;
+ 
+ import com.google.common.base.Joiner;
+ import org.junit.Test;
+ 
 -import org.apache.cassandra.config.SchemaConstants;
+ import org.apache.cassandra.cql3.CQLTester;
++import org.apache.cassandra.schema.SchemaConstants;
+ 
+ import static java.lang.String.format;
+ import static org.junit.Assert.assertEquals;
+ import static org.junit.Assert.assertFalse;
+ 
+ public class LegacyAuthFailTest extends CQLTester
+ {
+     @Test
+     public void testStartupChecks() throws Throwable
+     {
+         createKeyspace();
+ 
 -        List<String> legacyTables = new ArrayList<>(StartupChecks.LEGACY_AUTH_TABLES);
++        List<String> legacyTables = new ArrayList<>(SchemaConstants.LEGACY_AUTH_TABLES);
+ 
+         // test reporting for individual tables
+         for (String legacyTable : legacyTables)
+         {
+             createLegacyTable(legacyTable);
+ 
+             Optional<String> errMsg = StartupChecks.checkLegacyAuthTablesMessage();
+             assertEquals(format("Legacy auth tables %s in keyspace %s still exist and have not been properly migrated.",
+                                 legacyTable,
+                                 SchemaConstants.AUTH_KEYSPACE_NAME), errMsg.get());
+             dropLegacyTable(legacyTable);
+         }
+ 
+         // test reporting of multiple existing tables
+         for (String legacyTable : legacyTables)
+             createLegacyTable(legacyTable);
+ 
+         while (!legacyTables.isEmpty())
+         {
+             Optional<String> errMsg = StartupChecks.checkLegacyAuthTablesMessage();
+             assertEquals(format("Legacy auth tables %s in keyspace %s still exist and have not been properly migrated.",
+                                 Joiner.on(", ").join(legacyTables),
+                                 SchemaConstants.AUTH_KEYSPACE_NAME), errMsg.get());
+ 
+             dropLegacyTable(legacyTables.remove(0));
+         }
+ 
+         // no legacy tables found
+         Optional<String> errMsg = StartupChecks.checkLegacyAuthTablesMessage();
+         assertFalse(errMsg.isPresent());
+     }
+ 
+     private void dropLegacyTable(String legacyTable) throws Throwable
+     {
+         execute(format("DROP TABLE %s.%s", SchemaConstants.AUTH_KEYSPACE_NAME, legacyTable));
+     }
+ 
+     private void createLegacyTable(String legacyTable) throws Throwable
+     {
+         execute(format("CREATE TABLE %s.%s (id int PRIMARY KEY, val text)", SchemaConstants.AUTH_KEYSPACE_NAME, legacyTable));
+     }
+ 
+     private void createKeyspace() throws Throwable
+     {
+         execute(format("CREATE KEYSPACE %s WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}", SchemaConstants.AUTH_KEYSPACE_NAME));
+     }
+ }

