From: jbellis@apache.org
To: commits@cassandra.apache.org
Reply-To: dev@cassandra.apache.org
Date: Thu, 19 Dec 2013 23:44:41 -0000
Message-Id: <8ae25fcf523c4f3ea53ae2a9c664d7c7@git.apache.org>
In-Reply-To: <883c902d43164b028a7294298aabac18@git.apache.org>
References: <883c902d43164b028a7294298aabac18@git.apache.org>
Subject: [4/6] git commit: merge from 1.2

merge from 1.2

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d8062cd8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d8062cd8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d8062cd8

Branch: refs/heads/trunk
Commit: d8062cd823bd0552177bd72252b6abbb8f1a7acb
Parents: d9dac01 a4895c5
Author: Jonathan Ellis
Authored: Thu Dec 19 17:44:21 2013 -0600
Committer:
Jonathan Ellis
Committed: Thu Dec 19 17:44:21 2013 -0600

----------------------------------------------------------------------
 CHANGES.txt                                                  | 1 +
 src/java/org/apache/cassandra/config/DatabaseDescriptor.java | 5 +++++
 2 files changed, 6 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d8062cd8/CHANGES.txt
----------------------------------------------------------------------
diff --cc CHANGES.txt
index 622a1a9,6f6c131..8fd794c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -30,49 -27,9 +30,50 @@@ Merged from 1.2
     (CASSANDRA-6413)
   * (Hadoop) add describe_local_ring (CASSANDRA-6268)
   * Fix handling of concurrent directory creation failure (CASSANDRA-6459)
+  * Allow executing CREATE statements multiple times (CASSANDRA-6471)
+  * Don't send confusing info with timeouts (CASSANDRA-6491)
+  * Don't resubmit counter mutation runnables internally (CASSANDRA-6427)
+  * Don't drop local mutations without a trace (CASSANDRA-6510)
++ * Don't allow null max_hint_window_in_ms (CASSANDRA-6419)
 -1.2.12
 +2.0.3
 + * Fix FD leak on slice read path (CASSANDRA-6275)
 + * Cancel read meter task when closing SSTR (CASSANDRA-6358)
 + * free off-heap IndexSummary during bulk (CASSANDRA-6359)
 + * Recover from IOException in accept() thread (CASSANDRA-6349)
 + * Improve Gossip tolerance of abnormally slow tasks (CASSANDRA-6338)
 + * Fix trying to hint timed out counter writes (CASSANDRA-6322)
 + * Allow restoring specific columnfamilies from archived CL (CASSANDRA-4809)
 + * Avoid flushing compaction_history after each operation (CASSANDRA-6287)
 + * Fix repair assertion error when tombstones expire (CASSANDRA-6277)
 + * Skip loading corrupt key cache (CASSANDRA-6260)
 + * Fixes for compacting larger-than-memory rows (CASSANDRA-6274)
 + * Compact hottest sstables first and optionally omit coldest from
 +   compaction entirely (CASSANDRA-6109)
 + * Fix modifying column_metadata from thrift (CASSANDRA-6182)
 + * cqlsh: fix LIST USERS output (CASSANDRA-6242)
 + * Add IRequestSink interface (CASSANDRA-6248)
 + * Update memtable size while flushing (CASSANDRA-6249)
 + * Provide hooks around CQL2/CQL3 statement execution (CASSANDRA-6252)
 + * Require Permission.SELECT for CAS updates (CASSANDRA-6247)
 + * New CQL-aware SSTableWriter (CASSANDRA-5894)
 + * Reject CAS operation when the protocol v1 is used (CASSANDRA-6270)
 + * Correctly throw error when frame too large (CASSANDRA-5981)
 + * Fix serialization bug in PagedRange with 2ndary indexes (CASSANDRA-6299)
 + * Fix CQL3 table validation in Thrift (CASSANDRA-6140)
 + * Fix bug missing results with IN clauses (CASSANDRA-6327)
 + * Fix paging with reversed slices (CASSANDRA-6343)
 + * Set minTimestamp correctly to be able to drop expired sstables (CASSANDRA-6337)
 + * Support NaN and Infinity as float literals (CASSANDRA-6003)
 + * Remove RF from nodetool ring output (CASSANDRA-6289)
 + * Fix attempting to flush empty rows (CASSANDRA-6374)
 + * Fix potential out of bounds exception when paging (CASSANDRA-6333)
 +Merged from 1.2:
 + * Optimize FD phi calculation (CASSANDRA-6386)
 + * Improve initial FD phi estimate when starting up (CASSANDRA-6385)
 + * Don't list CQL3 table in CLI describe even if named explicitely
 +   (CASSANDRA-5750)
   * Invalidate row cache when dropping CF (CASSANDRA-6351)
   * add non-jamm path for cached statements (CASSANDRA-6293)
   * (Hadoop) Require CFRR batchSize to be at least 2 (CASSANDRA-6114)


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d8062cd8/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
----------------------------------------------------------------------
diff --cc src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 24c1398,0db2f85..3a6cdb4
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@@ -118,378 -117,424 +118,383 @@@ public class DatabaseDescripto
          }
          catch (Exception e)
          {
-             ClassLoader loader = DatabaseDescriptor.class.getClassLoader();
-             url = loader.getResource(configUrl);
-             if (url == null)
-                 throw new ConfigurationException("Cannot locate " + configUrl);
+             logger.error("Fatal error during configuration loading", e);
+             System.err.println(e.getMessage() + "\nFatal error during configuration loading; unable to start. See log for stacktrace.");
+             System.exit(1);
          }
-
-         return url;
      }

-     static
+     @VisibleForTesting
+     static Config loadConfig() throws ConfigurationException
      {
-         if (Config.getLoadYaml())
-             loadYaml();
-         else
-             conf = new Config();
+         String loaderClass = System.getProperty("cassandra.config.loader");
+         ConfigurationLoader loader = loaderClass == null
+                                    ? new YamlConfigurationLoader()
+                                    : FBUtilities.construct(loaderClass, "configuration loading");
+         return loader.loadConfig();
      }

-     static void loadYaml()
+
+     private static void applyConfig(Config config) throws ConfigurationException
      {
-         try
-         {
-             URL url = getStorageConfigURL();
-             logger.info("Loading settings from " + url);
-             InputStream input;
-             try
-             {
-                 input = url.openStream();
-             }
-             catch (IOException e)
-             {
-                 // getStorageConfigURL should have ruled this out
-                 throw new AssertionError(e);
-             }
-             org.yaml.snakeyaml.constructor.Constructor constructor = new org.yaml.snakeyaml.constructor.Constructor(Config.class);
-             TypeDescription seedDesc = new TypeDescription(SeedProviderDef.class);
-             seedDesc.putMapPropertyType("parameters", String.class, String.class);
-             constructor.addTypeDescription(seedDesc);
-             Yaml yaml = new Yaml(new Loader(constructor));
-             conf = (Config)yaml.load(input);
+         conf = config;

-             logger.info("Data files directories: " + Arrays.toString(conf.data_file_directories));
-             logger.info("Commit log directory: " + conf.commitlog_directory);
+         logger.info("Data files directories: " + Arrays.toString(conf.data_file_directories));
+         logger.info("Commit log directory: " + conf.commitlog_directory);

-             if (conf.commitlog_sync == null)
-             {
-                 throw new ConfigurationException("Missing required directive CommitLogSync");
-             }
+         if (conf.commitlog_sync == null)
+         {
+             throw new ConfigurationException("Missing required directive CommitLogSync");
+         }

-             if (conf.commitlog_sync == Config.CommitLogSync.batch)
-             {
-                 if (conf.commitlog_sync_batch_window_in_ms == null)
-                 {
-                     throw new ConfigurationException("Missing value for commitlog_sync_batch_window_in_ms: Double expected.");
-                 }
-                 else if (conf.commitlog_sync_period_in_ms != null)
-                 {
-                     throw new ConfigurationException("Batch sync specified, but commitlog_sync_period_in_ms found. Only specify commitlog_sync_batch_window_in_ms when using batch sync");
-                 }
-                 logger.debug("Syncing log with a batch window of " + conf.commitlog_sync_batch_window_in_ms);
-             }
-             else
+         if (conf.commitlog_sync == Config.CommitLogSync.batch)
+         {
+             if (conf.commitlog_sync_batch_window_in_ms == null)
              {
-                 if (conf.commitlog_sync_period_in_ms == null)
-                 {
-                     throw new ConfigurationException("Missing value for commitlog_sync_period_in_ms: Integer expected");
-                 }
-                 else if (conf.commitlog_sync_batch_window_in_ms != null)
-                 {
-                     throw new ConfigurationException("commitlog_sync_period_in_ms specified, but commitlog_sync_batch_window_in_ms found. Only specify commitlog_sync_period_in_ms when using periodic sync.");
-                 }
-                 logger.debug("Syncing log with a period of " + conf.commitlog_sync_period_in_ms);
+                 throw new ConfigurationException("Missing value for commitlog_sync_batch_window_in_ms: Double expected.");
              }
-
-             if (conf.commitlog_total_space_in_mb == null)
-                 conf.commitlog_total_space_in_mb = System.getProperty("os.arch").contains("64") ? 1024 : 32;
-
-             /* evaluate the DiskAccessMode Config directive, which also affects indexAccessMode selection */
-             if (conf.disk_access_mode == Config.DiskAccessMode.auto)
+             else if (conf.commitlog_sync_period_in_ms != null)
              {
-                 conf.disk_access_mode = System.getProperty("os.arch").contains("64") ? Config.DiskAccessMode.mmap : Config.DiskAccessMode.standard;
-                 indexAccessMode = conf.disk_access_mode;
-                 logger.info("DiskAccessMode 'auto' determined to be " + conf.disk_access_mode + ", indexAccessMode is " + indexAccessMode );
+                 throw new ConfigurationException("Batch sync specified, but commitlog_sync_period_in_ms found. Only specify commitlog_sync_batch_window_in_ms when using batch sync");
              }
-             else if (conf.disk_access_mode == Config.DiskAccessMode.mmap_index_only)
+             logger.debug("Syncing log with a batch window of " + conf.commitlog_sync_batch_window_in_ms);
+         }
+         else
+         {
+             if (conf.commitlog_sync_period_in_ms == null)
              {
-                 conf.disk_access_mode = Config.DiskAccessMode.standard;
-                 indexAccessMode = Config.DiskAccessMode.mmap;
-                 logger.info("DiskAccessMode is " + conf.disk_access_mode + ", indexAccessMode is " + indexAccessMode );
+                 throw new ConfigurationException("Missing value for commitlog_sync_period_in_ms: Integer expected");
              }
-             else
+             else if (conf.commitlog_sync_batch_window_in_ms != null)
              {
-                 indexAccessMode = conf.disk_access_mode;
-                 logger.info("DiskAccessMode is " + conf.disk_access_mode + ", indexAccessMode is " + indexAccessMode );
+                 throw new ConfigurationException("commitlog_sync_period_in_ms specified, but commitlog_sync_batch_window_in_ms found. Only specify commitlog_sync_period_in_ms when using periodic sync.");
              }
+             logger.debug("Syncing log with a period of " + conf.commitlog_sync_period_in_ms);
+         }

-             logger.info("disk_failure_policy is " + conf.disk_failure_policy);
+         if (conf.commitlog_total_space_in_mb == null)
+             conf.commitlog_total_space_in_mb = System.getProperty("os.arch").contains("64") ? 1024 : 32;

-             /* Authentication and authorization backend, implementing IAuthenticator and IAuthorizer */
-             if (conf.authenticator != null)
-                 authenticator = FBUtilities.newAuthenticator(conf.authenticator);
+         /* evaluate the DiskAccessMode Config directive, which also affects indexAccessMode selection */
+         if (conf.disk_access_mode == Config.DiskAccessMode.auto)
+         {
+             conf.disk_access_mode = System.getProperty("os.arch").contains("64") ? Config.DiskAccessMode.mmap : Config.DiskAccessMode.standard;
+             indexAccessMode = conf.disk_access_mode;
+             logger.info("DiskAccessMode 'auto' determined to be " + conf.disk_access_mode + ", indexAccessMode is " + indexAccessMode );
+         }
+         else if (conf.disk_access_mode == Config.DiskAccessMode.mmap_index_only)
+         {
+             conf.disk_access_mode = Config.DiskAccessMode.standard;
+             indexAccessMode = Config.DiskAccessMode.mmap;
+             logger.info("DiskAccessMode is " + conf.disk_access_mode + ", indexAccessMode is " + indexAccessMode );
+         }
+         else
+         {
+             indexAccessMode = conf.disk_access_mode;
+             logger.info("DiskAccessMode is " + conf.disk_access_mode + ", indexAccessMode is " + indexAccessMode );
+         }

-             if (conf.authority != null)
-             {
-                 logger.warn("Please rename 'authority' to 'authorizer' in cassandra.yaml");
-                 if (!conf.authority.equals("org.apache.cassandra.auth.AllowAllAuthority"))
-                     throw new ConfigurationException("IAuthority interface has been deprecated,"
-                                                      + " please implement IAuthorizer instead.");
-             }
+         logger.info("disk_failure_policy is " + conf.disk_failure_policy);

-             if (conf.authorizer != null)
-                 authorizer = FBUtilities.newAuthorizer(conf.authorizer);
+         /* Authentication and authorization backend, implementing IAuthenticator and IAuthorizer */
+         if (conf.authenticator != null)
+             authenticator = FBUtilities.newAuthenticator(conf.authenticator);

-             if (authenticator instanceof AllowAllAuthenticator && !(authorizer instanceof AllowAllAuthorizer))
-                 throw new ConfigurationException("AllowAllAuthenticator can't be used with " + conf.authorizer);
+         if (conf.authorizer != null)
+             authorizer = FBUtilities.newAuthorizer(conf.authorizer);

-             if (conf.internode_authenticator != null)
-                 internodeAuthenticator = FBUtilities.construct(conf.internode_authenticator, "internode_authenticator");
-             else
-                 internodeAuthenticator = new AllowAllInternodeAuthenticator();
+         if (authenticator instanceof AllowAllAuthenticator && !(authorizer instanceof AllowAllAuthorizer))
+             throw new ConfigurationException("AllowAllAuthenticator can't be used with " + conf.authorizer);

-             authenticator.validateConfiguration();
-             authorizer.validateConfiguration();
-             internodeAuthenticator.validateConfiguration();
+         if (conf.internode_authenticator != null)
+             internodeAuthenticator = FBUtilities.construct(conf.internode_authenticator, "internode_authenticator");
+         else
+             internodeAuthenticator = new AllowAllInternodeAuthenticator();

-             /* Hashing strategy */
-             if (conf.partitioner == null)
-             {
-                 throw new ConfigurationException("Missing directive: partitioner");
-             }
+         authenticator.validateConfiguration();
+         authorizer.validateConfiguration();
+         internodeAuthenticator.validateConfiguration();

-             try
-             {
-                 partitioner = FBUtilities.newPartitioner(System.getProperty("cassandra.partitioner", conf.partitioner));
-             }
-             catch (Exception e)
-             {
-                 throw new ConfigurationException("Invalid partitioner class " + conf.partitioner);
-             }
-             paritionerName = partitioner.getClass().getCanonicalName();
+         /* Hashing strategy */
+         if (conf.partitioner == null)
+         {
+             throw new ConfigurationException("Missing directive: partitioner");
+         }
+         try
+         {
+             partitioner = FBUtilities.newPartitioner(System.getProperty("cassandra.partitioner", conf.partitioner));
+         }
+         catch (Exception e)
+         {
+             throw new ConfigurationException("Invalid partitioner class " + conf.partitioner);
+         }
+         paritionerName = partitioner.getClass().getCanonicalName();

-             if (conf.max_hint_window_in_ms == null)
-             {
-                 throw new ConfigurationException("max_hint_window_in_ms cannot be set to null");
-             }
++        if (conf.max_hint_window_in_ms == null)
++        {
++            throw new ConfigurationException("max_hint_window_in_ms cannot be set to null");
++        }
+
-             /* phi convict threshold for FailureDetector */
-             if (conf.phi_convict_threshold < 5 || conf.phi_convict_threshold > 16)
-             {
-                 throw new ConfigurationException("phi_convict_threshold must be between 5 and 16");
-             }
+         /* phi convict threshold for FailureDetector */
+         if (conf.phi_convict_threshold < 5 || conf.phi_convict_threshold > 16)
+         {
+             throw new ConfigurationException("phi_convict_threshold must be between 5 and 16");
+         }

-             /* Thread per pool */
-             if (conf.concurrent_reads != null && conf.concurrent_reads < 2)
-             {
-                 throw new ConfigurationException("concurrent_reads must be at least 2");
-             }
+         /* Thread per pool */
+         if (conf.concurrent_reads != null && conf.concurrent_reads < 2)
+         {
+             throw new ConfigurationException("concurrent_reads must be at least 2");
+         }

-             if (conf.concurrent_writes != null && conf.concurrent_writes < 2)
-             {
-                 throw new ConfigurationException("concurrent_writes must be at least 2");
-             }
+         if (conf.concurrent_writes != null && conf.concurrent_writes < 2)
+         {
+             throw new ConfigurationException("concurrent_writes must be at least 2");
+         }

-             if (conf.concurrent_replicates != null && conf.concurrent_replicates < 2)
-             {
-                 throw new ConfigurationException("concurrent_replicates must be at least 2");
-             }
+         if (conf.concurrent_replicates != null && conf.concurrent_replicates < 2)
+         {
+             throw new ConfigurationException("concurrent_replicates must be at least 2");
+         }
+
+         if (conf.file_cache_size_in_mb == null)
+             conf.file_cache_size_in_mb = Math.min(512, (int) (Runtime.getRuntime().maxMemory() / (4 * 1048576)));

-             if (conf.memtable_total_space_in_mb == null)
-                 conf.memtable_total_space_in_mb = (int) (Runtime.getRuntime().maxMemory() / (3 * 1048576));
-             if (conf.memtable_total_space_in_mb <= 0)
-                 throw new ConfigurationException("memtable_total_space_in_mb must be positive");
-             logger.info("Global memtable threshold is enabled at {}MB", conf.memtable_total_space_in_mb);
+         if (conf.memtable_total_space_in_mb == null)
+             conf.memtable_total_space_in_mb = (int) (Runtime.getRuntime().maxMemory() / (4 * 1048576));
+         if (conf.memtable_total_space_in_mb <= 0)
+             throw new ConfigurationException("memtable_total_space_in_mb must be positive");
+         logger.info("Global memtable threshold is enabled at {}MB", conf.memtable_total_space_in_mb);

-             /* Memtable flush writer threads */
-             if (conf.memtable_flush_writers != null && conf.memtable_flush_writers < 1)
+         /* Memtable flush writer threads */
+         if (conf.memtable_flush_writers != null && conf.memtable_flush_writers < 1)
+         {
+             throw new ConfigurationException("memtable_flush_writers must be at least 1");
+         }
+         else if (conf.memtable_flush_writers == null)
+         {
+             conf.memtable_flush_writers = conf.data_file_directories.length;
+         }
+
+         /* Local IP or hostname to bind services to */
+         if (conf.listen_address != null)
+         {
+             if (conf.listen_address.equals("0.0.0.0"))
+                 throw new ConfigurationException("listen_address cannot be 0.0.0.0!");
+             try
              {
-                 throw new ConfigurationException("memtable_flush_writers must be at least 1");
+                 listenAddress = InetAddress.getByName(conf.listen_address);
              }
-             else if (conf.memtable_flush_writers == null)
+             catch (UnknownHostException e)
              {
-                 conf.memtable_flush_writers = conf.data_file_directories.length;
+                 throw new ConfigurationException("Unknown listen_address '" + conf.listen_address + "'");
              }
+         }

-             /* Local IP or hostname to bind services to */
-             if (conf.listen_address != null)
+         /* Gossip Address to broadcast */
+         if (conf.broadcast_address != null)
+         {
+             if (conf.broadcast_address.equals("0.0.0.0"))
              {
-                 if (conf.listen_address.equals("0.0.0.0"))
-                     throw new ConfigurationException("listen_address cannot be 0.0.0.0!");
-
-                 try
-                 {
-                     listenAddress = InetAddress.getByName(conf.listen_address);
-                 }
-                 catch (UnknownHostException e)
-                 {
-                     throw new ConfigurationException("Unknown listen_address '" + conf.listen_address + "'");
-                 }
+                 throw new ConfigurationException("broadcast_address cannot be 0.0.0.0!");
              }

-             /* Gossip Address to broadcast */
-             if (conf.broadcast_address != null)
+             try
              {
-                 if (conf.broadcast_address.equals("0.0.0.0"))
-                 {
-                     throw new ConfigurationException("broadcast_address cannot be 0.0.0.0!");
-                 }
-
-                 try
-                 {
-                     broadcastAddress = InetAddress.getByName(conf.broadcast_address);
-                 }
-                 catch (UnknownHostException e)
-                 {
-                     throw new ConfigurationException("Unknown broadcast_address '" + conf.broadcast_address + "'");
-                 }
+                 broadcastAddress = InetAddress.getByName(conf.broadcast_address);
+             }
+             catch (UnknownHostException e)
+             {
+                 throw new ConfigurationException("Unknown broadcast_address '" + conf.broadcast_address + "'");
              }
+         }

-             /* Local IP or hostname to bind RPC server to */
-             if (conf.rpc_address != null)
+         /* Local IP or hostname to bind RPC server to */
+         if (conf.rpc_address != null)
+         {
+             try
              {
-                 try
-                 {
-                     rpcAddress = InetAddress.getByName(conf.rpc_address);
-                 }
-                 catch (UnknownHostException e)
-                 {
-                     throw new ConfigurationException("Unknown host in rpc_address " + conf.rpc_address);
-                 }
+                 rpcAddress = InetAddress.getByName(conf.rpc_address);
              }
-             else
+             catch (UnknownHostException e)
              {
-                 rpcAddress = FBUtilities.getLocalAddress();
+                 throw new ConfigurationException("Unknown host in rpc_address " + conf.rpc_address);
              }
+         }
+         else
+         {
+             rpcAddress = FBUtilities.getLocalAddress();
+         }

-             if (conf.thrift_framed_transport_size_in_mb <= 0)
-                 throw new ConfigurationException("thrift_framed_transport_size_in_mb must be positive");
+         if (conf.thrift_framed_transport_size_in_mb <= 0)
+             throw new ConfigurationException("thrift_framed_transport_size_in_mb must be positive");

-             /* end point snitch */
-             if (conf.endpoint_snitch == null)
-             {
-                 throw new ConfigurationException("Missing endpoint_snitch directive");
-             }
-             snitch = createEndpointSnitch(conf.endpoint_snitch);
-             EndpointSnitchInfo.create();
+         if (conf.native_transport_max_frame_size_in_mb <= 0)
+             throw new ConfigurationException("native_transport_max_frame_size_in_mb must be positive");

-             localDC = snitch.getDatacenter(FBUtilities.getBroadcastAddress());
-             localComparator = new Comparator()
+         /* end point snitch */
+         if (conf.endpoint_snitch == null)
+         {
+             throw new ConfigurationException("Missing endpoint_snitch directive");
+         }
+         snitch = createEndpointSnitch(conf.endpoint_snitch);
+         EndpointSnitchInfo.create();
+
+         localDC = snitch.getDatacenter(FBUtilities.getBroadcastAddress());
+         localComparator = new Comparator()
+         {
+             public int compare(InetAddress endpoint1, InetAddress endpoint2)
              {
-                 public int compare(InetAddress endpoint1, InetAddress endpoint2)
-                 {
-                     boolean local1 = localDC.equals(snitch.getDatacenter(endpoint1));
-                     boolean local2 = localDC.equals(snitch.getDatacenter(endpoint2));
-                     if (local1 && !local2)
-                         return -1;
-                     if (local2 && !local1)
-                         return 1;
-                     return 0;
-                 }
-             };
+                 boolean local1 = localDC.equals(snitch.getDatacenter(endpoint1));
+                 boolean local2 = localDC.equals(snitch.getDatacenter(endpoint2));
+                 if (local1 && !local2)
+                     return -1;
+                 if (local2 && !local1)
+                     return 1;
+                 return 0;
+             }
+         };

-             /* Request Scheduler setup */
-             requestSchedulerOptions = conf.request_scheduler_options;
-             if (conf.request_scheduler != null)
+         /* Request Scheduler setup */
+         requestSchedulerOptions = conf.request_scheduler_options;
+         if (conf.request_scheduler != null)
+         {
+             try
              {
-                 try
-                 {
-                     if (requestSchedulerOptions == null)
-                     {
-                         requestSchedulerOptions = new RequestSchedulerOptions();
-                     }
-                     Class cls = Class.forName(conf.request_scheduler);
-                     requestScheduler = (IRequestScheduler) cls.getConstructor(RequestSchedulerOptions.class).newInstance(requestSchedulerOptions);
-                 }
-                 catch (ClassNotFoundException e)
-                 {
-                     throw new ConfigurationException("Invalid Request Scheduler class " + conf.request_scheduler);
-                 }
-                 catch (Exception e)
+                 if (requestSchedulerOptions == null)
                  {
-                     throw new ConfigurationException("Unable to instantiate request scheduler", e);
+                     requestSchedulerOptions = new RequestSchedulerOptions();
                  }
+                 Class cls = Class.forName(conf.request_scheduler);
+                 requestScheduler = (IRequestScheduler) cls.getConstructor(RequestSchedulerOptions.class).newInstance(requestSchedulerOptions);
              }
-             else
-             {
-                 requestScheduler = new NoScheduler();
-             }
-
-             if (conf.request_scheduler_id == RequestSchedulerId.keyspace)
+             catch (ClassNotFoundException e)
              {
-                 requestSchedulerId = conf.request_scheduler_id;
+                 throw new ConfigurationException("Invalid Request Scheduler class " + conf.request_scheduler);
              }
-             else
+             catch (Exception e)
              {
-                 // Default to Keyspace
-                 requestSchedulerId = RequestSchedulerId.keyspace;
+                 throw new ConfigurationException("Unable to instantiate request scheduler", e);
              }
+         }
+         else
+         {
+             requestScheduler = new NoScheduler();
+         }

-             if (logger.isDebugEnabled() && conf.auto_bootstrap != null)
-             {
-                 logger.debug("setting auto_bootstrap to " + conf.auto_bootstrap);
-             }
+         if (conf.request_scheduler_id == RequestSchedulerId.keyspace)
+         {
+             requestSchedulerId = conf.request_scheduler_id;
+         }
+         else
+         {
+             // Default to Keyspace
+             requestSchedulerId = RequestSchedulerId.keyspace;
+         }

-             logger.info((conf.multithreaded_compaction ? "" : "Not ") + "using multi-threaded compaction");
+         if (logger.isDebugEnabled() && conf.auto_bootstrap != null)
+         {
+             logger.debug("setting auto_bootstrap to " + conf.auto_bootstrap);
+         }

-             if (conf.in_memory_compaction_limit_in_mb != null && conf.in_memory_compaction_limit_in_mb <= 0)
-             {
-                 throw new ConfigurationException("in_memory_compaction_limit_in_mb must be a positive integer");
-             }
+         logger.info((conf.multithreaded_compaction ? "" : "Not ") + "using multi-threaded compaction");

-             if (conf.concurrent_compactors == null)
-                 conf.concurrent_compactors = FBUtilities.getAvailableProcessors();
+         if (conf.in_memory_compaction_limit_in_mb != null && conf.in_memory_compaction_limit_in_mb <= 0)
+         {
+             throw new ConfigurationException("in_memory_compaction_limit_in_mb must be a positive integer");
+         }

-             if (conf.concurrent_compactors <= 0)
-                 throw new ConfigurationException("concurrent_compactors should be strictly greater than 0");
+         if (conf.concurrent_compactors == null)
+             conf.concurrent_compactors = FBUtilities.getAvailableProcessors();

-             /* data file and commit log directories. they get created later, when they're needed. */
-             if (conf.commitlog_directory != null && conf.data_file_directories != null && conf.saved_caches_directory != null)
-             {
-                 for (String datadir : conf.data_file_directories)
-                 {
-                     if (datadir.equals(conf.commitlog_directory))
-                         throw new ConfigurationException("commitlog_directory must not be the same as any data_file_directories");
-                     if (datadir.equals(conf.saved_caches_directory))
-                         throw new ConfigurationException("saved_caches_directory must not be the same as any data_file_directories");
-                 }
+         if (conf.concurrent_compactors <= 0)
+             throw new ConfigurationException("concurrent_compactors should be strictly greater than 0");

-                 if (conf.commitlog_directory.equals(conf.saved_caches_directory))
-                     throw new ConfigurationException("saved_caches_directory must not be the same as the commitlog_directory");
-             }
-             else
+         /* data file and commit log directories. they get created later, when they're needed. */
+         if (conf.commitlog_directory != null && conf.data_file_directories != null && conf.saved_caches_directory != null)
+         {
+             for (String datadir : conf.data_file_directories)
              {
-                 if (conf.commitlog_directory == null)
-                     throw new ConfigurationException("commitlog_directory missing");
-                 if (conf.data_file_directories == null)
-                     throw new ConfigurationException("data_file_directories missing; at least one data directory must be specified");
-                 if (conf.saved_caches_directory == null)
-                     throw new ConfigurationException("saved_caches_directory missing");
+                 if (datadir.equals(conf.commitlog_directory))
+                     throw new ConfigurationException("commitlog_directory must not be the same as any data_file_directories");
+                 if (datadir.equals(conf.saved_caches_directory))
+                     throw new ConfigurationException("saved_caches_directory must not be the same as any data_file_directories");
              }

-             if (conf.initial_token != null)
-                 for (String token : tokensFromString(conf.initial_token))
-                     partitioner.getTokenFactory().validate(token);
+             if (conf.commitlog_directory.equals(conf.saved_caches_directory))
+                 throw new ConfigurationException("saved_caches_directory must not be the same as the commitlog_directory");
+         }
+         else
+         {
+             if (conf.commitlog_directory == null)
+                 throw new ConfigurationException("commitlog_directory missing");
+             if (conf.data_file_directories == null)
+                 throw new ConfigurationException("data_file_directories missing; at least one data directory must be specified");
+             if (conf.saved_caches_directory == null)
+                 throw new ConfigurationException("saved_caches_directory missing");
+         }

-             if (conf.num_tokens > MAX_NUM_TOKENS)
-                 throw new ConfigurationException(String.format("A maximum number of %d tokens per node is supported", MAX_NUM_TOKENS));
+         if (conf.initial_token != null)
+             for (String token : tokensFromString(conf.initial_token))
+                 partitioner.getTokenFactory().validate(token);

-             try
-             {
-                 // if key_cache_size_in_mb option was set to "auto" then size of the cache should be "min(5% of Heap (in MB), 100MB)
-                 keyCacheSizeInMB = (conf.key_cache_size_in_mb == null)
-                     ? Math.min(Math.max(1, (int) (Runtime.getRuntime().totalMemory() * 0.05 / 1024 / 1024)), 100)
-                     : conf.key_cache_size_in_mb;
+         if (conf.num_tokens > MAX_NUM_TOKENS)
+             throw new ConfigurationException(String.format("A maximum number of %d tokens per node is supported", MAX_NUM_TOKENS));

-                 if (keyCacheSizeInMB < 0)
-                     throw new NumberFormatException(); // to escape duplicating error message
-             }
-             catch (NumberFormatException e)
-             {
-                 throw new ConfigurationException("key_cache_size_in_mb option was set incorrectly to '"
-                                                  + conf.key_cache_size_in_mb + "', supported values are >= 0.");
-             }
+         try
+         {
+             // if key_cache_size_in_mb option was set to "auto" then size of the cache should be "min(5% of Heap (in MB), 100MB)
+             keyCacheSizeInMB = (conf.key_cache_size_in_mb == null)
+                 ? Math.min(Math.max(1, (int) (Runtime.getRuntime().totalMemory() * 0.05 / 1024 / 1024)), 100)
+                 : conf.key_cache_size_in_mb;

-             rowCacheProvider = FBUtilities.newCacheProvider(conf.row_cache_provider);
+             if (keyCacheSizeInMB < 0)
+                 throw new NumberFormatException(); // to escape duplicating error message
+         }
+         catch (NumberFormatException e)
+         {
+             throw new ConfigurationException("key_cache_size_in_mb option was set incorrectly to '"
+                                              + conf.key_cache_size_in_mb + "', supported values are >= 0.");
+         }

-             if(conf.encryption_options != null)
-             {
-                 logger.warn("Please rename encryption_options as server_encryption_options in the yaml");
-                 //operate under the assumption that server_encryption_options is not set in yaml rather than both
-                 conf.server_encryption_options = conf.encryption_options;
-             }
+         memoryAllocator = FBUtilities.newOffHeapAllocator(conf.memory_allocator);

-             String allocatorClass = conf.memtable_allocator;
-             if (!allocatorClass.contains("."))
-                 allocatorClass = "org.apache.cassandra.utils." + allocatorClass;
-             memtableAllocator = FBUtilities.classForName(allocatorClass, "allocator");
+         if(conf.encryption_options != null)
+         {
+             logger.warn("Please rename encryption_options as server_encryption_options in the yaml");
+             //operate under the assumption that server_encryption_options is not set in yaml rather than both
+             conf.server_encryption_options = conf.encryption_options;
+         }

-             // Hardcoded system tables
-             List systemKeyspaces = Arrays.asList(KSMetaData.systemKeyspace(), KSMetaData.traceKeyspace());
-             assert systemKeyspaces.size() == Schema.systemKeyspaceNames.size();
-             for (KSMetaData ksmd : systemKeyspaces)
-             {
-                 // install the definition
-                 for (CFMetaData cfm : ksmd.cfMetaData().values())
-                     Schema.instance.load(cfm);
-                 Schema.instance.setTableDefinition(ksmd);
-             }
+         String allocatorClass = conf.memtable_allocator;
+         if (!allocatorClass.contains("."))
+             allocatorClass = "org.apache.cassandra.utils." + allocatorClass;
+         memtableAllocator = FBUtilities.classForName(allocatorClass, "allocator");

-             /* Load the seeds for node contact points */
-             if (conf.seed_provider == null)
-             {
-                 throw new ConfigurationException("seeds configuration is missing; a minimum of one seed is required.");
-             }
-             try
-             {
-                 Class seedProviderClass = Class.forName(conf.seed_provider.class_name);
-                 seedProvider = (SeedProvider)seedProviderClass.getConstructor(Map.class).newInstance(conf.seed_provider.parameters);
-             }
-             // there are about 5 checked exceptions that could be thrown here.
-             catch (Exception e)
-             {
-                 logger.error("Fatal configuration error", e);
-                 System.err.println(e.getMessage() + "\nFatal configuration error; unable to start server. See log for stacktrace.");
-                 System.exit(1);
-             }
-             if (seedProvider.getSeeds().size() == 0)
-                 throw new ConfigurationException("The seed provider lists no seeds.");
+         // Hardcoded system keyspaces
+         List systemKeyspaces = Arrays.asList(KSMetaData.systemKeyspace());
+         assert systemKeyspaces.size() == Schema.systemKeyspaceNames.size();
+         for (KSMetaData ksmd : systemKeyspaces)
+             Schema.instance.load(ksmd);
+
+         /* Load the seeds for node contact points */
+         if (conf.seed_provider == null)
+         {
+             throw new ConfigurationException("seeds configuration is missing; a minimum of one seed is required.");
          }
-         catch (ConfigurationException e)
+         try
          {
-             logger.error("Fatal configuration error", e);
-             System.err.println(e.getMessage() + "\nFatal configuration error; unable to start server. See log for stacktrace.");
-             System.exit(1);
+             Class seedProviderClass = Class.forName(conf.seed_provider.class_name);
+             seedProvider = (SeedProvider)seedProviderClass.getConstructor(Map.class).newInstance(conf.seed_provider.parameters);
          }
-         catch (YAMLException e)
+         // there are about 5 checked exceptions that could be thrown here.
+         catch (Exception e)
          {
-             logger.error("Fatal configuration error error", e);
-             System.err.println(e.getMessage() + "\nInvalid yaml; unable to start server. See log for stacktrace.");
+             logger.error("Fatal configuration error", e);
+             System.err.println(e.getMessage() + "\nFatal configuration error; unable to start server. See log for stacktrace.");
              System.exit(1);
          }
+         if (seedProvider.getSeeds().size() == 0)
+             throw new ConfigurationException("The seed provider lists no seeds.");
      }

      private static IEndpointSnitch createEndpointSnitch(String snitchClassName) throws ConfigurationException
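For readers tracking the operator-visible effect of this merge (CASSANDRA-6419 plus the existing commitlog_sync checks): a sketch of a cassandra.yaml fragment that passes the validations in applyConfig(). The values shown are illustrative examples, not recommendations from this commit.

```yaml
# Illustrative cassandra.yaml fragment (example values only).

# After this merge, explicitly setting the key with no value, e.g.
#   max_hint_window_in_ms:
# is rejected with "max_hint_window_in_ms cannot be set to null".
max_hint_window_in_ms: 10800000

# commitlog_sync is required, and the two modes are mutually exclusive:
# 'batch' requires commitlog_sync_batch_window_in_ms (and forbids the
# period setting); 'periodic' requires commitlog_sync_period_in_ms.
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
```

If `commitlog_sync: batch` were used instead, `commitlog_sync_batch_window_in_ms` would have to be set and `commitlog_sync_period_in_ms` left unset, per the ConfigurationException branches in the diff above.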