Subject: Re: no other nodes seen on priam cluster
From: Marcelo Elias Del Valle <mvallebr@gmail.com>
To: user@cassandra.apache.org
Date: Fri, 1 Mar 2013 13:59:23 -0300

Thanks a lot Ben, actually I managed to make it work by erasing the SimpleDB
entries Priam uses to keep track of instances... I also pulled the last commit
from the repo, not sure if it helped or not.

But your message made me curious about something... How do you add more
Cassandra nodes on the fly? Just update the autoscale properties? I saw
instaclustr.com changes the instance type as the number of nodes increases
(not sure why the price also becomes higher per instance in this case). I am
guessing Priam uses the data backed up to S3 to restore a node's data on
another instance, right?
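Just so I understand what "updating the autoscale properties" would mean in
practice, something along these lines? This is only a sketch using the AWS
CLI; dmp_cluster-useast1b is the ASG name from my Priam log below, and the
SimpleDB domain name is a placeholder for whatever Priam actually created:

    # Priam sizes the ring from the ASG maximum (per Ben's reply below),
    # so keep desired/min/max equal when growing the cluster
    aws autoscaling update-auto-scaling-group \
        --auto-scaling-group-name dmp_cluster-useast1b \
        --min-size 3 --max-size 3 --desired-capacity 3

    # find the SimpleDB domains Priam created, then wipe its instance registry
    aws sdb list-domains
    aws sdb delete-domain --domain-name <priam-instance-domain>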
[]s


2013/2/28 Ben Bromhead <ben@relational.io>

> Off the top of my head I would check to make sure the Autoscaling Group
> you created is restricted to a single Availability Zone. Also, Priam sets
> the number of EC2 instances it expects based on the maximum instance count
> you set on your scaling group (it did this the last time I checked a few
> months ago; its behaviour may have changed).
>
> So I would make sure your desired, min and max instances for your scaling
> group are all the same, make sure your ASG is restricted to a single
> availability zone (e.g. us-east-1b) and then (if you are able to and there
> is no data in your cluster) delete all the SimpleDB entries Priam has
> created, and then also possibly clear out the cassandra data directory.
>
> Other than that, I see you've raised it as an issue on the Priam project
> page, so see what they say ;)
>
> Cheers
>
> Ben
>
> On Thu, Feb 28, 2013 at 3:40 AM, Marcelo Elias Del Valle <
> mvallebr@gmail.com> wrote:
>
>> One additional important piece of info: I checked here and the seeds seem
>> really different on each node. The command
>> echo `curl http://127.0.0.1:8080/Priam/REST/v1/cassconfig/get_seeds`
>> returns ip2 on the first node and ip1,ip1 on the second node.
>> Any idea why? It's probably what is causing cassandra to die, right?
>>
>>
>> 2013/2/27 Marcelo Elias Del Valle <mvallebr@gmail.com>
>>
>>> Hello Ben, thanks for the willingness to help.
>>>
>>> 2013/2/27 Ben Bromhead <ben@instaclustr.com>
>>>>
>>>> Have you added the priam java agent to cassandra's JVM arguments (e.g.
>>>> -javaagent:$CASS_HOME/lib/priam-cass-extensions-1.1.15.jar) and does
>>>> the web container running priam have permissions to write to the
>>>> cassandra config directory? Also, what do the priam logs say?
>>>>
>>>
>>> I put the priam log of the first node below. Yes, I have added
>>> priam-cass-extensions to the java args and Priam IS actually writing to
>>> the cassandra dir.
>>>
>>>
>>>> If you want to get up and running quickly with cassandra, AWS and priam,
>>>> check out www.instaclustr.com.
>>>> We deploy Cassandra under your AWS account and you have full root
>>>> access to the nodes if you want to explore and play around + there is a
>>>> free tier which is great for experimenting and trying Cassandra out.
>>>>
>>>
>>> That sounded really great. I am not sure if it would apply to our case
>>> (will consider it though), but some partners would have a great benefit
>>> from it, for sure! I will send your link to them.
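In case it helps anyone else following along, this is a minimal way to run
that same comparison on both nodes at once — the agent flag and the seed list
Priam hands out. ip1/ip2 are placeholders for the two instances' addresses,
and 8080 is the same Priam port used in the curl command above:

    # on each node: confirm the Priam agent really is on the Cassandra command line
    ps aux | grep '[j]avaagent' | grep priam-cass-extensions

    # compare the seed list Priam serves to each node -- they should agree
    for host in ip1 ip2; do
        echo -n "$host -> "
        curl -s "http://$host:8080/Priam/REST/v1/cassconfig/get_seeds"
        echo
    done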
>>>
>>> What priam says:
>>>
>>> 2013-02-27 14:14:58.0614 INFO pool-2-thread-1 com.netflix.priam.utils.SystemUtils Calling URL API: http://169.254.169.254/latest/meta-data/public-hostname returns: ec2-174-129-59-107.compute-1.amazonaws.com
>>> 2013-02-27 14:14:58.0615 INFO pool-2-thread-1 com.netflix.priam.utils.SystemUtils Calling URL API: http://169.254.169.254/latest/meta-data/public-ipv4 returns: 174.129.59.107
>>> 2013-02-27 14:14:58.0618 INFO pool-2-thread-1 com.netflix.priam.utils.SystemUtils Calling URL API: http://169.254.169.254/latest/meta-data/instance-id returns: i-88b32bfb
>>> 2013-02-27 14:14:58.0618 INFO pool-2-thread-1 com.netflix.priam.utils.SystemUtils Calling URL API: http://169.254.169.254/latest/meta-data/instance-type returns: c1.medium
>>> 2013-02-27 14:14:59.0614 INFO pool-2-thread-1 com.netflix.priam.defaultimpl.PriamConfiguration REGION set to us-east-1, ASG Name set to dmp_cluster-useast1b
>>> 2013-02-27 14:14:59.0746 INFO pool-2-thread-1 com.netflix.priam.defaultimpl.PriamConfiguration appid used to fetch properties is: dmp_cluster
>>> 2013-02-27 14:14:59.0843 INFO pool-2-thread-1 org.quartz.simpl.SimpleThreadPool Job execution threads will use class loader of thread: pool-2-thread-1
>>> 2013-02-27 14:14:59.0861 INFO pool-2-thread-1 org.quartz.core.SchedulerSignalerImpl Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
>>> 2013-02-27 14:14:59.0862 INFO pool-2-thread-1 org.quartz.core.QuartzScheduler Quartz Scheduler v.1.7.3 created.
>>> 2013-02-27 14:14:59.0864 INFO pool-2-thread-1 org.quartz.simpl.RAMJobStore RAMJobStore initialized.
>>> 2013-02-27 14:14:59.0864 INFO pool-2-thread-1 org.quartz.impl.StdSchedulerFactory Quartz scheduler 'DefaultQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties'
>>> 2013-02-27 14:14:59.0864 INFO pool-2-thread-1 org.quartz.impl.StdSchedulerFactory Quartz scheduler version: 1.7.3
>>> 2013-02-27 14:14:59.0864 INFO pool-2-thread-1 org.quartz.core.QuartzScheduler JobFactory set to: com.netflix.priam.scheduler.GuiceJobFactory@1b6a1c4
>>> 2013-02-27 14:15:00.0239 INFO pool-2-thread-1 com.netflix.priam.aws.AWSMembership Querying Amazon returned following instance in the ASG: us-east-1b --> i-8eb32bfd,i-88b32bfb
>>> 2013-02-27 14:15:01.0470 INFO Timer-0 org.quartz.utils.UpdateChecker New update(s) found: 1.8.5 [http://www.terracotta.org/kit/reflector?kitID=default&pageID=QuartzChangeLog]
>>> 2013-02-27 14:15:10.0925 INFO pool-2-thread-1 com.netflix.priam.identity.InstanceIdentity Found dead instances: i-d49a0da7
>>> 2013-02-27 14:15:11.0397 ERROR pool-2-thread-1 com.netflix.priam.aws.SDBInstanceFactory Conditional check failed. Attribute (instanceId) value exists
>>> 2013-02-27 14:15:11.0398 ERROR pool-2-thread-1 com.netflix.priam.utils.RetryableCallable Retry #1 for: Status Code: 409, AWS Service: AmazonSimpleDB, AWS Request ID: 96ca7ae5-f352-b13a-febd-8801d46fee83, AWS Error Code: ConditionalCheckFailed, AWS Error Message: Conditional check failed. Attribute (instanceId) value exists
>>> 2013-02-27 14:15:11.0686 INFO pool-2-thread-1 com.netflix.priam.aws.AWSMembership Querying Amazon returned following instance in the ASG: us-east-1b --> i-8eb32bfd,i-88b32bfb
>>> 2013-02-27 14:15:25.0258 INFO pool-2-thread-1 com.netflix.priam.identity.InstanceIdentity Found dead instances: i-d89a0dab
>>> 2013-02-27 14:15:25.0588 INFO pool-2-thread-1 com.netflix.priam.identity.InstanceIdentity Trying to grab slot 1808575601 with availability zone us-east-1b
>>> 2013-02-27 14:15:25.0732 INFO pool-2-thread-1 com.netflix.priam.identity.InstanceIdentity My token: 56713727820156410577229101240436610842
>>> 2013-02-27 14:15:25.0732 INFO pool-2-thread-1 org.quartz.core.QuartzScheduler Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.
>>> 2013-02-27 14:15:25.0878 INFO pool-2-thread-1 org.apache.cassandra.db.HintedHandOffManager cluster_name: dmp_cluster
>>> initial_token: null
>>> hinted_handoff_enabled: true
>>> max_hint_window_in_ms: 8
>>> hinted_handoff_throttle_in_kb: 1024
>>> max_hints_delivery_threads: 2
>>> authenticator: org.apache.cassandra.auth.AllowAllAuthenticator
>>> authorizer: org.apache.cassandra.auth.AllowAllAuthorizer
>>> partitioner: org.apache.cassandra.dht.RandomPartitioner
>>> data_file_directories:
>>> - /var/lib/cassandra/data
>>> commitlog_directory: /var/lib/cassandra/commitlog
>>> disk_failure_policy: stop
>>> key_cache_size_in_mb: null
>>> key_cache_save_period: 14400
>>> row_cache_size_in_mb: 0
>>> row_cache_save_period: 0
>>> row_cache_provider: SerializingCacheProvider
>>> saved_caches_directory: /var/lib/cassandra/saved_caches
>>> commitlog_sync: periodic
>>> commitlog_sync_period_in_ms: 10000
>>> commitlog_segment_size_in_mb: 32
>>> seed_provider:
>>> - class_name: com.netflix.priam.cassandra.extensions.NFSeedProvider
>>>   parameters:
>>>   - seeds: 127.0.0.1
>>> flush_largest_memtables_at: 0.75
>>> reduce_cache_sizes_at: 0.85
>>> reduce_cache_capacity_to: 0.6
>>> concurrent_reads: 32
>>> concurrent_writes: 32
>>> memtable_flush_queue_size: 4
>>> trickle_fsync: false
>>> trickle_fsync_interval_in_kb: 10240
>>> storage_port: 7000
>>> ssl_storage_port: 7001
>>> listen_address: null
>>> start_native_transport: false
>>> native_transport_port: 9042
>>> start_rpc: true
>>> rpc_address: null
>>> rpc_port: 9160
>>> rpc_keepalive: true
>>> rpc_server_type: sync
>>> thrift_framed_transport_size_in_mb: 15
>>> thrift_max_message_length_in_mb: 16
>>> incremental_backups: true
>>> snapshot_before_compaction: false
>>> auto_snapshot: true
>>> column_index_size_in_kb: 64
>>> in_memory_compaction_limit_in_mb: 128
>>> multithreaded_compaction: false
>>> compaction_throughput_mb_per_sec: 8
>>> compaction_preheat_key_cache: true
>>> read_request_timeout_in_ms: 10000
>>> range_request_timeout_in_ms: 10000
>>> write_request_timeout_in_ms: 10000
>>> truncate_request_timeout_in_ms: 60000
>>> request_timeout_in_ms: 10000
>>> cross_node_timeout: false
>>> endpoint_snitch: org.apache.cassandra.locator.Ec2Snitch
>>> dynamic_snitch_update_interval_in_ms: 100
>>> dynamic_snitch_reset_interval_in_ms: 600000
>>> dynamic_snitch_badness_threshold: 0.1
>>> request_scheduler: org.apache.cassandra.scheduler.NoScheduler
>>> index_interval: 128
>>> server_encryption_options:
>>>   internode_encryption: none
>>>   keystore: conf/.keystore
>>>   keystore_password: cassandra
>>>   truststore: conf/.truststore
>>>   truststore_password: cassandra
>>> client_encryption_options:
>>>   enabled: false
>>>   keystore: conf/.keystore
>>>   keystore_password: cassandra
>>> internode_compression: all
>>> inter_dc_tcp_nodelay: true
>>> auto_bootstrap: true
>>> memtable_total_space_in_mb: 1024
>>> stream_throughput_outbound_megabits_per_sec: 400
>>> num_tokens: 1
>>>
>>> 2013-02-27 14:15:25.0884 INFO pool-2-thread-1 com.netflix.priam.utils.SystemUtils Starting cassandra server ....Join ring=true
>>> 2013-02-27 14:15:25.0915 INFO pool-2-thread-1 com.netflix.priam.utils.SystemUtils Starting cassandra server ....
>>> 2013-02-27 14:15:30.0013 INFO http-bio-8080-exec-1 com.netflix.priam.aws.AWSMembership Query on ASG returning 3 instances
>>> 2013-02-27 14:15:31.0726 INFO http-bio-8080-exec-2 com.netflix.priam.aws.AWSMembership Query on ASG returning 3 instances
>>> 2013-02-27 14:15:37.0360 INFO DefaultQuartzScheduler_Worker-5 com.netflix.priam.aws.S3FileSystem Uploading to backup/us-east-1/dmp_cluster/56713727820156410577229101240436610842/201302271415/SST/system/local/system-local-ib-1-CompressionInfo.db with chunk size 10485760
>>>
>>>
>>> Best regards,
>>> --
>>> Marcelo Elias Del Valle
>>> http://mvalle.com - @mvallebr
>>
>>
>> --
>> Marcelo Elias Del Valle
>> http://mvalle.com - @mvallebr
>
>
> --
> Ben Bromhead
>
> Co-founder
> *relational.io* | @benbromhead | ph: +61 415 936 359


--
Marcelo Elias Del Valle
http://mvalle.com - @mvallebr