From: graham sanderson <graham@vast.com>
To: user@cassandra.apache.org
Subject: Re: Question about READS in a multi DC environment.
Date: Tue, 13 May 2014 18:36:39 -0500

Yeah, but all the requests for data/digest are sent at the same time… responses that aren't "needed" to complete the request are dealt with asynchronously (possibly causing repair).

In the original trace (which is confusing because I don't think the clocks are in sync)… I don't see anything that makes me believe it is blocking for all 3 responses - it actually does reads on all 3 nodes even if only digests are required.

On May 12, 2014, at 12:37 AM, DuyHai Doan <doanduyhai@gmail.com> wrote:

> Isn't read repair supposed to be done asynchronously in the background?
>
> On Mon, May 12, 2014 at 2:07 AM, graham sanderson <graham@vast.com> wrote:
>
> You have a read_repair_chance of 1.0, which is probably why your query is hitting all data centers.
>
> On May 11, 2014, at 3:44 PM, Mark Farnan <devmail@petrolink.com> wrote:
>
> > I'm trying to understand READ load in Cassandra across a multi-datacenter cluster (specifically, why it seems to be hitting more than one DC) and hope someone can help.
> >
> > From what I'm seeing here, a READ with consistency LOCAL_ONE seems to be hitting all 3 datacenters, rather than just the one I'm connected to. I see 'Read 1001 live and 0 tombstoned cells' from EACH of the 3 DCs in the trace, which seems wrong.
> > I have tried every consistency level, with the same result. It is also the same from my C# code via the DataStax driver (where I first noticed the issue).
> >
> > Can someone please shed some light on what is occurring? Specifically, I don't want a query on one DC going anywhere near the other 2 as a rule, as in production these DCs will be across slower links.
> >
> > Query: (NOTE: whilst this uses a kairosdb table, I'm just playing with queries against it, as it has 100k columns in this key for testing.)
> >
> > cqlsh:kairosdb> consistency local_one
> > Consistency level set to LOCAL_ONE.
> >
> > cqlsh:kairosdb> select * from data_points where key = 0x6d61726c796e2e746573742e74656d70340000000145b514a400726f6f6d3d6f66666963653a limit 1000;
> >
> > ... Some returned data rows listed here, which I've removed ...
> >
> > <CassandraQuery.txt>
> > Query Response Trace:
> >
> > activity | timestamp | source | source_elapsed
> > ---------+-----------+--------+---------------
> > execute_cql3_query | 07:18:12,692 | 192.168.25.111 | 0
> > Message received from /192.168.25.111 | 07:18:00,706 | 192.168.25.131 | 50
> > Executing single-partition query on data_points | 07:18:00,707 | 192.168.25.131 | 760
> > Acquiring sstable references | 07:18:00,707 | 192.168.25.131 | 814
> > Merging memtable tombstones | 07:18:00,707 | 192.168.25.131 | 924
> > Bloom filter allows skipping sstable 191 | 07:18:00,707 | 192.168.25.131 | 1050
> > Bloom filter allows skipping sstable 190 | 07:18:00,707 | 192.168.25.131 | 1166
> > Key cache hit for sstable 189 | 07:18:00,707 | 192.168.25.131 | 1275
> > Seeking to partition beginning in data file | 07:18:00,707 | 192.168.25.131 | 1293
> > Skipped 0/3 non-slice-intersecting sstables, included 0 due to tombstones | 07:18:00,708 | 192.168.25.131 | 2173
> > Merging data from memtables and 1 sstables | 07:18:00,708 | 192.168.25.131 | 2195
> > Read 1001 live and 0 tombstoned cells | 07:18:00,709 | 192.168.25.131 | 3259
> > Enqueuing response to /192.168.25.111 | 07:18:00,710 | 192.168.25.131 | 4006
> > Sending message to /192.168.25.111 | 07:18:00,710 | 192.168.25.131 | 4210
> > Parsing select * from data_points where key = 0x6d61726c796e2e746573742e74656d70340000000145b514a400726f6f6d3d6f66666963653a limit 1000; | 07:18:12,692 | 192.168.25.111 | 52
> > Preparing statement | 07:18:12,692 | 192.168.25.111 | 257
> > Sending message to /192.168.25.121 | 07:18:12,693 | 192.168.25.111 | 1099
> > Sending message to /192.168.25.131 | 07:18:12,693 | 192.168.25.111 | 1254
> > Executing single-partition query on data_points | 07:18:12,693 | 192.168.25.111 | 1269
> > Acquiring sstable references | 07:18:12,693 | 192.168.25.111 | 1284
> > Merging memtable tombstones | 07:18:12,694 | 192.168.25.111 | 1315
> > Key cache hit for sstable 205 | 07:18:12,694 | 192.168.25.111 | 1592
> > Seeking to partition beginning in data file | 07:18:12,694 | 192.168.25.111 | 1606
> > Skipped 0/1 non-slice-intersecting sstables, included 0 due to tombstones | 07:18:12,695 | 192.168.25.111 | 2423
> > Merging data from memtables and 1 sstables | 07:18:12,695 | 192.168.25.111 | 2498
> > Read 1001 live and 0 tombstoned cells | 07:18:12,695 | 192.168.25.111 | 3167
> > Message received from /192.168.25.121 | 07:18:12,697 | 192.168.25.111 | null
> > Processing response from /192.168.25.121 | 07:18:12,697 | 192.168.25.111 | null
> > Message received from /192.168.25.131 | 07:18:12,699 | 192.168.25.111 | null
> > Processing response from /192.168.25.131 | 07:18:12,699 | 192.168.25.111 | null
> > Message received from /192.168.25.111 | 07:19:49,432 | 192.168.25.121 | 68
> > Executing single-partition query on data_points | 07:19:49,433 | 192.168.25.121 | 824
> > Acquiring sstable references | 07:19:49,433 | 192.168.25.121 | 840
> > Merging memtable tombstones | 07:19:49,433 | 192.168.25.121 | 898
> > Bloom filter allows skipping sstable 193 | 07:19:49,433 | 192.168.25.121 | 983
> > Key cache hit for sstable 192 | 07:19:49,433 | 192.168.25.121 | 1055
> > Seeking to partition beginning in data file | 07:19:49,433 | 192.168.25.121 | 1073
> > Skipped 0/2 non-slice-intersecting sstables, included 0 due to tombstones | 07:19:49,434 | 192.168.25.121 | 1803
> > Merging data from memtables and 1 sstables | 07:19:49,434 | 192.168.25.121 | 1839
> > Read 1001 live and 0 tombstoned cells | 07:19:49,434 | 192.168.25.121 | 2518
> > Enqueuing response to /192.168.25.111 | 07:19:49,435 | 192.168.25.121 | 3026
> > Sending message to /192.168.25.111 | 07:19:49,435 | 192.168.25.121 | 3128
> > Request complete | 07:18:12,696 | 192.168.25.111 | 4387
> >
> >
> > Other stats about the cluster:
> >
> > [root@cdev101 conf]# nodetool status
> > Datacenter: DC3
> > ===============
> > Status=Up/Down
> > |/ State=Normal/Leaving/Joining/Moving
> > --  Address         Load      Tokens  Owns   Host ID                               Rack
> > UN  192.168.25.131  80.67 MB  256     34.2%  6ec61643-17d4-4a2e-8c44-57e08687a957  RAC1
> > Datacenter: DC2
> > ===============
> > Status=Up/Down
> > |/ State=Normal/Leaving/Joining/Moving
> > --  Address         Load      Tokens  Owns   Host ID                               Rack
> > UN  192.168.25.121  79.46 MB  256     30.6%  976626fb-ea80-405b-abb0-eae703b0074d  RAC1
> > Datacenter: DC1
> > ===============
> > Status=Up/Down
> > |/ State=Normal/Leaving/Joining/Moving
> > --  Address         Load      Tokens  Owns   Host ID                               Rack
> > UN  192.168.25.111  61.82 MB  256     35.2%  9475e2da-d926-42d0-83fb-0188d0f8f438  RAC1
> >
> >
> > cqlsh> describe keyspace kairosdb
> >
> > CREATE KEYSPACE kairosdb WITH replication = {
> >   'class': 'NetworkTopologyStrategy',
> >   'DC2': '1',
> >   'DC3': '1',
> >   'DC1': '1'
> > };
> >
> > USE kairosdb;
> >
> > CREATE TABLE data_points (
> >   key blob,
> >   column1 blob,
> >   value blob,
> >   PRIMARY KEY (key, column1)
> > ) WITH COMPACT STORAGE AND
> >   bloom_filter_fp_chance=0.010000 AND
> >   caching='KEYS_ONLY' AND
> >   comment='' AND
> >   dclocal_read_repair_chance=0.000000 AND
> >   gc_grace_seconds=864000 AND
> >   index_interval=128 AND
> >   read_repair_chance=1.000000 AND
> >   replicate_on_write='true' AND
> >   populate_io_cache_on_flush='false' AND
> >   default_time_to_live=0 AND
> >   speculative_retry='NONE' AND
> >   memtable_flush_period_in_ms=0 AND
> >   compaction={'class': 'SizeTieredCompactionStrategy'} AND
> >   compression={'sstable_compression': 'LZ4Compressor'};
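Given Graham's diagnosis, the usual remedy is to change the table options shown in the schema above so that reads no longer trigger global read repair. A possible fix, sketched with the option names from Mark's CREATE TABLE (the 0.1 value is a commonly used default, not something mandated by the thread):

```cql
-- Stop every read from fanning out to all DCs; repair within the local DC
-- on a small fraction of reads instead.
ALTER TABLE kairosdb.data_points
  WITH read_repair_chance = 0.0
  AND dclocal_read_repair_chance = 0.1;
```

After this change, a LOCAL_ONE read should contact only replicas in the coordinator's datacenter, with cross-DC consistency left to hinted handoff and scheduled `nodetool repair` runs.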
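The behavior Graham describes (read repair deciding up front how many replicas get data/digest requests, based on the table's read_repair_chance and dclocal_read_repair_chance) can be sketched as follows. This is a hypothetical Python illustration, not Cassandra's actual source: the function name, signature, and replica lists are all invented for the example.

```python
import random

def replicas_to_query(local, remote, read_repair_chance,
                      dclocal_read_repair_chance, rng=random.random):
    """Hypothetical sketch of a coordinator choosing which replicas to
    contact for a LOCAL_ONE read.  `local` / `remote` are replica
    addresses in the coordinator's DC and in the other DCs."""
    roll = rng()
    if roll < read_repair_chance:
        # Global read repair chosen: contact every replica in every DC.
        return local + remote
    if roll < read_repair_chance + dclocal_read_repair_chance:
        # DC-local read repair: contact all replicas in the local DC only.
        return list(local)
    # No read repair: just enough replicas to satisfy LOCAL_ONE.
    return local[:1]

# With Mark's schema (read_repair_chance=1.0), every read hits all 3 DCs:
local = ["192.168.25.111"]
remote = ["192.168.25.121", "192.168.25.131"]
assert replicas_to_query(local, remote, 1.0, 0.0) == local + remote
# With read_repair_chance=0, a LOCAL_ONE read stays in the local DC:
assert replicas_to_query(local, remote, 0.0, 0.0) == local[:1]
```

Under this model the extra cross-DC traffic in the trace is not LOCAL_ONE blocking on remote replicas; it is the read-repair roll (always "global" at chance 1.0) fanning the request out, with the unneeded responses handled asynchronously.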