From: Joseph Tech <jaalex.tech@gmail.com>
Date: Mon, 3 Oct 2016 19:08:00 +0530
Subject: Re: Read timeouts on primary key queries
To: user@cassandra.apache.org, Romain Hardouin

Hi All,

Managed to capture the trace for a timed-out query using probabilistic
tracing (attached as trace8-masked.zip). The timeout seems to be caused by
a DigestMismatchException that triggers a global read repair. I guess this
is due to dclocal_read_repair_chance = 0.0 for the table. Can someone
please confirm this, and that dclocal_read_repair_chance = 0.1 will
prevent it?
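For context, probabilistic tracing here means nodetool settraceprobability
with a small sampling rate, and the change being asked about would be a
plain table alter that swaps the two chances. A rough sketch with
illustrative values (db.tbl is the table from the schema further down the
thread):

    nodetool settraceprobability 0.001

    ALTER TABLE db.tbl
      WITH dclocal_read_repair_chance = 0.1
       AND read_repair_chance = 0.0;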
Thanks,
Joseph

On Thu, Sep 15, 2016 at 6:52 PM, Joseph Tech <jaalex.tech@gmail.com> wrote:

> I added the error logs and see that the timeouts are in a range between
> 2 and 7 s. Samples below:
>
>     Query error after 5354 ms: [4 bound values] <query>
>     Query error after 6658 ms: [4 bound values] <query>
>     Query error after 4596 ms: [4 bound values] <query>
>     Query error after 2068 ms: [4 bound values] <query>
>     Query error after 2904 ms: [4 bound values] <query>
>
> There is no specific socket timeout set on the client side, so it would
> take the default of 12 s. read_request_timeout_in_ms is set to 5 s. In
> that case, how do the errors happen in under 5 s? Is there any other
> factor that would cause a fail-fast scenario during the read?
>
> Thanks,
> Joseph
>
> On Wed, Sep 7, 2016 at 5:26 PM, Joseph Tech <jaalex.tech@gmail.com> wrote:
>
>> Thanks, Romain, for the detailed explanation. We use Log4j 2 and I have
>> added the driver logging for slow/error queries; will see if it helps
>> to show any pattern once in Prod.
>>
>> I tried getendpoints and getsstables for some of the timed-out keys and
>> most of them listed only 1 SSTable. There were a few which showed 2
>> SSTables. There is no specific trend in the keys - it's completely
>> based on user access - and the same keys return results instantly from
>> cqlsh.
>>
>> On Tue, Sep 6, 2016 at 1:57 PM, Romain Hardouin <romainh_ml@yahoo.fr>
>> wrote:
>>
>>> There is nothing special in the two sstablemetadata outputs, but if
>>> the timeouts are due to a network split, an overwhelmed node or
>>> something like that, you won't see anything here. That said, if you
>>> have the keys which produced the timeouts then, yes, you can look for
>>> a regular pattern (i.e. always the same keys?).
>>>
>>> You can find the sstables for a given key with nodetool:
>>>
>>>     nodetool getendpoints <keyspace> <cf> <key>
>>>
>>> Then you can run the following command on one/each node of the
>>> endpoints:
>>>
>>>     nodetool getsstables <keyspace> <cf> <key>
>>>
>>> If many sstables are shown by the previous command it means that your
>>> data is fragmented, but thanks to LCS this number should be low.
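>>> For example, with placeholder keyspace/table/key values (substitute
>>> your own), the two lookups look like this:
>>>
>>>     nodetool getendpoints my_ks my_table some_key
>>>     # then, on one (or each) of the returned endpoints:
>>>     nodetool getsstables my_ks my_table some_key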
>>> I think the most useful actions now would be:
>>>
>>> 1) Enable DEBUG for o.a.c.db.ConsistencyLevel. It won't spam your log
>>> file, and you will see the following when errors occur:
>>>
>>>     - Local replicas [<endpoint1>, ...] are insufficient to satisfy
>>>       LOCAL_QUORUM requirement of X live nodes in '<dc>'
>>>
>>> You are using C* 2.1 but you can have a look at the C* 2.2 logback.xml:
>>> https://github.com/apache/cassandra/blob/cassandra-2.2/conf/logback.xml
>>> I'm using it in production; it's better because it creates a separate
>>> debug.log file with an asynchronous appender.
>>>
>>> Watch out when enabling:
>>>
>>>     <appender-ref ref="ASYNCDEBUGLOG" />
>>>
>>> because the default logback configuration sets all of o.a.c to DEBUG:
>>>
>>>     <logger name="org.apache.cassandra" level="DEBUG"/>
>>>
>>> Instead you can set:
>>>
>>>     <logger name="org.apache.cassandra" level="INFO"/>
>>>     <logger name="org.apache.cassandra.db.ConsistencyLevel" level="DEBUG"/>
>>>
>>> Also, if you want to restrict debug.log to DEBUG level only (instead
>>> of DEBUG+INFO+...) you can add a LevelFilter to ASYNCDEBUGLOG in
>>> logback.xml:
>>>
>>>     <filter class="ch.qos.logback.classic.filter.LevelFilter">
>>>         <level>DEBUG</level>
>>>         <onMatch>ACCEPT</onMatch>
>>>         <onMismatch>DENY</onMismatch>
>>>     </filter>
>>>
>>> Thus, the debug.log file will stay empty unless some consistency
>>> issues happen.
>>>
>>> 2) Enable slow-query logging at the driver level with a QueryLogger:
>>>
>>>     Cluster cluster = ...
>>>     // log queries longer than 1 second, see also withDynamicThreshold
>>>     QueryLogger queryLogger = QueryLogger.builder(cluster)
>>>             .withConstantThreshold(1000).build();
>>>     cluster.register(queryLogger);
>>>
>>> Then, in your driver logback file:
>>>
>>>     <logger name="com.datastax.driver.core.QueryLogger.SLOW" level="DEBUG" />
>>>
>>> 3) And/or: you mentioned that you use DSE, so you can enable slow-query
>>> logging in dse.yaml (cql_slow_log_options).
>>>
>>> Best,
>>>
>>> Romain
>>>
>>> On Monday, 5 September 2016 at 20:05, Joseph Tech
>>> <jaalex.tech@gmail.com> wrote:
>>>
>>> Attached are the sstablemetadata outputs from 2 SSTables of size 28 MB
>>> and 52 MB (out2). The records are inserted with different TTLs based
>>> on their nature: test records with 1 day, typeA records with 6 months,
>>> typeB records with 1 year, etc. There are also explicit DELETEs from
>>> this table, though at a much lower rate than the inserts.
>>>
>>> I am not sure how to interpret this output, or whether the right
>>> SSTables were picked. Please advise. Is there a way to get the
>>> sstables corresponding to the keys that timed out, given that they are
>>> accessible later?
>>>
>>> On Mon, Sep 5, 2016 at 10:58 PM, Anshu Vajpayee
>>> <anshu.vajpayee@gmail.com> wrote:
>>>
>>> We have seen read timeout issues in Cassandra due to a high droppable
>>> tombstone ratio for a repository. Please check for a high droppable
>>> tombstone ratio for your repo.
>>>
>>> On Mon, Sep 5, 2016 at 8:11 PM, Romain Hardouin <romainh_ml@yahoo.fr>
>>> wrote:
>>>
>>> Yes, dclocal_read_repair_chance will reduce the cross-DC traffic and
>>> latency, so you can swap the values
>>> (https://issues.apache.org/jira/browse/CASSANDRA-7320). I guess
>>> sstable_size_in_mb was set to 50 because back in the day (C* 1.0) the
>>> default size was way too small: 5 MB. So maybe someone in your company
>>> tried "10 * the default", i.e. 50 MB. Now the default is 160 MB. I
>>> don't say to change the value, but just keep in mind that you're using
>>> a small value here; it could help you someday.
>>>
>>> Regarding the cells, the histograms show an *estimation* of the min,
>>> p50, ..., p99, max of cells based on SSTable metadata. On your
>>> screenshot, the max is 4768, so you have a partition key with ~4768
>>> cells. The p99 is 1109, so 99% of your partition keys have less than
>>> (or equal to) 1109 cells.
>>>
>>> You can see these data for a given sstable with the tool
>>> sstablemetadata.
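>>> For example, pointed at one of the files returned by nodetool
>>> getsstables (the path below is just a placeholder):
>>>
>>>     sstablemetadata /var/lib/cassandra/data/my_ks/my_table/my_ks-my_table-ka-42-Data.db
>>>
>>> Among other fields, the output should also show the estimated
>>> droppable tombstones mentioned above.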
>>> Best,
>>>
>>> Romain
>>>
>>> On Monday, 5 September 2016 at 15:17, Joseph Tech
>>> <jaalex.tech@gmail.com> wrote:
>>>
>>> Thanks, Romain. We will try to enable the DEBUG logging (assuming it
>>> won't clog the logs much). Regarding the table configs,
>>> read_repair_chance must be carried over from older versions - mostly
>>> defaults. I think sstable_size_in_mb was set to limit the max SSTable
>>> size, though I am not sure of the reason for the 50 MB value.
>>>
>>> Does setting dclocal_read_repair_chance help in reducing cross-DC
>>> traffic? (I haven't looked into this parameter, just going by the
>>> name.)
>>>
>>> On the cell count definition: is it incremented based on the number of
>>> writes for a given name (key?) and value? This table is heavy on reads
>>> and writes; if so, shouldn't the value be much higher?
>>>
>>> On Mon, Sep 5, 2016 at 7:35 AM, Romain Hardouin <romainh_ml@yahoo.fr>
>>> wrote:
>>>
>>> Hi,
>>>
>>> Try to put org.apache.cassandra.db.ConsistencyLevel at DEBUG level; it
>>> could help to find a regular pattern. By the way, I see that you have
>>> set a global read repair chance:
>>>
>>>     read_repair_chance = 0.1
>>>
>>> and not the local read repair:
>>>
>>>     dclocal_read_repair_chance = 0.0
>>>
>>> Is there any reason to do that, or is it just the old (pre-2.0.9)
>>> default configuration?
>>>
>>> The cell count is the number of triplets: (name, value, timestamp).
>>>
>>> Also, I see that you have set sstable_size_in_mb to 50 MB. What is the
>>> rationale behind this? (Yes, I'm curious :-) ). Anyway, your "SSTables
>>> per read" numbers are good.
>>>
>>> Best,
>>>
>>> Romain
>>>
>>> On Monday, 5 September 2016 at 13:32, Joseph Tech
>>> <jaalex.tech@gmail.com> wrote:
>>>
>>> Hi Ryan,
>>>
>>> Attached are the cfhistograms run within a few minutes of each other.
>>> On the surface, I don't see anything which indicates too much skewing
>>> (assuming skewing == keys spread across many SSTables). Please
>>> confirm. Related to this, what does the "cell count" metric indicate?
>>> I didn't find a clear explanation in the documents.
>>>
>>> Thanks,
>>> Joseph
>>>
>>> On Thu, Sep 1, 2016 at 6:30 PM, Ryan Svihla <rs@foundev.pro> wrote:
>>>
>>> Have you looked at cfhistograms/tablehistograms? Your data may just be
>>> skewed (the most likely explanation is probably the correct one here).
>>>
>>> Regards,
>>>
>>> Ryan Svihla
>>>
>>> _____________________________
>>> From: Joseph Tech <jaalex.tech@gmail.com>
>>> Sent: Wednesday, August 31, 2016 11:16 PM
>>> Subject: Re: Read timeouts on primary key queries
>>> To: <user@cassandra.apache.org>
>>>
>>> Patrick,
>>>
>>> The desc table is below (only col names changed):
>>>
>>>     CREATE TABLE db.tbl (
>>>         id1 text,
>>>         id2 text,
>>>         id3 text,
>>>         id4 text,
>>>         f1 text,
>>>         f2 map<text, text>,
>>>         f3 map<text, text>,
>>>         created timestamp,
>>>         updated timestamp,
>>>         PRIMARY KEY (id1, id2, id3, id4)
>>>     ) WITH CLUSTERING ORDER BY (id2 ASC, id3 ASC, id4 ASC)
>>>         AND bloom_filter_fp_chance = 0.01
>>>         AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
>>>         AND comment = ''
>>>         AND compaction = {'sstable_size_in_mb': '50', 'class':
>>>             'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
>>>         AND compression = {'sstable_compression':
>>>             'org.apache.cassandra.io.compress.LZ4Compressor'}
>>>         AND dclocal_read_repair_chance = 0.0
>>>         AND default_time_to_live = 0
>>>         AND gc_grace_seconds = 864000
>>>         AND max_index_interval = 2048
>>>         AND memtable_flush_period_in_ms = 0
>>>         AND min_index_interval = 128
>>>         AND read_repair_chance = 0.1
>>>         AND speculative_retry = '99.0PERCENTILE';
>>>
>>> and the query is:
>>>
>>>     select * from tbl where id1=? and id2=? and id3=? and id4=?
>>>
>>> The timeouts happen within ~2 s to ~5 s, while the successful calls
>>> have an avg of 8 ms and a p99 of 15 s. These times are seen from the
>>> app side; the actual query times would be slightly lower.
>>>
>>> Is there a way to capture traces only when queries take longer than a
>>> specified duration? We can't enable tracing in production given the
>>> volume of traffic. We see that the same query which timed out works
>>> fine later, so I'm not sure whether the trace of a successful run
>>> would help.
>>>
>>> Thanks,
>>> Joseph
>>>
>>> On Wed, Aug 31, 2016 at 8:05 PM, Patrick McFadin <pmcfadin@gmail.com>
>>> wrote:
>>>
>>> If you are getting a timeout on one table, then a mismatch of RF and
>>> node count doesn't seem as likely.
>>>
>>> Time to look at your query. You said it was a 'select * from table
>>> where key=?' type query. I would next use the trace facility in cqlsh
>>> to investigate further. That's a good way to find hard-to-find issues.
>>> You should be looking for a clear ledge where you go from single-digit
>>> ms to 4- or 5-digit ms times.
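>>> For example, in cqlsh (the bind values below are placeholders):
>>>
>>>     TRACING ON;
>>>     SELECT * FROM db.tbl WHERE id1='a' AND id2='b' AND id3='c' AND id4='d';
>>>
>>> The trace printed after the result shows the elapsed time of each step
>>> in microseconds, which is where such a ledge would show up.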
>>> The other place to look is your data model for that table, if you
>>> want to post the output from a desc table.
>>>
>>> Patrick
>>>
>>> On Tue, Aug 30, 2016 at 11:07 AM, Joseph Tech <jaalex.tech@gmail.com>
>>> wrote:
>>>
>>> On further analysis, this issue happens only on 1 table in the
>>> keyspace, the one which has the max reads.
>>>
>>> @Atul, I will look at system health, but didn't see anything standing
>>> out from the GC logs (using JDK 1.8_92 with G1GC).
>>>
>>> @Patrick, could you please elaborate on the "mismatch on node count +
>>> RF" part.
>>>
>>> On Tue, Aug 30, 2016 at 5:35 PM, Atul Saroha
>>> <atul.saroha@snapdeal.com> wrote:
>>>
>>> There could be many reasons for this if it is intermittent: CPU usage
>>> and I/O wait status - as reads are I/O intensive, your IOPS
>>> requirement should be met under the load at that time - heap issues if
>>> the CPU is busy with GC only, or network health. So it's better to
>>> look at system health during the time when it happens.
>>>
>>> Atul Saroha
>>> Lead Software Engineer
>>> M: +91 8447784271  T: +91 124-415-6069  EXT: 12369
>>> Plot # 362, ASF Centre - Tower A, Udyog Vihar,
>>> Phase-4, Sector 18, Gurgaon, Haryana 122016, INDIA
>>>
>>> On Tue, Aug 30, 2016 at 5:10 PM, Joseph Tech <jaalex.tech@gmail.com>
>>> wrote:
>>>
>>> Hi Patrick,
>>>
>>> nodetool status shows all nodes up and normal now. From the OpsCenter
>>> "Event Log", there are some nodes reported as being down/up etc.
>>> during the timeframe of the timeout, but these are Search workload
>>> nodes from the remote (non-local) DC. The RF is 3 and there are 9
>>> nodes per DC.
>>>
>>> Thanks,
>>> Joseph
>>>
>>> On Mon, Aug 29, 2016 at 11:07 PM, Patrick McFadin <pmcfadin@gmail.com>
>>> wrote:
>>>
>>> You aren't achieving quorum on your reads, as the error explains. That
>>> means you either have some nodes down or your topology is not matching
>>> up. The fact that you are using LOCAL_QUORUM might point to a
>>> datacenter mismatch on node count + RF.
>>>
>>> What does your nodetool status look like?
>>>
>>> Patrick
>>>
>>> On Mon, Aug 29, 2016 at 10:14 AM, Joseph Tech <jaalex.tech@gmail.com>
>>> wrote:
>>>
>>> Hi,
>>>
>>> We recently started getting intermittent timeouts on primary key
>>> queries (select * from table where key=<key>).
>>>
>>> The error is:
>>>
>>>     com.datastax.driver.core.exceptions.ReadTimeoutException:
>>>     Cassandra timeout during read query at consistency LOCAL_QUORUM
>>>     (2 responses were required but only 1 replica responded)
>>>
>>> The same query would work fine when tried directly from cqlsh. There
>>> are no indications in system.log for the table in question, though
>>> there were compactions in progress for tables in another keyspace
>>> which is more frequently accessed.
>>>
>>> My understanding is that the chances of primary key queries timing out
>>> are very minimal. Please share the possible reasons / ways to debug
>>> this issue.
>>>
>>> We are using Cassandra 2.1 (DSE 4.8.7).
>>>
>>> Thanks,
>>> Joseph
>>>
>>> --
>>> Regards,
>>> Anshu
[Attachment: trace8-masked.zip]