From: Jeff Jirsa
To: user@cassandra.apache.org
Subject: Re: Tools to manage repairs
Date: Thu, 27 Oct 2016 21:23:46 +0000

If you go above ~1GB, the primary symptom you'll see is a LOT of garbage created on reads (CASSANDRA-9754 details this).

As redesigning a data model is often expensive (engineering time, reloading data, etc.), one workaround is to tune your JVM to better handle situations where you create a lot of trash. One method that can help is to use a much larger eden size than the default (up to 50% of your total heap size).
For example, if you were using an 8G heap and 2G eden, going to 3G or 4G eden (new heap size in cassandra-env.sh) MAY work better for you if you're reading from large partitions (it can also crash your server in some cases, so TEST IT IN A LAB FIRST).
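
Concretely, that change is usually made through the heap variables in cassandra-env.sh; the values below only illustrate the 8G/3G case above and are not a recommendation (the two variables are normally set together):

    # cassandra-env.sh -- illustrative sizing only; test in a lab first
    MAX_HEAP_SIZE="8G"   # total JVM heap
    HEAP_NEWSIZE="3G"    # new gen (eden), raised toward ~50% of the heap
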

 

- Jeff

From: Alexander Dejanovski <alex@thelastpickle.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Thursday, October 27, 2016 at 2:13 PM
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: Re: Tools to manage repairs
The "official" recommendation would be 100MB, but it's hard to give a precise answer. Keeping it under a GB seems like a good target.
A few patches are pushing the limits of partition sizes, so we may soon be more comfortable with big partitions.

Cheers
On Thu, Oct 27, 2016 at 21:28, Vincent Rischmann <me@vrischmann.me> wrote:

Yeah, that particular table is badly designed; I intend to fix it when the roadmap allows us to do it :)

What is the recommended maximum partition size?

Thanks for all the information.

On Thu, Oct 27, 2016, at 08:14 PM, Alexander Dejanovski wrote:

3.3GB is already too high, and it surely doesn't help compactions perform well. I know changing a data model is no easy thing to do, but you should try to do something here.

Anticompaction is a special type of compaction: if an sstable is being anticompacted, any attempt to run a validation compaction on it will fail, telling you that an sstable cannot be part of 2 repair sessions at the same time. So incremental repair must be run one node at a time, waiting for anticompactions to end before moving from one node to the next.
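
A rough sketch of that node-by-node sequencing (host and keyspace names are placeholders; on 2.1 incremental repair is invoked with -inc, typically together with -par since it cannot use sequential validation):

    # run incremental repair one node at a time (illustrative names only)
    for host in node1 node2 node3; do
      nodetool -h "$host" repair -par -inc my_keyspace
      # before moving to the next host, wait until "Anticompaction after repair"
      # no longer shows up in nodetool compactionstats on this host
    done
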

Be mindful of running incremental repair on a regular basis once you've started, as you'll have two separate pools of sstables (repaired and unrepaired) that won't get compacted together, which could be a problem if you want tombstones to be purged efficiently.

Cheers,
On Thu, Oct 27, 2016 at 17:57, Vincent Rischmann <me@vrischmann.me> wrote:

Ok, I think we'll give incremental repairs a try on a limited number of CFs first, and then if it goes well we'll progressively switch more CFs to incremental.

I'm not sure I understand the problem with anticompaction and validation running concurrently. As far as I can tell, right now when a CF is repaired (either via reaper or via nodetool) there may be compactions running at the same time. In fact, it happens very often. Is it a problem?

As far as big partitions go, the biggest one we have is around 3.3GB. Some less big partitions are around 500MB and less.
On Thu, Oct 27, 2016, at 05:37 PM, Alexander Dejanovski wrote:

Oh right, that's what they advise :)

I'd say that you should skip the full repair phase in the migration procedure, as that will obviously fail, and just mark all sstables as repaired (skip steps 1, 2 and 6). Anyway, you can't do better, so take a leap of faith there.

Intensity is already very low, and 10000 segments is a whole lot for 9 nodes; you should not need that many.

You can definitely pick which CFs you'll run incremental repair on, and still run full repair on the rest.
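
For example (keyspace and table names below are placeholders), you could run something like:

    # incremental repair on a single column family only
    nodetool repair -par -inc my_keyspace my_table

    # regular full repair elsewhere, primary ranges only
    nodetool repair -pr other_keyspace
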

If you pick our Reaper fork, watch out for schema changes that add incremental repair fields, and I do not advise running incremental repair without it; otherwise you might have issues with anticompaction and validation compactions running concurrently from time to time.

One last thing: can you check whether you have particularly big partitions in the CFs that fail to get repaired? You can run nodetool cfhistograms to check that.
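
For example, with placeholder names, this prints the partition size percentiles and the max for one table:

    nodetool cfhistograms my_keyspace my_table
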

Cheers,
On Thu, Oct 27, 2016 at 5:24 PM Vincent Rischmann <me@vrischmann.me> wrote:

Thanks for the response.

We do break up repairs between tables, and we also tried our best to have no overlap between repair runs. Each repair has 10000 segments (a purely arbitrary number that seemed to help at the time). Some runs have an intensity of 0.4, some as low as 0.05.

Still, sometimes one particular app (which does a lot of read/modify/write batches at quorum) gets slowed down to the point where we have to stop the repair run.

But more annoyingly, for the last 2 to 3 weeks as I said, it looks like runs stop progressing after some time. Every time I restart reaper, it starts repairing correctly again, up until it gets stuck. I have no idea why that happens now, but it means I have to babysit reaper, and it's becoming annoying.

Thanks for the suggestion about incremental repairs. It would probably be a good thing, but it's a little challenging to set up, I think. Right now, running a full repair of all keyspaces (via nodetool repair) is going to take a lot of time, probably 5 days or more. We were never able to run one to completion. I'm not sure it's a good idea to disable autocompaction for that long.

But maybe I'm wrong. Is it possible to use incremental repairs on some column families only?
On Thu, Oct 27, 2016, at 05:02 PM, Alexander Dejanovski wrote:

Hi Vincent,

most people handle repair with:
- pain (running nodetool commands by hand)
- cassandra_range_repair: https://github.com/BrianGallew/cassandra_range_repair
- Spotify Reaper
- and the OpsCenter repair service for DSE users

Reaper is a good option I think and you should stick to it. If it cannot do the job here then no other tool will.

You have several options from here:

  • Try to break up your repairs table by table and see which ones actually get stuck
  • Check your logs for any repair/streaming errors
  • Avoid repairing everything:
    • you may have expendable tables
    • you may have TTL-only tables with no deletes, accessed with QUORUM CL only
  • You can try to relieve repair pressure in Reaper by lowering the repair intensity (on the tables that get stuck)
  • You can try adding steps to your repair process by putting a higher segment count in Reaper (on the tables that get stuck)

And lastly, you can turn to incremental repair. As you're familiar with Reaper already, you might want to take a look at our Reaper fork that handles incremental repair: https://github.com/thelastpickle/cassandra-reaper

If you go down that way, make sure you first mark all sstables as repaired before you run your first incremental repair, otherwise you'll end up in anticompaction hell (a bad, bad place): https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/opsRepairNodesMigration.html
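
The marking step in that procedure amounts to something like this on each node, with the node stopped (the data path and keyspace below are illustrative):

    # list the keyspace's sstables and mark them all as repaired
    # run as the OS user that owns the Cassandra data files
    find /var/lib/cassandra/data/my_keyspace/ -iname "*Data.db*" > sstables.txt
    sstablerepairedset --really-set --is-repaired -f sstables.txt
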

Even if people say that's not necessary anymore, it'll save you from a very bad first experience with incremental repair.

Furthermore, make sure you run repair daily after your first incremental repair run, in order to work on small-sized repairs.
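
For example, a placeholder crontab entry for a nightly incremental repair (keyspace and schedule are illustrative; in practice you would stagger nodes so they don't all repair at once):

    # crontab entry: incremental repair every night at 3am
    0 3 * * * nodetool repair -par -inc my_keyspace
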

 

Cheers,

 

 

On Thu, Oct 27, 2016 at 4:27 PM Vincent Rischmann <me@vrischmann.me> wrote:

Hi,

we have two Cassandra 2.1.15 clusters at work and are having some trouble with repairs.

Each cluster has 9 nodes, and the amount of data is not gigantic, but some column families have 300+GB of data.
We tried to use `nodetool repair` for these tables, but at the time we tested it, it loaded the whole cluster too much and impacted our production apps.

Next we saw https://github.com/spotify/cassandra-reaper , tried it and had some success until recently. For the last 2 to 3 weeks it has never completed a repair run, deadlocking itself somehow.

I know DSE includes a repair service, but I'm wondering how other Cassandra users manage repairs?

Vincent.

--
-----------------
Alexander Dejanovski
France
@alexanderdeja

Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com
