From: Tamar Fraenkel <tamar@tok-media.com>
Date: Mon, 11 Feb 2013 11:40:36 +0200
Subject: Re: High CPU usage during repair
To: user@cassandra.apache.org

Thank you very much! Due to monetary limitations I will keep the m1.large
for now, but try the throughput modification.
Tamar

Tamar Fraenkel
Senior Software Engineer, TOK Media

tamar@tok-media.com
Tel: +972 2 6409736
Mob: +972 54 8356490
Fax: +972 2 5612956


On Mon, Feb 11, 2013 at 11:30 AM, aaron morton <aaron@thelastpickle.com> wrote:

>>> What machine size?
>>
>> m1.large
>
> If you are seeing high CPU, move to an m1.xlarge; that's the sweet spot.
>
>>> That's normally ok. How many are waiting?
>>
>> I have seen 4 this morning
>
> That's not really abnormal.
> The pending task count goes up when a file *may* be eligible for
> compaction, not when there is a compaction task waiting.
>
> If you suddenly create a number of new SSTables for a CF, the pending
> count will rise; however, one of the tasks may compact all the sstables
> waiting for compaction, so the count will suddenly drop as well.
>
>> Just to make sure I understand you correctly, you suggest that I change
>> throughput to 12 regardless of whether repair is ongoing or not. I will
>> do it using nodetool, and change the yaml file in case a restart occurs
>> in the future?
>
> Yes.
> If you are seeing performance degrade during compaction or repair, try
> reducing the throughput.
>
> I would attribute most of the problems you have described to using
> m1.large.
>
> Cheers
>
> -----------------
> Aaron Morton
> Freelance Cassandra Developer
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
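
For reference, a minimal sketch of the change suggested above: apply the
new limit live with nodetool, then persist it in cassandra.yaml so a
restart keeps the same value. The 12 MB/s figure is the one from the
thread; the yaml location is an assumption and varies by install.

    # Takes effect immediately on the running node, no restart needed.
    nodetool -h localhost setcompactionthroughput 12

    # Persist the same value for future restarts by editing cassandra.yaml
    # (commonly /etc/cassandra/cassandra.yaml; adjust for your layout):
    #   compaction_throughput_mb_per_sec: 12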
> On 11/02/2013, at 9:16 AM, Tamar Fraenkel <tamar@tok-media.com> wrote:
>
>> Hi!
>> Thanks for the response.
>> See my answers and questions below.
>> Thanks!
>> Tamar
>>
>> On Sun, Feb 10, 2013 at 10:04 PM, aaron morton <aaron@thelastpickle.com> wrote:
>>
>>>> During repair I see high CPU consumption,
>>>
>>> Repair reads the data and computes a hash; this is a CPU intensive
>>> operation.
>>> Is the CPU overloaded, or just under load?
>>
>> Usually just load, but in the past two weeks I have seen CPU of over 90%!
>>
>>>> I run Cassandra version 1.0.11, on a 3 node setup on EC2 instances.
>>>
>>> What machine size?
>>
>> m1.large
>>
>>>> there are compactions waiting.
>>>
>>> That's normally ok. How many are waiting?
>>
>> I have seen 4 this morning
>>
>>>> I thought of adding a call to my repair script, before repair starts,
>>>> to do:
>>>> nodetool setcompactionthroughput 0
>>>> and then when repair finishes call
>>>> nodetool setcompactionthroughput 16
>>>
>>> That will remove throttling on compaction and on the validation
>>> compaction used for the repair, which may in turn add additional IO
>>> load, CPU load and GC pressure. You probably do not want to do this.
>>>
>>> Try reducing the compaction throughput to, say, 12 normally and see
>>> the effect.
>>
>> Just to make sure I understand you correctly, you suggest that I change
>> throughput to 12 regardless of whether repair is ongoing or not. I will
>> do it using nodetool, and change the yaml file in case a restart occurs
>> in the future?
>>
>>> Cheers
>>>
>>> On 11/02/2013, at 1:01 AM, Tamar Fraenkel <tamar@tok-media.com> wrote:
>>>
>>>> Hi!
>>>> I run repair weekly, using a scheduled cron job.
>>>> During repair I see high CPU consumption, and messages in the log file:
>>>> "INFO [ScheduledTasks:1] 2013-02-10 11:48:06,396 GCInspector.java
>>>> (line 122) GC for ParNew: 208 ms for 1 collections, 1704786200 used;
>>>> max is 3894411264"
>>>> From time to time, there are also messages of the form:
>>>> "INFO [ScheduledTasks:1] 2012-12-04 13:34:52,406 MessagingService.java
>>>> (line 607) 1 READ messages dropped in last 5000ms"
>>>>
>>>> Using OpsCenter, JMX and nodetool compactionstats I can see that
>>>> during the times the CPU consumption is high, there are compactions
>>>> waiting.
>>>>
>>>> I run Cassandra version 1.0.11, on a 3 node setup on EC2 instances.
>>>> I have the default settings:
>>>> compaction_throughput_mb_per_sec: 16
>>>> in_memory_compaction_limit_in_mb: 64
>>>> multithreaded_compaction: false
>>>> compaction_preheat_key_cache: true
>>>>
>>>> I am thinking of the following solution, and wanted to ask if I am on
>>>> the right track:
>>>> I thought of adding a call to my repair script, before repair starts,
>>>> to do:
>>>> nodetool setcompactionthroughput 0
>>>> and then when repair finishes call
>>>> nodetool setcompactionthroughput 16
>>>>
>>>> Is this the right solution?
>>>> Thanks,
>>>> Tamar
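
To close the loop, here is a sketch of the weekly repair wrapper discussed
above, adjusted per the advice in the thread: lower the throttle during
repair rather than removing it with 0, and restore the everyday value
afterwards. The 12 MB/s everyday limit follows the suggestion above; the
8 MB/s repair-time value is an illustrative assumption, not a number from
the thread.

    #!/bin/bash
    # Weekly repair wrapper, run from cron.
    NORMAL_MBPS=12   # everyday compaction throughput limit (MB/s)
    REPAIR_MBPS=8    # tighter limit while repair runs (assumed value)
    NT="nodetool -h localhost"

    # Restore the everyday limit even if repair fails or is interrupted,
    # so the node is never left stuck at the lower rate.
    trap "$NT setcompactionthroughput $NORMAL_MBPS" EXIT

    $NT setcompactionthroughput $REPAIR_MBPS
    $NT repair

The trap makes the restore unconditional; without it, a repair that dies
partway through would leave compaction throttled at the repair-time limit.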