From: Tamar Fraenkel <tamar@tok-media.com>
Date: Wed, 4 Apr 2012 08:38:43 +0300
Subject: Re: data size difference between supercolumn and regular column
To: user@cassandra.apache.org

Do you have a good reference for maintenance scripts for a Cassandra ring?

Thanks,

Tamar Fraenkel
Senior Software Engineer, TOK Media

tamar@tok-media.com
Tel: +972 2 6409736
Mob: +972 54 8356490
Fax: +972 2 5612956


On Tue, Apr 3, 2012 at 4:37 AM, aaron morton wrote:
> If you have a workload with overwrites, you will end up with some data
> needing compaction. Running a nightly manual compaction would remove this,
> but it will also soak up some IO, so it may not be the best solution.
>
> I do not know whether Leveled compaction would result in a smaller disk
> load for the same workload.
>
> I agree with the others: turn on compression.
>
> Cheers
>
> -----------------
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 3/04/2012, at 9:19 AM, Yiming Sun wrote:
>
> Yup Jeremiah, I learned a hard lesson on how Cassandra behaves when it
> runs out of disk space :-S.
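A minimal sketch of the kind of nightly maintenance script being asked about, based on Aaron's advice: the keyspace name and paths are placeholders, and the default dry-run mode only prints the plan, so it can be reviewed before wiring it into cron against a real cluster.

```shell
#!/bin/sh
# Nightly Cassandra maintenance sketch (hypothetical keyspace name).
# With DRY_RUN=1 (the default here) commands are only printed, so the
# plan can be checked without touching a live cluster.
DRY_RUN=${DRY_RUN:-1}
KEYSPACE="my_keyspace"
PLAN=""

run() {
    PLAN="$PLAN$*
"
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Major compaction reclaims space from overwritten/deleted columns;
# it soaks up IO, so schedule it off-peak.
run nodetool -h localhost compact "$KEYSPACE"

# Anti-entropy repair; should run on each node within gc_grace_seconds.
run nodetool -h localhost repair "$KEYSPACE"

# Example cron entry (hypothetical path):
#   0 3 * * * /opt/cassandra/nightly-maintenance.sh
```

Whether a scheduled major compaction is worth the IO cost is exactly the trade-off Aaron describes above.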
> I didn't try compression, but when it ran out of disk space, or came
> close to running out, compaction would fail because it needs space to
> create some temporary data files.
>
> I shall get a tattoo that says "keep it around 50%" -- this is a
> valuable tip.
>
> -- Y.
>
> On Sun, Apr 1, 2012 at 11:25 PM, Jeremiah Jordan
> <JEREMIAH.JORDAN@morningstar.com> wrote:
>> Is that 80% with compression? If not, the first thing to do is turn on
>> compression. Cassandra doesn't behave well when it runs out of disk
>> space. You really want to try to stay around 50%; 60-70% works, but
>> only if it is spread across multiple column families, and even then you
>> can run into issues when doing repairs.
>>
>> -Jeremiah
>>
>> On Apr 1, 2012, at 9:44 PM, Yiming Sun wrote:
>>
>> Thanks Aaron. Well, I guess it is possible the data files from
>> supercolumns could have been reduced in size after compaction.
>>
>> This brings up yet another question. Say I am on a shoestring budget
>> and can only put together a cluster with very limited storage space.
>> The first iteration of pushing data into Cassandra would drive the disk
>> usage up into the 80% range. As time goes by, there will be updates to
>> the data, and many columns will be overwritten. If I just push the
>> updates in, the disks will run out of space on all of the cluster
>> nodes. What would be the best way to handle such a situation if I
>> cannot buy larger disks? Do I need to delete the rows/columns that are
>> going to be updated, do a compaction, and then insert the updates? Or
>> is there a better way? Thanks
>>
>> -- Y.
>>
>> On Sat, Mar 31, 2012 at 3:28 AM, aaron morton wrote:
>>> does cassandra 1.0 perform some default compression?
>>>
>>> No.
>>>
>>> The on-disk size depends to some degree on the workload.
>>>
>>> If there are a lot of overwrites or deletes, you may have rows/columns
>>> that need to be compacted. You may have some big old SSTables that
>>> have not been compacted for a while.
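For reference, Jeremiah's "turn on compression" advice maps onto the Cassandra 1.0 CLI roughly as follows. This is a sketch with hypothetical keyspace/column family names: `SnappyCompressor` ships with 1.0, and `nodetool scrub` rewrites existing SSTables so they pick up the new setting (it needs a running node, so this is a configuration fragment rather than something to run as-is).

```shell
# Sketch: enable SSTable compression on an existing column family
# (hypothetical names; requires a live Cassandra 1.0 node).
cassandra-cli -h localhost -p 9160 <<'EOF'
use my_keyspace;
update column family my_cf
  with compression_options = {sstable_compression: SnappyCompressor,
                              chunk_length_kb: 64};
EOF
# Rewrite the SSTables already on disk so they become compressed:
nodetool -h localhost scrub my_keyspace my_cf
```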
>>> There is some overhead involved in the super columns: the super
>>> column name, the length of the name, and the number of columns.
>>>
>>> Cheers
>>>
>>> -----------------
>>> Aaron Morton
>>> Freelance Developer
>>> @aaronmorton
>>> http://www.thelastpickle.com
>>>
>>> On 29/03/2012, at 9:47 AM, Yiming Sun wrote:
>>>
>>> Actually, after I read an article on Cassandra 1.0 compression just
>>> now
>>> (http://www.datastax.com/dev/blog/whats-new-in-cassandra-1-0-compression),
>>> I am more puzzled. In our schema, we didn't specify any compression
>>> options -- does Cassandra 1.0 perform some default compression? Or is
>>> the data reduction purely because of the schema change? Thanks.
>>>
>>> -- Y.
>>>
>>> On Wed, Mar 28, 2012 at 4:40 PM, Yiming Sun wrote:
>>>
>>>> Hi,
>>>>
>>>> We are trying to estimate the amount of storage we need for a
>>>> production Cassandra cluster. While I was doing the calculation, I
>>>> noticed a very dramatic difference in the storage space used by the
>>>> Cassandra data files.
>>>>
>>>> Our previous setup consisted of a single-node Cassandra 0.8.x with no
>>>> replication; the data was stored using supercolumns, and the data
>>>> files totaled about 534GB on disk.
>>>>
>>>> A few weeks ago, I put together a cluster of 3 nodes running
>>>> Cassandra 1.0 with a replication factor of 2, and the data is
>>>> flattened out and stored using regular columns. The aggregated data
>>>> file size is only 488GB (it would be 244GB with no replication).
>>>>
>>>> This is a very dramatic reduction in storage needs, and certainly
>>>> good news for how much storage we need to provision. However, because
>>>> of the dramatic reduction, I would like to make sure it is absolutely
>>>> correct before submitting it, and also get a sense of why there was
>>>> such a difference.
>>>> -- I know Cassandra 1.0 does data compression, but
>>>> does the schema change from supercolumn to regular column also help
>>>> reduce storage usage? Thanks.
>>>>
>>>> -- Y.
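The figures quoted in the thread can be sanity-checked with quick shell arithmetic (all numbers come from the messages above; RF is the replication factor):

```shell
# Storage figures from the thread, as a quick sanity check.
OLD_GB=534           # 0.8.x single node, supercolumns, no replication
NEW_TOTAL_GB=488     # 1.0 cluster, regular columns, RF=2
RF=2

PER_COPY_GB=$((NEW_TOTAL_GB / RF))                         # one logical copy
REDUCTION_PCT=$(( (OLD_GB - PER_COPY_GB) * 100 / OLD_GB )) # integer percent

echo "per-copy size: ${PER_COPY_GB}GB"        # 244GB, matching the thread
echo "reduction vs 0.8.x: ${REDUCTION_PCT}%"
```

So the per-copy size really is roughly half the old footprint; whether that drop is fully explained by compaction of overwrites plus the removal of per-supercolumn overhead (compression being off by default, per Aaron) is exactly what the thread is debating.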