From: James Birchfield <jbirchfield@stumbleupon.com>
Subject: Re: HBase Table Row Count Optimization - A Solicitation For Help
Date: Fri, 20 Sep 2013 18:57:09 -0700
To: user@hbase.apache.org

Yes, we have a fully set up cluster complete with all you pointed out. But I believe, now that it has been pointed out to me in this thread and in your reply, that it is exactly as you and Lars say: I am running the MapReduce job in process from a standalone Java process, and it is not taking advantage of that infrastructure.

So I will pull this all out of the process and run it on the cluster using the example I have read about.

It is most likely just my ignorance that is the root cause of this problem. All the help is very much appreciated.

Thanks!
Birch

On Sep 20, 2013, at 6:46 PM, Bryan Beaudreault wrote:

> I could be wrong, but based on the info in your most recent emails and the logs therein as well, I believe you may be running this job as a single process.
>
> Do you actually have a full hadoop setup running, with a jobtracker and tasktrackers? In the absence of proper configuration, the hadoop code will simply launch a local, single-process job. The LocalJobRunner referenced in your logs points to that.
>
> If this is the case you are likely only running a single mapper and reducer, or at most running a few mappers at once in threads in your local process. Either way this would obviously greatly limit the throughput.
>
> If you have a full hadoop setup, make sure the client (dev machine) you are running this job from has access to a mapred-site.xml and hdfs-site.xml configuration file, or at the very least set the mapred.job.tracker value manually in your job configuration before submitting.
>
> Let me know if I'm totally off base here.
>
>
> On Fri, Sep 20, 2013 at 9:34 PM, James Birchfield <jbirchfield@stumbleupon.com> wrote:
>
>> Excellent! Will do!
>>
>> Birch
>> On Sep 20, 2013, at 6:32 PM, Ted Yu wrote:
>>
>>> Please take a look at the javadoc for src/main/java/org/apache/hadoop/hbase/client/coprocessor/AggregationClient.java
>>>
>>> As long as the machine can reach your HBase cluster, you should be able to run AggregationClient and utilize the AggregateImplementation endpoint in the region servers.
>>>
>>> Cheers
>>>
>>>
>>> On Fri, Sep 20, 2013 at 6:26 PM, James Birchfield <jbirchfield@stumbleupon.com> wrote:
>>>
>>>> Thanks Ted.
>>>>
>>>> That was the direction I have been working towards as I am learning today. Much appreciation for all the replies to this thread.
>>>>
>>>> Whether I keep the MapReduce job or utilize the Aggregation coprocessor (which it is turning out should be possible for me here), I need to make sure I am running the client in an efficient manner. Lars may have hit upon the core problem: I am not running the map reduce job on the cluster, but rather from a standalone remote Java client executing the job in process. This may very well turn out to be the number one issue. I would love it if this turns out to be true. It would make this a great learning lesson for me as a relative newcomer to working with HBase, and potentially allow me to finish this initial task much quicker than I was thinking.
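[A minimal sketch of the client-side configuration Bryan describes above, assuming an MR1 (jobtracker) deployment like the CDH4 cluster mentioned later in the thread. The class name, host names, and ports are placeholders, not values from this thread:]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ClusterJobConf {
        public static Configuration create() {
            Configuration conf = HBaseConfiguration.create();
            // With no jobtracker configured, Hadoop falls back to the in-process
            // LocalJobRunner seen in the logs. Either put mapred-site.xml and
            // hdfs-site.xml on the client classpath, or set the values directly:
            conf.set("mapred.job.tracker", "jobtracker.example.com:8021");    // placeholder host:port
            conf.set("fs.default.name", "hdfs://namenode.example.com:8020");  // placeholder host:port
            return conf;
        }
    }

[Passing a Configuration built this way into RowCounter.createSubmittableJob(), as in the snippet quoted later in the thread, should make the job submit to the real cluster instead of running locally.]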
>>>>
>>>> So assuming the MapReduce jobs need to be run on the cluster instead of locally, does a coprocessor endpoint client need to be run the same way, or is it safe to run it on a remote machine since the work gets distributed out to the region servers? Just wondering if I would run into the same issues if what I said above holds true.
>>>>
>>>> Thanks!
>>>> Birch
>>>> On Sep 20, 2013, at 6:17 PM, Ted Yu wrote:
>>>>
>>>>> In 0.94, we have AggregateImplementation, an endpoint coprocessor, which implements getRowNum().
>>>>>
>>>>> Example is in AggregationClient.java
>>>>>
>>>>> Cheers
>>>>>
>>>>>
>>>>> On Fri, Sep 20, 2013 at 6:09 PM, lars hofhansl wrote:
>>>>>
>>>>>> From your numbers below you have about 26k regions, thus each region is about 545tb/26k = 20gb. Good.
>>>>>>
>>>>>> How many mappers are you running? And just to rule out the obvious, the M/R is running on the cluster and not locally, right? (It will default to a local runner when it cannot use the M/R cluster.)
>>>>>>
>>>>>> Some back-of-the-envelope calculations tell me that, assuming 1ge network cards, the best you can expect for 110 machines to map through this data is about 10h, so way faster than what you see: 545tb / (110 * 1/8 gb/s) ~ 40ks ~ 11h.
>>>>>>
>>>>>> We should really add a rowcounting coprocessor to HBase and allow using it via M/R.
>>>>>>
>>>>>> -- Lars
>>>>>>
>>>>>>
>>>>>> ________________________________
>>>>>> From: James Birchfield
>>>>>> To: user@hbase.apache.org
>>>>>> Sent: Friday, September 20, 2013 5:09 PM
>>>>>> Subject: Re: HBase Table Row Count Optimization - A Solicitation For Help
>>>>>>
>>>>>> I did not implement accurate timing, but the current table being counted has been running for about 10 hours, and the log is estimating the map portion at 10%:
>>>>>>
>>>>>> 2013-09-20 23:40:24,099 INFO [main] Job : map 10% reduce 0%
>>>>>>
>>>>>> So a loooong time. Like I mentioned, we have billions, if not trillions, of rows potentially.
>>>>>>
>>>>>> Thanks for the feedback on the approaches I mentioned. I was not sure if they would have any effect overall.
>>>>>>
>>>>>> I will look further into coprocessors.
>>>>>>
>>>>>> Thanks!
>>>>>> Birch
>>>>>> On Sep 20, 2013, at 4:58 PM, Vladimir Rodionov <vrodionov@carrieriq.com> wrote:
>>>>>>
>>>>>>> How long does it take for the RowCounter job for the largest table to finish on your cluster?
>>>>>>>
>>>>>>> Just curious.
>>>>>>>
>>>>>>> On your options:
>>>>>>>
>>>>>>> 1. Not worth it probably - you may overload your cluster
>>>>>>> 2. Not sure this one differs from 1. Looks the same to me but more complex.
>>>>>>> 3. The same as 1 and 2
>>>>>>>
>>>>>>> Counting rows in an efficient way can be done if you sacrifice some accuracy:
>>>>>>>
>>>>>>> http://highscalability.com/blog/2012/4/5/big-data-counting-how-to-count-a-billion-distinct-objects-us.html
>>>>>>>
>>>>>>> Yeah, you will need coprocessors for that.
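[A minimal sketch of the coprocessor-based count Ted describes, against the 0.94 AggregationClient API. The class name, table name, and column family are placeholders, and it assumes AggregateImplementation is already loaded on the table's region servers:]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
    import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CoprocessorRowCount {
        public static void main(String[] args) throws Throwable {
            // Assumes org.apache.hadoop.hbase.coprocessor.AggregateImplementation is
            // loaded on the region servers (hbase.coprocessor.region.classes).
            Configuration conf = HBaseConfiguration.create();
            AggregationClient aggregationClient = new AggregationClient(conf);

            Scan scan = new Scan();
            scan.addFamily(Bytes.toBytes("cf"));      // placeholder column family

            // Each region counts its own rows server-side; the client only sums
            // the per-region results, so very little data crosses the network.
            long rows = aggregationClient.rowCount(
                Bytes.toBytes("my_table"),            // placeholder table name
                new LongColumnInterpreter(), scan);
            System.out.println("row count = " + rows);
        }
    }

[Because only per-region partial counts travel back to the client, running this from a remote machine is a very different proposition from running the MapReduce job in process, which speaks to the question asked above.]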
>>>>>>>
>>>>>>> Best regards,
>>>>>>> Vladimir Rodionov
>>>>>>> Principal Platform Engineer
>>>>>>> Carrier IQ, www.carrieriq.com
>>>>>>> e-mail: vrodionov@carrieriq.com
>>>>>>>
>>>>>>> ________________________________________
>>>>>>> From: James Birchfield [jbirchfield@stumbleupon.com]
>>>>>>> Sent: Friday, September 20, 2013 3:50 PM
>>>>>>> To: user@hbase.apache.org
>>>>>>> Subject: Re: HBase Table Row Count Optimization - A Solicitation For Help
>>>>>>>
>>>>>>> Hadoop 2.0.0-cdh4.3.1
>>>>>>>
>>>>>>> HBase 0.94.6-cdh4.3.1
>>>>>>>
>>>>>>> 110 servers, 0 dead, 238.2364 average load
>>>>>>>
>>>>>>> Some other info, not sure if it helps or not:
>>>>>>>
>>>>>>> Configured Capacity: 1295277834158080 (1.15 PB)
>>>>>>> Present Capacity: 1224692609430678 (1.09 PB)
>>>>>>> DFS Remaining: 624376503857152 (567.87 TB)
>>>>>>> DFS Used: 600316105573526 (545.98 TB)
>>>>>>> DFS Used%: 49.02%
>>>>>>> Under replicated blocks: 0
>>>>>>> Blocks with corrupt replicas: 1
>>>>>>> Missing blocks: 0
>>>>>>>
>>>>>>> It is hitting a production cluster, but I am not really sure how to calculate the load placed on the cluster.
>>>>>>> On Sep 20, 2013, at 3:19 PM, Ted Yu wrote:
>>>>>>>
>>>>>>>> How many nodes do you have in your cluster?
>>>>>>>>
>>>>>>>> When counting rows, what other load would be placed on the cluster?
>>>>>>>>
>>>>>>>> What is the HBase version you're currently using / planning to use?
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Sep 20, 2013 at 2:47 PM, James Birchfield <jbirchfield@stumbleupon.com> wrote:
>>>>>>>>
>>>>>>>>> After reading the documentation and scouring the mailing list archives, I understand there is no real support for fast row counting in HBase unless you build some sort of tracking logic into your code. In our case, we do not have such logic, and we have massive amounts of data already persisted. I am running into the issue of very long execution of the RowCounter MapReduce job against very large tables (multi-billion rows for many, by our estimate). I understand why this issue exists and am slowly accepting it, but I am hoping I can solicit some possible ideas to help speed things up a little.
>>>>>>>>>
>>>>>>>>> My current task is to provide total row counts on about 600 tables, some extremely large, some not so much. Currently, I have a process that executes the MapReduce job in process like so:
>>>>>>>>>
>>>>>>>>>     Job job = RowCounter.createSubmittableJob(
>>>>>>>>>         ConfigManager.getConfiguration(), new String[]{tableName});
>>>>>>>>>     boolean waitForCompletion = job.waitForCompletion(true);
>>>>>>>>>     Counters counters = job.getCounters();
>>>>>>>>>     Counter rowCounter = counters.findCounter(hbaseadminconnection.Counters.ROWS);
>>>>>>>>>     return rowCounter.getValue();
>>>>>>>>>
>>>>>>>>> At the moment, each MapReduce job is executed in serial order, so counting one table at a time. For the current implementation of this whole process, as it stands right now, my rough timing calculations indicate that fully counting all the rows of these 600 tables will take anywhere between 11 and 22 days. This is not what I consider a desirable timeframe.
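[A quick way to confirm, from the client side, whether a job built like the snippet above would run on the cluster or inside the local JVM (Bryan's LocalJobRunner diagnosis). A minimal sketch, assuming MR1 configuration names and a hypothetical class name:]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class JobTrackerCheck {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // "local" is the MR1 default when no mapred-site.xml is on the classpath;
            // it means any job submitted with this Configuration runs inside this JVM
            // via LocalJobRunner rather than on the cluster.
            String tracker = conf.get("mapred.job.tracker", "local");
            if ("local".equals(tracker)) {
                System.err.println("No jobtracker configured - RowCounter would run "
                    + "in-process with LocalJobRunner instead of on the cluster.");
            } else {
                System.out.println("Jobs will be submitted to " + tracker);
            }
        }
    }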
>>>>>>>>>
>>>>>>>>> I have considered three alternative approaches to speed things up.
>>>>>>>>>
>>>>>>>>> First, since the application is not heavily CPU bound, I could use a ThreadPool and execute multiple MapReduce jobs at the same time, looking at different tables. I have never done this, so I am unsure if it would cause any unanticipated side effects.
>>>>>>>>>
>>>>>>>>> Second, I could distribute the processes. I could find as many machines as can successfully talk to the desired cluster properly, give them a subset of tables to work on, and then combine the results post process.
>>>>>>>>>
>>>>>>>>> Third, I could combine both of the above approaches and run a distributed set of multithreaded processes to execute the MapReduce jobs in parallel.
>>>>>>>>>
>>>>>>>>> Although it seems to have been asked and answered many times, I will ask once again. Without the need to change our current configurations or restart the clusters, is there a faster approach to obtain row counts? FYI, my cache size for the Scan is set to 1000. I have experimented with different numbers, but nothing made a noticeable difference. Any advice or feedback would be greatly appreciated!
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Birch
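[A minimal sketch of the first approach described above: a fixed-size thread pool driving several RowCounter jobs at once, assuming each job is actually submitted to the cluster rather than the LocalJobRunner. The class name, pool size, and counter group string are assumptions for illustration, and, as Vladimir cautions, running many counts concurrently adds real load to a production cluster:]

    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.mapreduce.RowCounter;
    import org.apache.hadoop.mapreduce.Counter;
    import org.apache.hadoop.mapreduce.Job;

    public class ParallelRowCounts {
        public static void main(String[] args) throws Exception {
            List<String> tables = Arrays.asList(args);               // tables to count
            final Map<String, Long> counts = new ConcurrentHashMap<String, Long>();
            ExecutorService pool = Executors.newFixedThreadPool(4);  // bounds concurrent jobs

            for (final String table : tables) {
                pool.submit(new Runnable() {
                    public void run() {
                        try {
                            // Each task submits one RowCounter job and waits for it.
                            Job job = RowCounter.createSubmittableJob(
                                HBaseConfiguration.create(), new String[] { table });
                            job.waitForCompletion(true);
                            // ROWS counter written by RowCounter's mapper; the group
                            // name may need adjusting for other HBase versions.
                            Counter rows = job.getCounters().findCounter(
                                "org.apache.hadoop.hbase.mapreduce.RowCounter$RowCounterMapper$Counters",
                                "ROWS");
                            counts.put(table, rows.getValue());
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(7, TimeUnit.DAYS);
            System.out.println(counts);
        }
    }

[Keeping the pool small caps how many counting jobs are in flight at any moment, which keeps the extra load on the production cluster bounded while still removing the strictly serial bottleneck.]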