From: Shangan Chen <chenshangan521@gmail.com>
To: user@flume.apache.org
Date: Tue, 17 Dec 2013 23:32:50 +0800
Subject: Re: file channel read performance impacted by write rate

The attachment flume.conf is the channel and sink config; dumps.txt is the thread dumps.
Channel type "dual" is a channel type I developed to combine the merits of the memory channel and the file channel: when the volume is not very big it uses the memory channel; when the number of queued events reaches a percentage of the memory channel capacity it switches to the file channel; and when the volume drops it switches back to memory.
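
To make that concrete, the dual channel is defined with a memory section, a file section and a switch threshold. A minimal sketch (channel name and values here are illustrative; the real settings are in the attached flume.conf):

    # custom "dual" channel: memory-backed until it fills up, then spills to the file channel
    agent.channels.ch1.type = dual

    # memory side
    agent.channels.ch1.memory.capacity = 100000
    agent.channels.ch1.memory.transactionCapacity = 30000

    # file side, used once the memory queue passes the switch percentage
    agent.channels.ch1.file.checkpointDir = /data/ch1.checkpoint
    agent.channels.ch1.file.dataDirs = /data/ch1.data
    agent.channels.ch1.file.transactionCapacity = 30000
    # (the switch percentage itself is another property of this custom channel type)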

thanks for looking into this.


On Tue, Dec 17, 2013 at 8:54 PM, Brock Noland <brock@cloudera.com> wrote:

Can you take and share 8-10 thread dumps while the sink is taking events "slowly"?

Can you share your machine and file channel configuration?

On Dec 17, 2013 6:28 AM, "Shangan Chen" <chenshangan521@gmail.com> wrote:
We face the same problem: the performance of taking events from the channel is a severe bottleneck. When there are fewer events in the channel, the problem does not go away. Following is a log of the metrics of writing to HDFS, writing to 5 files with a batch size of 200000; take costs most of the total time.


17 十二月 2013 18:49:28,056 INFO  [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.HDFSEventSink.process:489)  - HdfsSink-TIME-STAT sink[sink_hdfs_b] writers[5] eventcount[200000] all[44513] take[38197] append[5647] sync[17] getFilenameTime[371]
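
To put those numbers in perspective (assuming the counters are in milliseconds, which is how I read them):

    take / all = 38197 / 44513, i.e. roughly 86% of each batch is spent taking events from the channel
    200000 events / ~44.5 s is only about 4500 events/s end-to-end for this sink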





On Mon, Nov 25, 2013 at 4:46 PM, Jan Van Besien <janvb@ngdata.com> wrote:
Hi,

Is anybody still looking into this question?

Should I log it in jira such that somebody can look into it later?

thanks,
Jan



On 11/18/2013 11:28 AM, Jan Van Besien wrote:
> Hi,
>
> Sorry it took me a while to answer this. I compiled a small test case
> using only off-the-shelf Flume components that shows what is going on.
>
> The setup is a single agent with http source, null sink and file
> channel. I am using the default configuration as much as possible.
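>
> For readers without the attachment: the agent really is just the stock
> components wired together. A rough sketch (component names are illustrative;
> the real file is agent1.conf in the attachment):
>
>     agent1.sources = http1
>     agent1.channels = fc1
>     agent1.sinks = null1
>
>     # http source that the load-generating script posts to
>     agent1.sources.http1.type = http
>     agent1.sources.http1.port = 8080
>     agent1.sources.http1.channels = fc1
>
>     # file channel left at its defaults
>     agent1.channels.fc1.type = file
>
>     # null sink, only added partway through the test
>     agent1.sinks.null1.type = null
>     agent1.sinks.null1.channel = fc1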
>
> The test goes as follows:
>
> - start the agent without sink
> - run a script that sends http requests in multiple threads to the http
> source (the script simply calls the url http://localhost:8080/?key=value
> over and over again, whereby value is a random string of 100 chars).
> - this script does about 100 requests per second on my machine. I leave
> it running for a while, such that the file channel contains about 20000
> events.
> - add the null sink to the configuration (around 11:14:33 in the log).
> - observe the logging of the null sink. You'll see in the log file that
> it takes more than 10 seconds per 1000 events (until about event 5000,
> around 11:15:33)
> - stop the http request generating script (i.e. no more writing in file
> channel)
> - observe the logging of the null sink: events 5000 until 20000 are all
> processed within a few seconds.
>
> In the attachment:
> - flume log
> - thread dumps while the ingest was running and the null sink was enabled
> - config (agent1.conf)
>
> I also tried with more sinks (4), see agent2.conf. The results are the same.
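>
> (for reference, the agent2.conf variant adds extra null sinks draining the
> same file channel, roughly like this:
>
>     agent1.sinks = null1 null2 null3 null4
>     agent1.sinks.null2.type = null
>     agent1.sinks.null2.channel = fc1
>     # ...and likewise for null3 and null4
> )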
>
> Thanks for looking into this,
> Jan
>
>
> On 11/14/2013 05:08 PM, Brock Noland wrote:
>> On Thu, Nov 14, 2013 at 2:50 AM, Jan Van Besien <janvb@ngdata.com> wrote:
>>
>>      On 11/13/2013 03:04 PM, Brock Noland wrote:
>>       > The file channel uses a WAL which sits on disk. Each time an
>>       > event is committed an fsync is called to ensure that data is durable.
>>       > Without this fsync there is no durability guarantee. More details here:
>>       > https://blogs.apache.org/flume/entry/apache_flume_filechannel
>>
>>      Yes indeed. I was just not expecting the performance impact to be
>>      that big.
>>
>>
>>       > The issue is that when the source is committing one-by-one it's
>>       > consuming the disk doing an fsync for each event. I would find a
>>       > way to batch up the requests so they are not written one-by-one or
>>       > use multiple disks for the file channel.
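>>
>>      ("multiple disks" here boils down to pointing the file channel's dataDirs
>>      at several mount points; a sketch with made-up paths:
>>
>>          agent.channels.fc1.checkpointDir = /disk1/flume/checkpoint
>>          agent.channels.fc1.dataDirs = /disk1/flume/data,/disk2/flume/data,/disk3/flume/data,/disk4/flume/data
>>      )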
>>
>>      I am already using multiple disks for the channel (4).
>>
>>
>> Can you share your configuration?
>>
>>      Batching the requests is indeed what I am doing to prevent the
>>      file channel from being the bottleneck (using a Flume agent with a
>>      memory channel in front of the agent with the file channel), but it
>>      inherently means that I lose end-to-end durability because events
>>      are buffered in memory before being flushed to disk.
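>>
>>      (that two-tier setup is roughly: tier 1 = source -> memory channel ->
>>      avro sink, tier 2 = avro source -> file channel -> final sink; a sketch
>>      with illustrative names and ports:
>>
>>          # tier 1: fast, but events sit in memory until forwarded
>>          tier1.channels.mc1.type = memory
>>          tier1.sinks.avro1.type = avro
>>          tier1.sinks.avro1.hostname = tier2-host
>>          tier1.sinks.avro1.port = 4141
>>          tier1.sinks.avro1.channel = mc1
>>
>>          # tier 2: durable file channel in front of the final sink
>>          tier2.sources.avro1.type = avro
>>          tier2.sources.avro1.bind = 0.0.0.0
>>          tier2.sources.avro1.port = 4141
>>          tier2.sources.avro1.channels = fc1
>>          tier2.channels.fc1.type = file
>>      )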
>>
>>
>> I would be curious to know though if you doubled the sinks if that would
>> give more time to readers. Could you take three-four thread dumps of the
>> JVM while it's in this state and share them?
>>
>




--
have a good day!
chenshang'an




--
have a good day!
chenshang'an

[Attachment: flume.conf.txt (channel and sink config)]
[Attachment: dumps.txt (thread dumps)]