Subject: Re: Broker Hangs after some time - or does nothing.
From: Chirag Pujara <chiragpujara@gmail.com>
To: users@activemq.apache.org
Date: Tue, 11 Feb 2014 16:46:38 -0600

I tried again, and this time I didn't send a bunch of messages at the same
time. I noticed that some messages get processed right away, but some stay
on the queue and don't get processed. If I send a few more messages, say 50,
it processes 40 and leaves 10 on the queue. I can send more messages, but
not all of them get processed.

I am attaching my config file. I have enabled producerFlowControl and
removed slowConsumerStrategy. I am using activemq-5.10-SNAPSHOT.

On Mon, Feb 10, 2014 at 9:04 PM, Chirag Pujara wrote:
> I said the broker hangs because if I try to send more messages, the
> messages on the queue don't get processed. I cannot even create a queue
> or send a message using the web console.
>
> Will try again, and this time I will monitor memory.
>
> On Feb 10, 2014 6:22 PM, "artnaseef" wrote:
>>
>> Perhaps I misunderstood "Broker Hangs". It helps to have more detailed
>> symptoms.
>>
>> The messages in the DLQ are failing at the consumer - either transactions
>> rolling back, or otherwise failing before commit, or a CLIENT_ACKNOWLEDGE
>> client calling Session.recover().
>>
>> See here for more details:
>> http://activemq.apache.org/message-redelivery-and-dlq-handling.html
>>
>> You are right about the web console -- it will most likely stop
>> functioning once the broker runs out of memory.
>>
>> > I have producerFlowControl set to false. I separated my producers and
>> > consumers, but I was still able to reproduce the issue. For some time,
>> > messages piled up in the queue; then they started to end up in the DLQ.
>> > In the DLQ, most of them have this property:
>> >
>> > dlqDeliveryFailureCause: java.lang.Throwable: Exceeded redelivery policy
>> > limit:RedeliveryPolicy {destination = null, collisionAvoidanceFactor =
>> > 0.15, maximumRedeliveries = 6, maximumRedeliveryDelay = -1,
>> > initialRedeliveryDelay = 1000, useCollisionAvoidance = false,
>> > useExponentialBackOff = false, backOffMultiplier = 5.0,
>> > redeliveryDelay = 1000}, cause:null
>> >
>> > Does this have anything to do with the issue?
>> >
>> > I was not monitoring heap size when it got stuck; that will be the next
>> > thing to do. But I can reach the web console on that ActiveMQ instance,
>> > so if it had memory issues, how come the console still works? Isn't it
>> > the same process?
>> >
>> > thanks,
>> > chirag
>> >
>> > On Sat, Feb 8, 2014 at 12:36 AM, artnaseef wrote:
>> >
>> >> How about a stack trace on the client? Can you look for the consumer
>> >> threads and see what they are doing?
>> >>
>> >> Note that with producers and consumers on the same connection, it's
>> >> possible to reach a deadlock if producer-flow-control kicks in, because
>> >> the entire connection is blocked, not just the one producer. There are
>> >> two ways this can happen. First, if the client produces and consumes
>> >> the same destination.
>> >> Second, if the client is producing to one destination and consuming a
>> >> second, and another client is consuming the first and producing to the
>> >> second. More complicated possibilities exist as well.
>> >>
>> >> Your best bet is never to consume and produce on the same connection,
>> >> so that consumption never blocks due to producer-flow-control. In that
>> >> scenario, deadlocks won't happen because consumers can always consume.
>> >>
>> >> Back to stack traces - if the consuming threads are blocked waiting to
>> >> obtain a lock while producing threads hold that lock, that could
>> >> indicate the problem.
>> >>
>> >> Anyway, slow consumption is quite often the cause of broker hangs, so
>> >> looking for slow consumption and understanding how to speed up
>> >> consumption (or slow down production) is important. If the broker JVM
>> >> becomes completely unresponsive, try connecting jconsole or visualvm
>> >> before the broker hangs and then watch memory. If the JVM is running
>> >> out of heap or PermGen space, that would explain the hung broker.
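
The flow-control behavior described above can also be relaxed on the broker
side: with producerFlowControl="false" on queues, a fast producer spools
messages to the persistence store (up to the storeUsage limit) instead of
blocking the connection. A minimal sketch against a stock activemq.xml - the
limit values are illustrative, not a recommendation:

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- Flow control off: sends spool to the store rather than block -->
      <policyEntry queue="&gt;" producerFlowControl="false"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>

<systemUsage>
  <!-- With sendFailIfNoSpace="true", a send fails with a JMSException
       once the store fills up, instead of blocking indefinitely -->
  <systemUsage sendFailIfNoSpace="true">
    <storeUsage>
      <storeUsage limit="2 gb"/>
    </storeUsage>
  </systemUsage>
</systemUsage>
```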
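
The RedeliveryPolicy printed in the DLQ messages above is the client-side
default (six redeliveries, then dead-letter). Per the redelivery page linked
above, retries can instead be handled on the broker with the redelivery
plugin (ActiveMQ 5.7+, which also needs schedulerSupport="true" on the
<broker> element). A sketch - the values simply mirror the defaults shown in
the DLQ dump:

```xml
<plugins>
  <redeliveryPlugin fallbackToDeadLetter="true"
                    sendToDlqIfMaxRetriesExceeded="true">
    <redeliveryPolicyMap>
      <redeliveryPolicyMap>
        <defaultEntry>
          <!-- Same numbers as the client-side defaults in the DLQ dump -->
          <redeliveryPolicy maximumRedeliveries="6"
                            initialRedeliveryDelay="1000"
                            redeliveryDelay="1000"/>
        </defaultEntry>
      </redeliveryPolicyMap>
    </redeliveryPolicyMap>
  </redeliveryPlugin>
</plugins>
```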
Attachment: activemq.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:amq="http://activemq.apache.org/schema/core"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">

    <!-- Allows us to use system properties as variables in this configuration file -->
    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="locations">
            <value>file:${activemq.conf}/credentials.properties</value>
        </property>
    </bean>

    <!-- The <broker> element is used to configure the ActiveMQ broker. -->
    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="activemq" dataDirectory="/opt/gwx/activemqdata">

        <destinationPolicy>
            <policyMap>
                <policyEntries>
                    <policyEntry producerFlowControl="false" topic="&gt;">
                        <pendingMessageLimitStrategy>
                            <constantPendingMessageLimitStrategy limit="1000"/>
                        </pendingMessageLimitStrategy>
                    </policyEntry>
                    <policyEntry producerFlowControl="true" queue="&gt;">
                        <deadLetterStrategy>
                            <individualDeadLetterStrategy queuePrefix="DLQ." useQueueForQueueMessages="true"/>
                        </deadLetterStrategy>
                    </policyEntry>
                    <!--
                    <policyEntry queue=">">
                        <slowConsumerStrategy>
                            <abortSlowConsumerStrategy abortConnection="true"/>
                        </slowConsumerStrategy>
                    </policyEntry>
                    -->
                </policyEntries>
            </policyMap>
        </destinationPolicy>

        <managementContext>
            <managementContext createConnector="false"/>
        </managementContext>

        <persistenceAdapter>
            <replicatedLevelDB bind="tcp://0.0.0.0:0" directory="/opt/gwx/activemqdata/leveldb" replicas="4" zkAddress="host-1:2181,host-1:2182,host-2:2181,host-2:2182" zkPassword="password" zkPath="/opt/gwx/activemqdata"/>
        </persistenceAdapter>

        <systemUsage>
            <systemUsage sendFailIfNoSpace="true">
                <memoryUsage>
                    <memoryUsage limit="256 mb"/>
                </memoryUsage>
                <storeUsage>
                    <storeUsage limit="2 gb"/>
                </storeUsage>
                <tempUsage>
                    <tempUsage limit="512 mb"/>
                </tempUsage>
            </systemUsage>
        </systemUsage>

        <transportConnectors>
            <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
            <transportConnector name="openwire" uri="tcp://0.0.0.0:${openwirePort}?maximumConnections=1000&amp;wireformat.maxFrameSize=104857600"/>
            <transportConnector name="amqp" uri="amqp://0.0.0.0:${amqpPort}?maximumConnections=1000&amp;wireformat.maxFrameSize=104857600"/>
        </transportConnectors>

        <!-- destroy the spring context on shutdown to stop jetty -->
        <shutdownHooks>
            <bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook"/>
        </shutdownHooks>

    </broker>

    <!--
        Enable web consoles, REST and Ajax APIs and demos
        Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
    -->
    <import resource="jetty.xml"/>

</beans>
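
One thing worth double-checking in the attached config: replicatedLevelDB
elects a master via ZooKeeper quorum, and the documentation states that at
least (replicas/2)+1 nodes must be online; with replicas="4" that means
three nodes, the same as with five replicas, which is why an odd count is
the usual choice. A sketch of that change only - the addresses and paths
are the ones from the attachment:

```xml
<persistenceAdapter>
  <!-- Odd replica count: quorum is (replicas/2)+1, i.e. 2 of 3 here -->
  <replicatedLevelDB bind="tcp://0.0.0.0:0"
                     directory="/opt/gwx/activemqdata/leveldb"
                     replicas="3"
                     zkAddress="host-1:2181,host-1:2182,host-2:2181,host-2:2182"
                     zkPassword="password"
                     zkPath="/opt/gwx/activemqdata"/>
</persistenceAdapter>
```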