From: Önder Sezgin (JIRA)
To: issues@camel.apache.org
Reply-To: dev@camel.apache.org
Date: Thu, 7 Sep 2017 08:42:00 +0000 (UTC)
Subject: [jira] [Commented] (CAMEL-11750) Camel route with multicast (parallel) generate huge CPU load

    [ https://issues.apache.org/jira/browse/CAMEL-11750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16156693#comment-16156693 ]

Önder Sezgin commented on CAMEL-11750:
--------------------------------------

Well, I would say that if you have long-running tasks and push many of them to your direct endpoint, flooding your processor chain / route, that is a design issue. I am not convinced that putting a delay on each task you send to the direct endpoint is the right fix. It may resolve your issue precisely because your tasks are long-running, but what if the tasks are not long-running and you simply want to multicast and parallelize your route? A delay would break that scenario. If long-running tasks are flooding your direct endpoint, I think you should use the Throttler EIP to reduce the number of messages sent to it; a minimal sketch follows below.
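A minimal sketch of that idea, assuming a hypothetical "direct:incoming" entry endpoint and a rate of roughly 10 messages per second feeding the reporter's "direct:start" route (both the endpoint name and the rate are illustrative assumptions, not taken from the reporter's project):

{code}
import org.apache.camel.builder.RouteBuilder;

public class ThrottledEntryRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // "direct:incoming" and the 10 msg/s rate are assumptions for illustration
        from("direct:incoming").routeId("throttled-entry")
            // Throttler EIP: let at most 10 exchanges per 1000 ms through
            // to the multicast route instead of flooding it
            .throttle(10).timePeriodMillis(1000)
            .to("direct:start");
    }
}
{code}

The actual limit would of course have to be tuned to how long the downstream tasks really take.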
> Camel route with multicast (parallel) generate huge CPU load
> ------------------------------------------------------------
>
>                 Key: CAMEL-11750
>                 URL: https://issues.apache.org/jira/browse/CAMEL-11750
>             Project: Camel
>          Issue Type: Bug
>          Components: camel-core
>            Reporter: Nazar Vishka
>            Priority: Critical
>
> I've created a simple [Spring Camel route application|https://github.com/leofromgroza/camel-long-term-route] that can be used to reproduce the issue. Here we have a very simple route:
> {code}
> public void configure() throws Exception {
>     from("direct:start").routeId("start")
>         .multicast().parallelProcessing()
>             .to("direct:very-long-task", "direct:long-task")
>         .end();
>
>     from("direct:long-task").routeId("long-task")
>         .log("Started long-task")
>         .process(exchange -> Thread.sleep(5000))
>         .log("Finished long-task")
>         .end();
>
>     from("direct:very-long-task").routeId("very-long-task")
>         .log("Started very-long-task")
>         .process(exchange -> Thread.sleep(35000))
>         .log("Finished very-long-task")
>         .end();
> }
> {code}
> From our main route 'start' we start two sub-routes in parallel: 'long-task' and 'very-long-task'. They just do something for some period of time and do not generate any load on the system.
> But I found that when one task finishes earlier than the other, the route starts to generate a huge CPU load. Here you can see the CPU usage during execution of the Camel route mentioned above (after 'long-task' finished, CPU usage increased from 0 to 12.5%):
> !https://content.screencast.com/users/NazarV/folders/Jing/media/830268f0-d184-4c57-adb1-b782ea63fa6d/2017-09-06_1241.png!
> The screenshot was taken while the route was running on my Windows PC with 4 physical CPU cores + 4 HT. On Unix systems we found that, after 'long-task' ended, it used 100% of one core until the end of the work.
> One more interesting thing I've found is that the main load on the system was generated by the MulticastProcessor-AggregateTask thread, which was spending a lot of time in java.util.concurrent.locks.AbstractQueuedSynchronizer.ConditionObject#awaitNanos, called from java.util.concurrent.DelayQueue#poll(long, java.util.concurrent.TimeUnit):
> {code}
> "Camel (camel-1) thread #2 - MulticastProcessor-AggregateTask" #29 daemon prio=5 os_prio=0 tid=0x00000000215e3000 nid=0x7a0 runnable [0x0000000022eaf000]
>    java.lang.Thread.State: RUNNABLE
> 	at java.lang.Thread.isInterrupted(Native Method)
> 	at java.lang.Thread.interrupted(Thread.java:944)
> 	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.checkInterruptWhileWaiting(AbstractQueuedSynchronizer.java:2002)
> 	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2079)
> 	at java.util.concurrent.DelayQueue.poll(DelayQueue.java:273)
> 	at org.apache.camel.util.concurrent.SubmitOrderedCompletionService.poll(SubmitOrderedCompletionService.java:127)
> 	at org.apache.camel.processor.MulticastProcessor$AggregateOnTheFlyTask.aggregateOnTheFly(MulticastProcessor.java:463)
> 	at org.apache.camel.processor.MulticastProcessor$AggregateOnTheFlyTask.run(MulticastProcessor.java:418)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
> {code}
> In DelayQueue#poll(long, java.util.concurrent.TimeUnit) we have this piece of code:
> {code}
> E first = q.peek();
> if (first == null) {
>     ...
> } else {
>     long delay = first.getDelay(NANOSECONDS);
>     ...
>     long timeLeft = available.awaitNanos(delay);
> {code}
> During debugging I found that E first is an object of class [org.apache.camel.util.concurrent.SubmitOrderedCompletionService.SubmitOrderFutureTask|https://github.com/apache/camel/blob/camel-2.19.2/camel-core/src/main/java/org/apache/camel/util/concurrent/SubmitOrderedCompletionService.java] and, very interestingly, its [getDelay(TimeUnit) method|https://github.com/apache/camel/blob/camel-2.19.2/camel-core/src/main/java/org/apache/camel/util/concurrent/SubmitOrderedCompletionService.java#L64] always returns 1. I think that is the source of the current issue:
> {code}
> public long getDelay(TimeUnit unit) {
>     // if the answer is 0 then this task is ready to be taken
>     return id - index.get();
> }
> {code}
> So when the delay is 1, the AggregateTask thread is effectively not sleeping at all and generates a huge CPU load the whole time it is waiting for the last task to finish.
> I think that the [getDelay(TimeUnit) method|https://github.com/apache/camel/blob/camel-2.19.2/camel-core/src/main/java/org/apache/camel/util/concurrent/SubmitOrderedCompletionService.java#L64] should somehow be re-implemented.
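A minimal sketch of the kind of re-implementation suggested above, reusing the id and index fields of SubmitOrderFutureTask shown in the quoted snippet: if getDelay() reported the "not ready yet" state in a coarser unit, DelayQueue.poll() would park the aggregator thread instead of spinning. The millisecond back-off here is an assumption for illustration, not the change actually applied in Camel.

{code}
public long getDelay(TimeUnit unit) {
    // if the answer is 0 then this task is ready to be taken
    long answer = id - index.get();
    if (answer <= 0) {
        return answer;
    }
    // Not this task's turn yet: report the remaining positions as milliseconds,
    // so DelayQueue.poll(timeout, NANOSECONDS) sees ~1,000,000 ns instead of 1 ns
    // and actually parks the thread between checks (assumed back-off, for illustration).
    return unit.convert(answer, TimeUnit.MILLISECONDS);
}
{code}

The exact back-off is a judgement call; the point is only that a 1 ns delay makes awaitNanos() return almost immediately, which is what turns the aggregation loop into a busy spin.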