From: marcelcasado
To: users@camel.apache.org
Date: Tue, 3 Dec 2013 13:04:59 -0800 (PST)
Subject: Issues using camel split and aggregator together

I have an issue when using the Camel splitter and aggregator together: messages continue down the route before all of the messages from the split have been completely
processed. In my route, the split-and-aggregated messages reach the "finally" block repeatedly, when I was expecting it to be hit only once, after all of the split messages have completed. I have tried different things but have not been able to hold the split messages back from hitting the "finally" block. What I am trying to do is split XML elements and then batch them using "aggregate" so I can do batch processing. Ideally, when all of the processing is done, I want to hit the "finally" block. Here are the Camel routes:

fromF(readUri, inboundDataDir, pollingInterval, filterBeanName)
    .routeId(routeId)
    .doTry()
        .to("bean:" + interfaceActivityReportEnricher)
        .to("bean:" + tenantIdEnricher)
        .split(stax(splitClass, false))
            .streaming()
            .to("bean:dataBatchEnricher")
            .to("direct:" + batchingStrategy)
    .endDoTry()
    .doCatch(Exception.class)
        .to("bean:errorProcessor?method=handleError(${file:absolute.path}, ${exception}, ${property." + INTERFACE_ACTIVITY_REPORT_PROPERTY_NAME + "})")
    .doFinally()
        .to("bean:completedFileNameEnricher?method=enrichWithCompletedFileName(*," + inboundDataDir + "," + outboundDataDir + ")")
        .setBody()
            .simple("${property." + INTERFACE_ACTIVITY_REPORT_PROPERTY_NAME + ".report}")
        .marshal(jaxbDataFormat)
        .toF(writeUri, outboundDataDir, interfaceActivityReportDir)
    .end();

if (batchingStrategy.equals("sizeStrategy-TenantIdCorrelation")) {
    from("direct:sizeStrategy-TenantIdCorrelation")
        // Aggregate all exchanges correlated by the TENANT_ID_PROPERTY_NAME property.
        // Aggregate them using the ArrayListAggregationStrategy, and after N
        // messages have been aggregated, complete the aggregation and send the
        // batch to the processor.
        .aggregate(property(TENANT_ID_PROPERTY_NAME), new ArrayListAggregationStrategy())
            .aggregationRepository(repo)
            .completionSize(8)
            .completionTimeout(10000)
            .forceCompletionOnStop()
            .parallelProcessing()
            .to("bean:" + importProcessor)
        .end();
}

Thanks,

-Marcel

--
View this message in context: http://camel.465427.n5.nabble.com/Issues-using-camel-split-and-aggregator-together-tp5744266.html
Sent from the Camel - Users mailing list archive at Nabble.com.
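P.S. The size-based completion that the aggregate step above relies on can be sketched without any Camel dependencies. This is only an illustration of the behavior (the class and method names are made up, not Camel API): exchanges correlated by a key are collected, and a batch is released only once completionSize messages have arrived for that key.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SizeBatcher {
    private final int completionSize;
    private final Map<String, List<String>> pending = new HashMap<>();

    public SizeBatcher(int completionSize) {
        this.completionSize = completionSize;
    }

    // Add one message body under its correlation key; return the completed
    // batch once completionSize messages have arrived, otherwise null.
    public List<String> add(String correlationKey, String body) {
        List<String> batch =
            pending.computeIfAbsent(correlationKey, k -> new ArrayList<>());
        batch.add(body);
        if (batch.size() >= completionSize) {
            pending.remove(correlationKey);
            return batch;
        }
        return null;
    }
}
```

With a completion size of 8, only the eighth message for a given tenant releases a batch; every earlier message returns nothing, which matches the observation that downstream steps (and the route's "finally" block) run before any batch is complete.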