From: Jim Newsham <jnewsham@referentia.com>
Date: Wed, 22 Dec 2010 14:07:48 -1000
To: users@camel.apache.org
Subject: Re: long-running requests over jms

On 12/21/2010 9:37 PM, Christian Schneider wrote:
> I think on JMS you never can be sure that the consumer is working on a request. (When using simple request / reply).

Yeah, that's precisely the issue I was hoping could be addressed. For direct routes, you know that the consumer is working on the request, but for jms routes, you don't. I thought, why can't jms routes be made to work a little more like direct routes? Long-running requests could work in just the same way as short-running requests, and if the "link" fails, the producer gets an exception.
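For context, our remoting setup looks roughly like the sketch below. The queue and bean names ("serviceQueue", "myService") are made up for illustration; the real routes are more involved.

import org.apache.camel.builder.RouteBuilder;

public class RemotingRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Service side: requests arriving on the queue are handed to the bean;
        // because the exchange is InOut, the result is sent back via JMSReplyTo.
        from("activemq:queue:serviceQueue")
            .to("bean:myService");

        // Client side: an InOut request over JMS. If no reply arrives within
        // requestTimeout (20s is the default), the caller gets an
        // org.apache.camel.ExchangeTimedOutException.
        from("direct:invokeService")
            .to("activemq:queue:serviceQueue?requestTimeout=20000");
    }
}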
> So my suggestion is to simply offer a decoupled service that can be asked about the requests the consumer processes at the moment. I am not sure if this can be generically supported by camel.
> Perhaps camel could offer a way to look into a route and see which requests it processes via JMX. As this would be quite camel specific I guess the better way would be to write the monitoring service on your own.
>
> Another simple solution would be to let the consumer write keep-alive messages on a separate topic that also contain information on the requests it works on. So you could also monitor the progress.
>
> The good thing in writing this on your own is that you could even send a progress in percent if this is possible in your case. Then the client could display how far the processing went.

Yeah, I was hoping to avoid creating a dichotomy in the code, where the service interfaces for longer-running requests have to be coded differently from the service interfaces for short-running requests. Not to mention the added implementation complexity for each service interface, and the added complexity in the routes (I can't use InOut routes anymore, so I have to wire up reply routes). I was hoping that the communication layer could handle the complexity internally and expose a uniform model to the service layer. Seems like I don't have a lot of agreement on this idea, though. Thanks for the alternative suggestions.
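Just to make sure I understand the keep-alive suggestion, I imagine something roughly like the following on the consumer side. Everything here is invented for illustration: the topic name, the "requestId" header, and the percent loop; it isn't how our services are actually written.

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.ProducerTemplate;

public class LongRunningProcessor implements Processor {

    private final ProducerTemplate progressTemplate;

    public LongRunningProcessor(ProducerTemplate progressTemplate) {
        this.progressTemplate = progressTemplate;
    }

    public void process(Exchange exchange) throws Exception {
        String requestId = exchange.getIn().getHeader("JMSCorrelationID", String.class);
        for (int percent = 0; percent <= 100; percent += 10) {
            doSomeWork(); // a slice of the real work
            // publish a progress / keep-alive message that a monitoring client can subscribe to
            progressTemplate.sendBodyAndHeader("activemq:topic:serviceProgress", percent, "requestId", requestId);
        }
        exchange.getOut().setBody("done");
    }

    private void doSomeWork() throws Exception {
        Thread.sleep(1000); // stand-in for part of the long-running computation
    }
}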
Regards,
Jim

> Best regards
>
> Christian
>
> Am 22.12.2010 00:49, schrieb Jim Newsham:
>>
>> Thanks for the responses Claus and Willem. I couldn't find a JIRA issue related to jms exchange store, etc. I opened a new JIRA issue for long-running jms requests: https://issues.apache.org/jira/browse/CAMEL-3456.
>>
>> I'm trying to understand the pluggable store you're referring to. I think what you're saying is that the pluggable store is purely consumer-side, where the consumer would notice a long-running request, park the exchange in the store, and release the consumer thread, then when the result becomes available, it would pick up the exchange again and send it on. Did I understand correctly? And the purpose would be to free the consumer thread?
>>
>> I can see the potential problem that if a lot of long-running requests are made, some thread pool might be exhausted. For our own purposes, I'm not too worried about this problem, as the long-running requests will be rare and exceptional cases, and we have a small enough load and can afford to configure a large enough thread pool so that it's not an issue. So for us, the primary issue is that there's no way for the producer to be aware that the consumer is still processing the request (since setting requestTimeout to a large value is not a good idea).
>>
>> Regards,
>> Jim
>>
>> On 12/21/2010 4:43 AM, Claus Ibsen wrote:
>>> Hi
>>>
>>> We have some thoughts of letting camel-jms support using a pluggable store so it can persist the in-flight Exchange. And then on the reply consumer side, it can peek in the store to find the correlated Exchange, load it from the store, and continue routing the Exchange.
>>>
>>> Then we can support long-running use cases. Also if Camel crashes or you shut down Camel.
>>>
>>> I think we have created a JIRA ticket for it, but I may be mistaken. Feel free to see if you can dig out the JIRA. If not, create a new one and refer to this thread (using nabble etc.).
>>>
>>> On Tue, Dec 21, 2010 at 12:27 AM, Jim Newsham wrote:
>>>> Hi everyone,
>>>>
>>>> We are using Camel + ActiveMQ, with InOut messages and bean() routes, as a form of flexible remoting (remote service invocation). This has been working out quite well for us so far. One issue that we've run into is that while most service requests complete very quickly, some particular service requests can take a long time to execute (due to the processing they must perform) -- perhaps many minutes, or in extreme cases possibly an hour or more. However, if the request exceeds the configured jms "requestTimeout" parameter, then the requester will receive a timeout exception.
>>>>
>>>> I feel that the requestTimeout parameter alone is not flexible enough to do what we need. requestTimeout should be a somewhat small value so that the application is responsive to disconnects (we certainly can't set it to an hour; the 20 sec default seems reasonable). By contrast, there doesn't seem to be a very well-defined, reasonable upper bound on the long-running requests -- they could take an hour, perhaps more.
>>>>
>>>> I feel that what we need to ensure responsiveness while supporting long-running requests is some form of periodic, pending-request heartbeat. Coding this on an ad-hoc basis per request would be tedious and cumbersome. It would be great if I could set a "requestTimeout" on the producer endpoint (let's say 20s), and configure the consumer endpoint with a heartbeat "requestKeepalive" parameter (let's say 15s), and the consumer would send periodic messages to the producer which would reset its timeout counter, until the consumer finally sends the result.
>>>>
>>>> What do you think? Is such a proposal feasible? Any alternative ideas? I took a look at the jms component code, but I'm not quite sure where the producer timeout happens, or where a consumer keepalive processor would go.
>>>>
>>>> Thanks,
>>>> Jim
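P.S. To make the proposal from my original mail a bit more concrete, I imagine the endpoint configuration could look something like the sketch below. requestTimeout is an existing camel-jms option; "requestKeepalive" is the hypothetical option I'm proposing and does not exist in Camel today, and the endpoint names are again made up.

import org.apache.camel.builder.RouteBuilder;

public class KeepaliveProposalSketch extends RouteBuilder {
    @Override
    public void configure() {
        // Producer side: fail the request only if neither a reply nor a
        // keepalive has been seen within requestTimeout (20s).
        from("direct:invokeService")
            .to("activemq:queue:serviceQueue?requestTimeout=20000");

        // Consumer side: while a request is being processed, send a keepalive
        // back to the producer every 15s so that its timeout counter is reset.
        // Note: requestKeepalive is the proposed, hypothetical option.
        from("activemq:queue:serviceQueue?requestKeepalive=15000")
            .to("bean:myService");
    }
}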