Subject: Re: Using ActiveMQ For Distributed Replicated Cache
From: Tim Bain
Date: Thu, 26 Apr 2018 23:28:24 -0600
To: ActiveMQ Users <users@activemq.apache.org>

On Mon, Apr 23, 2018 at 11:48 PM, pragmaticjdev wrote:

> Highly appreciate the detailed replies and the clarifications on
> distributed cache vs. database.
> We are trying to build a distributed cache. I agree with all the inputs
> you shared for such a cache implementation. In summary it would mean:
>    1. The subscriber should clear the cache when it cannot connect to
>       the broker.
>    2. The publisher should not roll back the database transaction on
>       failures, as step #1 would be sufficient and the cache is loaded
>       as and when queried.

OK, if you're building a cache rather than a database, what serves as the
record copy of the data, from which the cache can be queried on demand?
Would you just query your OLTP database whenever you have a cache miss?

Also, my earlier responses weren't considering the possibility that you'd
be updating objects within the cache. In that case, I agree that clearing
the cache on disconnection/error is the right way to go. Sorry if that
caused any confusion.

> A couple of follow-up questions:
>
> 1.
>
>> Typically you would call it in a tight loop, so you're only as stale as
>> the amount of time it takes you to publish the messages received the
>> last time.
>
> How can one control the polling time of the consumer? My JMS consumer
> code from our Spring application looks like this:
>
>     @Component
>     public class Consumer {
>
>         @JmsListener(destination = "java-topic",
>                      containerFactory = "topicListenerFactory")
>         public void receiveTopicMessage(@Payload Person person)
>                 throws JMSException {
>             // update the local cache entry
>         }
>     }
>
> How do I change the above code to call it in a tight loop?

Ah, so you're using message-driven code rather than managing the connection
yourself. In that case you'd want to do the following to handle the error
(by clearing the cache):
https://stackoverflow.com/questions/40654586/spring-jms-set-errorhandler-for-jmslistener-annotated-method

You certainly could switch to explicitly managing the connection (see
http://activemq.apache.org/hello-world.html for an example of what that
code would look like), but that's not necessary if you'd rather use the
message-driven paradigm.

> Also, would that mean one or more threads would be constantly busy,
> leading to constant usage of CPU cycles?

If you were to switch to accessing the connection directly, you'd typically
include a small Thread.sleep() to prevent spin-waiting. I apologize if the
choice of the words "tight loop" implied spin-waiting; I just meant that
you would keep the sleeps relatively short, not that there wouldn't be any
at all.
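To make that concrete, here's a minimal sketch of what the error-handling
approach from that Stack Overflow link could look like: a listener container
factory whose ErrorHandler clears the local cache whenever message processing
fails, so stale entries get re-read from the record copy on the next miss.
The LocalCache interface and the bean wiring are hypothetical placeholders
for whatever your application actually uses; treat this as an illustration,
not a drop-in implementation.

    import javax.jms.ConnectionFactory;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.jms.config.DefaultJmsListenerContainerFactory;

    // Hypothetical cache abstraction; substitute whatever your app really uses.
    interface LocalCache {
        void clear();
    }

    @Configuration
    public class TopicListenerConfig {

        @Bean
        public DefaultJmsListenerContainerFactory topicListenerFactory(
                ConnectionFactory connectionFactory, LocalCache cache) {
            DefaultJmsListenerContainerFactory factory =
                    new DefaultJmsListenerContainerFactory();
            factory.setConnectionFactory(connectionFactory);
            // The @JmsListener destination is a topic, so use pub/sub mode.
            factory.setPubSubDomain(true);
            // Clear the local cache when the listener fails, so the cache is
            // repopulated from the truth store on subsequent misses.
            factory.setErrorHandler(t -> cache.clear());
            return factory;
        }
    }

And if you did decide to manage the connection yourself, the loop would look
roughly like the sketch below, modeled loosely on the hello-world example.
The broker URL and class name are purely illustrative, and the receive
timeout plays the role of the short sleep mentioned above: the thread blocks
briefly for the next message instead of spin-waiting.

    import javax.jms.Connection;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.ObjectMessage;
    import javax.jms.Session;
    import javax.jms.Topic;

    import org.apache.activemq.ActiveMQConnectionFactory;

    public class CacheUpdateLoop implements Runnable {

        @Override
        public void run() {
            // Illustrative broker URL; point this at your real broker.
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = null;
            try {
                connection = factory.createConnection();
                connection.start();
                Session session =
                        connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Topic topic = session.createTopic("java-topic");
                MessageConsumer consumer = session.createConsumer(topic);

                while (!Thread.currentThread().isInterrupted()) {
                    // Wait up to a second for the next update; the timeout keeps
                    // the loop responsive without burning CPU.
                    Message message = consumer.receive(1000);
                    if (message instanceof ObjectMessage) {
                        Object person = ((ObjectMessage) message).getObject();
                        // update the local cache entry from 'person'
                    }
                }
            } catch (JMSException e) {
                // Broker connection problem: clear the local cache so stale
                // entries aren't served, as discussed earlier in the thread.
            } finally {
                if (connection != null) {
                    try {
                        connection.close();
                    } catch (JMSException ignored) {
                        // best-effort cleanup
                    }
                }
            }
        }
    }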
> 2.
> For my question on the overloaded subscriber, I didn't completely follow
> your suggestion for not being worried about this scenario. You mentioned:
>
>> If you're going with a distributed cache, then don't worry about this,
>> because you'll handle it with queries to the truth store when you have
>> cache misses (at the cost of slower performance).
>
> Assume there are two app servers with an object loaded in the local
> cache. An update to this object occurs on app server 1, which publishes
> that object on the JMS queue. Here, if app server 2 is overloaded (busy
> CPU), the JMS consumer thread might not get a chance to execute at that
> instant. What happens in such cases? Does ActiveMQ retry after some time?

In that scenario, your fate is in the hands of the JRE's thread scheduler.
There's no retrying at the application level; the thread simply sits there
with its execution pointer set to the next operation to be done, but it
might take time (milliseconds, not minutes) until the JRE decides that this
particular thread should be allowed to run its operations.

With that said, if the correct operation of your system depends on the
cache being updated before the subsequent operation is evaluated (i.e.
multi-process synchronization), then an asynchronous cache based on
ActiveMQ is not what you want, and you need to be hitting a
(non-distributed) database such as a SQL-based RDBMS for all of your
operations. Distributed systems have a certain amount of unpredictability
for operations that run in parallel, so if your use case can't tolerate
that, you need an external data store such as an RDBMS to enforce
ordering/synchronization. I haven't been able to tell from what you've
written whether this is your situation, but make sure you're clear on the
tradeoffs of a distributed/parallel architecture like the one you're
proposing, and make sure you can accept them.

> Can the number of such retries be configured? It could so happen that
> app server 2 remains in an overloaded state for a longer duration (maybe
> 30 minutes).

Being "overloaded" doesn't mean your threads won't get to run, unless
you've done something dumb like setting that thread's priority lower than
the other threads in the application. In the default configuration, all
threads will be scheduled more or less evenly, so they'll make progress,
just not as fast as they would if the box were idle. There's nothing to
worry about here, unless you can't accept the inherent unpredictability of
a distributed system (see the previous paragraphs).

Tim