Message-ID: <17323314.post@talk.nabble.com>
Date: Mon, 19 May 2008 10:00:07 -0700 (PDT)
From: Sridhar2008
To: dev@activemq.apache.org
Reply-To: dev@activemq.apache.org
Subject: Re: Cluster transport ...
X-Nabble-From: activemq@komandur.com
References: <17221068.post@talk.nabble.com> <46B44A74-DEB6-4667-8BEA-A5799ACE632D@gmail.com> <17246093.post@talk.nabble.com>

> Incidentally I'm intrigued by the DNS discovery stuff; it sounds great
> :). What did you have in mind? That each broker on startup would
> register a DNS entry (with a timeout maybe so they get removed) and

Yes, auto-registration. A couple of things to note:

- The 'timeout feature' is again specific to the DNS server; the existing
  servers would need to be enhanced to support this aging out. The
  standard TTL you see on a DNS entry serves a different purpose:
  controlling refresh between master and slave DNS servers.
- There is no standard API for dynamic DNS registration, so we would
  define an API, with a specific implementation to interact with the DNS
  server (folks can choose to change the implementation for their
  environment).
- Most messaging systems run in a trusted environment, but some use
  cases may need authentication before a broker is allowed to do
  'auto-registration'. This can be addressed later.

> clients would ping DNS when attempting to connect in the
> FailoverTransport?

Yes, we would specify the cluster name in the client config and they
would connect using 'ClusterTransport' :-)

James.Strachan wrote:
> 
> 2008/5/15 Sridhar2008 :
>>
>> Rob/James,
>>
>> Thanks for the feedback. I will address both of your questions in a
>> combined way :-)
>>
>> Rob> Do you need persistent messaging - or non-persistent only?
>> James> or just pick one broker per operation/transaction?
>>
>> Initially, it is going to be the above. Since the messaging usage
>> scenarios are many, I will just assume the use case of interest for
>> now is durable transfer using a cluster for high availability.
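To make the "clients would ping DNS" step concrete, here is a minimal sketch of how a client could turn SRV-style records (priority, weight, port, target) registered by each broker into a failover URI for the FailoverTransport. The record layout follows the standard SRV format, but the class and method names here are my own illustration, not an existing ActiveMQ API:

```java
import java.util.List;

// Hypothetical client-side helper: given the raw SRV records a DNS query
// for the cluster name returned, build a failover: transport URI listing
// every registered broker. Assumes each record looks like
// "<priority> <weight> <port> <target>", e.g. "10 5 61616 broker1.example.com."
public class ClusterDnsResolver {

    /** Build a failover: URI from raw SRV record strings. */
    public static String toFailoverUri(List<String> srvRecords) {
        StringBuilder uri = new StringBuilder("failover:(");
        for (int i = 0; i < srvRecords.size(); i++) {
            String[] parts = srvRecords.get(i).trim().split("\\s+");
            String port = parts[2];
            // SRV targets are fully qualified and end with a trailing dot
            String host = parts[3].endsWith(".")
                    ? parts[3].substring(0, parts[3].length() - 1)
                    : parts[3];
            if (i > 0) {
                uri.append(',');
            }
            uri.append("tcp://").append(host).append(':').append(port);
        }
        return uri.append(')').toString();
    }
}
```

A client would run this lookup each time it (re)connects, so brokers that aged out of DNS simply drop out of the candidate list.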
>> In this case I am not sure that it buys much to send to multiple
>> brokers simultaneously (see my next paragraph below) - it adds
>> additional headaches as James cites below (however, there may be
>> other use cases where this might be useful), in addition to making
>> the client-side logic complex.
> 
> Agreed.
> 
> For sending to a topic (outside of a transaction which may include
> other operations), sending the message to all brokers is a no
> brainer. I guess so long as acknowledgements only get sent to the
> broker they came from & for queue sending we only send to one of the
> brokers, it should be fairly straightforward.
> 
> (So a little bit of hacking on the FanOutTransport should do the
> trick I think).
> 
> We could get clever going forward: rather than randomly picking one
> of the brokers (or round robin), we could partition destinations
> across the available brokers.
> 
>> Some thoughts: the eventual goal is that a broker going down in the
>> cluster implies a capacity hit and not a service hit. So I would
>> rather solve this use case as a distributed storage issue (after
>> 'cluster transport' is added) - investigate either a DHT-based
>> solution or perhaps something like Hadoop with a JDBC interface. The
>> state is replicated on multiple nodes of the cluster, so the broker
>> that is down can be ignored (i.e. no need to sweat over recovering
>> its state).
> 
> Agreed.
> 
> FWIW distributing the state is trivial - the FanOutTransport can do
> this today really. The issue is ensuring consistency (so more to do
> with locking & consistency than moving the messages around).
> 
> Incidentally I'm intrigued by the DNS discovery stuff; it sounds great
> :). What did you have in mind? That each broker on startup would
> register a DNS entry (with a timeout maybe so they get removed) and
> clients would ping DNS when attempting to connect in the
> FailoverTransport?
> 
> -- 
> James
> -------
> http://macstrac.blogspot.com/
> 
> Open Source Integration
> http://open.iona.com
> 

-- 
View this message in context: http://www.nabble.com/Cluster-transport-...-tp17221068s2354p17323314.html
Sent from the ActiveMQ - Dev mailing list archive at Nabble.com.