qpid-commits mailing list archives

From conflue...@apache.org
Subject [CONF] Apache Qpid > Multiple Java Brokers - Use Cases
Date Thu, 20 Aug 2009 10:47:00 GMT
     <h2><a href="http://cwiki.apache.org/confluence/display/qpid/Multiple+Java+Brokers+-+Use+Cases">Multiple
Java Brokers - Use Cases</a></h2>
     <h4>Page <b>edited</b> by <a href="http://cwiki.apache.org/confluence/display/~ritchiem">Martin Ritchie</a></h4>
         <h2><a name="MultipleJavaBrokers-UseCases-Purpose"></a>Purpose</h2>

<p>This page is intended to outline the known use cases for running multiple Java Brokers,
addressing logged issues and limitations of the current implementation (as of V0.5). It is
<b>not</b> about clustering proper.</p>

<h2><a name="MultipleJavaBrokers-UseCases-UseCases"></a>Use Cases</h2>

<h3><a name="MultipleJavaBrokers-UseCases-HighVolumeTransientBroker"></a>High
Volume Transient Broker</h3>


<p>This use case relates to applications with a high residual message load, i.e. where
message data on the broker remains in memory for some time, or where consumption lags production
such that a backlog is constantly present in the broker queues.</p>

<p>This paradigm is reasonably common, partly because publication threads generally
handle only the simple publish call, while consumption threads often handle writes
to an RDBMS or other time-expensive processing. Thus a rate gap opens up and creates a data
backlog.</p>

<p>In this scenario, particularly for deployments on a 32-bit VM, the broker can exhaust
a 3GB heap or start to perform poorly as it approaches the maximum heap size.</p>


<p>The result is broker-side OoM or performance degradation requiring a bounce. Messages in flight are not
processed, and client applications experience connection loss.</p>

<p><b>Possible Solution A - Load Balancing Module</b></p>

<p>For our end users, we could potentially reduce the hassle factor of running 2 brokers
by providing a solution comprising a load-balancing module which would reside alongside the
broker, i.e. on the server side. This module would intercept published messages and share them
between multiple brokers (scaling according to application parameters). Consumers would require multiple
connections, but publication would be unaffected, and the burden of load balancing could be
shifted from the user application to Qpid.</p>
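<p>As a rough illustration of the distribution step such a module might perform, the following sketch (hypothetical class and names, not Qpid code) assigns each intercepted publish to the next broker in round-robin order:</p>

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

/**
 * Hypothetical sketch of the broker-selection core of a server-side
 * load-balancing module: each intercepted publish is assigned to the
 * next broker in round-robin order.
 */
class RoundRobinBrokerSelector {
    private final List<String> brokerUrls;
    private final AtomicLong counter = new AtomicLong();

    RoundRobinBrokerSelector(List<String> brokerUrls) {
        this.brokerUrls = brokerUrls;
    }

    /** Returns the broker that should receive the next published message. */
    String nextBroker() {
        int index = (int) (counter.getAndIncrement() % brokerUrls.size());
        return brokerUrls.get(index);
    }
}
```

<p>A real module would also need to track broker health and rebalance on failure; this shows only the selection policy.</p>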

<p><b>Possible Solution B - No Message Order, 2 Brokers</b></p>

<p>In this scenario, it would be possible to use 2 brokers where message order does not
matter. Publishing clients would use 2 connections and publish alternately to each broker,
providing a simplistic load-balancing solution. Consuming clients would then consume from
the 2 brokers, using the same topic name etc. The consumer could choose to consume in parallel,
thus potentially speeding up processing time, or take messages singly from the two sources.</p>
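<p>The parallel-consumption side can be sketched as below. This is illustrative only: the <code>BlockingQueue</code> sources stand in for real consumer connections, one per broker, and the merged stream makes no ordering guarantee across the two brokers.</p>

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * Sketch of the consuming side of Solution B: one thread per broker
 * drains its source into a shared queue, so the application processes
 * a single merged stream (cross-broker order is lost).
 */
class MergedConsumer {
    private final BlockingQueue<String> merged = new LinkedBlockingQueue<>();

    /** Start one draining thread per broker source. */
    void start(List<BlockingQueue<String>> sources) {
        for (BlockingQueue<String> source : sources) {
            Thread t = new Thread(() -> {
                try {
                    while (true) {
                        merged.put(source.take());
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            t.setDaemon(true);
            t.start();
        }
    }

    /** Block until the next message from either broker is available. */
    String next() throws InterruptedException {
        return merged.take();
    }
}
```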

<p><b>Possible Solution C - With Message Order, Paired Flow</b></p>

<p>Again, this uses 2 brokers, but this time works with the assumption that the application
data flows across the broker can be separated by source/destination. An easy solution for
this is simply to divide the required traffic by source or destination and place a share on
each broker.</p>

<p>This may necessitate multiple consuming connections (to each of the brokers) on the
client side where there are multiple sources feeding the same client. Alternatively, for some
applications, the clients can be segmented in pairs of publisher-consumer by flow.</p>
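<p>A minimal sketch of such a static division, assuming a hypothetical hash of the source/destination pair: every message for a given flow always lands on the same broker, which is what preserves per-flow message order.</p>

```java
/**
 * Sketch of a static flow-partitioning rule for Solution C: a flow is
 * identified by its source/destination pair, and each flow maps
 * deterministically to one broker. Names are illustrative only.
 */
class FlowPartitioner {
    private final String[] brokerUrls;

    FlowPartitioner(String... brokerUrls) {
        this.brokerUrls = brokerUrls;
    }

    /** Deterministically assign a flow (source + destination) to a broker. */
    String brokerFor(String source, String destination) {
        // floorMod keeps the index non-negative for any hash value
        int index = Math.floorMod((source + "->" + destination).hashCode(),
                                  brokerUrls.length);
        return brokerUrls[index];
    }
}
```

<p>The same rule must be applied on both the publishing and consuming side so that a flow's publisher and consumer agree on which broker carries it.</p>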

<p><b>Possible Solution D - Redirect to Passive Broker</b></p>

<p>An alternative approach might be to monitor heap use on the primary broker and kick
off a second broker once the first is under heavy load. Client connections (publishing and
consuming) would need to be redirected to the secondary broker until the first broker
recovers. This is a kind of active-passive pair approach; indeed, the secondary broker could
be up all the time and simply redirected to as required. Rob mentioned that AMQP 1-0 has the
concept of redirect, so it may be something we could look at to inform the solution.</p>
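<p>The monitoring half of this idea can be sketched with the JVM's standard <code>MemoryMXBean</code>. The class name and the threshold value are illustrative assumptions, and the redirect mechanism itself is not shown:</p>

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

/**
 * Sketch of the heap monitor for Solution D: poll the broker JVM's
 * heap via MemoryMXBean and report when usage crosses a threshold,
 * at which point clients would be redirected to the passive broker.
 */
class HeapWatcher {
    private final MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
    private final double threshold;   // e.g. 0.8 for 80% of max heap

    HeapWatcher(double threshold) {
        this.threshold = threshold;
    }

    /** True when used heap exceeds threshold * max heap. */
    boolean shouldRedirect() {
        MemoryUsage heap = memory.getHeapMemoryUsage();
        // getMax() can be undefined (-1); treat that as "don't redirect"
        return heap.getMax() > 0 && heap.getUsed() > threshold * heap.getMax();
    }
}
```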

<p>There are some questions around a redirect solution:</p>
<ul class="alternate" type="square">
	<li>might require the ability to manually override on an incoming connection, so that
if broker 1 is maxed out due to a down consumer, on restart that consumer can drain broker 1</li>
	<li>could the console be used to redirect connections, allowing operator control? Or
possibly a JMX script or similar (with required MBean method support added) could give
us a cheap solution that is not broker oriented</li>


	<li>Where message order is important, only a solution which separates flows in pairs
could be used</li>
	<li>Failover?</li>
	<li>Management of the brokers might need some scripting such that they can be brought
up &amp; down as a pair, to at least black-box the operations cost</li>
	<li>? Priority Queues ...</li>
</ul>

Apache Qpid - AMQP Messaging Implementation
Project:      http://qpid.apache.org
Use/Interact: mailto:commits-subscribe@qpid.apache.org
