activemq-commits mailing list archives

From: build...@apache.org
Subject: svn commit: r981754 - in /websites/production/activemq/content: cache/main.pageCache shared-file-system-master-slave.html
Date: Thu, 03 Mar 2016 21:21:57 GMT
Author: buildbot
Date: Thu Mar  3 21:21:57 2016
New Revision: 981754

Log:
Production update by buildbot for activemq

Modified:
    websites/production/activemq/content/cache/main.pageCache
    websites/production/activemq/content/shared-file-system-master-slave.html

Modified: websites/production/activemq/content/cache/main.pageCache
==============================================================================
Binary files - no diff available.

Modified: websites/production/activemq/content/shared-file-system-master-slave.html
==============================================================================
--- websites/production/activemq/content/shared-file-system-master-slave.html (original)
+++ websites/production/activemq/content/shared-file-system-master-slave.html Thu Mar  3 21:21:57 2016
@@ -82,7 +82,7 @@
   <tbody>
         <tr>
         <td valign="top" width="100%">
-<div class="wiki-content maincontent"><h2 id="SharedFileSystemMasterSlave-SharedFileSystemMasterSlave">Shared
File System Master Slave</h2><p>If you have a SAN or shared file system it can
be used to provide <em>high availability</em> such that if a broker is killed,
another broker can take over immediately.</p><div class="confluence-information-macro
confluence-information-macro-warning"><p class="title">Ensure your shared file locks
work</p><span class="aui-icon aui-icon-small aui-iconfont-error confluence-information-macro-icon"></span><div
class="confluence-information-macro-body"><p>Note that the requirements of this failover
system are a distributed file system like a SAN for which exclusive file locks work reliably.
If you do not have such a file system available, then consider using <a shape="rect" href="masterslave.html">MasterSlave</a>
instead, which implements something similar on commodity hardware using local file
systems, with ActiveMQ performing the replication.</p><div
  class="confluence-information-macro confluence-information-macro-note"><p class="title">OCFS2
Warning</p><span class="aui-icon aui-icon-small aui-iconfont-warning confluence-information-macro-icon"></span><div
class="confluence-information-macro-body"><p>Was testing using OCFS2 and both brokers
thought they had the master lock - this is because "OCFS2 only supports locking with 'fcntl'
and not 'lockf and flock', therefore mutex file locking from Java isn't supported."</p><p>From
<a shape="rect" class="external-link" href="http://sources.redhat.com/cluster/faq.html#gfs_vs_ocfs2"
rel="nofollow">http://sources.redhat.com/cluster/faq.html#gfs_vs_ocfs2</a> :<br
clear="none"> OCFS2: No cluster-aware flock or POSIX locks<br clear="none"> GFS:
fully supports Cluster-wide flocks and POSIX locks and is supported.<br clear="none">
See this JIRA for more discussion: <a shape="rect" class="external-link" href="https://issues.apache.org/jira/browse/AMQ-4378">https://issues.apache.org/jira/browse/AMQ-4378</a></p></div></div><div class="confluence-information-macro
confluence-information-macro-note"><p class="title">NFSv3 Warning</p><span
class="aui-icon aui-icon-small aui-iconfont-warning confluence-information-macro-icon"></span><div
class="confluence-information-macro-body"><p>In the event of an abnormal NFSv3 client
termination (i.e., the ActiveMQ master broker), the NFSv3 server will not timeout the lock
that is held by that client. This effectively renders the ActiveMQ data directory inaccessible
because the ActiveMQ slave broker can't acquire the lock and therefore cannot start up. The
only solution to this predicament with NFSv3 is to reboot all ActiveMQ instances to reset
everything.</p><p>Use of NFSv4 is another solution because it's design includes
timeouts for locks. When using NFSv4 and the client holding the lock experiences an abnormal
termination, by design, the lock is released after 30 seconds, allowing another client to
grab the lock. For more information about this, see <a shape="rect" class="external-link" href="http://blogs.netapp.com/eislers_nfs_blog/2008/07/part-i-since-nf.html"
rel="nofollow">this blog entry</a>.</p></div></div></div></div><p>Basically
you can run as many brokers as you wish from the same shared file system directory. The first
broker to grab the exclusive lock on the file is the master broker. If that broker dies and
releases the lock then another broker takes over. The slave brokers sit in a loop trying to
grab the lock from the master broker.</p><p>The following example shows how to
configure a broker for Shared File System Master Slave where <strong>/sharedFileSystem</strong>
is some directory on the shared file system. It is simply a matter of configuring a file-based store
to use a shared directory.</p><div class="code panel pdl" style="border-width: 1px;"><div
class="codeContent panelContent pdl">
+<div class="wiki-content maincontent"><h2 id="SharedFileSystemMasterSlave-SharedFileSystemMasterSlave">Shared
File System Master Slave</h2><p>If you have a SAN or shared file system it can
be used to provide <em>high availability</em> such that if a broker is killed,
another broker can take over immediately.</p><div class="confluence-information-macro
confluence-information-macro-warning"><p class="title">Ensure your shared file locks
work</p><span class="aui-icon aui-icon-small aui-iconfont-error confluence-information-macro-icon"></span><div
class="confluence-information-macro-body"><p>Note that the requirements of this failover
system are a distributed file system like a SAN for which exclusive file locks work reliably.
If you do not have such a file system available, then consider using <a shape="rect" href="masterslave.html">MasterSlave</a>
instead, which implements something similar on commodity hardware using local file
systems, with ActiveMQ performing the replication.</p><div
  class="confluence-information-macro confluence-information-macro-note"><p class="title">OCFS2
Warning</p><span class="aui-icon aui-icon-small aui-iconfont-warning confluence-information-macro-icon"></span><div
class="confluence-information-macro-body"><p>Was testing using OCFS2 and both brokers
thought they had the master lock - this is because "OCFS2 only supports locking with 'fcntl'
and not 'lockf and flock', therefore mutex file locking from Java isn't supported."</p><p>From
<a shape="rect" class="external-link" href="http://sources.redhat.com/cluster/faq.html#gfs_vs_ocfs2"
rel="nofollow">http://sources.redhat.com/cluster/faq.html#gfs_vs_ocfs2</a> :<br
clear="none"> OCFS2: No cluster-aware flock or POSIX locks<br clear="none"> GFS:
fully supports Cluster-wide flocks and POSIX locks and is supported.<br clear="none">
See this JIRA for more discussion: <a shape="rect" class="external-link" href="https://issues.apache.org/jira/browse/AMQ-4378">https://issues.apache.org/jira/browse/AMQ-4378</a></p></div></div><div class="confluence-information-macro
confluence-information-macro-note"><p class="title">NFSv3 Warning</p><span
class="aui-icon aui-icon-small aui-iconfont-warning confluence-information-macro-icon"></span><div
class="confluence-information-macro-body"><p>In the event of an abnormal NFSv3 client
termination (i.e., the ActiveMQ master broker), the NFSv3 server will not timeout the lock
that is held by that client. This effectively renders the ActiveMQ data directory inaccessible
because the ActiveMQ slave broker can't acquire the lock and therefore cannot start up. The
only solution to this predicament with NFSv3 is to reboot all ActiveMQ instances to reset
everything.</p><p>Use of NFSv4 is another solution because its design includes
timeouts for locks. When using NFSv4 and the client holding the lock experiences an abnormal
termination, by design, the lock is released after 30 seconds, allowing another client to
grab the lock. For more information about this, see <a shape="rect" class="external-link" href="http://blogs.netapp.com/eislers_nfs_blog/2008/07/part-i-since-nf.html"
rel="nofollow">this blog entry</a>.</p></div></div></div></div><p>Basically
you can run as many brokers as you wish from the same shared file system directory. The first
broker to grab the exclusive lock on the file is the master broker. If that broker dies and
releases the lock then another broker takes over. The slave brokers sit in a loop trying to
grab the lock from the master broker.</p><p>The following example shows how to
configure a broker for Shared File System Master Slave where <strong>/sharedFileSystem</strong>
is some directory on the shared file system. It is simply a matter of configuring a file-based store
to use a shared directory.</p><div class="code panel pdl" style="border-width: 1px;"><div
class="codeContent panelContent pdl">
 <pre class="brush: java; gutter: false; theme: Default" style="font-size:12px;">  
 &lt;persistenceAdapter&gt;
       &lt;kahaDB directory="/sharedFileSystem/sharedBrokerData"/&gt;
     &lt;/persistenceAdapter&gt;
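
For context, the persistenceAdapter snippet above normally sits inside a full
broker definition. The following is a minimal sketch of such a configuration,
assuming the standard ActiveMQ XML namespaces; the brokerName and transport
port are illustrative, not taken from the page above:

    <beans xmlns="http://www.springframework.org/schema/beans">
      <broker xmlns="http://activemq.apache.org/schema/core" brokerName="brokerA">

        <!-- Point the store at a directory on the shared file system.
             Every broker in the group uses the same directory; the first
             one to obtain the exclusive file lock becomes the master. -->
        <persistenceAdapter>
          <kahaDB directory="/sharedFileSystem/sharedBrokerData"/>
        </persistenceAdapter>

        <transportConnectors>
          <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
        </transportConnectors>

      </broker>
    </beans>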



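Clients normally pair this topology with the failover transport, so that when
the master dies and a slave grabs the lock, connections migrate automatically.
A sketch of such a connection URL, with hypothetical host names:

    failover:(tcp://brokerA:61616,tcp://brokerB:61616)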