tomcat-users mailing list archives

From Mark Eggers <its_toas...@yahoo.com>
Subject Re: Tomcat 6.0.18 clustering problem
Date Wed, 15 Jun 2011 22:47:03 GMT
----- Original Message -----

> From: Nilesh - MiKu <nilesh.m@directi.com>
> To: Tomcat Users List <users@tomcat.apache.org>; Mark Eggers <its_toasted@yahoo.com>
> Cc: 
> Sent: Tuesday, June 14, 2011 4:01 AM
> Subject: Re: Tomcat 6.0.18 clustering problem
> 
> Thanks, Mark. :)
> 
> My comments are inlined...
> 
> Over and above that, the underlying network pipeline also seems to be
> fine. I still don't understand what is wrong. After enabling the logs at
> FINE level, I could see the following...
> 
> Jun 14, 2011 10:26:38 AM org.apache.catalina.tribes.transport.ReceiverBase
> getBind
> FINE: Starting replication listener on address:xx.xx.xx.xxx
> Jun 14, 2011 10:26:38 AM org.apache.catalina.tribes.transport.ReceiverBase
> bind
> INFO: Receiver Server Socket bound to:/xx.xx.xx.xxx:4000
> Jun 14, 2011 10:26:38 AM
> org.apache.catalina.tribes.membership.McastServiceImpl setupSocket
> INFO: Setting cluster mcast soTimeout to 500
> Jun 14, 2011 10:26:38 AM
> org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
> INFO: Sleeping for 1000 milliseconds to establish cluster membership, start
> level:4
> Jun 14, 2011 10:26:39 AM
> org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
> INFO: Done sleeping, membership established, start level:4
> Jun 14, 2011 10:26:39 AM
> org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
> INFO: Sleeping for 1000 milliseconds to establish cluster membership, start
> level:8
> Jun 14, 2011 10:26:40 AM
> org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
> INFO: Done sleeping, membership established, start level:8
> 
> Since the cluster members do not recognize each other, there is no
> communication in the logs! :(
> 
> 
> 
> On Tue, Jun 14, 2011 at 5:07 AM, Mark Eggers <its_toasted@yahoo.com> 
> wrote:
> 
>>  ----- Original Message -----
>> 
>>  > From: Nilesh - MiKu <nilesh.m@directi.com>
>>  > To: users@tomcat.apache.org
>>  > Cc:
>>  > Sent: Monday, June 13, 2011 8:36 AM
>>  > Subject: Tomcat 6.0.18 clustering problem
>>  >
>>  > Hi people...
>>  >
>>  > Background :
>>  >
>>  > I have two nodes (say, n1 and n2) running 3 instances of Tomcat (say
>>  > t1, t2, t3), with n1 running t1 and t3, and n2 running t2 (all running
>>  > the same application). I want to set up clustering for n1-t1 and n2-t2.
>>  >
>>  > Clustering config for n1-t1 is:
>>  >
>>  > <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
>>  >          channelSendOptions="8">
>>  >
>>  >   <Manager className="org.apache.catalina.ha.session.DeltaManager"
>>  >            expireSessionsOnShutdown="false"
>>  >            notifyListenersOnReplication="true"/>
>>  >
>>  >   <Channel className="org.apache.catalina.tribes.group.GroupChannel">
>>  >     <Membership className="org.apache.catalina.tribes.membership.McastService"
>>  >                 address="228.0.0.4"
>>  >                 port="45564"
>>  >                 frequency="500"
>>  >                 dropTime="3000"/>
>>  >
>>  >     <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
>>  >               address="auto"
>>  >               port="4000"
>>  >               autoBind="100"
>>  >               selectorTimeout="5000"
>>  >               maxThreads="6"/>
>>  >
>>  >     <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
>>  >       <Transport
>>  >         className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
>>  >     </Sender>
>>  >
>>  >     <Interceptor
>>  >       className="org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor"/>
>>  >     <Interceptor
>>  >       className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
>>  >     <Interceptor
>>  >       className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
>>  >   </Channel>
>>  >
>>  >   <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
>>  >          filter=".*\.ico;.*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.css;.*\.txt;"/>
>>  >
>>  >   <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
>>  >
>>  > </Cluster>
>>  >
>>  > Clustering config for n2-t2 is the same as above.
>>  >
>>  > n1-t3 has the <Cluster> element commented out and is not participating
>>  > in clustering at all. It's being used for some other special purpose.
>>  >
>>  > Here is what I get when I start the Tomcat instance.
>>  >
>>  > Jun 11, 2011 9:26:18 AM org.apache.catalina.core.AprLifecycleListener
>>  init
>>  > INFO: The APR based Apache Tomcat Native library which allows optimal
>>  > performance in production environments was not found on the
>>  > java.library.path: /usr/lib/jvm/jav
>>  >
>> 
> a-1.6.0-sun-1.6.0.13/jre/lib/amd64/server:/usr/lib/jvm/java-1.6.0-sun-1.6.0.13/jre/lib/amd64:/usr/lib/jvm/java-1.6.0-sun-1.6.0.13/jre/../lib/amd64:/usr/java/packages
>>  > /lib/amd64:/lib:/usr/lib
>>  > Jun 11, 2011 9:26:18 AM org.apache.coyote.http11.Http11Protocol init
>>  > INFO: Initializing Coyote HTTP/1.1 on http-8080
>>  > Jun 11, 2011 9:26:18 AM org.apache.catalina.startup.Catalina load
>>  > INFO: Initialization processed in 446 ms
>>  > Jun 11, 2011 9:26:18 AM org.apache.catalina.core.StandardService start
>>  > INFO: Starting service Catalina
>>  > Jun 11, 2011 9:26:18 AM org.apache.catalina.core.StandardEngine start
>>  > INFO: Starting Servlet Engine: Apache Tomcat/6.0.18
>>  > Jun 11, 2011 9:26:18 AM org.apache.catalina.ha.tcp.SimpleTcpCluster 
> start
>>  > INFO: Cluster is about to start
>>  > Jun 11, 2011 9:26:18 AM 
> org.apache.catalina.tribes.transport.ReceiverBase
>>  > bind
>>  > INFO: Receiver Server Socket bound to:/70.87.28.134:4000
>>  > Jun 11, 2011 9:26:18 AM
>>  > org.apache.catalina.tribes.membership.McastServiceImpl setupSocket
>>  > INFO: Setting cluster mcast soTimeout to 500
>>  > Jun 11, 2011 9:26:18 AM
>>  > org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
>>  > INFO: Sleeping for 1000 milliseconds to establish cluster membership,
>>  start
>>  > level:4
>>  > Jun 11, 2011 9:26:19 AM
>>  > org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
>>  > INFO: Done sleeping, membership established, start level:4
>>  > Jun 11, 2011 9:26:19 AM
>>  > org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
>>  > INFO: Sleeping for 1000 milliseconds to establish cluster membership,
>>  start
>>  > level:8
>>  > Jun 11, 2011 9:26:20 AM
>>  > org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
>>  > INFO: Done sleeping, membership established, start level:8
>>  > Jun 11, 2011 9:26:20 AM org.apache.catalina.loader.WebappClassLoader
>>  > validateJarFile
>>  > INFO: validateJarFile(/opt/
>>  > 
> mail.pw/webapps/pw-mail/WEB-INF/lib/selenium-server-0.9.2-standalone.jar)
>>  -
>>  > jar not loaded. See Servlet Spec 2.3, section 9.7.2. Offending
>>  > class: javax/servlet/Servlet.class
>>  > Jun 11, 2011 9:26:20 AM org.apache.catalina.loader.WebappClassLoader
>>  > validateJarFile
>>  > INFO: validateJarFile(/opt/
>>  > mail.pw/webapps/pw-mail/WEB-INF/lib/servlet-api-2.5-6.1.11.jar) - jar
>>  not
>>  > loaded. See Servlet Spec 2.3, section 9.7.2. Offending class: ja
>>  > vax/servlet/Servlet.class
>>  > Jun 11, 2011 9:26:21 AM org.apache.catalina.ha.session.DeltaManager 
> start
>>  > INFO: Register manager /pw-mail to cluster element Engine with name
>>  Catalina
>>  > Jun 11, 2011 9:26:21 AM org.apache.catalina.ha.session.DeltaManager 
> start
>>  > INFO: Starting clustering manager at /pw-mail
>>  > Jun 11, 2011 9:26:21 AM org.apache.catalina.ha.session.DeltaManager
>>  > getAllClusterSessions
>>  > INFO: Manager [localhost#/pw-mail]: skipping state transfer. No 
> members
>>  > active in cluster group.
>>  > Jun 11, 2011 9:26:28 AM org.apache.catalina.ha.session.DeltaManager 
> start
>>  > INFO: Register manager /manager to cluster element Engine with name
>>  Catalina
>>  > Jun 11, 2011 9:26:28 AM org.apache.catalina.ha.session.DeltaManager 
> start
>>  > INFO: Starting clustering manager at /manager
>>  > Jun 11, 2011 9:26:28 AM org.apache.catalina.ha.session.DeltaManager
>>  > getAllClusterSessions
>>  > INFO: Manager [localhost#/manager]: skipping state transfer. No 
> members
>>  > active in cluster group.
>>  > Jun 11, 2011 9:26:28 AM org.apache.coyote.http11.Http11Protocol start
>>  > INFO: Starting Coyote HTTP/1.1 on http-8080
>>  > Jun 11, 2011 9:26:28 AM org.apache.jk.common.ChannelSocket init
>>  > INFO: JK: ajp13 listening on /0.0.0.0:8009
>>  > Jun 11, 2011 9:26:28 AM org.apache.jk.server.JkMain start
>>  > INFO: Jk running ID=0 time=0/24  config=null
>>  > Jun 11, 2011 9:26:28 AM org.apache.catalina.startup.Catalina start
>>  > INFO: Server startup in 10245 ms
>>  >
>>  > Note : context for all instances is pw-mail.
>>  >
>>  > Can anyone say what is wrong with this configuration?
>>  >
>>  >
>>  > --
>>  > Best Regards,
>>  > Nilesh Mevada
>>  >
>> 
>>  This looks like an AMD 64-bit Linux platform? I'm just guessing based on
>>  the paths in your mail message.
>> 
>>  yes.
> 
>>  At any rate, I'll make some comments which will hopefully help.
>> 
>>  First of all, I would recommend upgrading to the latest Tomcat 6 version
>>  (6.0.32) and JRE version if possible. There have been a lot of
>>  cluster-related patches since 6.0.18. If possible, look at upgrading to the
>>  latest Tomcat 7 version (7.0.14).
>> 
>>  From the output, it looks like you have the Selenium server included in
>>  your application. I think the server version includes an embedded Jetty
>>  server, and Tomcat is complaining about classes that are included.
>> 
>>  See:
>> 
>>  > INFO: validateJarFile(/opt/
>>  > 
> mail.pw/webapps/pw-mail/WEB-INF/lib/selenium-server-0.9.2-standalone.jar)
>>  -
>>  > jar not loaded. See Servlet Spec 2.3, section 9.7.2. Offending
>>  > class: javax/servlet/Servlet.class
>> 
>> 
>>  I think you'll need the corresponding coreless version, but check the
>>  Selenium documentation to make sure.
>> 
>>  Also, you've included the servlet API in your application. This is shown
>>  by:
>> 
>>  > INFO: validateJarFile(/opt/
>>  > mail.pw/webapps/pw-mail/WEB-INF/lib/servlet-api-2.5-6.1.11.jar) - jar
>>  not
>>  > loaded. See Servlet Spec 2.3, section 9.7.2. Offending class: ja
>>  > vax/servlet/Servlet.class
>> 
>> 
>>  Don't do this. Your IDE should enable you to write servlet code without
>>  it packaging up the API. Each IDE is different, so read your documentation.
>> 
>>  will take care of the above two.
> 
> 
>>  Make sure that your application is marked distributable by adding
>>  <distributable/> to your web.xml file. Make sure that all session
>>  properties implement Serializable.
>> 
>>  yes, it's marked that way.
> 
> 
>>  Your cluster configuration doesn't look much different from one I use.
>> 
>>  > <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
>>  >          channelSendOptions="8">
>> 
>>  I'm not sure why you are using ASYNCHRONOUS as your channelSendOptions
>>  (especially without an ACK). This will allow session updates to be
>>  processed in a different order from which they were sent. I don't know
>>  how this will impact your application.
>> 
>>  Just to make responses faster. The app is not so heavily used, though,
>>  so I can make the value 4, i.e. sync with ack.
> 
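For reference, channelSendOptions is a bit mask. Assuming the usual Tribes flag values (USE_ACK = 2, SYNCHRONIZED_ACK = 4, ASYNCHRONOUS = 8 — worth verifying against the org.apache.catalina.tribes.Channel constants for your Tomcat version), the values being discussed can be decoded like this:

```java
// Sketch: decoding channelSendOptions into Tribes send flags.
// Flag values are assumed from org.apache.catalina.tribes.Channel;
// check the javadoc for your Tomcat version before relying on them.
public class SendOptions {
    static final int USE_ACK          = 0x0002; // wait for an ack from the receiver
    static final int SYNCHRONIZED_ACK = 0x0004; // ack only after the message is processed
    static final int ASYNCHRONOUS     = 0x0008; // queue the message and return immediately

    static String describe(int opts) {
        StringBuilder sb = new StringBuilder();
        if ((opts & ASYNCHRONOUS) != 0)     sb.append("async ");
        if ((opts & SYNCHRONIZED_ACK) != 0) sb.append("sync-ack ");
        if ((opts & USE_ACK) != 0)          sb.append("ack ");
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println("8 -> " + describe(8));   // async, no ack
        System.out.println("6 -> " + describe(6));   // sync-ack ack
        System.out.println("10 -> " + describe(10)); // async ack
    }
}
```

Note that under these assumed values, 6 (not 4 alone) is the combination that both synchronizes and acknowledges.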
> 
>>  > <Interceptor
>>  >   className="org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor"/>
>> 
>>  You're using an interceptor that doesn't seem to be documented. Looking
>>  at the source code, it appears that this interceptor sends a ping message
>>  out every second.
>> 
>>  > <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
>>  >        filter=".*\.ico;.*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.css;.*\.txt;"/>
>> 
>>  You've removed the htm and html items from the filter, and added ico.
>>  I'll assume that there are no htm / html files in your application (all
>>  pages are generated dynamically).
>> 
>>  yes. There are a couple of HTML files, but since I am using nginx, those
>>  static HTML requests never reach my Tomcat instance. Hence no worries.
> 
> 
>>  In short, I don't see any show stoppers in your configuration, but maybe
>>  other list members have some ideas.
>> 
>>  However, there could be some system issues that are preventing multicasting
>>  from working.
>> 
>>  1. Make sure your system is set up for multicasting
>> 
>>  Check to see if your interface is enabled for multicasting. Mine looks like
>>  this in part:
>> 
>>  eth0      Link encap:Ethernet
>>            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>> 
>>  I have MASTER - SLAVE multicast settings... something like..
> bond0     Link encap:Ethernet
> UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
> bond0:0   Link encap:Ethernet
> UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
> eth0      Link encap:Ethernet
> UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
> 
> Anyway, things look proper as far as this is concerned.
> 
>>  2. Make sure your firewall allows multicasting. By default, Fedora 15 does
>>  not. Other Linux distributions may be different. Add the following to your
>>  firewall rules (adjust for your distribution).
>> 
>>  -A INPUT -d 224.0.0.0/4 -m state --state NEW -j ACCEPT
>> 
>>  You'll probably want this to be much more restrictive, but this may get
>>  you up and running.
>> 
>>  3. Add a multicast route
>> 
>>  Adjust this to fit your distribution and configuration.
>> 
>>  /sbin/ip route add to multicast 224.0.0.0/4 dev eth0
>> 
>>  Filip Hanik published a link to a multicast test tool (MCaster) that was
>>  included in (a now ancient version of) Tomcat 4. This was useful in order
>>  to confirm that you had multicasting set up correctly on your systems. You
>>  might be able to dig it up and build it by following the Archives link on
>>  the Tomcat home page.
>> 
>> 
>>  Seems to be correct on my host for the above two points..
> 
> 
>>  4. Don't announce multicast on the localhost address.
>> 
>>  By default, Tomcat gets the host address for multicasting via
>>  java.net.InetAddress.getLocalHost().getHostAddress(). Make sure you're not
>>  advertising 127.0.0.1. In Linux, the most common source of this problem is
>>  adding your host name to the localhost line in /etc/hosts.
>> 
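The getLocalHost() behavior described above is easy to check outside Tomcat; a minimal probe with no Tomcat dependencies (class name hypothetical):

```java
import java.net.InetAddress;

// Quick check of what Tomcat's membership service would advertise by
// default: if this reports loopback=true, fix /etc/hosts so the host
// name does not resolve to 127.0.0.1.
public class LocalHostProbe {
    static String probe() throws Exception {
        InetAddress addr = InetAddress.getLocalHost();
        return "advertised=" + addr.getHostAddress()
                + " loopback=" + addr.isLoopbackAddress();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(probe());
    }
}
```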
> 
> Thanks for your separate log config. I could set it up, and I can see that
> n1-t1 and n2-t2 are binding to the appropriate n2, n1 IP addresses.
> 
>> 
> 
>>  You can also set up separate logging for clustering by making some changes
>>  to $CATALINA_HOME/conf/logging.properties
>> 
>>  For example:
>> 
>>  # Added a cluster logging handler
>>  handlers = 1catalina.org.apache.juli.FileHandler,
>>  2localhost.org.apache.juli.FileHandler,
>>  3manager.org.apache.juli.FileHandler,
>>  4host-manager.org.apache.juli.FileHandler,
>>  java.util.logging.ConsoleHandler,5cluster.org.apache.juli.FileHandler
>> 
>>  # specify the level and where to store the information
>>  5cluster.org.apache.juli.FileHandler.level = FINER
>>  5cluster.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
>>  5cluster.org.apache.juli.FileHandler.prefix = cluster.
>> 
>>  # various cluster components logging
>>  org.apache.catalina.tribes.MESSAGES.level = FINE
>>  org.apache.catalina.tribes.MESSAGES.handlers =
>>  5cluster.org.apache.juli.FileHandler
>> 
>>  org.apache.catalina.tribes.level = FINE
>>  org.apache.catalina.tribes.handlers = 5cluster.org.apache.juli.FileHandler
>> 
>>  org.apache.catalina.ha.level = FINE
>>  org.apache.catalina.ha.handlers = 5cluster.org.apache.juli.FileHandler
>> 
>>  org.apache.catalina.ha.deploy.level = INFO
>>  org.apache.catalina.ha.deploy.handlers =
>>  5cluster.org.apache.juli.FileHandler
>> 
>>  Adjust the logging levels accordingly.
>> 
>>  . . . . just my two cents.
>> 
>>  /mde/


No need to cc me, since I read the list. All that does is give me duplicate mail messages
in my inbox :-p.

> Jun 14, 2011 10:26:38 AM org.apache.catalina.tribes.transport.ReceiverBase
> getBind
> FINE: Starting replication listener on address:xx.xx.xx.xxx
> Jun 14, 2011 10:26:38 AM org.apache.catalina.tribes.transport.ReceiverBase
> bind
> INFO: Receiver Server Socket bound to:/xx.xx.xx.xxx:4000
> Jun 14, 2011 10:26:38 AM
> org.apache.catalina.tribes.membership.McastServiceImpl setupSocket
> INFO: Setting cluster mcast soTimeout to 500
> Jun 14, 2011 10:26:38 AM
> org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
> INFO: Sleeping for 1000 milliseconds to establish cluster membership, start
> level:4
> Jun 14, 2011 10:26:39 AM
> org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
> INFO: Done sleeping, membership established, start level:4
> Jun 14, 2011 10:26:39 AM
> org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
> INFO: Sleeping for 1000 milliseconds to establish cluster membership, start
> level:8
> Jun 14, 2011 10:26:40 AM
> org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
> INFO: Done sleeping, membership established, start level:8
> 


That's all you see? Hmm.

OK, first of all, I'm trying to emulate this on a single machine. I've played with aliased
IP addresses, but no matter how I configure things, communication still goes across localhost.

My current environment:

OS:     Fedora 15 2.6.38.7-30.fc15.i686 #1 SMP
JRE:    1.6.0_22-b04 (with the vulnerability patch applied)
Tomcat: 6.0.29 (but see notes below)

Hacking things up to represent multiple hosts is an amusing exercise. Here's the best I've
been able to accomplish.

First, set up IP address aliases.

As root, I do this.

/sbin/ifconfig eth0:0 192.168.0.253 netmask 255.255.255.0 up
/sbin/ifconfig eth0:1 192.168.0.252 netmask 255.255.255.0 up
/sbin/ifconfig eth0:2 192.168.0.251 netmask 255.255.255.0 up

For convenience, I add host names to /etc/hosts.

192.168.0.253   phobos
192.168.0.252   deimos
192.168.0.251   mars

The fun part is modifying $CATALINA_BASE/conf/server.xml. I use multiple $CATALINA_BASE and
a single $CATALINA_HOME to simplify the setup.

In each server.xml, I did the following:

1. Change the shutdown port to be unique for each Tomcat instance.

In Tomcat 6.0.x, the shutdown port is bound to all interfaces. In Tomcat 7.0.x, you can specify
which interface.

2. Specify unique ports in the Receiver element of Cluster->Channel for each Tomcat instance.

3. Add an address attribute to the following elements for each Tomcat instance:

All Connector elements (I use an AJP and HTTP connector).
Receiver element in Cluster->Channel
Transport element in Cluster->Channel->Sender

This doesn't take care of the multicasting announcements, and in Linux you cannot bind that
to a particular address.

When I start up my cluster, I see the following in the logs (from one host).

FINE: Mcast add member org.apache.catalina.tribes.membership.MemberImpl[tcp://{192, 168, 0,
251}:4100,{192, 168, 0, 251},4100, alive=1011,id={20 84 -32 54 -117 4 65 -60 -98 104 93 -50
80 -110 3 -93 }, payload={}, command={}, domain={}, ]


FINE: Mcast add member org.apache.catalina.tribes.membership.MemberImpl[tcp://{192, 168, 0,
253}:4200,{192, 168, 0, 253},4200, alive=6525,id={41 43 -84 33 -73 126 69 -120 -109 -5 40
-23 -101 -21 21 -77 }, payload={}, command={}, domain={}, ]



I have a small test application that uses sessions. A user authenticates against a MySQL database,
populates some attributes, and then chooses a randomly generated pet.

When I watch the lo adapter with Wireshark, I see messages going across TCP ports 4000, 4100,
and 4200. This corresponds to the Cluster->Channel->Receiver port attribute
I've specified in server.xml.

Looking more closely at those packets, they seem to contain the actual serialized session
information (among other things).

Based on that, I think you'll have to add something like the following in /etc/sysconfig/iptables:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 4000 -s 192.168.0.0/24 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 4100 -s 192.168.0.0/24 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 4200 -s 192.168.0.0/24 -j ACCEPT

In short, replace the dport with the other cluster members' ports. Of course, adjust port
numbers and source addresses appropriately.

I've come across other information via a Google search that indicates you might have to add
the following two lines to your iptables configuration.

-A INPUT -m state --state NEW -m tcp -p tcp --dport 45564 -s 192.168.0.0/24 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 45564 -s 192.168.0.0/24 -j ACCEPT

This corresponds to the port attribute in Cluster->Channel->Membership.

However, looking at some Wireshark traces, I only see this port being used with UDP / Multicast.

As for the Ping Interceptor:
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor"/>

Every time I tried using that, I got a NullPointer exception (even with the firewall turned
off). My cluster configuration (using different ports, and no IP aliases):

<!-- cluster here so that farm deployment works -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="6">
  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>
  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <Membership className="org.apache.catalina.tribes.membership.McastService"
                address="228.0.0.4"
                port="45564"
                frequency="500"
                dropTime="3000"/>
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="auto"
              port="4000"
              selectorTimeout="5000"
              maxThreads="6"/>
    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport
        className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>
    <Interceptor
    className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <Interceptor
    className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
    <Interceptor 
    className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
    <Interceptor 
    className="org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor"/>
  </Channel>
  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
         filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
  <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
            tempDir="${catalina.base}/temp-dir/"
            deployDir="${catalina.base}/webapps/"
            watchDir="${catalina.base}/watch-dir/"
            processDeployFrequency="4"
            watchEnabled="true"/>
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>

In Tomcat 6.0.29:


WARNING: Unable to send ping from TCP ping thread.
java.lang.NullPointerException
        at org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor.sendPing(TcpPingInterceptor.java:121)
        at org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor$PingThread.run(TcpPingInterceptor.java:166)
Jun 15, 2011 3:27:40 PM org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor$PingThread run
WARNING: Unable to send ping from TCP ping thread.
java.lang.NullPointerException
        at org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor.sendPing(TcpPingInterceptor.java:121)
        at org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor$PingThread.run(TcpPingInterceptor.java:166)

Also, this pops up as well:

Jun 15, 2011 3:27:41 PM org.apache.catalina.tribes.group.GroupChannel$HeartbeatThread run
SEVERE: Unable to send heartbeat through Tribes interceptor stack. Will try to sleep again.
java.lang.NullPointerException
        at org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor.sendPing(TcpPingInterceptor.java:121)
        at org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor.heartbeat(TcpPingInterceptor.java:93)
        at org.apache.catalina.tribes.group.ChannelInterceptorBase.heartbeat(ChannelInterceptorBase.java:97)
        at org.apache.catalina.tribes.group.ChannelInterceptorBase.heartbeat(ChannelInterceptorBase.java:97)
        at org.apache.catalina.tribes.group.ChannelInterceptorBase.heartbeat(ChannelInterceptorBase.java:97)
        at org.apache.catalina.tribes.group.interceptors.TcpFailureDetector.heartbeat(TcpFailureDetector.java:192)
        at org.apache.catalina.tribes.group.ChannelInterceptorBase.heartbeat(ChannelInterceptorBase.java:97)
        at org.apache.catalina.tribes.group.GroupChannel.heartbeat(GroupChannel.java:149)
        at org.apache.catalina.tribes.group.GroupChannel$HeartbeatThread.run(GroupChannel.java:661)

Without the TcpPing interceptor, I get no NullPointer error messages.

I apologize for the length and log snippet formatting. I hope someone finds this useful.

I'll try the TcpPing interceptor with the 6.0.x trunk later on today and see if I still get
the NullPointer messages.

. . . . just my two cents.

/mde/ 

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org

