tomcat-users mailing list archives

From Rainer Jung <rainer.j...@kippdata.de>
Subject Re: jk Status not showing errors
Date Tue, 02 Jun 2009 19:42:09 GMT
On 02.06.2009 20:53, Matthew Laird wrote:
> Unfortunately I'm not seeing that.  What I did was start both Tomcats in
> my LB pair, start Apache, then take the second Tomcat down to see if
> it would detect the failure.
> 
> Unfortunately it never seems to, it just shows the second as OK/IDLE,
> and happily directs all requests to the first.  This concerns me,
> because if the second were to fail, then later the first, everything
> would die and I'd have no advance warning.  I can't seem to make it ping
> and detect a dead Tomcat.

Assuming that you did refresh the jkstatus display: what is your test
client? The fact that you see OK/IDLE while all requests go to the other
node indicates that you are using requests with an associated session, so
the balancer is not allowed to send them to the other node and thus does
not detect the down node. Try removing the JSESSIONID cookie before
sending requests, or use a client that allows disabling cookies (such as
curl).
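To illustrate why a session pins requests to one node: when a jvmRoute is
configured, Tomcat appends "." plus the route name to the session id, and
the balancer routes on that suffix. The sketch below is illustrative only
(it is not mod_jk's actual code), but it shows the decision the lb makes
for each incoming request:

```python
# Illustrative sketch (not mod_jk's actual code): how sticky routing
# pins a request carrying a session to one worker. With a jvmRoute
# configured, Tomcat appends "." + jvmRoute to the session id, and the
# balancer matches that suffix against its worker names.

def pick_worker(jsessionid, workers):
    """Return the worker a sticky request must go to, or None if the
    request is freely balanceable (no session / no route suffix)."""
    if jsessionid and "." in jsessionid:
        route = jsessionid.rsplit(".", 1)[1]
        if route in workers:
            return route
    return None  # no route: the lb may pick any worker

workers = ["production1", "production2"]
print(pick_worker("0A1B2C3D.production1", workers))  # sticky: production1
print(pick_worker("0A1B2C3D", workers))              # None: freely balanced
```

A cookie-free client (curl by default sends no cookies) always falls into
the second case, so the lb is free to try both nodes and will notice the
dead one.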

> I am using the latest version of mod_jk; I upgraded it before I began
> playing with the load balancer settings.  I'd appreciate any feedback on
> what I might be doing wrong.  Thanks.
> 
> workers.properties:
> 
> worker.list=production,development,old,jkstatus
> 
> worker.production.type=lb
> worker.production.balance_workers=production1,production2
> worker.production.sticky_session=True
> worker.production.method=S
> 
> worker.lbbasic.type=ajp13
> worker.lbbasic.connect_timeout=10000
> worker.lbbasic.recovery_options=7
> worker.lbbasic.socket_keepalive=1
> worker.lbbasic.socket_timeout=60
> worker.lbbasic.ping_mode=CI
> 
> worker.production1.reference=worker.lbbasic
> worker.production1.port=8009
> worker.production1.host=localhost
> 
> worker.production2.reference=worker.lbbasic
> worker.production2.port=8012
> worker.production2.host=localhost
> 
> worker.development.port=8010
> worker.development.host=localhost
> worker.development.type=ajp13
> 
> worker.old.port=8011
> worker.old.host=localhost
> worker.old.type=ajp13
> 
> worker.jkstatus.type=status

Looks OK.
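One note on the ping settings: ping_mode=CI enables the connect-time
("C") and interval ("I") CPing/CPong probes. The fragment below restates
those settings with the related timeout knobs from the mod_jk timeouts
documentation; the values are examples to tune, not recommendations:

```properties
# Illustrative values only - tune per the mod_jk timeouts docs.
# C = CPing/CPong check when a connection is first established
# P = ping before forwarding each request (not enabled here)
# I = ping idle connections at regular intervals
worker.lbbasic.ping_mode=CI
# How long (ms) to wait for the CPong answer
worker.lbbasic.ping_timeout=10000
# Interval between the "I"-mode idle-connection pings
# (defaults are derived from ping_timeout; check the docs for units)
worker.lbbasic.connection_ping_interval=60
```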

Rainer

> Lawrence Lamprecht wrote:
>> I do not know if this is relevant or not, but I have just installed
>> the latest version of mod_jk and the jkstatus is very much better than
>> it used to be.
>>
>> I had the same issue with load balancers not showing when they are
>> offline or broken. With the latest version, jkstatus has the
>> possibility to auto-refresh itself. This now shows when load
>> balancers go down without a request being sent to them. It is pretty
>> dynamic as well. I ran several tests where I took one of the balancers
>> down, left jkstatus refreshing every 10 seconds, and it told me
>> that the worker was in error.
>>
>> It also shows the worker as OK/IDLE when it is not
>> being used but is healthy. As soon as it receives a request the status
>> then changes to OK.
>>
>> Hope this helps.
>>
>> Kind regards / Met vriendelijke groet,
>> Lawrence Lamprecht
>> Application Content Manager
>> QUADREM Netherlands B.V.
>> Kabelweg 61, 1014 BA  Amsterdam
>> Post Office Box 20672, 1001 NR  Amsterdam
>> Office: +31 20 880 41 16
>> Mobile: +31 6 13 14 26 31
>> Fax: +31 20 880 41 02
>>
>>
>>
>> Read our blog: Intelligent Supply Management - Your advantage
>>
>>
>> -----Original Message-----
>> From: Rainer Jung [mailto:rainer.jung@kippdata.de]
>> Sent: Saturday, May 30, 2009 2:46 PM
>> To: Tomcat Users List
>> Subject: Re: jk Status not showing errors
>>
>> On 29.05.2009 22:50, Matthew Laird wrote:
>>> Good afternoon,
>>>
>>> I've been trying to get the jkstatus component of mod_jk running, and
>>> I'm not quite sure what I'm doing wrong in trying to have it report dead
>>> Tomcat instances.
>>>
>>> I have two Tomcat instances set up in a load balancer; as a test I've
>>> taken down one of them.  However, the jkstatus screen still shows both
>>> of them as OK.  I'm not sure what I'm missing from my
>>> workers.properties file to make it test the Tomcats and report a
>>> failed instance, so I can set Nagios to monitor this page and report
>>> problems.
>>>
>>> My workers.properties is:
>>>
>>> worker.list=production,development,old,jkstatus
>>>
>>> worker.production.type=lb
>>> worker.production.balance_workers=production1,production2
>>> worker.production.sticky_session=True
>>> worker.production.method=S
>>>
>>> worker.lbbasic.type=ajp13
>>> worker.lbbasic.connect_timeout=10000
>>> worker.lbbasic.recovery_options=7
>>> worker.lbbasic.socket_keepalive=1
>>> worker.lbbasic.socket_timeout=60
>>>
>>> worker.production1.reference=worker.lbbasic
>>> worker.production1.port=8009
>>> worker.production1.host=localhost
>>> #worker.production1.redirect=production2
>>>
>>> worker.production2.reference=worker.lbbasic
>>> worker.production2.port=8012
>>> worker.production2.host=localhost
>>> #worker.production2.activation=disabled
>>>
>>> worker.development.port=8010
>>> worker.development.host=localhost
>>> worker.development.type=ajp13
>>>
>>> worker.old.port=8011
>>> worker.old.host=localhost
>>> worker.old.type=ajp13
>>>
>>> worker.jkstatus.type=status
>>>
>>>
>>> Any advice on extra options to make jkstatus check and report when one
>>> of the Tomcat instances isn't responding would be appreciated.
>>
>> I assume that the actual error detection works and you are really only
>> asking about the display in the status worker. I also assume you are
>> using a recent mod_jk. Nevertheless, do yourself a favor and look at
>> the Timeouts documentation page to improve your configuration.
>>
>> Until recently, only workers used via a load balancing worker had good
>> manageability with jkstatus. Very recently, pure AJP workers without
>> any load balancer also got more useful information in their display.
>>
>> So let's talk about your worker "production". Whenever a request comes
>> in, the lb first checks whether it already carries a session for one of
>> the nodes 1 or 2, or whether the request can be freely balanced.
>>
>> The status of a worker (node) in jkstatus can only change if a request
>> is sent to that worker. So if all your requests belong, say, to node
>> 2, you'll never notice anything is wrong with 1. But if 1 is broken and
>> a request for node 1 comes in, or a freely balanceable request arrives
>> and the lb decides to send it to 1, then JK will detect the problem and
>> display it. The display will switch from "OK" to "ERR".
>>
>> If you want to parse the info, do not choose the HTML format; instead
>> choose a different output format, like XML or the properties format
>> (line oriented).
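The parsing suggestion above can be sketched as follows. Note the exact
property key names vary between mod_jk versions, so the ".state" suffix
in this sample is an assumption; inspect your own properties output
(e.g. jkstatus?mime=prop) first and adjust:

```python
# Minimal sketch: scan jkstatus "properties" output for workers whose
# state is not OK. The ".state" key suffix is an assumption - key names
# can differ between mod_jk versions, so check your own output first.

def failed_workers(prop_text):
    """Return worker names whose reported state is not OK-ish."""
    bad = []
    for line in prop_text.splitlines():
        if "=" not in line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        if key.endswith(".state"):
            # OK, OK/IDLE etc. are healthy; anything else is suspect
            if not value.startswith("OK"):
                bad.append(key.split(".")[-2])
    return bad

sample = """\
worker.production1.state=OK/IDLE
worker.production2.state=ERR
"""
print(failed_workers(sample))  # ['production2']
```

A check like this could back the Nagios monitoring mentioned earlier:
alert whenever the returned list is non-empty.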
>>
>> Regards,
>>
>> Rainer
>>
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
>> For additional commands, e-mail: users-help@tomcat.apache.org
>>
>>

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org

