tomcat-dev mailing list archives

From "Hans Schmid" <Hans.Sch...@einsurance.de>
Subject AW: mod_jk release policy - was: JK 1.2.9-dev test results
Date Fri, 18 Feb 2005 09:03:07 GMT
Hi,

I just want to describe our use case, because we make heavy use of the
local_worker and local_worker_only flags right now.

We use those flags for 'maintenance' mode and failover very successfully.

Please see our setup and use case below.

> -----Original Message-----
> From: Mladen Turk [mailto:mturk@apache.org]
> Sent: Thursday, 17 February 2005 20:34
> To: Tomcat Developers List
> Subject: Re: mod_jk release policy - was: JK 1.2.9-dev test results
>
>
> Rainer Jung wrote:
> > Hi,
> >
> > first: thanks a lot to Mladen for adding all the beautiful features [and
> > removing CRLF :) ]. Big leap forward!
> >
>
> Still, I cope with those on a daily basis.
>
> > I think that until Monday we were still in the process of adding
> > features and fixing bugs. 1.2.8 changed a lot internally, but most was
> > functionally compatible with 1.2.6. Release 1.2.9 still supported all
> > features of 1.2.6.
> >
>
> I already explained something similar when discussing with people
> interested in the NetWare platform.
>
> Something needed to be done, and the obvious solution was not to
> reinvent the wheel, but rather to use all the code and knowledge about
> the subject already present.
>
> To be able to use some new features like dynamic config, some things
> had to be changed internally, but nothing was touched at the protocol
> level, only how that protocol is managed.
>
> So I don't see the point of forking 1.3. Both config and core features
> are the same. Of course some advanced configuration properties were
> changed, and lots of new ones added, but from the outside it's still the
> old mod_jk.
>
> Furthermore, I see adding shared memory and dynamic config as a final
> design change for mod_jk.
>
> > Now we are in the discussion of dropping features (and we even did drop
> > some, like locality support) and I have the impression there should be a
> > separate discussion thread about the future of mod_jk:
> >
>
>
> Another thing is 'deprecating' certain things.
> By that I don't mean deleting them or anything like that, but rather
> marking them as 'no longer developed'.
> The reason for that is pure fact. For example, we have a Lotus Domino
> connector that works only with Domino 5. I think later versions don't
> even have a compatible API. I'm not aware of anyone in the world who
> has used jk to connect Domino with Tomcat (at least I never saw a
> Bugzilla entry on that). So it is deprecated by that fact.
> The same applies to JNI. Who uses that?
>
> Regarding locality, you mean the local_worker and local_worker_only flags?
> IMHO that was one of the fuzziest things about jk that no one ever
> understood, not to mention that it never actually worked.
> Take for example the current documentation about local_worker:
>
> "If local_worker is set to True it is marked as local worker. If in
> minimum one worker is marked as local worker, lb_worker is in local
> worker mode. All local workers are moved to the beginning of the
> internal worker list in lb_worker during validation."
>
> Now what does that mean to the actual user? I've read that a zillion
> times and never understood it.
> Furthermore:

This one is crucial for our maintenance switchover; see below.

>
> "We need a graceful shut down of a node for maintenance. The balancer in
> front asks a special port on each node periodically. If we want to
> remove a node from the cluster, we switch off this port."
>
> WTF!? How? Which port? How do you switch off this port?
>
> What counts the most is that you were unable to mark a node for
> shutdown so that it would not accept new connections without a session
> id. I suppose that was the purpose of those two directives, but I was
> never able to set up jk that way.
>


First, we use TC 3.3.2 (moving to 5.5.7) behind Apache 1.3 on Solaris.
Our mod_jk is a patched version based on mod_jk 1.2.5.

We only send traffic to one Tomcat at a time, with a standby Tomcat for maintenance.
This scenario also covers failover. We do not use the load balancer to actually balance
by factors.


We use sticky_session=true

This is our mod_jk setup if Tomcat-01 is serving the requests:

worker.list=loadbalancer
worker.loadbalancer.balanced_workers=ajp13-01, ajp13-02
worker.loadbalancer.local_worker_only=0

worker.ajp13-01.port=8009
worker.ajp13-01.host=tomcat-01
worker.ajp13-01.type=ajp13
worker.ajp13-01.lbfactor=1
worker.ajp13-01.local_worker=1

worker.ajp13-02.port=8019
worker.ajp13-02.host=tomcat-02
worker.ajp13-02.type=ajp13
worker.ajp13-02.lbfactor=1
worker.ajp13-02.local_worker=0


Now, all requests go to worker.ajp13-01, since local_worker=1 only for tomcat-01,
so it "is first in the queue".

Failover (in case tomcat-01 crashes) works, since local_worker_only=0, meaning
"it also distributes requests to the other machine if ajp13-01 is in an error state".


Now let's do maintenance (tomcat-01 should be shut down; tomcat-02 shall take the load):

What we do is just symlink in another worker.properties file on the web server and
gracefully restart Apache for it to take effect.
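The swap itself can be sketched roughly like this (file names and paths are hypothetical, and the actual apachectl call is left commented out so the sketch is self-contained):

```shell
# Hypothetical file names; the real layout depends on your Apache conf.
# Demonstrated in a temp dir so the sketch stands alone.
set -e
cd "$(mktemp -d)"

# Two prepared variants of the worker config:
printf 'worker.ajp13-01.local_worker=1\n' > workers-tomcat01.properties
printf 'worker.ajp13-02.local_worker=1\n' > workers-tomcat02.properties

# Normal operation: the active file points at the tomcat-01 variant.
ln -sf workers-tomcat01.properties workers.properties

# Maintenance switchover: repoint the symlink at the tomcat-02 variant.
ln -sf workers-tomcat02.properties workers.properties

# mod_jk rereads its worker config on a graceful restart:
# apachectl graceful
cat workers.properties
# prints: worker.ajp13-02.local_worker=1
```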

The second worker.properties looks like this (almost the same):

worker.list=loadbalancer
worker.loadbalancer.balanced_workers=ajp13-01, ajp13-02
worker.loadbalancer.local_worker_only=0

worker.ajp13-01.port=8009
worker.ajp13-01.host=tomcat-01
worker.ajp13-01.type=ajp13
worker.ajp13-01.lbfactor=1
worker.ajp13-01.local_worker=0

worker.ajp13-02.port=8019
worker.ajp13-02.host=tomcat-02
worker.ajp13-02.type=ajp13
worker.ajp13-02.lbfactor=1
worker.ajp13-02.local_worker=1


The only difference is that now ajp13-02 has local_worker=1 and ajp13-01 has
local_worker=0.

Now, since local_worker_only=0, existing (sticky) sessions still go to tomcat-01,
but local_worker=1 on ajp13-02 sends new sessions to tomcat-02.

When all sessions have expired on tomcat-01, we can shut it down for maintenance.

Exactly the same works in the other direction (including failover if tomcat-02 should crash).


I do not yet see how we can make this scenario work with the removed local_worker
and local_worker_only flags, but I have not yet tried hard.


Just my use case,
Cheers, Hans


P.S.: Our real scenario actually has up to 5 Tomcats, which are periodically restarted
with the above method. This way we actually get a kind of load balancing, since a single
Tomcat gets local_worker=1 only for a short amount of time, leaving sticky sessions to
the other Tomcats.



> So locality is not deprecated. Quite the opposite, now it works, just
> with local_worker_only changed to sticky_session_force.
> IMHO this is a clearer and more descriptive directive than the previous
> one.
>
> New things like 'domain' (present from 1.2.8) and 'redirect' are just
> extra cookies to be able to fine-tune the cluster topology.
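For comparison, here is a guess at how the same hot-standby idea might be written with the newer directives mentioned above. This is a sketch only: the exact names (redirect, disabled, sticky_session_force) are assumptions based on this thread and may differ in your release.

```
worker.list=loadbalancer
worker.loadbalancer.type=lb
worker.loadbalancer.balanced_workers=ajp13-01, ajp13-02
worker.loadbalancer.sticky_session=1
# Do not force sticky sessions onto another node while theirs is alive:
worker.loadbalancer.sticky_session_force=0

worker.ajp13-01.port=8009
worker.ajp13-01.host=tomcat-01
worker.ajp13-01.type=ajp13
# Send new traffic to the standby when this worker is taken out:
worker.ajp13-01.redirect=ajp13-02

worker.ajp13-02.port=8019
worker.ajp13-02.host=tomcat-02
worker.ajp13-02.type=ajp13
# Hot standby: only receives traffic redirected from ajp13-01:
worker.ajp13-02.disabled=1
```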
>
> Regards,
> Mladen.


---------------------------------------------------------------------
To unsubscribe, e-mail: tomcat-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: tomcat-dev-help@jakarta.apache.org

