tomcat-users mailing list archives

From "John Smith" <>
Subject Re: Multihoming TC
Date Tue, 28 Dec 2004 16:40:41 GMT
Following up on my own previous post.

 Front-end TC instances don't need to be restarted.

 Backend servers could run Ant tasks against the front-end instances to reload
each webapp instead of restarting the TC instances.
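To make that concrete, here is a rough sketch (in Python rather than Ant, with a made-up host, port, and credentials) of what the reload call boils down to: an authenticated HTTP GET against the manager webapp's reload command. Treat the URL layout and account details as assumptions; the manager app has to be enabled on the front end.

```python
import base64
import urllib.request

def reload_url(host, port, context_path):
    # Tomcat manager 'reload' command; context_path is the
    # webapp's context, e.g. "/shop"
    return "http://%s:%d/manager/reload?path=%s" % (host, port, context_path)

def reload_webapp(host, port, context_path, user, password):
    # Host, port, and credentials here are hypothetical; the user
    # needs the manager role on the target Tomcat instance.
    req = urllib.request.Request(reload_url(host, port, context_path))
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```

An Ant `<get>` task hitting the same URL would do the same job from a build script.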

 It would also be a nice extra if there were some kind of voting system for
admins to approve or disapprove updates and discuss why with other admins
in a secure way.

----- Original Message -----
From: "John Smith" <>
To: "Tomcat Users List" <>
Sent: Tuesday, December 28, 2004 10:55 AM
Subject: Re: Multihoming TC

> I am replying to both posters, trying to consolidate both ideas.
> > 1/ please post a *new* message when writing to the list.
> Sorry, I just got distracted after answering some people's problems on
> the list.
> > 2/ What I've seen a lot of people (myself included) do: develop your app
> > on your test/dev machine; build it into a WAR file; push the WAR out to
> > the production servers at some scheduled time and restart/reload Tomcat.
> Well, that is doable and it is certainly not difficult. Let me restate it
> the way I am thinking about it:
> > 2.1_ develop your app on your test/dev machine
> which could be a CVS-based one, but I think the synch'ing should be done
> from the CVS/dev . . .
> > 2.2_ push the WAR out to the production servers . . .
> the 'pushing' part, or better said the 'synchronization' of all servers,
> should be atomic and automatic, based on:
> 2.2.1_ a kind of synchronization protocol,
> 2.2.2_ that knows the location of the other machines and that they are all
> time-synch'ed,
> 2.2.3_ their latest tree-like 'signature' structure for the data in:
> databases, down to the record level ('creation' and 'last updated'
> time stamps must be kept for each record, which is always good anyway when
> you need (and you always do) optimistic locking, concurrent updates, etc.)
> ('mirror/rsync' works for file systems only, right?) Separating DB updates
> from webapp ones is also good because in DB-driven sites most updates are
> made to the data . . .
> and the code, down to the classes' MD5 signatures (JARs are way too
> coarse for this; usually you just change a class or a web.xml file, not the
> whole webapp)
> > at some scheduled time
> I don't quite like the idea of a 'scheduled time'; I would rather go with
> pushed 'landmark' updates, or maybe give both as options. Also, a predictable
> schedule is always good for DoS attacks; I think updating a live site needs
> some flesh-and-blood admins backing it and being aware of it.
> 2.2.4_ > restart/reload Tomcat
> I don't like the idea of having to restart TC on a production server, at
> least not as part of the replication strategy.
> I would rather go with a backend "staging server" that would keep a copy of
> the latest sync'ed 'site images'. This is where all updates are made prior
> to 'restarting TC', and this backend "staging server" is also the one
> brokering all:
> HTTP 404-like errors
> and exceptions
> with customized redirections, searches, etc. There could also be 'master'
> stage servers (just in case many people work concurrently) and
> slave/replicated ones.
>  This backend server would also be connected to the same DB that the front
> ends connect to.
> 2.3_ Once these tree-like 'signatures' of all back-end servers are the same,
> so we know that all copies of the data and code are OK, the front-end
> instances would be updated by either:
> 2.3.1_ 'restarting' the front instances (which would get their data feeds
> from the same backend directory structure), or
> 2.3.2_ burning CD-ROMs, or
> 2.3.3_ reading/loading the classes from a DB . . .
>  I think this is also good because even if the updates are automatic, the
> 'committed' ones are not, and things can still be changed/fine-tuned prior
> to committing an update. Basically 'deltas' will be visible to all mirror
> admins, who can check them and decide what should be committed or not . . .
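Inline again: those 'deltas' could be computed straight from two such path-to-MD5 listings. A sketch, with made-up paths and digests:

```python
def signature_delta(old_entries, new_entries):
    # old/new map each relative path to its MD5 digest; the result is
    # what a mirror admin would review before committing an update.
    added = sorted(set(new_entries) - set(old_entries))
    removed = sorted(set(old_entries) - set(new_entries))
    changed = sorted(p for p in set(old_entries) & set(new_entries)
                     if old_entries[p] != new_entries[p])
    return {"added": added, "removed": removed, "changed": changed}

delta = signature_delta(
    {"WEB-INF/web.xml": "aaa", "WEB-INF/classes/Shop.class": "bbb"},
    {"WEB-INF/web.xml": "aaa", "WEB-INF/classes/Shop.class": "ccc",
     "WEB-INF/classes/Cart.class": "ddd"})
# delta["changed"] == ["WEB-INF/classes/Shop.class"]
# delta["added"] == ["WEB-INF/classes/Cart.class"]
```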
> > The push is OS-specific; in Unix-style environments, I've used everything
> > from a scripted scp or rsync to a manual FTP.
> I was kind of thinking about making it happen as part of a synch'ing
> protocol that does not need an extra port or anything; it would be an
> HTTP/SSL (partially or totally) communication, with data transfers and all,
> between backend staging servers.
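Inline, roughly what I picture for that exchange, with a made-up endpoint and JSON format: each staging server publishes its path-to-MD5 listing over plain HTTP(S), and a peer pulls only the paths whose digests differ.

```python
import json
import urllib.request

def fetch_manifest(peer_base_url):
    # Hypothetical endpoint: each staging server serves its
    # {relative_path: md5} listing as JSON at /manifest.json.
    with urllib.request.urlopen(peer_base_url + "/manifest.json") as resp:
        return json.loads(resp.read().decode())

def paths_to_pull(local, remote):
    # Only transfer files that are new or whose digest differs.
    return sorted(p for p in remote if local.get(p) != remote[p])
```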
> > Does this answer your question, or did I misunderstand it?
>  I think we understood each other well. We are just looking at the same
> problem from different perspectives and with different scopes.
> ---------------------------------------------------------------------
> To unsubscribe, e-mail:
> For additional commands, e-mail:

