geronimo-dev mailing list archives

From Gianny Damour <gianny.dam...@optusnet.com.au>
Subject Re: Distribution and start/stop of clustered deployments
Date Wed, 14 Nov 2007 02:36:10 GMT
Hi Joe,

After some investigation, here is my understanding of problem 1:
there are two deployments because, by default, i.e. when no target is
specified, the distribute command executes against all the
configuration stores defined by a Geronimo instance. Note that other
deployment components, such as the hot-deploy directory scanner and
the installation portlet, apply the same default. I believe this
default behavior should be changed to deploy to only one
configuration store: I am not convinced that users distributing
applications expect them to be deployed as many times as there are
configuration stores defined by the targeted Geronimo server, and
having the same configuration multiple times in a Geronimo instance
does not make a lot of sense.

A potentially better default behavior would be to distribute only to
the first target returned by DeploymentManager.getTargets().
Internally, our implementation of getTargets returns the "default"
configuration store as the first target.

Problem 3) is caused by problem 1).

What do you think?

Thanks,
Gianny


On 13/11/2007, at 7:14 AM, Joe Bohn wrote:

> Hi Gianny,
>
> Lots of newbie questions from me.  I'm not even going to pretend  
> that I understand your clustering changes just yet ... so please  
> bear with me.  I just want to point out a few things that I noticed  
> with a single server instance and get your take on them.
>
> 1)  Deploying a simple web app.  I deployed a simple snoop.war web  
> app without a plan to a Jetty server image using the command line.   
> It ended up deploying 2 configurations based upon the output  
> messages.  Based on your description I think this is correct but  
> from a user perspective it seems confusing and wrong.  I hadn't  
> configured anything for clustering and I was only deploying 1  
> thing.  I expected to see results of just 1 configID for the  
> deployed item.  Perhaps everything would have been fine if I had  
> used a plan but I don't think we can assume that users will always  
> use a plan.  Here are the messages that were output:
>     Completed with id default/snoop/1194895785124/war
>     Completed with id default/snoop/1194895785559/war
>     Deployed default/snoop/1194895785124/war to
>     org.apache.geronimo.configs/clustering/2.1-SNAPSHOT/car?ServiceModule=org.apache.geronimo.configs/clustering/2.1-SNAPSHOT/car,j2eeType=ConfigurationStore,name=MasterConfigurationStore
>     @ /snoop
>     Deployed default/snoop/1194895785559/war to
>     org.apache.geronimo.configs/clustering/2.1-SNAPSHOT/car?ServiceModule=org.apache.geronimo.configs/clustering/2.1-SNAPSHOT/car,j2eeType=ConfigurationStore,name=ClusterStore
>     @ /snoop
>
> 2) Undeploy?  What would I undeploy if I wanted to undo what I just  
> did?  Do I need to undeploy each configuration individually?  What  
> do you think about leaving the current deploy capability as is and  
> adding new commands/functions when deploying into a cluster so as  
> not to confuse users in the more simple case without clustering?
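> (I'm guessing the answer today is to undeploy each configuration
> individually with the deploy tool, something like:
>     deploy undeploy default/snoop/1194895785124/war
>     deploy undeploy default/snoop/1194895785559/war
> but correct me if that's wrong.)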
>
> 3)  Web Console.  From the web console, instead of the 1
> configuration I initially expected, or the 2 configurations
> indicated in the messages at deploy time, I actually see 3
> configurations (2 of them started and 1 stopped ... now I'm even
> more confused ;- ) ):
>   - default/snoop/1194895785124/war  started
>   - default/snoop/1194895785559/war  started
>   - default/snoop/1194895785702/war  stopped
> Again, I'm not sure how the user is supposed to manage/interpret
> this.  It seems that if we implement these concepts there are a
> number of comparable console and CLI changes that will be necessary
> to manage the multiple CARs in a clustered scenario.  Is there
> any way we can keep the single server use cases intact until we
> have those capabilities?
>
> 4)  TCK for Jetty is toast.  I started to play with the individual
> server because when I attempted to run the Jetty TCK tests
> everything was failing with LifecycleExceptions.  I imagine that we
> need to rework some of the TCK setup for this change.  We might be
> able to avoid that if we can keep the single server use cases
> unchanged.  If that isn't possible will you be looking into the
> necessary TCK changes?
>
> Thanks,
> Joe
>
> Gianny Damour wrote:
>> Hi,
>> I have just checked in support for distribution of configurations  
>> to clusters and also management, i.e. start/stop, of such  
>> clustered deployments.
>> I will try to explain how everything hangs together so that people  
>> can jump in, provide feedback, request enhancements etc.
>> There is now a secondary configuration store:
>> org.apache.geronimo.configs/clustering/2.1-SNAPSHOT/car?ServiceModule=org.apache.geronimo.configs/clustering/2.1-SNAPSHOT/car,j2eeType=ConfigurationStore,name=MasterConfigurationStore
>> It is aware of the cluster members statically configured by users
>> (more on this later). Its responsibilities are:
>> * (un)installation of configurations on cluster members; and
>> * creation of "master" configurations defining GBeans able to
>> remotely start and stop a given configuration on a specific
>> cluster member.
>> Here is what happens when a configuration, e.g. groupId/artifactId/
>> 2.0/car, is distributed to this store:
>> 1. The usual configuration processing is executed. This results
>> in a fully built configuration, i.e. one with its associated
>> GBeans, ready to be installed by the clustered store.
>> 2. The clustered store uploads the built configuration to the
>> registered cluster members, which then install it locally. If the
>> "remote" installation fails for one of the members, the clustered
>> store removes the configuration from all members that have
>> successfully installed it so far.
>> 3. The clustered store installs the configuration locally.
>> 4. The clustered store creates from scratch a master
>> configuration, e.g. groupId/artifactId_G_MASTER/2.0/car. This
>> master configuration is made of GBeans, one for each member, which
>> can remotely start or stop the configuration on a given member:
>> when the master configuration starts, its GBeans start, which in
>> turn remotely start the configuration on their respective members.
>> So that the master configuration can start without all the members
>> being up, these GBeans "fail" silently when a remote start fails.
>> However, as these GBeans expose startConfiguration and
>> stopConfiguration managed operations, it is pretty easy to
>> remotely start a configuration on a given member later via JMX. As
>> expected, when the master configuration is stopped, its GBeans
>> stop, which in turn remotely stop the configurations.
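>> Roughly, each of these per-member GBeans looks like the following
>> sketch (the class name and fields are illustrative only, not the
>> actual clustering module code):
>>
>>     // Illustrative sketch; one instance exists per cluster member.
>>     public class RemoteConfigurationController {
>>         private final String memberJmxUrl; // JMX connection info for one member
>>         private final String configId;     // configuration to control remotely
>>
>>         public RemoteConfigurationController(String memberJmxUrl, String configId) {
>>             this.memberJmxUrl = memberJmxUrl;
>>             this.configId = configId;
>>         }
>>
>>         public void doStart() {
>>             try {
>>                 startConfiguration();
>>             } catch (Exception e) {
>>                 // Fail silently so the master configuration can start
>>                 // even when this member is down.
>>             }
>>         }
>>
>>         // Managed operation: connect to the member via JMX and
>>         // start configId there.
>>         public void startConfiguration() throws Exception {
>>         }
>>
>>         // Managed operation: connect to the member via JMX and
>>         // stop configId there.
>>         public void stopConfiguration() throws Exception {
>>         }
>>     }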
>> The clustered store relies on the static configuration of cluster
>> members. This static configuration MUST be done within
>> org.apache.geronimo.configs/clustering//car as nodes must be
>> registered before the start of any master configurations. Indeed,
>> master configurations are injected with this static cluster
>> configuration to retrieve the JMX connection info necessary to
>> connect to cluster members and remotely start/stop configurations.
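>> For example, remotely starting a configuration on a member later
>> might look something like this from a JMX client (the service URL
>> and ObjectName here are made up; look up the real ones in your
>> setup):
>>
>>     import javax.management.MBeanServerConnection;
>>     import javax.management.ObjectName;
>>     import javax.management.remote.JMXConnector;
>>     import javax.management.remote.JMXConnectorFactory;
>>     import javax.management.remote.JMXServiceURL;
>>
>>     public class RemoteStartExample {
>>         public static void main(String[] args) throws Exception {
>>             // Connect to the JMX server of the Geronimo instance
>>             // hosting the master configuration.
>>             JMXServiceURL url = new JMXServiceURL(
>>                     "service:jmx:rmi:///jndi/rmi://localhost:1099/JMXConnector");
>>             JMXConnector connector = JMXConnectorFactory.connect(url);
>>             try {
>>                 MBeanServerConnection mbsc = connector.getMBeanServerConnection();
>>                 // Hypothetical ObjectName of one per-member GBean.
>>                 ObjectName controller = new ObjectName(
>>                         "geronimo:j2eeType=GBean,name=RemoteConfigurationController");
>>                 // Invoke the no-argument managed operation.
>>                 mbsc.invoke(controller, "startConfiguration", null, null);
>>             } finally {
>>                 connector.close();
>>             }
>>         }
>>     }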
>> At step 3 of the above deployment process, I wrote that the
>> configuration is locally installed, i.e. into the clustered
>> configuration store. At this stage, this is pretty much useless;
>> however, I believe that keeping a carbon copy of the configuration
>> in the master repository may become quite handy. For instance,
>> within the master configuration, we could add a GBean able to
>> upload this configuration on demand to a given member. This way,
>> when you add a new member to an existing clustered deployment, you
>> simply need to add a new GBean to remotely start/stop the
>> configuration on the new member and upload the configuration to
>> it via the utility GBean.
>> Hope the above is clear enough.
>> I will comment the org.apache.geronimo.configs/clustering//car
>> deployment plan, as it contains new GBean declarations that are
>> not obvious to understand without reading the code.
>> Following this, I will move to the remote start/stop of Geronimo
>> instances from a single Geronimo server. This should provide a set
>> of administration GBeans that admin console people may want to
>> leverage to improve the remote management of Geronimo instances.
>> These GBeans will talk to GShell instances and send arbitrary
>> Groovy scripts for execution within GShells.
>> Meanwhile, if people are interested in working on the clustering
>> of Tomcat or OpenEJB via WADI, then please reply, as I am keen and
>> happy to provide help. One of those two new features is the next
>> thing I will work on after completion of the above management
>> enhancement.
>> Thanks,
>> Gianny

