geronimo-dev mailing list archives

From: Gianny Damour <gianny.dam...@optusnet.com.au>
Subject: Re: [DISCUSS] 2.1 Release
Date: Tue, 06 Nov 2007 02:17:53 GMT
Hi,

What I am doing does not rely on any clustering implementation. This  
is how it will work:

1. The user configures a cluster and the set of nodes belonging to it. You can expect GBeans of this type:

    <!-- Cluster configuration -->
    <gbean name="ClusterInfo" class="org.apache.geronimo.clustering.config.BasicClusterInfo">
        <attribute name="name">${PlanClusterName}</attribute>
        <reference name="NodeInfos"></reference>
    </gbean>

    <!-- Node configuration -->
    <gbean name="NodeInfo" class="org.apache.geronimo.clustering.config.BasicNodeInfo">
        <attribute name="name">${PlanNodeName}</attribute>
        <xml-attribute name="jmxConnectorInfo">
            <ns:javabean xmlns:ns="http://geronimo.apache.org/xml/ns/deployment/javabean-1.0"
                         class="org.apache.geronimo.clustering.config.BasicExtendedJMXConnectorInfo">
                <ns:property name="username">system</ns:property>
                <ns:property name="password">manager</ns:property>
                <ns:property name="protocol">rmi</ns:property>
                <ns:property name="host">localhost</ns:property>
                <ns:property name="port">1099</ns:property>
                <ns:property name="urlPath">/jndi/rmi://localhost:1099/JMXConnector</ns:property>
            </ns:javabean>
        </xml-attribute>
    </gbean>

    <!-- Node configuration -->
    <gbean name="SampleRemoteNodeInfo" class="org.apache.geronimo.clustering.config.BasicNodeInfo">
        <attribute name="name">SAMPLE_REMOTE_NODE</attribute>
        <xml-attribute name="jmxConnectorInfo">
            <ns:javabean xmlns:ns="http://geronimo.apache.org/xml/ns/deployment/javabean-1.0"
                         class="org.apache.geronimo.clustering.config.BasicExtendedJMXConnectorInfo">
                <ns:property name="username">system</ns:property>
                <ns:property name="password">manager</ns:property>
                <ns:property name="protocol">rmi</ns:property>
                <ns:property name="host">localhost</ns:property>
                <ns:property name="port">1100</ns:property>
                <ns:property name="urlPath">/jndi/rmi://localhost:1100/JMXConnector</ns:property>
            </ns:javabean>
        </xml-attribute>
    </gbean>


2. The user configures a master repository for clustered artifacts:

     <gbean name="MasterRepository"  
class="org.apache.geronimo.system.repository.Maven2Repository">
         <attribute name="root">master-repository/</attribute>
         <reference name="ServerInfo">
             <name>ServerInfo</name>
         </reference>
     </gbean>

     <gbean name="MasterConfigurationStore"  
class="org.apache.geronimo.clustering.deployment.MasterConfigurationStor 
e">
         <reference name="Repository">
             <name>MasterRepository</name>
         </reference>
         <reference name="ClusterConfigurationStoreDelegate">
             <name>ClusterConfigurationStoreDelegate</name>
         </reference>
     </gbean>

     <gbean name="ClusterConfigurationStoreDelegate"  
class="org.apache.geronimo.clustering.deployment.BasicClusterConfigurati 
onStoreDelegate">
         <reference name="ClusterInfo">
             <name>ClusterInfo</name>
         </reference>
     </gbean>

Note that the above configuration is done against a Geronimo server, which may or may not be a cluster node. In other words, this configuration could be done against a kind of administration server that has all the necessary deployers.

3. The user deploys artifacts against the master repository. The target server builds the corresponding ConfigurationData locally and sends it to the configured nodes. More precisely, the ConfigurationData is "sent" through standard RPC over the JMX communication infrastructure. The content of the ConfigurationData, e.g. JARs, WARs, et cetera, is sent via the remote upload servlet used by the deployer CLI. Note that if all the servers have access to the master repository, then a user can simply configure a no-op ClusterConfigurationStoreDelegate so that the artifact upload step is skipped (see the sketch below).

You can expect the same type of approach for the control, i.e. start, stop, et cetera, of configurations.
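
For example, starting a configuration on SAMPLE_REMOTE_NODE would boil down to a plain JMX call over the connector configured in the NodeInfo GBean above. The sketch below uses the standard javax.management.remote API with those connection details; the ObjectName of the remote configuration manager, the operation name and the artifact id are placeholders, not a settled MBean contract:

    import java.util.HashMap;
    import java.util.Map;

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class RemoteStartSketch {
        public static void main(String[] args) throws Exception {
            // Connection details taken from the SampleRemoteNodeInfo GBean above.
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1100/JMXConnector");
            Map<String, Object> environment = new HashMap<String, Object>();
            environment.put(JMXConnector.CREDENTIALS, new String[] {"system", "manager"});

            JMXConnector connector = JMXConnectorFactory.connect(url, environment);
            try {
                MBeanServerConnection connection = connector.getMBeanServerConnection();
                // Placeholder ObjectName, operation name and artifact id: the MBean
                // exposed by the node-side clustering support is not settled yet.
                ObjectName configurationManager =
                    new ObjectName("geronimo:name=ClusterConfigurationManager");
                connection.invoke(configurationManager, "startConfiguration",
                    new Object[] {"com.acme/sample-app/1.0/car"},
                    new String[] {String.class.getName()});
            } finally {
                connector.close();
            }
        }
    }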

I also intend to implement remote start and stop of servers by  
talking to gshell instances. However, I will work on it after the  
above features.

As you are also working on clustering stuff, could you please give us a heads-up?

Thanks,
Gianny


On 06/11/2007, at 9:34 AM, Jeff Genender wrote:

> Gianny,
>
> Since there are multiple clustering implementations going on at the
> same time, could you please keep us apprised of what you are doing so
> we don't clash?
>
> Thanks,
>
> Jeff
>
> Gianny Damour wrote:
>> Hi,
>>
>> I resumed some work on clustered deployment this weekend. I think this
>> will be completed in about 2-3 weeks. It will allow distribute,
>> uninstall, start, stop, et cetera of configurations to a cluster as a
>> single logical operation. I am keen to get this change in for 2.1, if
>> it does not delay 2.1.
>>
>> Thanks,
>> Gianny
>>
>> On 02/11/2007, at 4:00 AM, Kevan Miller wrote:
>>
>>> I think it's time to start discussing the particulars of a 2.1  
>>> release.
>>>
>>> There have been a lot of advancements in our plugin infrastructure.
>>> There have also been the pluggable console enhancements. It would be
>>> good to get a release out with these capabilities. They provide a
>>> more solid platform for future enhancements, I think.
>>>
>>> There's also GShell and new monitoring capabilities. I'm probably
>>> missing a few other new functions.
>>>
>>> Finally, IIUC, 2.1 would be able to support a Terracotta plugin. I'd
>>> also be very interested to hear what WADI capabilities could be
>>> exposed.
>>>
>>> I'm willing to bang the release manager drum. I see that Joe has
>>> already started tugging on the TCK chain.
>>>
>>> What do others think? How close are we to a 2.1 release? What
>>> additional capabilities and bug fixes are needed? Can we wrap up
>>> development activities in the next week or two?
>>>
>>> --kevan

