geronimo-dev mailing list archives

From Shawn Jiang <genspr...@gmail.com>
Subject Re: failover demo in sandbox
Date Wed, 03 Feb 2010 08:52:36 GMT
On Wed, Feb 3, 2010 at 2:51 AM, Kevan Miller <kevan.miller@gmail.com> wrote:

>
> On Feb 1, 2010, at 1:59 AM, Shawn Jiang wrote:
>
>
>
> On Wed, Jan 20, 2010 at 7:15 AM, Kevan Miller <kevan.miller@gmail.com> wrote:
>
>> I took a look at the failover demo currently in sandbox, over the weekend.
>> I made some updates to get it building/running on my Mac OS machine.
>>
>> I like the demo. It's a good demonstration of Geronimo's capabilities. I'd
>> be interested in seeing a formal release of the demo. A few demo and
>> Geronimo related issues we could be thinking about:
>>
>> 1) Windows support. The current demo is *nix-based. For all I know it may
>> only run on Mac OS.
>>
>
> Added Windows scripts to make the sample support Windows.
>
>>
>> 2) The failover demo currently includes Grinder. Grinder is a great tool.
>> However, it contains LGPL-licensed artifacts. So, we are going to remove it
>> from the failover demo. We can provide configuration files and the
>> grinder.py "client". However, if users want to run the demo using Grinder,
>> they will need to download Grinder on their own. We can provide download
>> instructions, but should instruct users to review the Grinder licensing
>> before doing so...
>>
>
> Modified/added some scripts to strip Grinder from our build.  Also updated
> the instructions to tell users how to download and configure Grinder
> themselves.
>
>
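For reference, a minimal grinder.properties along the lines those
instructions could describe, once users have downloaded Grinder on their
own (property names are Grinder 3's; the values are illustrative, not the
demo's actual settings):

grinder.script = grinder.py
grinder.processes = 1
grinder.threads = 10
grinder.runs = 100
grinder.consoleHost = localhost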
>>
>> 3) Geronimo currently requires multicast for the failover scenario. This
>> is great. However, we should offer unicast-based support as well. I
>> frequently encounter users who are unable to use multicast in their
>> environments. Providing unicast support would be a valuable addition, I
>> think.
>
>
>> --kevan
>
>
> Nice. Thanks Shawn.
>
> I think it's time to move failover out of sandbox.
>
> BTW,
> I've noticed that the farm-controller is advertising membership in cluster1
> -- leading to intermittent lookup failures. Using the MulticastTool:
>
> $ java org.apache.openejb.client.MulticastTool
>
> Connecting to multicast group: 239.255.3.2:6142
> LoopbackMode:false
> TimeToLive:1
> SoTimeout:0
> -------------------------------
> 12:56:03 - 10.0.1.194 - cluster1:ejb:ejbd://coltrane:4201
> 12:56:03 - 10.0.1.194 - farm:rmi://coltrane:1109/JMXConnector?cluster=cluster1
> 12:56:03 - 10.0.1.194 - cluster1:ejb:ejbd://coltrane:4211
> 12:56:03 - 10.0.1.194 - cluster1:ejb:ejbd://coltrane:4201
> 12:56:03 - 10.0.1.194 - farm:rmi://coltrane:1109/JMXConnector?cluster=cluster1
>
> 4201 == my farm-controller
> 4211 == my farm-node
>
> I'll get intermittent failures using multicast as the provider URL in a
> standalone client -- depending on which member is chosen by the client --
> ejbd://coltrane:4201 will fail and ejbd://coltrane:4211 will succeed.
>

I also ran into this issue, but I don't get failures when using the Grinder
client to access the cluster that includes the controller as a node.


I guess the controller's farm clustering name should be renamed to something
other than the default cluster1, so that the controller's advertisements can
be filtered out by the cluster1 group in the provider URL.
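If the controller advertised under its own group name, a standalone
client pointed at the multicast endpoint should then only see the farm
node. A rough sketch, assuming the OpenEJB remote client's group filter
on multicast provider URLs (the class and the JNDI name "MyBeanRemote"
are placeholders, not names from the demo):

import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;

public class MulticastLookup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.apache.openejb.client.RemoteInitialContextFactory");
        // Only advertisements from members of "cluster1" are accepted,
        // so a controller advertising under a different name would be
        // skipped during discovery.
        props.put(Context.PROVIDER_URL,
                "multicast://239.255.3.2:6142?group=cluster1");

        Context ctx = new InitialContext(props);
        Object bean = ctx.lookup("MyBeanRemote"); // placeholder JNDI name
        System.out.println("Looked up: " + bean);
    }
}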



>
> --kevan
>
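On point 3 above (unicast), one possible shape for the client side is an
explicit member list instead of multicast discovery. A sketch, assuming
the OpenEJB client's failover: URL scheme is available to the demo
(addresses are the ones from Kevan's output; the JNDI name is again a
placeholder):

import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;

public class UnicastLookup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.apache.openejb.client.RemoteInitialContextFactory");
        // List the cluster members directly; the client tries each
        // address in turn, so no multicast is needed in the user's
        // environment.
        props.put(Context.PROVIDER_URL,
                "failover:ejbd://coltrane:4201,ejbd://coltrane:4211");

        Context ctx = new InitialContext(props);
        Object bean = ctx.lookup("MyBeanRemote"); // placeholder JNDI name
        System.out.println("Looked up: " + bean);
    }
}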



-- 
Shawn
