river-dev mailing list archives

From Sim IJskes - QCG <...@qcg.nl>
Subject Re: Service Wrapper Example
Date Mon, 15 Feb 2010 17:43:16 GMT
Tom Hobbs wrote:

> Certainly in my experience, detecting errors and recovering is always the
> job of the client.  To use a daft example; why would a web page detect that
> a browser has unexpectedly disappeared and try to find a new browser to
> display itself on?  But in the event of a web server going down, it's always
> the browser/etc that needs to go and find another copy of the page
> somewhere.

This is not always the case. In the transport layer, for instance, a 
server can detect that an ack/nack is overdue and start a retransmission.

But that's not what I tried to express. In that specific email I meant a 
client of the service. I haven't seen any self-healing behaviour in the 
JERI transports, or in the layers between the actual 
java.lang.reflect.Proxy of the service and its transport, so any hiccup 
there will lead to a RemoteException. So I guess, with the current state 
of affairs, the only place for self-healing (while keeping the remote 
reference the same) is a smart proxy.

What you have done is create a ServiceWrapper that does the 
wrapping/proxying on the client's initiative and retrieves a new remote 
reference after every transport error. This is also a perfectly valid 
approach.
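For illustration, a minimal sketch of such a client-side wrapper built on java.lang.reflect.Proxy. The Echo interface and the Supplier-based re-lookup are hypothetical stand-ins for a real service interface and a fresh lookup/discovery round:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.rmi.RemoteException;
import java.util.function.Supplier;

// Hypothetical service interface, standing in for any remote service API;
// the real delegate would be a proxy obtained from a lookup service.
interface Echo {
    String echo(String msg) throws RemoteException;
}

// Client-side wrapper: on a transport error (RemoteException), discard the
// stale reference, fetch a fresh one from the supplied lookup strategy,
// and retry the call once.
class RetryWrapper implements InvocationHandler {
    private final Supplier<Echo> lookup; // e.g. a new discovery/lookup round
    private Echo delegate;

    RetryWrapper(Supplier<Echo> lookup) {
        this.lookup = lookup;
        this.delegate = lookup.get();
    }

    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
        try {
            return m.invoke(delegate, args);
        } catch (Exception e) {
            Throwable cause = e.getCause();
            if (cause instanceof RemoteException) {
                delegate = lookup.get();     // self-heal: new remote reference
                return m.invoke(delegate, args);
            }
            throw cause != null ? cause : e;
        }
    }

    static Echo wrap(Supplier<Echo> lookup) {
        return (Echo) Proxy.newProxyInstance(
                Echo.class.getClassLoader(),
                new Class<?>[] { Echo.class },
                new RetryWrapper(lookup));
    }
}
```

Any call that fails with a RemoteException is retried exactly once against a freshly looked-up reference; whether one retry is enough, and how the Supplier finds a new reference, is up to the client.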

The only problem I see (in both scenarios) is that when an anonymous 
(not registered, but exported) remote reference gets serialized, for 
instance as a return value from a call to the service, and that 
reference is passed around the system, it will still experience 
transport errors. So this remote reference needs to be wrapped as well, 
either on the server side or on the client side.
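One way to handle that on the client side, sketched under the same dynamic-proxy assumptions as above, is to have the invocation handler wrap any remote reference it receives as a return value; the refresh strategy for such an anonymous reference is then to re-invoke the call that produced it on the (already self-healing) parent proxy. Session and SessionFactory are hypothetical:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.rmi.Remote;
import java.rmi.RemoteException;

// Hypothetical interfaces: a factory service that hands out anonymous
// (exported but unregistered) remote references.
interface Session extends Remote {
    String ping() throws RemoteException;
}

interface SessionFactory extends Remote {
    Session open() throws RemoteException;
}

// When a call on the wrapped service returns another remote reference,
// wrap that too. A stale returned reference is refreshed by redoing the
// producing call on the parent proxy.
final class ReturnWrapping {
    static Object wrapIfRemote(Object parent, Method producer,
                               Object[] producerArgs, Object result) {
        if (!(result instanceof Remote) || !producer.getReturnType().isInterface()) {
            return result; // plain value: pass through untouched
        }
        InvocationHandler h = new InvocationHandler() {
            private Object delegate = result;

            public Object invoke(Object p, Method m, Object[] args) throws Throwable {
                try {
                    return m.invoke(delegate, args);
                } catch (Exception e) {
                    Throwable cause = e.getCause();
                    if (cause instanceof RemoteException) {
                        // stale reference: redo the producing call for a fresh one
                        delegate = producer.invoke(parent, producerArgs);
                        return m.invoke(delegate, args);
                    }
                    throw cause != null ? cause : e;
                }
            }
        };
        return Proxy.newProxyInstance(
                producer.getReturnType().getClassLoader(),
                new Class<?>[] { producer.getReturnType() },
                h);
    }
}
```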

While writing this, I'm thinking this might also be fixed in the 
invocation layer. Although that would still only guard against transport 
errors, not against dropping a member of a server cluster.

> This style of service wrapping worked very well in a complex trading
> platform that I was previously involved in.  It enabled us, with the
> provision of some additional business rules - especially regarding state, to
> take down services at random and have the system automatically recover
> without interrupting the client.  It truly was a self-healing system.

Indeed, I can see this. And it is very practical for dynamic cluster 
scaling, for instance during deployment of a new version (I'm thinking 
of reducing the number of cluster members during the change, upgrading 
the freed members, and then doing a hot switchover between the two groups).

Gr. Sim

QCG, Software voor het MKB, 071-5890970, http://www.qcg.nl
Quality Consultancy Group b.v., Leiderdorp, Kvk Leiden: 28088397
