cxf-dev mailing list archives

From "Glynn, Eoghan" <eoghan.gl...@iona.com>
Subject RE: Client API, EPRs
Date Thu, 15 Mar 2007 17:37:11 GMT


Dan,

There are a bunch of orthogonal issues becoming conflated in this
discussion. 

So I'm going to take a step back and try disentangling the following:

1. transport-specific versus generic mechanism to set the decoupled
response endpoint (DRE)

2. policy-driven (either specified via XML config or programmatically)
versus some other API to set the DRE

3. cardinality of the DRE, i.e. one-per-something (Conduit or Client),
or unlimited

4. association of the DRE with the Conduit or the Client instance

5. lifecycle mgmt of the DRE, or how to trigger a shutdown

6. explicit application creation and shutdown of DREs


Dealing with each issue in turn, here's my position:

1. If it's claimed that a proposed new mechanism for setting the DRE is
superior to the existing mechanism by virtue of its genericity, then I
don't think it's unreasonable to expect it to be genuinely generic, as
opposed to sort of generic except for 'edge-case' transports. If some
transport specifics are required, let's at least try to dovetail these
with the general mechanism as neatly as possible.

2. Since we already control transports via policies specified either in
XML config or programmatically, for consistency it makes sense IMO to
stick with this model. We could genericize the policy by moving the
DecoupledEndpoint attribute to a new ConduitPolicy and having the
existing HTTPClientPolicy pick it up from there by type extension. Other
transports would also extend ConduitPolicy with their own client policy
type if necessary and could, I guess, add any additional info they need
(thus neatly solving the JMS issue). The generic ConduitPolicy could be
exposed via the AbstractConduit. The programmatic approach (as used for
example in the RM SequenceTest) could then also become generic (i.e.
call AbstractConduit.getConduitPolicy() as opposed to
HTTPConduit.getClient() to get hold of the policy object).
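To make that concrete, here's a rough sketch of the type-extension idea.
Note that ConduitPolicy, getConduitPolicy() and the decoupledEndpoint
property are proposed/hypothetical names, not existing CXF API; this is
just the shape I have in mind:

```java
// Sketch only: a generic ConduitPolicy owning the decoupled endpoint
// setting, with transport-specific policies extending it. All names
// here are proposals, not the current CXF API.
class ConduitPolicy {
    private String decoupledEndpoint;

    public String getDecoupledEndpoint() { return decoupledEndpoint; }

    public void setDecoupledEndpoint(String address) {
        this.decoupledEndpoint = address;
    }
}

// The existing HTTPClientPolicy would pick the attribute up by type
// extension; a JMS equivalent could add whatever extra info it needs.
class HTTPClientPolicy extends ConduitPolicy {
    // HTTP-specific attributes would stay here as before.
}

abstract class AbstractConduit {
    private final ConduitPolicy policy;

    protected AbstractConduit(ConduitPolicy policy) { this.policy = policy; }

    // Generic accessor: callers no longer need HTTPConduit.getClient().
    public ConduitPolicy getConduitPolicy() { return policy; }
}

class HTTPConduit extends AbstractConduit {
    HTTPConduit(HTTPClientPolicy policy) { super(policy); }
}

public class ConduitPolicySketch {
    public static void main(String[] args) {
        AbstractConduit conduit = new HTTPConduit(new HTTPClientPolicy());
        // The programmatic route (as in the RM SequenceTest) becomes
        // transport-neutral:
        conduit.getConduitPolicy()
               .setDecoupledEndpoint("http://localhost:9999/decoupled");
        System.out.println(conduit.getConduitPolicy().getDecoupledEndpoint());
    }
}
```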

3. My main issue was to avoid a proliferation of automatically launched
DREs, as the lifecycle management of these DREs should be the
responsibility of CXF. Fair enough, the DRE shutdown isn't currently
done properly, but fixing it is a tractable problem, by virtue of the
cardinality being limited to one-per-Conduit. My whole point was that we
shouldn't move to a scenario where, for example, the application setting
a replyTo on an AddressingProperties instance in the request context
would cause CXF to automatically launch a new listener. The problem of
course is that allowing a per-request setting would let the application
cause many listeners to be created by CXF, without a good way for the
CXF runtime to know when these should be shut down. I take it from your
comments in more recent mails on this thread that this is not the sort
of mechanism you're looking for, correct?

4. I would favour continuing to associate the DRE with the Conduit as
opposed to the Client, because a) the DRE is a transport-level concern
IMO and b) there may not even be a Client instance involved in mediating
the invocation (e.g. when using the JAX-WS Dispatch mechanism).

5. The lifecycle management is much easier to deal with if the
cardinality is limited as per #3. As I said before, I'm happy with your
suggested explicit Client.close() API (which would presumably call down
to Conduit.close()).
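For what it's worth, the lifecycle I have in mind could be sketched
roughly as follows (the class shapes are simplified stand-ins, not the
real Client/Conduit interfaces):

```java
// Sketch only: Client.close() cascading to Conduit.close(), which shuts
// down the (at most one) automatically launched decoupled response
// endpoint. Simplified stand-ins for the real CXF types.
class DecoupledDestination {
    private boolean active = true;
    void shutdown() { active = false; }
    boolean isActive() { return active; }
}

class Conduit {
    private DecoupledDestination decoupled;  // cardinality: one per Conduit

    // Lazily launch the single decoupled endpoint on first use.
    DecoupledDestination getOrLaunchDecoupledEndpoint() {
        if (decoupled == null) {
            decoupled = new DecoupledDestination();
        }
        return decoupled;
    }

    void close() {
        if (decoupled != null) {
            decoupled.shutdown();  // the well-defined shutdown point
            decoupled = null;
        }
    }
}

class Client {
    private final Conduit conduit = new Conduit();
    Conduit getConduit() { return conduit; }
    void close() { conduit.close(); }  // the proposed explicit API
}

public class LifecycleSketch {
    public static void main(String[] args) {
        Client client = new Client();
        DecoupledDestination d =
            client.getConduit().getOrLaunchDecoupledEndpoint();
        System.out.println(d.isActive());   // true while the Client lives
        client.close();
        System.out.println(d.isActive());   // false once closed
    }
}
```

The one-per-Conduit cardinality is exactly what makes the close()
cascade sufficient: there is never a stray listener left behind.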

6. The application should IMO be free to set a replyTo for a DRE *not*
created by CXF, for example if that DRE is controlled by some third
party, or if the application explicitly calls
DestinationFactory.getDestination() itself and is prepared to handle the
shutdown when it's done. Not sure if your comment "I'm -1 to having two
mechanisms to do the exact same thing" indicates disagreement on this
point(?)
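To illustrate the distinction, an application-managed decoupled endpoint
might look something like this (the Destination/DestinationFactory
shapes here are simplified stand-ins for the real CXF transport
interfaces):

```java
// Sketch only: the application launches its own Destination and owns
// its shutdown. Simplified stand-ins for the real CXF transport API.
interface Destination {
    String getAddress();
    void shutdown();
}

class DestinationFactory {
    Destination getDestination(final String address) {
        return new Destination() {
            public String getAddress() { return address; }
            public void shutdown() { /* stop the listener */ }
        };
    }
}

public class AppManagedReplyTo {
    public static void main(String[] args) {
        DestinationFactory factory = new DestinationFactory();
        // The app launches the endpoint itself ...
        Destination replyTo =
            factory.getDestination("http://localhost:9998/replies");
        // ... would set it as the wsa:ReplyTo on its requests (not
        // shown here) ...
        System.out.println(replyTo.getAddress());
        // ... and is responsible for the shutdown when it's done.
        replyTo.shutdown();
    }
}
```

The key point is that the app, not CXF, knows when shutdown() is safe to
call for such endpoints.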

/Eoghan



> -----Original Message-----
> From: Dan Diephouse [mailto:dan@envoisolutions.com] 
> Sent: 15 March 2007 14:34
> To: cxf-dev@incubator.apache.org
> Subject: Re: Client API, EPRs
> 
> On 3/14/07, Glynn, Eoghan <eoghan.glynn@iona.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > We still need to support it at some level of course. And in the
> > > future we can support it at the EPR level too. What is so bad
> > > about that?
> > >
> > >
> > > > And as I stated, that would seem to defeat the purpose.
> > >
> > >
> > > For this single case. And in the future this single case probably 
> > > won't be valid. So your objection here really doesn't carry much 
> > > weight.
> >
> >
> > Well, I beg to differ ... you're motivating your proposal as a 
> > mechanism that would be *consistent across transports*.
> >
> > It's not logical IMO to then turn around and disregard one transport 
> > that the (supposedly standard) mechanism wouldn't work for.
> 
> 
> By your same logic it wouldn't make any sense to create the 
> JAX-WS standard.
> The JAX-WS Handler APIs don't provide enough extension points 
> for everyone to do everything they would want to do with 
> messages (like work at the stream level). Yet, that doesn't 
> mean that a standard isn't useful. And it doesn't mean that 
> the standard can't be improved on in the future to 
> accommodate future use cases. Standards almost always don't 
> meet everyone's needs.
> 
> You could also claim the same thing about the WS-Addressing 
> standard because it doesn't offer a standard way to address 
> JMS endpoints. These things will be addressed in time though, 
> and for now there are proprietary extensions to handle such 
> cases. It still works great for HTTP, TCP, XMPP, SMTP, etc though.
> 
> 
> > > I'm fine with limiting automatic launching to be per-Client.
> >
> >
> > Great as that's my main issue, i.e. to avoid a proliferation of 
> > automatically launched decoupled endpoints.
> 
> 
> I was never asking for anything other than that.
> 
> >
> > > > Instead the idea in the original Celtix code was to use a
> > > > reference counting scheme for the decoupled response endpoint,
> > > > and to allow this to be shared across client transport
> > > > instances. This was simply not ported over properly to CXF.
> > > >
> > > > The original scheme worked as the HTTPClientTransport was
> > > > created once per binding instance, had well-defined shutdown
> > > > semantics, and reused if possible a pre-existing listener for
> > > > the decoupled endpoint, even if this was created from another
> > > > HTTPClientTransport. This reuse was easy to do as
> > > > HTTPClientTransport registered the Jetty handler directly,
> > > > instead of going thru' the DestinationFactory, and thus could
> > > > easily check if a pre-existing handler was already registered.
> > >
> > >
> > > I don't see how this gets around the issues I mentioned in (a).
> > > It sounds like the decoupled destination would stick around until
> > > you shut down the HTTPClientTransport. And there is no way to
> > > automagically shut down the client transport really.
> >
> >
> > But you're proposing an explicit Client.close() API to handle this,
> > no?
> 
> 
> That is an option on the table.
> 
> My point is that your claim that this introduces a whole bunch of 
> lifecycle issues is wrong. *These lifecycle issues were here before.* 
> They have absolutely nothing to do with whether or not the endpoint 
> is automatically launched by the client or by configuration.
> 
> > > > > This brings up an interesting point: Currently I can only
> > > > > associate a decoupled destination with a client's conduit
> > > > > AFAIK. But this makes absolutely no sense to me - there are
> > > > > many decoupled destinations that could be associated with a
> > > > > client. For instance it might have a different acksTo than
> > > > > ReplyTo. Or I might have a different FaultTo.
> > > >
> > > >
> > > > I don't think you're correct here. If I go and explicitly set
> > > > the replyTo to a Destination that I've created (via a
> > > > DestinationFactory) then this will be used for the <wsa:ReplyTo>
> > > > in the outgoing message, as opposed to the back-channel
> > > > destination overwriting the explicit setting.
> > > >
> > > > Similarly the acksTo could be set to any Destination, but RM
> > > > just happens to be implemented to use the back-channel
> > > > destination for convenience. By convenience, I mean it avoids
> > > > the RM layer having to set up a separate in-interceptor-chain
> > > > to handle incoming out-of-band messages.
> > > >
> > > > The per-Conduit restriction only applies to *automatically
> > > > launched* decoupled response endpoints. The application can go
> > > > nuts explicitly creating response endpoints all over town if it
> > > > wants ...
> > > >
> > >
> > > First, I was talking about from a configuration point of view.
> > >
> > > Second, doesn't this kind of defeat the point of having the 
> > > decoupled destination in the conduit?
> >
> >
> > Nope I don't think it defeats the point.
> >
> > The point being that the lifecycle of any automatically launched
> > decoupled endpoint is the *responsibility of the CXF runtime*,
> > whereas the lifecycle of any Destinations explicitly launched by
> > the application is of course the *responsibility of the application
> > itself*.
> 
> 
> The application developer has to be aware of the lifecycle 
> regardless (as you seem to admit when saying the user will 
> need to call Client.close() below). It could be creating new 
> clients every so often on different ports in which case it 
> would still quickly exhaust its resources. Limiting the 
> cardinality, as you say below, doesn't prevent that.
> 
> > If we limit the cardinality of the automatically launched decoupled
> > endpoint to one-per-Conduit (equivalently, one-per-Client), then we
> > have a well-defined point at which it makes sense to close the
> > endpoint (i.e. when the Conduit is closed, as a side-effect of your
> > proposed new Client.close() API).
> 
> 
> Sure that's true, BUT it would be equally easy to close the 
> decoupled endpoint if it wasn't part of the Conduit. It is 
> very easy for the client to call destination.close() in 
> addition to conduit.close() when the Client itself is closed.
> 
> In this case the cardinality of the automatically launched 
> decoupled endpoint would be one per Client.
> 
> > If we do not limit the cardinality of the automatically launched
> > decoupled endpoints, then we'd have to either let these accumulated
> > endpoints remain active until either the Client is close()d or the
> > application exit()s, or we'd have to guess when it would make sense
> > to shut down a seemingly inactive decoupled endpoint. But this
> > guesswork is problematic, as the decoupled endpoint could have been
> > specified as the acksTo for some RM sequences. It would be invalid
> > for example to take the approach ... hey, there are no outstanding
> > MEPs for which this endpoint was specified as the replyTo so let's
> > just shut it down. Obviously that would pull the rug out from under
> > RM, which may receive any number of incoming out-of-band messages
> > on that endpoint until the sequence is terminated, and AFAIK by
> > default we allow the sequence to proceed indefinitely rather than
> > actively terminating and starting up a new one every N messages or
> > whatever.
> >
> 
> 
> I agree that we shouldn't just go willy-nilly launching decoupled 
> endpoints on every request. But it doesn't follow from that that the 
> response Destination should be part of the Conduit.
> 
> > On the other hand, if the application wants to make many
> > invocations on a single Client, each with a different replyTo, then
> > it's welcome to set up the relevant Destinations itself and then
> > explicitly shutdown() when it's done with each. The app knowing the
> > appropriate point for the shutdown to occur is the crucial point.
> >
> 
> 
> I'm -1 to having two mechanisms to do the exact same thing.
> 
> - Dan
> 
> --
> Dan Diephouse
> Envoi Solutions
> http://envoisolutions.com | http://netzooid.com/blog
> 
