qpid-proton mailing list archives

From Rafael Schloming <...@alum.mit.edu>
Subject Re: Flow Control callback...
Date Wed, 16 Apr 2014 15:26:24 GMT
Hi Clebert,

First off, a patch is a great way to get started contributing something.
For bigger/more involved patches we often use review board (
reviews.apache.org) to share, discuss and comment on patches in detail, but
for something small like this that's not really necessary.

Now for your patch, I have a couple of thoughts. I'm not sure if you're
aware of it, but the event API is intended to provide the sort of
functionality you're going for here. I don't know offhand whether it
covers the exact events you've defined, but if it doesn't, it would make
sense to look into extending it to cover your scenario.

One thing to be aware of is that the event API is attempting to offer the
same functionality while avoiding the sort of deeply nested callbacks that
your patch is adding. The reason is some general reentrancy issues that
occur with those kinds of callbacks. This is probably easiest to explain
in terms of some specific scenarios, but it's still a bit difficult
without a whiteboard, so bear with me. ;-)

Let's say you've embedded the protocol engine in a concurrent application
that is attempting to bridge messages between connections. In this context
each engine is a shared data structure and needs to be protected with a
mutex whenever it is accessed. Now let's say this bridging app handles a
credit increment callback by pulling messages off of another connection.
The following logical stack traces then become possible:

  Thread 1:
    grab mutex for engine A
    invoke transport's process()
    ...
    ...
    engine invokes deeply nested callback
    application tries to grab a message from engine B
    application grabs mutex for engine B
    ...

So far so good, but now let's suppose that we have Thread 2 shuffling
messages in the other direction:

  Thread 2:
    grab mutex for engine B
    invoke transport's process()
    ...
    ...
    engine invokes deeply nested callback
    application tries to grab a message from engine A
    application grabs mutex for engine A
    ...

Here is where we run into trouble: two different threads are now
attempting to acquire the same two locks in opposite order, and of course
that results in a deadlock. While this is just one specific scenario, this
general class of deadlock is really easy to run into if you aren't very
careful. Even in a completely single-threaded app you can run into similar
issues if A and B end up being the same engine (say you're replying to a
request or something). In that case the engine ends up becoming reentrant
in unexpected and difficult-to-anticipate ways.
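
To make the lock inversion concrete, here is a stripped-down Java sketch
of the same situation. All of the names (BridgeDeadlock, lockA, lockB,
pause) are made up for illustration; the point is just the opposite
acquisition order:

  // Illustrative only: two locks standing in for two engines, acquired in
  // opposite order by the two bridging threads.
  public class BridgeDeadlock {
    static final Object lockA = new Object();  // guards "engine A"
    static final Object lockB = new Object();  // guards "engine B"

    public static void main(String[] args) {
      new Thread(() -> {
        synchronized (lockA) {        // Thread 1: processing engine A
          pause();                    // engine fires a nested callback here...
          synchronized (lockB) { }    // ...which reaches into engine B
        }
      }).start();

      new Thread(() -> {
        synchronized (lockB) {        // Thread 2: processing engine B
          pause();
          synchronized (lockA) { }    // ...which reaches into engine A
        }
      }).start();
      // With unlucky timing each thread holds its outer lock and blocks
      // forever waiting for the other's: the deadlock described above.
    }

    static void pause() {
      try { Thread.sleep(100); } catch (InterruptedException ignored) { }
    }
  }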

The way the event API avoids this issue is that rather than executing a
callback from deep inside engine code, it defers execution of the callback
by posting an event to an event queue. The event can then be pulled off
the queue and handled after the engine has completed its processing and
returned itself to a consistent state. This also avoids the mutex issue,
since the application will have released an engine's mutex by the time it
handles any events for that engine.
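
In code the deferred pattern looks roughly like the sketch below. I'm
using the proton-j names as I remember them (Collector.Factory.create(),
Connection.collect(), Collector.peek()/pop()), so treat the exact
signatures as illustrative rather than authoritative:

  import org.apache.qpid.proton.engine.Collector;
  import org.apache.qpid.proton.engine.Connection;
  import org.apache.qpid.proton.engine.Event;

  // Sketch: register a collector so the engine queues events instead of
  // calling back into application code from deep inside its own processing.
  public class DeferredDispatch {
    public static void main(String[] args) {
      Collector collector = Collector.Factory.create();
      Connection connection = Connection.Factory.create();
      connection.collect(collector);

      // ... bind a transport, feed it bytes, let the engine process ...

      // Only after the engine has returned (and the application has
      // released that engine's mutex) are the queued events drained:
      Event event;
      while ((event = collector.peek()) != null) {
        handle(event);      // application-defined dispatch
        collector.pop();
      }
    }

    static void handle(Event event) {
      // switch on event.getType(), or whatever dispatch style you prefer
    }
  }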

So as far as I can tell there are really three (potential) differences
between the way the event API works and what your patch is providing:

  1. The semantics of the events. There may be some state change you are
capturing that isn't exposed through the event API.

  2. The style of dispatch. The event API models event types as enums, so
currently you'd have to write a switch statement rather than overriding a
callback.

  3. The timing of the dispatch. With your style of callbacks the callbacks
are always executed for you. With the event API it's a little bit more
do-it-yourself: Proton doesn't have a main loop, so it can't actually
execute the callbacks for you; instead it gives you back the events so you
can dispatch them yourself from the relative safety of your own event loop
(see the sketch below this list).
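
For (2) and (3) together, the do-it-yourself, enum-style dispatch would
look something like the fragment below. The Event.Type members and Event
accessors I'm using (LINK_FLOW, DELIVERY, getLink(), getDelivery()) are
guesses at the relevant ones, so double-check them against the actual
enum before relying on this:

  // Sketch of enum-style dispatch run from the application's own loop.
  // onCredit/onDelivery are application hooks, not Proton API.
  void dispatch(Event event) {
    switch (event.getType()) {
      case LINK_FLOW:
        onCredit(event.getLink());        // e.g. wake up a blocked producer
        break;
      case DELIVERY:
        onDelivery(event.getDelivery());  // e.g. route into the broker
        break;
      default:
        break;                            // ignore what you don't care about
    }
  }

Layering callback-style dispatch on top of that would then just be a
default handler that does this switch for you and invokes overridable
methods.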

Given that it would be nice to have one consistent event model rather than
two disjoint ones, I'd suggest the following approach. I think (1) is
pretty easily addressed by making sure we have event types covering the
state changes you are looking for. As for (2), it should be pretty easy and
natural to add event handler callbacks so we can provide both enum-style
dispatch and callback style dispatch. I think the biggest question is
around (3). If you depend on the deeply nested nature of the dispatch,
and on the fact that the callbacks are invoked from inside the engine,
I'd first like to
understand why, and then if you really really really need that style of
execution, I'd suggest we offer that by giving you the option of
implementing your own Collector. The Collector interface is essentially
just an abstraction over an event queue, so if you were able to implement
your own you could dispatch the event directly where it is posted rather
than queueing it for later dispatch.
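
Just to illustrate that last idea (the current Collector interface may not
expose the posting side like this, so treat it purely as a thought
experiment, not the real proton-j signature):

  // Thought-experiment sketch: the interface shape here is invented to
  // show the "dispatch where posted" idea.
  interface PostingCollector {
    // Called by the engine at the point an event would normally be queued.
    void post(Event event);
  }

  // An implementation that dispatches immediately instead of queueing.
  // Note that the handler then runs from inside engine processing, so all
  // of the reentrance caveats above apply.
  class ImmediateCollector implements PostingCollector {
    interface EventHandler { void onEvent(Event event); }  // hypothetical

    private final EventHandler handler;

    ImmediateCollector(EventHandler handler) { this.handler = handler; }

    @Override
    public void post(Event event) {
      handler.onEvent(event);
    }
  }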

--Rafael

On Tue, Apr 15, 2014 at 10:25 PM, Clebert Suconic <csuconic@redhat.com> wrote:

> I have changed trunk on my local fork to accept a callback on flow Control
> on a link. That way I can encapsulate my own Semaphore before calling a
> producer, or intercept flow control before routing on the HornetQ's server.
>
>
> Right now the flow control is pretty much only done inside Proton, meaning
> that either side will just keep sending forever until memory is
> exhausted... with this sort of callback I could intercept what I needed and
> do the proper pauses accordingly.
>
>
> I didn't find this functionality anywhere else. In case the implementation
> is valid... what's the right way to contribute this? (Notice I didn't write
> a testcase yet as I'm still learning the way of things on the project... but
> I'm doing pretty well already at hacking this.)
>
>
> It's a simple change I know. But I believe I could have more to come...
>
>
>
> https://github.com/clebertsuconic/qpid-proton/commit/681b129963639cae52fd83d1abb453bedbdb3955
