qpid-users mailing list archives

From Tom Mathews <darkphi...@hotmail.com>
Subject RE: The waiting game [client sends 0 outgoing size]
Date Fri, 06 Jun 2014 00:15:30 GMT



We are indeed planning on handling a large number of clients (millions of concurrent connections,
multiple links per connection, distributed of course across load-balanced servers).
What would set_offer look like? I see pn_link_offered, but I can't tell that it
actually has any effect (link->available doesn't seem to be used).
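
For reference, a minimal sketch of how pn_link_offered would be called
(assuming the 0.7-era proton-c engine API; the tag, counts, and payload below
are placeholders). Note that even with link credit, no transfer frame goes out
while the session's remote incoming window is 0, which is the stall described
further down:

    #include <proton/engine.h>

    /* Sketch: advertise that 'pending' deliveries are ready on a sender link.
     * pn_link_offered() records the count in the link's "available" field;
     * whether that value ever reaches the peer in a flow frame is exactly
     * the open question here. */
    static void offer_and_send(pn_link_t *sender, int pending)
    {
      pn_link_offered(sender, pending);

      if (pn_link_credit(sender) > 0) {
        pn_delivery(sender, pn_dtag("d-0", 3));      /* placeholder tag */
        const char body[] = "hello";
        pn_link_send(sender, body, sizeof(body) - 1);
        pn_link_advance(sender);
      }
    }
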
-TomM

> Date: Thu, 5 Jun 2014 14:59:00 -0400
> From: tross@redhat.com
> To: users@qpid.apache.org
> Subject: Re: The waiting game [client sends 0 outgoing size]
> 
> Tom,
> 
> I'm not sure I understand why the server sets the incoming window the
> same as the client's outgoing window.  Shouldn't the server set the
> incoming window to some value large enough to prevent pipeline-stalling
> and small enough to prevent incoming frames from consuming too much memory?
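
For what it's worth, on the proton-c engine the receiver-side knob is
pn_session_set_incoming_capacity(): the incoming window advertised in flow
frames is derived from that byte capacity and the max frame size. A minimal
sketch, with a purely illustrative 1 MiB figure:

    #include <proton/engine.h>

    /* Sketch: give the receiving session a bounded, non-zero incoming
     * capacity so the advertised window is large enough to avoid stalling
     * the pipeline but still caps in-flight frame memory. */
    static pn_session_t *open_bounded_session(pn_connection_t *conn)
    {
      pn_session_t *ssn = pn_session(conn);
      pn_session_set_incoming_capacity(ssn, 1024 * 1024);  /* bytes (illustrative) */
      pn_session_open(ssn);
      return ssn;
    }
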
> 
> If your objective is to manage a very large number of clients and you
> don't want to provide incoming capacity until there are messages to be
> sent, I think pn_session_t would need to add something like "set_offer"
> so the sender can indicate that there are bytes/frames to send.
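
Purely as a sketch of the shape such an addition might take (this API does not
exist in proton-c; the name and parameter are hypothetical):

    /* Hypothetical -- not part of proton-c. Would let a sender state up
     * front how many frames it has pending, so the peer can size its
     * incoming window before any transfer is possible. */
    void pn_session_set_offer(pn_session_t *ssn, uint32_t frames_pending);
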
> 
> -Ted
> 
> On 06/05/2014 02:19 PM, Tom Mathews wrote:
> > 
> > 
> > AMQP Qpid sets the outgoing window size (the maximum number of transfer
> > frames to expect from the client) in the session BEGIN to the count of
> > currently enqueued messages. Our AMQP service honors this when replying
> > with the initial FLOW message, setting the incoming window size (the
> > maximum number of transfer frames the client is allowed to send) to the
> > same value.
> >
> > The problem is that there is rarely a message enqueued when the session
> > is started, so the outgoing/incoming window size is set to 0, which
> > prevents the client from communicating further. The developer in charge
> > of the service points out that they are honoring the expectations of the
> > client, and I tend to agree: it makes sense that they could optimize a
> > link while it has 0 expected transfers and wait for an updated flow to
> > renegotiate a new window.
> > 
> > We're not using the Messenger class; we're using the lower-level
> > classes. I can reproduce this behavior by using the proton project with
> > the command-line parameters -c 127.0.0.1 -a TESTING against a version of
> > the service running locally. Diving into the code,
> > pn_session_outgoing_window looks only at the currently pending
> > session->outgoing_deliveries. That's correctly updated in
> > pn_advance_sender when I submit a message... but in
> > pn_process_tpwork_sender we have a 0 remote_incoming_window, so we never
> > send a transfer. Naturally, the one place a pn_post_flow occurs on a
> > sender link is in pn_do_transfer... after a transfer:
> > 
> >   // XXX: need better policy for when to refresh window
> >   if (!ssn->state.incoming_window &&
> >       (int32_t) link->state.local_handle >= 0) {
> >     pn_post_flow(transport, ssn, link);
> >   }
> > 
> > I can't call pn_link_flow, as that's only for modifying receiver link
> > credits, and it asserts on a sender.
> > 
> > Questions:
> > Am I using AMQP wrong? :)
> > Is there any way to send a flow for the sending link to set a new
> > anticipated window?
> > How do we renegotiate as our window shrinks?
> > 
> > Thank you very much for your time,
> > -Tom Mathews
> > 
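To make the stall concrete, a minimal sketch of the sender path described
above (0.7-era engine API; the link name, tag, and payload are placeholders):
the delivery is buffered on the session, but no transfer frame is written
while the peer's incoming window stays at 0.

    #include <proton/engine.h>

    /* Sketch: queue one message on a sender link. The delivery lands in
     * session->outgoing_deliveries, but with remote_incoming_window == 0
     * pn_process_tpwork_sender never emits the transfer until a flow frame
     * widens the window. */
    static void queue_one_message(pn_session_t *ssn)
    {
      pn_link_t *snd = pn_sender(ssn, "sender-0");             /* placeholder */
      pn_terminus_set_address(pn_link_target(snd), "TESTING");
      pn_link_open(snd);

      pn_delivery(snd, pn_dtag("t-0", 3));                     /* placeholder tag */
      const char body[] = "payload";
      pn_link_send(snd, body, sizeof(body) - 1);
      pn_link_advance(snd);  /* delivery is pending; the transfer waits on the window */
    }
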
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
> For additional commands, e-mail: users-help@qpid.apache.org
> 
