hc-dev mailing list archives

From Michel Onoff <michel.on...@web.de>
Subject Re: Are requests and responses self-contained?
Date Wed, 03 Aug 2011 10:00:52 GMT
On 2011-08-03 11:33, Oleg Kalnichevski wrote:
> On Sun, 2011-07-31 at 14:30 +0200, Michel Onoff wrote:
>> On 2011-07-30 00:12, Oleg Kalnichevski wrote:
>>> On Fri, 2011-07-29 at 20:10 +0200, Michel Onoff wrote:
>>>> Hello,
>>>>
>>>> consider the following code, which for some reason assumes that
>>>> requests have enclosing entities:
>>>>
>>>> // conn is a DefaultHttpServerConnection, for example
>>>> HttpEntityEnclosingRequest req1 =
>>>>     (HttpEntityEnclosingRequest) conn.receiveRequestHeader();
>>>> conn.receiveRequestEntity(req1);
>>>>
>>>> HttpEntityEnclosingRequest req2 =
>>>>     (HttpEntityEnclosingRequest) conn.receiveRequestHeader();
>>>> conn.receiveRequestEntity(req2);
>>>>
>>>> Can I later say req1.getRequestLine() or req1.getEntity(), even if in
>>>> the meantime I got req2 from the same connection?
>>>
>>> No, you can't. You cannot obtain the second request object from the
>>> connection before the request entity of the first one has been fully
>>> consumed. 
>>>
>>>> In other words, are requests fully self-contained and totally
>>>> independent from the connection once both the header and the entity (if
>>>> existing) have been received?
>>>>
>>>
>>> No, they are not, if content entities are being streamed (not buffered
>>> in memory).   
>>>
>>
>> So if I need to keep requests and responses, I have to copy them to
>> buffered equivalents using BasicHttpEntityEnclosingRequest,
>> BasicHttpResponse, ByteArrayEntity and friends, right?
>>
>> Thanks
>>
> 
> Yes, you do. One thing I do not understand, though: why don't you simply
> use two connections if you really need to process two messages
> concurrently?
> 
> Oleg
> 
> 

I'm implementing a load balancer.

Requests from browsers are first put on a common queue by producer
tasks. Requests can be part of an HTTP pipeline. Consumer tasks then
take them from the queue and send them to available web servers from a
pool. Requests from the same pipeline can be sent to different servers.

Similar processing happens for responses. If a server crashes, the
requests sent to it that did not get a response are reassigned to other
consumer tasks and resent to other servers in the pool.

Processing is therefore decoupled between the client-side and
server-side tasks, and is asynchronous. I need to preserve requests so
they can be resent in case of server failure, and I need to preserve
responses to guarantee the correct delivery order within each HTTP
pipeline, since responses for the same pipeline can arrive at the load
balancer out of order.
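
For the record, here is roughly what I have in mind for the buffering
step, based on the classes mentioned above: a minimal sketch against
the HttpCore 4.x API. The BufferedMessages helper and its method names
are just mine for illustration, not anything provided by the library.

import java.io.IOException;

import org.apache.http.HttpEntity;
import org.apache.http.HttpEntityEnclosingRequest;
import org.apache.http.HttpRequest;
import org.apache.http.HttpResponse;
import org.apache.http.entity.ByteArrayEntity;
import org.apache.http.message.BasicHttpEntityEnclosingRequest;
import org.apache.http.message.BasicHttpRequest;
import org.apache.http.message.BasicHttpResponse;
import org.apache.http.util.EntityUtils;

// Hypothetical helper (not part of HttpCore): detaches messages from
// the connection they were received on by buffering them in memory.
public class BufferedMessages {

    // Returns a self-contained copy of the request. Reading the entity
    // to the end also frees the connection to receive the next request.
    public static HttpRequest bufferRequest(HttpRequest request)
            throws IOException {
        if (!(request instanceof HttpEntityEnclosingRequest)) {
            BasicHttpRequest copy =
                    new BasicHttpRequest(request.getRequestLine());
            copy.setHeaders(request.getAllHeaders());
            return copy;
        }
        HttpEntityEnclosingRequest src = (HttpEntityEnclosingRequest) request;
        BasicHttpEntityEnclosingRequest copy =
                new BasicHttpEntityEnclosingRequest(src.getRequestLine());
        copy.setHeaders(src.getAllHeaders());
        copy.setEntity(buffer(src.getEntity()));
        return copy;
    }

    // Same idea for responses, so they can be queued and reordered later.
    public static HttpResponse bufferResponse(HttpResponse response)
            throws IOException {
        BasicHttpResponse copy =
                new BasicHttpResponse(response.getStatusLine());
        copy.setHeaders(response.getAllHeaders());
        copy.setEntity(buffer(response.getEntity()));
        return copy;
    }

    // Fully consumes a (possibly streamed) entity into a ByteArrayEntity.
    private static HttpEntity buffer(HttpEntity entity) throws IOException {
        if (entity == null) {
            return null;
        }
        ByteArrayEntity buffered =
                new ByteArrayEntity(EntityUtils.toByteArray(entity));
        buffered.setContentType(entity.getContentType());
        buffered.setContentEncoding(entity.getContentEncoding());
        return buffered;
    }
}

Since EntityUtils.toByteArray() reads a streamed entity to the end,
making the copy also leaves the connection free to receive the next
message, which should satisfy the ordering constraint you pointed out.
Whether buffering is acceptable obviously depends on how large the
entities can get, since everything ends up in memory.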


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@hc.apache.org
For additional commands, e-mail: dev-help@hc.apache.org

