cxf-dev mailing list archives

From Oleg Kalnichevski <>
Subject Re: Async http client experiments....
Date Sat, 28 Jul 2012 13:14:16 GMT
On Fri, 2012-07-27 at 15:20 -0400, Daniel Kulp wrote:
> I've committed a few experiments I've been working on to:
> Basically, I've been trying to find an async client that is somewhat usable 
> for CXF without completely re-writing all of CXF.   Not exactly an easy 
> task.  Not exactly an easy task.  For "POST"s, they are pretty much all designed 
> around blasting out pre-rendered content (like File objects or byte[] arrays).   That 
> doesn't really fit with CXF's way of streaming out the soap messages as they are created.   


> 4) Apache HTTP Components (HC)- this was the first one I tried, ran into 
> performance issues, abandoned it to test the others, then came back to it 
> and figured out the performance issue.  :-)   I had this "working", but a 
> simple "hello world" echo in a loop resulted in VERY VERY slow operation, 
> about 20x slower than the URLConnection in the JDK.   Couldn't figure out 
> what was going on which is why I started looking at the others.   I came 
> back to it and started doing wireshark captures and discovered that it was 
> waiting for ACK packets whereas the other clients were not.   The main issue 
> was that the docs for how to set the TCP_NODELAY flag (which, to me, should 
> be the default) seem to be more geared toward the 3.x or non-NIO versions.   
> Anyway, once I managed to get that set, things improved significantly.  For 
> non-chunked data, it seems to be working very well.  For chunked data, it 
> seems to work well 99% of the time.   It's that last 1% that's going to 
> drive me nuts.  :-(   It's occasionally writing out bad chunk headers, and 
> I have no idea why.   A raw wireshark look certainly shows bad chunk headers 
> heading out.   I don't know if it's something I'm doing or a bug in their 
> stuff.  Don't really know yet.

Hi Daniel

If my memory serves me well, we have not had a single confirmed case of
message corruption in many years. I took a cursory
look at your code and could not spot anything obviously wrong. I am
attaching a patch that adds wire and i/o event logging to your HTTP
conduit. If you set the 'org.apache.http' category to DEBUG priority you
should be able to see what kind of stuff gets written to and read from
the underlying NIO channel and compare it with what you see in your
wireshark captures.
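For instance, with the log4j 1.x style configuration that was typical at the time (the category names match the standard HttpComponents logging setup; appender details are up to you), something along these lines would turn the logging on:

```
log4j.logger.org.apache.http=DEBUG
log4j.logger.org.apache.http.wire=DEBUG
```

The 'org.apache.http.wire' category is very verbose, so it is best enabled only while chasing the corruption.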

If you let me know how to reproduce the issue I'll happily investigate
and try to find out what causes it. If there is anything wrong with
HttpCore I'll get it fixed. 
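As an aside on the TCP_NODELAY point above: the flag corresponds to the standard socket option that disables Nagle's algorithm, which is exactly what makes a client sit on small writes until an ACK comes back. A minimal sketch using a plain java.net.Socket, just to illustrate the option itself rather than HttpCore's NIO configuration:

```java
import java.net.Socket;
import java.net.SocketException;

public class TcpNoDelayDemo {
    public static void main(String[] args) throws SocketException {
        // An unconnected socket; options can be set before connecting.
        Socket socket = new Socket();
        System.out.println("default TCP_NODELAY: " + socket.getTcpNoDelay());

        // Disable Nagle's algorithm so small writes go out immediately
        // instead of being coalesced while waiting for an ACK.
        socket.setTcpNoDelay(true);
        System.out.println("after setTcpNoDelay(true): " + socket.getTcpNoDelay());
    }
}
```

In HttpCore's NIO layer the same option is set through the reactor/connection configuration rather than on a Socket you create yourself, which is presumably where the documentation gap Daniel hit comes in.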

> In any case, I'm likely going to pursue option #4 a bit more and see if I can 
> figure out the last issue with it.
> From a performance standpoint, for synchronous request/response, none of 
> them perform as well as the in-jdk HttpURLConnection for what we do.   Netty 
> came the closest at about 5% slower.  HC was about 10%, Jetty about 12%.  
> Gave up on Ning before running benchmarks.   

I am quite surprised that the difference compared to HttpURLConnection is
so small. In my experience a decent blocking HTTP client
outperforms a decent non-blocking HTTP client by as much as 50% as long
as the number of concurrent connections is moderate (<500). NIO starts
paying off only with concurrency well over 2000 or when connections stay
idle most of the time.
