tomcat-users mailing list archives

From Christopher Schultz <>
Subject Re: Performance problem on HTTP PUT of large binary data
Date Wed, 25 Jul 2007 17:33:41 GMT


Daniel Hagen wrote:
> I am currently developing an application that handles uploads of big 
> files using HTTP PUT.

> The client is a Java client writing 32K blocks to the server using
> chunked streaming mode with 32K chunk size.
> On performance tests I noticed the CPU load of the server going up to
> 100% and remaining there during the complete upload. I did some
> further profiling and finally got one (in my eyes) very probable
> candidate: The read(byte[]) methods of the (Coyote)InputStream return
> only blocks of ~1000 to ~7500 bytes resulting in an excessive amount
> of calls to the aforementioned methods in the process.

I wonder if this is due to the IP and ethernet chunking of data.
Ethernet (and IP) packets /can/ get really big, but the MTU is often
something small like 1500 bytes. Your OS (or OSs) might be getting lazy
and just returning each packet in its own block.

It looks like you are using non-buffered streams. Have you tried using
BufferedInputStream? That might allow more bytes to pile up before the
bytes are actually returned. Simply using a 32k byte array doesn't
really set any chunking size when reading. The InputStream class will
just give you what's available, not block to fill your buffer entirely.
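To illustrate, here is a minimal sketch of wrapping the request stream in a BufferedInputStream before reading (the ByteArrayInputStream here just stands in for the CoyoteInputStream; the class and method names are made up for the example):

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class BufferedReadDemo {

    // Wrap the raw stream in a BufferedInputStream so that many small,
    // packet-sized reads are coalesced into fewer, larger ones before
    // they reach the caller.
    static long countBytes(InputStream raw) throws IOException {
        InputStream in = new BufferedInputStream(raw, 32 * 1024); // 32K buffer
        byte[] buf = new byte[32 * 1024];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Simulated upload payload in place of the servlet's input stream
        byte[] payload = new byte[100_000];
        System.out.println(countBytes(new ByteArrayInputStream(payload)));
    }
}
```

In a servlet you would wrap request.getInputStream() the same way; the buffering itself is transparent to the rest of the read loop.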

From the javadoc:
"Reads some number of bytes from the input stream and stores them into
the buffer array b. The number of bytes actually read is returned as an
integer. This method blocks until input data is available, end of file
is detected, or an exception is thrown."

To me, that's a little vague. It only says that it will block until data
is available... it doesn't say how much is required for a successful
return. I suspect that it will return immediately when any amount of
data is available.
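That is exactly what the contract allows: a single read() may return far fewer bytes than requested. If you need the whole 32K before processing, you have to loop until the buffer is full, along these lines (a sketch; the helper name is my own):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class FillBuffer {

    // Keep calling read() until the buffer is full or EOF is reached;
    // any single read() call is allowed to return fewer bytes than asked for.
    static int readFully(InputStream in, byte[] buf) throws IOException {
        int off = 0;
        while (off < buf.length) {
            int n = in.read(buf, off, buf.length - off);
            if (n == -1) {
                break; // end of stream before the buffer filled
            }
            off += n;
        }
        return off; // number of bytes actually read
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[50_000];
        byte[] buf = new byte[32 * 1024];
        InputStream in = new ByteArrayInputStream(data);
        System.out.println(readFully(in, buf)); // full 32K buffer
        System.out.println(readFully(in, buf)); // remaining bytes
    }
}
```

DataInputStream.readFully() does essentially the same thing, though it throws EOFException instead of returning a short count at end of stream.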

> I also noticed a funny pattern in the number of bytes read, there
> seems to be a fixed maximum of ~7000 bytes (windows) and a similar
> but not equal number (~7700) on linux.

Sounds like a buffering issue. Try BufferedInputStream and re-test.

> Do you have any idea what could cause the described behavior and
> prevent the server from returning larger buffers? Any parameters I
> could check/tweak to overcome that problem?

I suspect that your server is fast enough to be able to steal small
amounts of data from the TCP stack each time, rather than actually
getting 32k all at once. Since you aren't buffering your input, you are
getting small bytes (ha!) of data instead of large ones.

Give buffering a try and let us know how it goes. If that doesn't do it,
you might want to look into Comet, which features non-blocking IO
capability, though I'm not entirely sure how that would help you, here ;)

-chris



