db-derby-dev mailing list archives

From Knut Anders Hatlen <Knut.Hat...@Sun.COM>
Subject Re: network server overhead
Date Tue, 21 Aug 2007 12:20:34 GMT
Daniel John Debrunner <djd@apache.org> writes:

> I ran a simple test of executing VALUES 1 using a prepared statement
> over the network server in auto-commit mode. 10.3 seemed to be the
> same performance as 10.2, maybe up to 4% slower. The 10.2 numbers for
> me were consistent, but the 10.3 numbers seemed to vary from 96% to
> 100% of the 10.2 numbers.
> I was actually surprised because I thought I had seen some claims of much
> faster performance with 10.3 server.
> I was getting around 1,630 transactions per second (each transaction
> is a VALUES 1 statement) on a 100Mbit network.

I tried to run a similar test with VALUES 1 in auto-commit mode over the
network (the actual test is attached as Values1.java). The test included
30 runs of each of the following configurations:

  - Derby 10.2
  - Derby 10.3 (with security manager)
  - Derby 10.3 (without security manager)

All tests were run on Solaris 10 with Sun Java SE 6, both on the server
side and on the client side. Each run had a warm-up period of 45 seconds
and a steady-state period of 60 seconds. The network server was
restarted between each run.
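The attached Values1.java is not reproduced here, but the loop it describes can be sketched as below. This is a hypothetical reconstruction, not the attached file: the JDBC URL, database name, and class/method names are illustrative assumptions, and only the timing structure (45 s warm-up, 60 s steady state, one VALUES 1 per auto-committed transaction) comes from the description above.

```java
import java.sql.*;

// Hypothetical sketch of the Values1 benchmark loop; the real
// Values1.java attached to the original message may differ.
public class Values1Sketch {

    // Throughput in transactions per second, given a transaction
    // count and an elapsed wall-clock time in milliseconds.
    static double throughput(long txns, long elapsedMillis) {
        return txns * 1000.0 / elapsedMillis;
    }

    // Execute "VALUES 1" in a tight loop for the given duration,
    // returning the number of completed transactions. In auto-commit
    // mode each executeQuery() is its own transaction.
    static long run(PreparedStatement ps, long millis) throws SQLException {
        long txns = 0;
        long stop = System.currentTimeMillis() + millis;
        while (System.currentTimeMillis() < stop) {
            try (ResultSet rs = ps.executeQuery()) {
                rs.next(); // fetch the single row
            }
            txns++;
        }
        return txns;
    }

    public static void main(String[] args) throws SQLException {
        // Placeholder client URL; host, port and database are assumptions.
        Connection c = DriverManager.getConnection(
                "jdbc:derby://localhost:1527/testdb;create=true");
        c.setAutoCommit(true);
        PreparedStatement ps = c.prepareStatement("VALUES 1");

        run(ps, 45_000);               // 45 s warm-up, results discarded
        long txns = run(ps, 60_000);   // 60 s steady-state measurement
        System.out.printf("%.0f txns/s%n", throughput(txns, 60_000));
        c.close();
    }
}
```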

What I noticed was that

  a) the variability was about the same for 10.2 and 10.3,

  b) any difference between 10.2 and 10.3 was small compared to the
     run-to-run variability,

  c) on average for all 30 runs, 10.3 with security manager had 0.64%
     higher throughput than 10.2, and 10.3 without security manager had
     0.68% higher throughput than 10.2

  d) when looking at the throughput graphs (attached as values1.png),
     the peaks for 10.3 seem to be at a higher level than the peaks for
     10.2. The 13 best runs with 10.3 are better than any of the 10.2
     runs, and the 5 worst runs with 10.2 are worse than any of the 10.3
     runs. (Since there were twice as many runs with 10.3, these numbers
     are of course a little skewed, but even if you remove any one of
     the 10.3 batches, you get the 6 or 7 best runs with 10.3 and the 5
     worst runs with 10.2.)

These numbers are as I would have expected. Since the test is
single-threaded, and the work that needs to be done by the embedded
driver to execute VALUES 1 is very small, the test is completely bounded
by the network latency. The ~0.7% improvement I saw might be caused by
reduced CPU usage in the 10.3 engine, but it could just as well be
noise. I would however expect a larger improvement if the VALUES 1 test
were run with a larger number of concurrent users, since then we might
get closer to saturating the CPU on the server.

It might be possible to improve single-threaded network server
performance for the VALUES 1 test by reducing the CPU usage, but I think
it would be hard to get a noticeable improvement without reducing the
number of round-trips. Currently, there are two round-trips (one to
fetch data and one to (auto-)commit). I don't think any of them can be
skipped easily, though.
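The cost of that second round-trip can be made concrete: a client that disables auto-commit and commits every N statements pays one commit round-trip per N fetches instead of one per fetch. This changes the transaction semantics, so it is an illustration of where the round-trips go rather than a fix for the VALUES 1 test; the URL, batch size, and roundTrips helper below are assumptions for the sketch.

```java
import java.sql.*;

// Sketch: amortizing the commit round-trip by batching commits.
// Not equivalent to the auto-commit test above, since N statements
// now share one transaction.
public class BatchedCommit {

    // Round-trips for `statements` fetches when committing every
    // `batchSize` statements: one fetch round-trip per statement plus
    // one commit round-trip per (possibly partial) batch.
    static long roundTrips(long statements, int batchSize) {
        return statements + (statements + batchSize - 1) / batchSize;
    }

    public static void main(String[] args) throws SQLException {
        // Placeholder client URL, as in the sketch above.
        Connection c = DriverManager.getConnection(
                "jdbc:derby://localhost:1527/testdb");
        c.setAutoCommit(false); // no implicit commit per statement
        PreparedStatement ps = c.prepareStatement("VALUES 1");
        final int batchSize = 100;
        for (int i = 0; i < 10_000; i++) {
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
            }
            if (i % batchSize == batchSize - 1) {
                c.commit(); // one commit round-trip per 100 fetches
            }
        }
        c.commit(); // flush any partial batch
        c.close();
    }
}
```

With batchSize = 100, 10,000 statements cost 10,100 round-trips instead of the 20,000 that auto-commit would incur.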

Knut Anders
