db-derby-dev mailing list archives

From Olav Sandstaa <Olav.Sands...@Sun.COM>
Subject Re: Performance regressions
Date Fri, 17 Feb 2006 15:10:43 GMT
Mike Matrigali <mikem_app@sbcglobal.net> wrote:
> Thanks for the info, anything is better than nothing.
> Any chance to measure something like 1000 records per commit.
> With one record per commit for the update operations you are
> not really measuring the work to do the operation just the
> overhead of commit -- at least for the single user case --
> assuming your machine is set up to let derby do real disk
> syncs (no write cache enabled).

The write cache on the disks is enabled in order to make this test CPU
bound rather than disk bound, also for insert, update and delete load. I
agree that with only one insert/update/delete operation per
transaction/commit we include a lot of overhead for the commit. The
intention is not to measure throughput, but to identify regressions,
and even if the commit takes 50 percent (just guessing) of the CPU
cost of an update transaction, it should still be possible to
detect changes in the update operation itself that influence
CPU usage or throughput.
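
For reference, the single-operation-per-commit pattern described above
would look roughly like the following JDBC loop. This is only a minimal
sketch; the table name, column names and key range are made up and are
not taken from the actual test client.

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.PreparedStatement;
  import java.sql.SQLException;
  import java.util.Random;

  public class SingleUpdatePerCommit {
      public static void main(String[] args) throws SQLException {
          // Embedded Derby connection; the database name is arbitrary.
          Connection conn = DriverManager.getConnection("jdbc:derby:testdb");
          conn.setAutoCommit(false);

          // Hypothetical table/columns used only for illustration.
          PreparedStatement update = conn.prepareStatement(
              "UPDATE accounts SET balance = balance + 1 WHERE id = ?");
          Random random = new Random();

          for (int i = 0; i < 10000; i++) {
              // One random record per transaction: the commit (and the
              // disk sync it forces when the write cache is disabled)
              // dominates the cost of the transaction.
              update.setInt(1, random.nextInt(100000));
              update.executeUpdate();
              conn.commit();
          }

          update.close();
          conn.close();
      }
  }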

Unfortunately I would have to make major changes to the test client to
make it do 1000 updates per commit. All clients work on the same
table and perform their operation on a random record, so with multiple
updates per transaction this would lead to a lot of deadlocks. I think
it would be better to write a new load client than to try to tweak the
one I run right now (see the sketch below).
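
If such a client were written, one way to keep concurrent clients from
deadlocking on the shared table would be to touch the rows in a fixed
(for example ascending) key order, so all clients acquire row locks in
the same order. Again, this is just a sketch with made-up table and
column names, not the actual client.

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.PreparedStatement;
  import java.sql.SQLException;
  import java.util.Arrays;
  import java.util.Random;

  public class BatchedUpdatesPerCommit {
      public static void main(String[] args) throws SQLException {
          Connection conn = DriverManager.getConnection("jdbc:derby:testdb");
          conn.setAutoCommit(false);

          PreparedStatement update = conn.prepareStatement(
              "UPDATE accounts SET balance = balance + 1 WHERE id = ?");
          Random random = new Random();

          // Pick 1000 random keys and sort them so that every client
          // locks rows in ascending key order; with all clients locking
          // in the same order the circular wait that causes deadlocks
          // cannot occur (at the cost of less random access).
          int[] keys = new int[1000];
          for (int i = 0; i < keys.length; i++) {
              keys[i] = random.nextInt(100000);
          }
          Arrays.sort(keys);

          for (int key : keys) {
              update.setInt(1, key);
              update.executeUpdate();
          }
          conn.commit();   // a single commit covers all 1000 updates

          update.close();
          conn.close();
      }
  }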

I am also running some tests where the write cache on the disks is
disabled (as it should be), but I have not included the results on
the web page yet (mostly due to much higher variation in the test
results).

..olav

