db-derby-dev mailing list archives

From Mike Matrigali <mikem_...@sbcglobal.net>
Subject Re: Performance regressions
Date Fri, 17 Feb 2006 17:00:28 GMT
ok, I was hoping that for single-user testing it wouldn't
be a big change.  The single-user commit-per-update case is
a problem when comparing Derby to other databases which don't
provide real transaction guarantees.  It would be great if
someone reading the Derby web site picked the 1000-rows-per-commit
single-user case to look at first.
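
In case it helps, here is a minimal sketch of what a 1000-rows-per-commit
insert loop could look like on the client side -- the table, column names,
and connection URL are made up for illustration, not taken from the actual
test client:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchedCommitInsert {
        public static void main(String[] args) throws Exception {
            // Embedded driver must be loaded explicitly on pre-JDBC4 JVMs.
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            Connection conn = DriverManager.getConnection("jdbc:derby:testdb");
            conn.setAutoCommit(false);   // commit per batch, not per row
            PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO TESTTAB (ID, PAYLOAD) VALUES (?, ?)");
            int rows = 100000;
            for (int i = 0; i < rows; i++) {
                ps.setInt(1, i);
                ps.setString(2, "row-" + i);
                ps.executeUpdate();
                if ((i + 1) % 1000 == 0) {
                    conn.commit();       // log sync paid once per 1000 rows
                }
            }
            conn.commit();               // pick up any leftover rows
            ps.close();
            conn.close();
        }
    }

That way the commit/log-sync cost is amortized over 1000 rows and the
per-row insert work dominates what gets measured.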

I just looked at the insert case, and on the following
page it looks to me like the single-user case is taking
about 6% user time and 2% system time.  Am I reading
the %cpu graph correctly?  From the description
I think this is a 2-processor machine.  With 2 processors,
is it possible for the graph to register 200% cpu, or just
100% (I have seen both conventions on multiprocessor
machines, depending on the tool)?
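
(To make the two readings concrete: if the graph's ceiling is 100% for the
whole 2-processor box, 6% user + 2% system is about 8% of total capacity,
i.e. roughly 16% of a single processor kept busy; if the ceiling is 200%,
the same numbers mean only about 8% of one processor.)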

Olav Sandstaa wrote:
> Mike Matrigali <mikem_app@sbcglobal.net> wrote:
> 
>>Thanks for the info, anything is better than nothing.
>>Any chance to measure something like 1000 records per commit?
>>With one record per commit for the update operations you are
>>not really measuring the work to do the operation, just the
>>overhead of commit -- at least for the single user case --
>>assuming your machine is set up to let derby do real disk
>>syncs (no write cache enabled).
> 
> 
> The write cache on the disks is enabled in order to make this test CPU
> bound for insert, update and delete load as well, instead of disk bound. I
> agree that with only one insert/update/delete operation per
> transaction/commit we include a lot of overhead for the commit. The
> intention is not to measure throughput, but to identify regressions,
> and even if the commit takes 50 percent (just guessing) of the CPU
> cost/work of an update transaction, it should still be possible to
> identify changes in the update operation itself that
> influence the CPU usage/throughput.
> 
> Unfortunately I will have to make major changes to the test client if
> it is to do 1000 updates per commit. All clients work on the same
> table and perform the operation on a random record. With multiple
> updates per transaction this would lead to a lot of deadlocks. I think
> it would be better to write a new load client than to try to tweak the
> one I run right now.
> 
> I am also running some tests where the write cache on the disks is
> disabled (as it should be), but I have not included the results on
> the web page yet (mostly due to much higher variation in the test
> results).
> 
> ..olav
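
For reference, here is a rough sketch of the kind of single-row-per-commit
update client described above; the table/column names and connection URL
are placeholders, not what the real test client uses:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.Random;

    public class RandomUpdateClient implements Runnable {
        private static final int TABLE_SIZE = 100000;  // assumed row count

        public void run() {
            Random rnd = new Random();
            try {
                // Embedded driver must be loaded explicitly on pre-JDBC4 JVMs.
                Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
                Connection conn = DriverManager.getConnection("jdbc:derby:testdb");
                conn.setAutoCommit(false);
                PreparedStatement ps = conn.prepareStatement(
                        "UPDATE TESTTAB SET VAL = VAL + 1 WHERE ID = ?");
                while (!Thread.currentThread().isInterrupted()) {
                    ps.setInt(1, rnd.nextInt(TABLE_SIZE));  // pick a random row
                    ps.executeUpdate();
                    conn.commit();   // exactly one row touched per transaction
                }
                ps.close();
                conn.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

Because each transaction touches only a single row, a client never waits
for a lock while holding another one, so two such clients essentially
cannot deadlock. Once several random rows are updated before the commit,
two clients can grab rows in opposite orders and deadlock, which is why a
1000-updates-per-commit variant needs a differently structured client.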