db-derby-dev mailing list archives

From Knut Anders Hatlen <Knut.Hat...@Sun.COM>
Subject Re: ArrayInputStream and performance
Date Thu, 30 Nov 2006 16:09:10 GMT
Mike Matrigali <mikem_app@sbcglobal.net> writes:

> A discussion on the list would be great.  Can anyone post a complete
> test run with exact description of test, and flat and hierarchical
> results performance monitor reports.  It is interesting if info has
> both number of calls and time.

I'll see if I can extract some hierarchical data, at least for
the number of calls.

> I was starting to look at:
> http://wiki.apache.org/db-derby/Derby1961MethodCalls
>     I find it really useful to start looking top down at number of
>     operations per test rather than bottom up.  So for instance,
>     some things jumped out:
> o derby.iapi.services.io.ArrayInputStream.setPosition     58.4240
>     At first I was expecting this to be something like 1 per row + 1
> per column.  I assume it got big through btree search - hierarchical
> data would show.  It might be interesting to know btree overhead
> vs. heap overhead.  maybe btree compare can be optimized?
> o  derby.iapi.store.raw.ContainerLock.isCompatible  14.7802
>     This one looks weird, I don't even see it in the
>     cleanup_flat_profile.txt posted with DERBY-2118.  I sort of
>     expected 1 call for the btree and one call for the heap.  Maybe
>     this is a multi-user run with a long list of table locks, if so
> maybe this can be optimized by somehow grouping the locks so that it
> is not necessary to compare to each one?

I think this number is twice as high as it should have been since
ContainerLock.isCompatible(ContainerLock) calls
ContainerLock.isCompatible(int) and my script only compares class name
and method name, not the parameter list. You are correct that the
numbers come from a multi-user run. This particular run had 10
concurrent clients, whereas I believe the numbers in DERBY-2118 came
from a single-user run (in which I wouldn't expect isCompatible() to
be called at all).
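To illustrate the double counting: the sketch below (toy code, not the actual Derby classes or my actual script) mimics an overload pair where the object variant delegates to the int variant, and counts calls keyed two ways. Keying by class and method name alone, as my script does, yields twice the number of logical calls.

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch: two overloads named isCompatible, one delegating to the
// other, counted with and without the parameter list in the key.
public class OverloadCounting {
    static final Map<String, Integer> byName = new HashMap<>();
    static final Map<String, Integer> bySignature = new HashMap<>();

    static void record(String method, String signature) {
        // One counter keyed by "Class.method" only, one by full signature.
        byName.merge("ContainerLock." + method, 1, Integer::sum);
        bySignature.merge("ContainerLock." + method + signature, 1, Integer::sum);
    }

    static boolean isCompatible(int requested) {
        record("isCompatible", "(int)");
        return requested != 0; // stand-in for a real compatibility check
    }

    static boolean isCompatible(Object other) {
        record("isCompatible", "(Object)");
        // Delegation pattern: the object overload calls the int overload.
        return isCompatible(other.hashCode());
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) {
            isCompatible(new Object());
        }
        // Name-only counting conflates the two overloads.
        System.out.println("keyed by name only: "
                + byName.get("ContainerLock.isCompatible"));
        System.out.println("keyed by signature: "
                + bySignature.get("ContainerLock.isCompatible(Object)"));
    }
}
```

With 100 logical calls, the name-only counter reports 200, which is exactly the inflation I suspect in the 14.7802 figure.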

Grouping the locks sounds like an interesting idea, but I guess it
would require significant changes in the lock manager and the Lockable
objects. As it is now, all compatibility checking happens in the
Lockable objects, which are implemented outside the lock manager. To
implement the grouping efficiently, we would probably need to give the
lock manager some knowledge about the compatibility matrix and let
more of the compatibility checking happen inside the lock manager.
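To make the idea concrete, here is a minimal sketch of what such grouping might look like. Everything in it is hypothetical (the class name, the four modes, and the bit encoding are invented for illustration, not Derby's actual lock manager API): if the lock manager knew the compatibility matrix, it could maintain a union of all granted modes and test a requested mode against the whole group in one bitmask operation, instead of calling into each Lockable.

```java
// Hypothetical sketch, not Derby code: a lock manager that owns the
// compatibility matrix can check a request against all granted locks
// at once instead of per-lock isCompatible() calls.
public class GroupedLockCheck {
    // Invented lock modes encoded as bits: Shared, eXclusive,
    // Intent-Shared, Intent-eXclusive.
    static final int S = 1, X = 2, IS = 4, IX = 8;

    // compat[mode] = bitmask of modes that 'mode' is compatible with.
    static final int[] compat = new int[16];
    static {
        compat[S]  = S | IS;
        compat[X]  = 0;            // X conflicts with everything
        compat[IS] = S | IS | IX;
        compat[IX] = IS | IX;
    }

    // Union of all currently granted modes, maintained incrementally.
    private int grantedModes = 0;

    void grant(int mode) { grantedModes |= mode; }

    // O(1) check against the whole group of granted locks.
    boolean isCompatible(int requested) {
        return (grantedModes & ~compat[requested]) == 0;
    }

    public static void main(String[] args) {
        GroupedLockCheck table = new GroupedLockCheck();
        table.grant(IS);
        table.grant(S);
        System.out.println(table.isCompatible(S)); // S is compatible with {S, IS}
        System.out.println(table.isCompatible(X)); // X conflicts with both
    }
}
```

One caveat this sketch glosses over: releasing a lock can't simply clear a bit, since several holders may share a mode, so a real implementation would need per-mode counts to keep the union accurate.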

>   The
>     description on the wiki doesn't say much about the environment
>     the test was run under (ie. # processors, speed processors,
> write-sync state of disks, how many disks, was log/data on same disk?)

Yes, the page doesn't have much information about the
configuration. I'll update the main wiki page for the DERBY-1961
investigation (Derby1961ResourceUsage2) with some more details. The
tests I have run have used this configuration:

  - Solaris 10
  - Sun Java SE 6.0 (build 102-104)
  - Derby
  - Dual Opterons (2 x 2.4 GHz)
  - data and log on separate disks
  - SCSI disks with write cache enabled (bad, I know, but I wanted the
    CPU to be fully utilized for the update tests without needing
    100-150 clients)
  - Derby running in a client/server configuration
  - derby.storage.pageCacheSize=12500
  - single-record select tests had 10 concurrent clients (number
    chosen because this load has max throughput at about 10 concurrent
    clients)
  - single-record update tests had 20 concurrent clients (more or less
    randomly chosen, but with the goal of full CPU utilization)
  - join tests had 4 concurrent clients (also more or less randomly
    chosen)

Kristian and Dyre have also posted results on that page, and I believe
they have used similar configurations.
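As a side note, for anyone reproducing this: the page cache setting above goes in derby.properties on the server side (a minimal fragment, the only non-default property I listed):

```properties
# derby.properties (server side)
derby.storage.pageCacheSize=12500
```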

> Also I do encourage those interested in looking at changing the network
> server code to look at these performance results.  This code is new to
> Derby and I am not aware of much previous work in the network server
> performance area so there is likely some low hanging fruit
> there.

Indeed! There was an effort to improve network server performance
about a year ago, and a number of low-hanging fruits were found and
fixed for 10.2. But I'm sure there is still more that could be
improved relatively easily.

Knut Anders
