db-derby-user mailing list archives

From Mike Matrigali <mikem_...@sbcglobal.net>
Subject Re: Suggestions for improving performance?
Date Thu, 16 Dec 2004 18:29:42 GMT
Also, for an initial load into a table, if your application allows, you
should look at the various import system procedures.  These are
designed to allow a bulk load from a file or from memory.  If loading
into an empty table using these interfaces, derby will not log the
individual inserts (recovery understands that a backout of this
operation leaves an empty table, and the data in the table is forced to
disk before the transaction commits).
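
For example, a call to the bulk import procedure from JDBC might look
roughly like the following (the database URL, schema, table, and file
names are placeholders; passing null takes the default delimiters and
codeset):

  import java.sql.CallableStatement;
  import java.sql.Connection;
  import java.sql.DriverManager;

  public class BulkLoad {
      public static void main(String[] args) throws Exception {
          // Load the embedded driver and open a connection (URL is a placeholder).
          Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
          Connection conn = DriverManager.getConnection("jdbc:derby:myDB");

          // SYSCS_UTIL.SYSCS_IMPORT_TABLE(schema, table, file,
          //     columnDelimiter, characterDelimiter, codeset, replace)
          CallableStatement cs = conn.prepareCall(
              "CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE(?, ?, ?, ?, ?, ?, ?)");
          cs.setString(1, "APP");       // schema name (placeholder)
          cs.setString(2, "MYTABLE");   // empty target table (placeholder)
          cs.setString(3, "data.csv");  // import file (placeholder)
          cs.setString(4, null);        // column delimiter -> default ','
          cs.setString(5, null);        // character delimiter -> default '"'
          cs.setString(6, null);        // codeset -> default
          cs.setShort(7, (short) 0);    // 0 = insert (append) mode
          cs.execute();

          cs.close();
          conn.close();
      }
  }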

Also, try to use prepared statements whenever possible in derby; most
of the performance work that has gone into the system has been aimed at
making prepared statements that are executed more than once run fast.
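
A rough sketch of the prepare-once, execute-many pattern (table and
column names are placeholders):

  import java.sql.Connection;
  import java.sql.PreparedStatement;

  public class InsertLoop {
      // Prepare the statement once; Derby keeps the compiled plan, so
      // each execution in the loop avoids recompiling the SQL text.
      static void insertAll(Connection conn, String[] names) throws Exception {
          PreparedStatement ps = conn.prepareStatement(
              "INSERT INTO MYTABLE (ID, NAME) VALUES (?, ?)");
          for (int i = 0; i < names.length; i++) {
              ps.setInt(1, i);
              ps.setString(2, names[i]);
              ps.executeUpdate();
          }
          ps.close();
      }
  }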

I do have experience with mySQL, but it is likely that some of our
default behaviors in the following areas are different, which can lead
to different performance out of the box:
derby default out-of-box configuration:
    o isolation level is read committed
    o autocommit is on, i.e. a commit after every statement - a big
bottleneck when executing any large sequence of insert, delete, or
update statements (see the sketch after this list)
    o each commit will not return until a physical disk write has been
executed.  For a single user this translates to a real I/O per commit.
If there are multiple concurrent threads, derby may be able to group
multiple commits per sync'd log I/O.
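
To avoid the per-statement commit, a sketch along these lines (the
batch size of 1000 is arbitrary; table and column names are
placeholders):

  import java.sql.Connection;
  import java.sql.PreparedStatement;

  public class BatchedCommits {
      // Turn autocommit off and commit every 1000 rows, so the log is
      // synced to disk once per batch instead of once per INSERT.
      static void load(Connection conn, String[] names) throws Exception {
          conn.setAutoCommit(false);
          PreparedStatement ps = conn.prepareStatement(
              "INSERT INTO MYTABLE (ID, NAME) VALUES (?, ?)");
          for (int i = 0; i < names.length; i++) {
              ps.setInt(1, i);
              ps.setString(2, names[i]);
              ps.executeUpdate();
              if ((i + 1) % 1000 == 0) {
                  conn.commit();
              }
          }
          conn.commit();   // commit any remaining rows
          ps.close();
      }
  }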

/mikem

Suresh Thalamati wrote:

> Barnet Wagman wrote:
> 
> 
>>A couple questions/issues:
>>
>>Re logging: Something I read in the Derby documentation (or perhaps in
>>the mailing list archive) indicated that logging may be expensive. Is
>>there any way to disable logging completely?
>>
> 
> 
> I think there is no option available in derby to disable logging.
> It might be a good option for Derby to provide in a future release.
> 
> 
> 
>>Re record vs. table locking:  "Tuning Derby" indicates that
>>record-level locking can add a lot of overhead and implies that there
>>is a way to force table locking, but it wasn't clear to me how to do
>>this.
>>
> 
> 
>  1) You can acquire a table-level lock using the LOCK TABLE SQL statement.
>      ex: lock table t1 in exclusive mode
>  2) There is also a lock escalation mechanism in derby.  When the number
> of locks on a particular table in a transaction reaches a threshold
> value (default: 5000), it automatically escalates the row-level locks
> to a table-level lock.  The lock escalation threshold value can be
> changed by setting the derby.locks.escalationThreshold property.  I
> would not recommend reducing the threshold if the tables are being
> accessed concurrently.
> 
> -suresh
> 
> 
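
For reference, a minimal JDBC sketch of the table-level lock Suresh
describes above (T1 is a placeholder table name):

  import java.sql.Connection;
  import java.sql.Statement;

  public class TableLockExample {
      // Take an exclusive table lock for the current transaction; it is
      // released when the transaction commits or rolls back, so keep
      // autocommit off while doing the work under the lock.
      static void lockAndLoad(Connection conn) throws Exception {
          conn.setAutoCommit(false);
          Statement s = conn.createStatement();
          s.execute("LOCK TABLE T1 IN EXCLUSIVE MODE");
          // ... run the inserts/updates here ...
          conn.commit();
          s.close();
      }
  }

The escalation threshold property mentioned above would normally be set
in derby.properties, e.g. derby.locks.escalationThreshold=10000 (the
value 10000 is only an example).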
