db-derby-dev mailing list archives

From Mike Matrigali <mikem_...@sbcglobal.net>
Subject Re: Auto-increment values, nested transactions & locks...
Date Tue, 04 Mar 2008 18:41:30 GMT
Bryan Pendleton wrote:
>  > My question is basically - why does the nested-transaction need to fail
>  > immediately if it encounters a lock?
> 
> It may be because the code is concerned that it is the parent transaction
> itself which holds the lock, and if we waited for the lock we'd have a
> self deadlock which would never be resolved.
> 
> That is:
>  - parent transaction does something which causes it to hold a lock on
>    the system table.
>  - parent transaction initiates a nested transaction which needs to lock
>    the system table.
>  - nested transaction can't get the lock, blocks waiting for it.
>  - parent transaction never proceeds, because it's waiting for the
>    nested transaction.
> 
> thanks,
> 
> bryan
> 
I have not had time to look at the code or the test case, but I can explain
the nowait issue. Bryan's comment is exactly why the code exists.  The
problem is that update locks taken by a subtransaction
are not compatible with locks held by its parent transaction, so a conflict between a
parent and a subtransaction is basically an undetected deadlock. Doing the
system catalog update in a separate transaction is good because it lets that
update commit separately from the user transaction, but as you can
see it can cause problems when locks conflict.  I would see no problem
letting the lock wait if it is waiting on another user, but I do see
a problem if it is waiting on its own parent.  When this was coded there was
no way to tell which of those two cases a given wait fell into.
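
To make the no-wait behavior concrete, here is a minimal sketch of the
pattern described above.  None of these class or method names are Derby
internals - they are made up for illustration - but they show why the
subtransaction has to request the catalog lock without blocking:

import java.util.concurrent.Semaphore;

// Hypothetical sketch, not Derby internals: it only illustrates why the
// nested transaction must not block on the system catalog lock.
final class CatalogLockSketch {
    // One permit = one exclusive lock on the system catalog row.  A
    // Semaphore is deliberately non-reentrant, so it models the parent
    // transaction and its nested transaction as separate lockers even
    // when they run on the same thread.
    private final Semaphore catalogRowLock = new Semaphore(1);

    /** Parent transaction path: may take and hold the catalog lock. */
    void parentLocksCatalog() {
        catalogRowLock.acquireUninterruptibly();
    }

    /** Nested transaction path: must not block, so it fails fast. */
    void nestedTransactionBumpsCounter() {
        // tryAcquire() returns immediately.  A blocking acquire() here
        // could wait forever if the holder is our own parent, which is
        // exactly the undetected self-deadlock described above.
        if (!catalogRowLock.tryAcquire()) {
            throw new IllegalStateException(
                    "catalog row locked; giving up rather than risk self-deadlock");
        }
        try {
            // ... update the autoincrement value in the system catalog ...
        } finally {
            catalogRowLock.release();
        }
    }
}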

What I don't remember
is whether the main problem is the row lock on the system catalog, or some
sort of conflict between row locking and table locking on the base table.
Do you have a stack trace from the failed lock in the subtransaction?

This part of the system could do with some work.  The current implementation
dates from long before Derby, when the typical user of the system
was more likely a single thread than 100 concurrent ones.  For instance,
the current default of one committed system catalog update
per 100 inserts could probably be tuned much better for your application -
with 100 concurrent inserters, every insert may end up
competing to update the system catalog.  The tradeoff in the current
system is that making the range larger can mean more "lost" numbers when
the system shuts down.
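
As a rough illustration of that tradeoff (this is not Derby's actual
allocator, just a sketch assuming the default block size of 100), the
durable counter only moves once per block, so inserters hit the system
catalog once every 100 rows, but whatever is left of the current block is
lost if the system shuts down:

// Hypothetical sketch of the block-allocation tradeoff described above.
final class IdentityBlockSketch {
    static final long BLOCK_SIZE = 100;   // the default discussed above

    private long catalogValue = 1;        // durable value in the system catalog
    private long nextInMemory = 1;        // next number handed to an inserter
    private long blockEnd = 1;            // exclusive end of the reserved block

    synchronized long nextValue() {
        if (nextInMemory >= blockEnd) {
            // Stand-in for the real system catalog update, committed in
            // its own nested transaction so the reservation is durable
            // independently of the user transaction.
            catalogValue += BLOCK_SIZE;
            blockEnd = catalogValue;
        }
        return nextInMemory++;
    }

    // Numbers reserved in the catalog but never handed out are "lost"
    // if the system shuts down now.
    synchronized long lostOnShutdown() {
        return blockEnd - nextInMemory;
    }
}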

An interesting project might be to compare and contrast concurrent insert
performance on tables with and without autoincrement fields.  With today's
multi-core machines there are probably performance improvements to be had in
Derby's autoincrement handling.
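
A rough harness for that comparison could look like the sketch below (not a
proper benchmark - the database and table names are made up, and a real
measurement would need warm-up, batching, commit tuning, and so on): time a
fixed number of concurrent inserts into a table with an identity column and
into one without.

import java.sql.*;
import java.util.concurrent.*;

public class InsertBench {
    static final int THREADS = 8;
    static final int ROWS_PER_THREAD = 10000;

    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        try (Connection c = DriverManager.getConnection("jdbc:derby:benchdb;create=true");
             Statement s = c.createStatement()) {
            s.executeUpdate("CREATE TABLE t_ident "
                    + "(id INT GENERATED ALWAYS AS IDENTITY, val INT)");
            s.executeUpdate("CREATE TABLE t_plain (val INT)");
        }
        System.out.println("identity: "
                + run("INSERT INTO t_ident (val) VALUES (?)") + " ms");
        System.out.println("plain:    "
                + run("INSERT INTO t_plain (val) VALUES (?)") + " ms");
    }

    // Runs THREADS concurrent inserters, each inserting ROWS_PER_THREAD rows,
    // and returns the elapsed wall-clock time in milliseconds.
    static long run(String sql) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(THREADS);
        long start = System.currentTimeMillis();
        for (int t = 0; t < THREADS; t++) {
            pool.submit(() -> {
                try (Connection c = DriverManager.getConnection("jdbc:derby:benchdb");
                     PreparedStatement ps = c.prepareStatement(sql)) {
                    for (int i = 0; i < ROWS_PER_THREAD; i++) {
                        ps.setInt(1, i);
                        ps.executeUpdate();
                    }
                } catch (SQLException e) {
                    throw new RuntimeException(e);
                }
                return null;
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
        return System.currentTimeMillis() - start;
    }
}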

