db-derby-user mailing list archives

From Randy Letness <ra...@osafoundation.org>
Subject Re: deadlock question
Date Thu, 01 Feb 2007 18:40:31 GMT
Mike Matrigali wrote:
>>
>> Ah, this makes sense.  I didn't take into account internal
>> transactions.  An index split deadlock, huh?  I would never have
>> thought of that.  Does this happen often?
> I almost never see it, but of course every app is different.  The
> problem is that your test case, I think, guarantees it happens since
> you lock every row in the delete and the scanner is stuck waiting on
> one of the rows.
> What it takes is the writer adding rows to the same leaf index page
> the scanner is blocked on, and then needing to split that page.
>
> In the 3 million row app, is it likely the scanner looks at the same
> rows as the writer?
>

By "scanner" are you referring to the read transaction?  Yes, this will
be the case.  The problem is that the majority of transactions are
reading.  The update transaction deletes a bunch of rows (300 here) and
adds a bunch (another 300), but at the same time, multiple transactions
could be reading the rows that are being deleted.  I would have expected
the following to happen:

TX1 = update, TX2 = read

TX1 gets X locks on rows 1-300
TX1 deletes rows 1-300
TX2 selects rows 1-300
TX2 attempts to lock row 1 (waits because TX1 holds an X lock)
TX1 inserts rows 301-600
TX1 commits
TX2 gets S lock on row 1, reads row 1
TX2 gets S lock on row 2, reads row 2... etc.
TX2 commits
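
(In JDBC terms, the shape I have in mind is roughly the following -- the
"items" table and "id" column are just placeholder names, and the reader
would really have to run on its own thread/connection since its scan
blocks:)

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class InterleavingSketch {
    public static void main(String[] args) throws Exception {
        // TX1: the writer (delete a batch, insert the replacement batch, commit)
        Connection tx1 = DriverManager.getConnection("jdbc:derby:testdb");
        tx1.setAutoCommit(false);
        tx1.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);

        // TX2: the reader scanning the same key range
        Connection tx2 = DriverManager.getConnection("jdbc:derby:testdb");
        tx2.setAutoCommit(false);
        tx2.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);

        // TX1 takes X locks on rows 1-300 by deleting them
        try (PreparedStatement del = tx1.prepareStatement(
                "DELETE FROM items WHERE id BETWEEN 1 AND 300")) {
            del.executeUpdate();
        }

        // TX2 would start its scan here and wait on TX1's X lock on row 1
        // (commented out because on a single thread it would simply hang)
        // tx2.createStatement().executeQuery(
        //         "SELECT id FROM items WHERE id BETWEEN 1 AND 300");

        // TX1 inserts the replacement rows 301-600, then commits
        try (PreparedStatement ins = tx1.prepareStatement(
                "INSERT INTO items (id) VALUES (?)")) {
            for (int id = 301; id <= 600; id++) {
                ins.setInt(1, id);
                ins.executeUpdate();
            }
        }
        tx1.commit();   // in the expected case, TX2's scan proceeds after this

        tx2.commit();
        tx1.close();
        tx2.close();
    }
}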

But according to what you are saying, this is what actually happens:

TX1 gets X locks on rows 1-300
TX1 deletes rows 1-300
TX2 selects rows 1-300
TX2 gets an S lock on the special index page
TX2 attempts to lock row 1 (waits because TX1 holds an X lock)
TX1 inserts some number of rows
TX1 spawns TX3 (the internal transaction that splits the index page)
TX3 attempts to get the special index page lock (waits because TX2 holds it)
deadlock!

Bummer, but good to know how the internals of Derby work.  So it's the
inserts that are the problem.
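
Assuming we can't avoid the split deadlock entirely, I suppose the read
side can at least retry when Derby picks it as the victim -- if I'm
reading the docs right, that surfaces as an SQLException with SQLState
40001.  A rough sketch (placeholder names again):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class RetryingReader {
    // Re-run the scan when this transaction is chosen as the deadlock victim.
    static void readWithRetry(Connection conn, int maxAttempts) throws SQLException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try (PreparedStatement ps = conn.prepareStatement(
                     "SELECT id FROM items WHERE id BETWEEN 1 AND 300");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // process rs.getInt(1) ...
                }
                conn.commit();
                return;                       // scan finished without deadlocking
            } catch (SQLException e) {
                conn.rollback();
                if (!"40001".equals(e.getSQLState()) || attempt == maxAttempts) {
                    throw e;                  // not a deadlock, or out of retries
                }
                // SQLState 40001 = picked as deadlock victim; loop and retry
            }
        }
    }
}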

> Altering the page size of the index may give you a partial workaround.
> Bigger index pages would mean fewer total splits, and thus less chance
> of hitting this deadlock case.  But that is not guaranteed.

I tried different settings for derby.storage.pageSize and still get the 
deadlock.  Of course I'm still using the test program where all the rows 
are being deleted.  I need to test on a larger dataset.
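
For anyone following along, one way to apply the property is the
SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY procedure, and since the page size
only affects conglomerates created after it is set, the index has to be
rebuilt afterwards.  A rough sketch with placeholder table/index names:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PageSizeSetup {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:derby:testdb");
             Statement s = conn.createStatement()) {
            // 32768 is the largest page size Derby accepts; it only applies
            // to tables/indexes created after the property is set
            s.execute("CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY("
                    + "'derby.storage.pageSize', '32768')");
            s.execute("DROP INDEX items_id_idx");
            s.execute("CREATE INDEX items_id_idx ON items (id)");
        }
    }
}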

-Randy
