From: "Myrna van Lunteren (JIRA)"
To: derby-dev@db.apache.org
Date: Sun, 21 Oct 2007 21:26:50 -0700 (PDT)
Subject: [jira] Commented: (DERBY-3093) Intermittent transaction failure caused by internal duplicate savepoint name for triggers

    [ https://issues.apache.org/jira/browse/DERBY-3093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12536565 ]

Myrna van Lunteren commented on DERBY-3093:
-------------------------------------------

I am hoping that this issue is the cause of the intermittent test failures that have been reported with some of the IBM JVMs; see DERBY-2808 and, for instance, http://people.apache.org/~fuzzylogic/derby_test_results/main/testSummary-582051.html (although those results apparently haven't been updated for several weeks).

I'll keep an eye out for new reports, or run some tests myself, and see if we can close that one off as a duplicate or as 'fixed'.

> Intermittent transaction failure caused by internal duplicate savepoint name for triggers
> ------------------------------------------------------------------------------------------
>
>                 Key: DERBY-3093
>                 URL: https://issues.apache.org/jira/browse/DERBY-3093
>             Project: Derby
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.1.4
>         Environment: Java 1.6.0_02
>            Reporter: Jim Newsham
>            Assignee: James F. Adams
>             Fix For: 10.3.1.5, 10.4.0.0
>
>         Attachments: derby-3093_diff.txt
>
>
> While running our app overnight, performing some intensive database operations (primarily deletes), we experienced a transaction failure with the error message "A SAVEPOINT with the passed name already exists in the current transaction". This failure appears to be very intermittent, as I've run the same operation successfully for hours and hours without failure. Some more information:
> 1. I see that there are four JIRA issues which report the same message (DERBY-2773, DERBY-1457, DERBY-2808, DERBY-2804).
> I am not sure to what extent, if any, the problem I experienced is related to these issues.
> • At least three of the above JIRA issues are related to triggers. Our transaction failure occurred while performing a delete in a table which has delete triggers.
> • We aren't using savepoints explicitly. Of course, it can be seen in the stack trace that the trigger operation is setting a savepoint.
> • One of the JIRA issues mentions exceeding the maximum depth of nested triggers. Not sure what a nested trigger is, but the trigger we are performing should not (in theory) cause another trigger event.
> 2. Here's the stack trace:
> Caused by: java.sql.SQLException: A SAVEPOINT with the passed name already exists in the current transaction.
>         at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
>         at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
>         at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
>         at org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown Source)
>         at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
>         at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown Source)
>         at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(Unknown Source)
>         at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeStatement(Unknown Source)
>         at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeUpdate(Unknown Source)
>         at com.referentia.sdf.monitor.samplebase.derby.DerbySampleBase$3.perform(DerbySampleBase.java:682)
>         at com.referentia.sdf.monitor.samplebase.derby.DerbySampleBase.performInTransaction(DerbySampleBase.java:2747)
>         ... 7 more
> Caused by: java.sql.SQLException: A SAVEPOINT with the passed name already exists in the current transaction.
>         at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
>         at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
>         ... 18 more
> Caused by: ERROR 3B501: A SAVEPOINT with the passed name already exists in the current transaction.
>         at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
>         at org.apache.derby.impl.store.raw.xact.Xact.setSavePoint(Unknown Source)
>         at org.apache.derby.impl.store.access.RAMTransaction.setSavePoint(Unknown Source)
>         at org.apache.derby.impl.sql.conn.GenericStatementContext.setSavePoint(Unknown Source)
>         at org.apache.derby.impl.sql.GenericPreparedStatement.execute(Unknown Source)
>         at org.apache.derby.impl.sql.execute.GenericTriggerExecutor.executeSPS(Unknown Source)
>         at org.apache.derby.impl.sql.execute.RowTriggerExecutor.fireTrigger(Unknown Source)
>         at org.apache.derby.impl.sql.execute.TriggerEventActivator.notifyEvent(Unknown Source)
>         at org.apache.derby.impl.sql.execute.DeleteResultSet.fireAfterTriggers(Unknown Source)
>         at org.apache.derby.impl.sql.execute.DeleteResultSet.open(Unknown Source)
>         at org.apache.derby.impl.sql.GenericPreparedStatement.execute(Unknown Source)
>         ... 12 more
> 3. Here are the relevant tables and triggers. We have different tables for different "sample" data types in our application (currently int, long, float, double, boolean, string). I have shown the int_sample table; the other tables are identical but have a different data type for the value field.
>
> Sample records are first class; time records are second class - they only exist to support sample records. When there are no remaining sample records for a given time record, the time record can be removed. It is the job of the trigger to perform this cleanup: when a record is deleted from any of the sample tables, the time record corresponding to *_sample.fk_time_id is deleted only if there are no remaining records in any of the sample tables which have the same fk_time_id. So although there are cascading deletes (deleting a time record deletes all of its samples), the trigger should not recurse because the trigger only deletes when there are no associated records.
>
> create table time (
>   id int not null generated always as identity,
>   time timestamp not null,
>   constraint time_pk primary key (id),
>   constraint time_unique unique (time)
> );
>
> create table int_sample (
>   fk_band_id int not null,
>   fk_time_id int not null,
>   value int not null,
>   constraint int_sample_pk primary key (fk_band_id, fk_time_id),
>   constraint int_sample_fk_band foreign key (fk_band_id) references band (id) on delete cascade,
>   constraint int_sample_fk_time foreign key (fk_time_id) references time (id) on delete cascade
> );
>
> create trigger cascade_delete_unused_time_for_int_sample_trigger
>   after delete on int_sample
>   referencing old as oldrow
>   for each row
>   delete from time where
>     time.id = oldrow.fk_time_id
>     and id not in (select fk_time_id from int_sample)
>     and id not in (select fk_time_id from long_sample)
>     and id not in (select fk_time_id from float_sample)
>     and id not in (select fk_time_id from double_sample)
>     and id not in (select fk_time_id from boolean_sample)
>     and id not in (select fk_time_id from string_sample);
>
> 4. Invocation. This is very rough pseudocode focusing on the core logic. In reality, there's a lot more code, but I'm doing the usual stuff, such as using prepared statements and closing all resources when done. The key thing is that I'm using two statements in a nested fashion; in the outer statement, I iterate over records in the time table, and in the inner loop I use a second statement to delete records in one of the sample tables. I can provide the actual code if desired.
>
> deleteSamples(int bandId, String sampleTable, Object someQueryCriteria) {
>   within transaction {
>     create stmt1 iterating over time ids in time table (restricted by someQueryCriteria)
>     for (time_id in stmt1) {
>       PreparedStatement stmt2 = conn.prepareStatement(String.format(
>           "delete from %s where fk_band_id = ? and fk_time_id = ?", sampleTable));
>       set stmt2 parameters and executeUpdate()
>     }
>   }
> }

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
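
For readers who want something more concrete than the pseudocode in item 4, here is a minimal JDBC sketch of the nested-statement delete it describes. The table and column names (time, fk_band_id, fk_time_id) come from the report; the outer query, the olderThan parameter, the class name, and the transaction handling are assumptions made for illustration and are not the reporter's actual code.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;

public class DeleteSamplesSketch {

    // Nested-statement delete: an outer query iterates over candidate time ids,
    // and an inner prepared statement deletes the matching sample rows.
    static void deleteSamples(Connection conn, int bandId, String sampleTable,
                              Timestamp olderThan) throws SQLException {
        boolean oldAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false);            // run both statements in one transaction
        // Stand-in for "someQueryCriteria" in the pseudocode.
        String timeQuery = "select id from time where time < ?";
        String deleteSql = String.format(
            "delete from %s where fk_band_id = ? and fk_time_id = ?", sampleTable);
        try (PreparedStatement outer = conn.prepareStatement(timeQuery);
             PreparedStatement inner = conn.prepareStatement(deleteSql)) {
            outer.setTimestamp(1, olderThan);
            try (ResultSet timeIds = outer.executeQuery()) {
                while (timeIds.next()) {
                    inner.setInt(1, bandId);
                    inner.setInt(2, timeIds.getInt(1));
                    inner.executeUpdate();    // each deleted row fires the AFTER DELETE row trigger
                }
            }
            conn.commit();
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        } finally {
            conn.setAutoCommit(oldAutoCommit);
        }
    }
}

Because the sample table's primary key is (fk_band_id, fk_time_id), each inner executeUpdate() removes at most one row, so the row-level trigger above fires once per delete; that is the code path in the stack trace where Derby sets its internal savepoint.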