Message-ID: <1429561959.1234559699680.JavaMail.jira@brutus>
In-Reply-To: <1127781334.1234461120181.JavaMail.jira@brutus>
Date: Fri, 13 Feb 2009 13:14:59 -0800 (PST)
From: "Kathey Marsden (JIRA)"
To: derby-dev@db.apache.org
Subject: [jira] Commented: (DERBY-4054) Multithreaded clob update with exclusive table locking causes table growth that is not reclaimed

    [ https://issues.apache.org/jira/browse/DERBY-4054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12673376#action_12673376 ]

Kathey Marsden commented on DERBY-4054:
---------------------------------------

So the interesting thing here is that in ReclaimSpaceHelper.reclaimSpace() the call to

    openContainerNW(tran, container_rlock, work.getContainerId());

does not return null if it can't get the lock right away, as I would have expected (see the sketch below for that expected contract).
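(For illustration only: a minimal, self-contained sketch of that expected no-wait contract, using plain java.util.concurrent rather than Derby's store classes. All names here are made up; this is not Derby code.)

    import java.util.concurrent.locks.ReentrantLock;

    // Illustrative sketch: a no-wait open that reports contention by
    // returning null, so the caller can requeue the work instead of failing.
    class NoWaitOpenSketch
    {
        private final ReentrantLock containerLock = new ReentrantLock();

        /** Returns a handle, or null if the lock is not immediately available. */
        String openContainerNoWait()
        {
            return containerLock.tryLock() ? "handle" : null;
        }

        /** Caller: null means "try again later", never an exception. */
        boolean serviceWork()
        {
            String handle = openContainerNoWait();
            if (handle == null)
            {
                return false;   // requeue the reclaim work and retry later
            }
            try
            {
                // ... reclaim space using the handle ...
                return true;
            }
            finally
            {
                containerLock.unlock();
            }
        }
    }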
It actually throws an exception:

ERROR 40XL1: A lock could not be obtained within the time requested
	at java.lang.Throwable.<init>(Throwable.java:67)
	at org.apache.derby.iapi.error.StandardException.<init>(StandardException.java:80)
	at org.apache.derby.iapi.error.StandardException.<init>(StandardException.java:69)
	at org.apache.derby.iapi.error.StandardException.newException(StandardException.java)
	at org.apache.derby.impl.store.raw.data.BaseContainerHandle.useContainer(BaseContainerHandle.java:823)
	at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.openContainer(BaseDataFileFactory.java:735)
	at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.openContainer(BaseDataFileFactory.java:551)
	at org.apache.derby.impl.store.raw.xact.Xact.openContainer(Xact.java:1313)
	at org.apache.derby.impl.store.raw.data.ReclaimSpaceHelper.openContainerNW(ReclaimSpaceHelper.java)
	at org.apache.derby.impl.store.raw.data.ReclaimSpaceHelper.reclaimSpace(ReclaimSpaceHelper.java:246)
	at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.reclaimSpace(BaseDataFileFactory.java:1256)
	at org.apache.derby.impl.store.raw.data.ReclaimSpace.performWork(ReclaimSpace.java:148)
	at org.apache.derby.impl.services.daemon.BasicDaemon.serviceClient(BasicDaemon.java:331)
	at org.apache.derby.impl.services.daemon.BasicDaemon.work(BasicDaemon.java:668)
	at org.apache.derby.impl.services.daemon.BasicDaemon.run(BasicDaemon.java:394)
	at java.lang.Thread.run(Thread.java:735)

This exception gets thrown right away; it doesn't wait for the lock timeout. It causes us to leave reclaimSpace(), and somewhere along the way it gets gobbled up and is never reported, so the space does not get reclaimed and we never see a report of a problem. I haven't quite figured out where we decide to ignore the exception and move on. If I run in the debugger, the exception just gets thrown and is not ignored, for some reason.

If I hack in the change:

    ContainerHandle containerHdl = null;
    try
    {
        containerHdl = openContainerNW(
            tran, container_rlock, work.getContainerId());
    }
    catch (StandardException e)
    {
        e.printStackTrace();
        // treat a lock timeout (40XL1) the same as a null return,
        // i.e. "could not get the container lock right away"
        if (e.getSQLState().equals("40XL1"))
            containerHdl = null;
    }

then I proceed and hit the message "gave up after 3 tries to get container lock" described in DERBY-4055.

Based on this finding I think I am going to rearrange the Jira issues a little bit: I am going to make DERBY-4055 just be for the row lock case and keep this one for the table lock case.

> Multithreaded clob update with exclusive table locking causes table growth that is not reclaimed
> -------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-4054
>                 URL: https://issues.apache.org/jira/browse/DERBY-4054
>             Project: Derby
>          Issue Type: Bug
>          Components: Store
>    Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.0.0
>            Reporter: Kathey Marsden
>
> If I do a multithreaded clob update which gets an exclusive table lock on the table, space will not be reclaimed. This case is similar to DERBY-4050 except that the test gets an exclusive table lock and the growth happens whether or not the update is synchronized. I will add a disabled test for this to ClobReclamationTest and reference this bug.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
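(For illustration only: a minimal standalone sketch of the scenario in the issue description above. This is not the ClobReclamationTest fixture; the database name, table name, value sizes, and iteration counts are made up, and it assumes derby.jar on the classpath and a JDK with try-with-resources and lambdas.)

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;
    import java.util.Arrays;

    public class ClobTableLockSketch
    {
        private static final String URL = "jdbc:derby:clobdb;create=true";

        public static void main(String[] args) throws Exception
        {
            // one-time setup: a table with a CLOB column and a single row
            try (Connection setup = DriverManager.getConnection(URL);
                 Statement s = setup.createStatement())
            {
                s.executeUpdate("CREATE TABLE CLOBTAB (I INT PRIMARY KEY, C CLOB(10M))");
                s.executeUpdate("INSERT INTO CLOBTAB VALUES (1, 'start')");
            }

            char[] big = new char[100000];
            Arrays.fill(big, 'x');
            final String bigValue = new String(big);

            Runnable updater = () -> {
                try (Connection c = DriverManager.getConnection(URL);
                     Statement lock = c.createStatement();
                     PreparedStatement update =
                         c.prepareStatement("UPDATE CLOBTAB SET C = ? WHERE I = 1"))
                {
                    c.setAutoCommit(false);
                    for (int i = 0; i < 100; i++)
                    {
                        // exclusive table lock, as in the reported scenario
                        lock.execute("LOCK TABLE CLOBTAB IN EXCLUSIVE MODE");
                        update.setString(1, bigValue);
                        update.executeUpdate();
                        c.commit();   // releases the table lock
                    }
                }
                catch (Exception e)
                {
                    e.printStackTrace();
                }
            };

            Thread t1 = new Thread(updater);
            Thread t2 = new Thread(updater);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            // The reported behavior is that after a run like this the table's
            // on-disk space keeps growing and is not reclaimed.
        }
    }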