Date: Fri, 17 Nov 2006 20:44:34 +0530
From: Mayuresh Nirhali
Sender: Mayuresh.Nirhali@Sun.COM
To: derby-dev@db.apache.org
Subject: Re: [jira] Commented: (DERBY-606) SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE fails on (very) large tables
In-reply-to: <455DFCE3.6000800@sbcglobal.net>
Message-id: <455DD1DA.5030304@Sun.COM>

Thanks for your response, Mike.

I did consider this, and the current repro takes about 4-5 minutes and 42MB of disk space. I was able to hit this bug just by creating enough records to allocate a 2nd AllocPage and then deleting enough records so that the 2nd alloc page is empty (to get newHighetPage = -1). I was hoping that this ~5 minute increase in the overall derbyall run time would be acceptable. Please let me know; I believe this is a good case to cover in derbyall.

I was planning to simply extend test1 in OnlineCompressTest to take another number_of_rows parameter higher than 4000. As you mentioned in your reply, this extension might cause lock escalation; I will try to find a work-around. At worst, I think I might have to create another testX method.
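For reference, a rough sketch of the repro idea (this is not the exact code in the attached A606Test.java; the database name, table shape, row count, and padding are illustrative and may need tuning to actually spill onto a 2nd allocation page):

import java.sql.*;

public class Derby606Repro {
    public static void main(String[] args) throws Exception {
        // Embedded driver; database name is illustrative.
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection conn = DriverManager.getConnection(
                "jdbc:derby:d606db;create=true");
        conn.setAutoCommit(false);

        Statement s = conn.createStatement();
        s.executeUpdate("create table x (id int, pad varchar(2000))");

        // Insert enough padded rows to spill onto a 2nd allocation page.
        // The count needed depends on the page size; 20000 is only a guess.
        PreparedStatement ins =
                conn.prepareStatement("insert into x values (?, ?)");
        char[] padChars = new char[2000];
        java.util.Arrays.fill(padChars, 'x');
        String pad = new String(padChars);
        for (int i = 0; i < 20000; i++) {
            ins.setInt(1, i);
            ins.setString(2, pad);
            ins.executeUpdate();
        }
        conn.commit();

        // Delete everything so the 2nd alloc page ends up empty.
        // (In a single-connection repro the resulting escalation to a
        // table lock is harmless.)
        s.executeUpdate("delete from x");
        conn.commit();

        // In-place compress; this is where the failure described in
        // DERBY-606 shows up when the 2nd alloc page is empty.
        CallableStatement cs = conn.prepareCall(
                "call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('APP', 'X', 1, 1, 1)");
        cs.execute();
        conn.commit();
        conn.close();
    }
}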
Thanks,
Mayuresh

Mike Matrigali wrote:
> Also note that depending on the amount of disk space and the time to
> create your "(very) large table", it may not be appropriate to add your
> case to this test, which is run as part of everyone's nightly run and
> may need to be run by every developer as part of a checkin. I don't
> think there is a fixed requirement, but I think, for instance, we decided
> that tests dealing with 2 gig blobs were too big to be forced into the
> nightly dev run.
>
> How much disk space and time does your case take?
>
> There is another suite of tests called largeData which is intended for
> tests with large disk requirements. If it saves you time, you should
> feel free to extend the OnlineCompressTest with your own test class and
> reuse as much code as possible.
>
> Mayuresh Nirhali (JIRA) wrote:
>
>> [ http://issues.apache.org/jira/browse/DERBY-606?page=comments#action_12450785 ]
>> Mayuresh Nirhali commented on DERBY-606:
>> ----------------------------------------
>>
>> I looked at the OnlineCompressTest and realized that to reproduce
>> this case, the simplest way is to increase the number of rows added
>> to the table in one of the existing test cases. However, I see the
>> following comment in the test case:
>>
>>    * 4000 rows - reasonable number of pages to test out, still 1 alloc page
>>    *
>>    * note that row numbers greater than 4000 may lead to lock escalation
>>    * issues, if queries like "delete from x" are used to delete all the
>>    * rows.
>>
>> This is very relevant to the test case I would like to add, so I would
>> like to understand the lock escalation issue here. Has anyone seen this
>> kind of issue before? Any pointers?
>>
>> The repro attached to the bug has an almost identical test case, and I
>> have not seen any problems with it so far. So it might be that the lock
>> escalation issue has already been fixed (I did not find any related JIRA
>> for it, though). Can someone please confirm this? I can update the
>> comments if that problem has been fixed.
>>
>> Thanks
>>
>>> SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE fails on (very) large tables
>>> ---------------------------------------------------------------------
>>>
>>>              Key: DERBY-606
>>>              URL: http://issues.apache.org/jira/browse/DERBY-606
>>>          Project: Derby
>>>       Issue Type: Bug
>>>       Components: Store
>>> Affects Versions: 10.1.1.0
>>>      Environment: Java 1.5.0_04 on Windows Server 2003 Web Edition
>>>         Reporter: Jeffrey Aguilera
>>>      Assigned To: Mayuresh Nirhali
>>>          Fix For: 10.3.0.0
>>>
>>>      Attachments: A606Test.java, derby606_v1.diff
>>>
>>> SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE fails with one of the
>>> following error messages when applied to a very large table (>2GB):
>>>
>>> Log operation null encounters error writing itself out to the log
>>> stream, this could be caused by an errant log operation or internal
>>> log buffer full due to excessively large log operation. SQLSTATE:
>>> XJ001: Java exception: ': java.io.IOException'.
>>>
>>> or
>>>
>>> The exception 'java.lang.ArrayIndexOutOfBoundsException' was thrown
>>> while evaluating an expression. SQLSTATE: XJ001: Java exception: ':
>>> java.lang.ArrayIndexOutOfBoundsException'.
>>>
>>> In either case, no entry is written to the console log or to derby.log.
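PS: On the lock-escalation point quoted above, one possible work-around I am considering (just a sketch; the table/column names, row count, and batch size are illustrative) is to delete in bounded chunks with a commit after each chunk, instead of a single "delete from x", so that no single transaction accumulates enough row locks to escalate to a table lock:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class BatchDelete {
    // Assumes a table x(id int, ...) populated with ids 0..rowCount-1
    // and a connection with auto-commit disabled.
    static void deleteInBatches(Connection conn, int rowCount, int batchSize)
            throws SQLException {
        PreparedStatement del = conn.prepareStatement(
                "delete from x where id >= ? and id < ?");
        for (int start = 0; start < rowCount; start += batchSize) {
            del.setInt(1, start);
            del.setInt(2, start + batchSize);
            del.executeUpdate();
            // Commit each chunk so the row locks are released before the
            // count reaches the escalation threshold
            // (derby.locks.escalationThreshold defaults to 5000).
            conn.commit();
        }
        del.close();
    }
}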