Subject: svn commit: r540743 - in /db/derby/code/branches/10.2/java: engine/org/apache/derby/impl/store/access/heap/ testing/org/apache/derbyTesting/functionTests/master/ testing/org/apache/derbyTesting/functionTests/tests/store/
Date: Tue, 22 May 2007 20:45:14 -0000
To: derby-commits@db.apache.org
From: mikem@apache.org
Message-Id: <20070522204515.1C2BE1A981A@eris.apache.org>

Author: mikem
Date: Tue May 22 13:45:14 2007
New Revision: 540743

URL: http://svn.apache.org/viewvc?view=rev&rev=540743
Log:
DERBY-2549, backporting change 540657 from trunk
to 10.2 line. Contributed by Mayuresh Nirhali.

Fixes a null pointer exception when running in-place compress. Changes the
code to correctly handle the case where more than 100 rows are moved from a
single page. The new code returns to the caller after processing the 100 rows,
and the next trip through the loop picks up the scan where it left off on that
same page. A test case was added to an existing test.

Modified:
    db/derby/code/branches/10.2/java/engine/org/apache/derby/impl/store/access/heap/HeapCompressScan.java
    db/derby/code/branches/10.2/java/testing/org/apache/derbyTesting/functionTests/master/OnlineCompressTest.out
    db/derby/code/branches/10.2/java/testing/org/apache/derbyTesting/functionTests/tests/store/OnlineCompressTest.java

Modified: db/derby/code/branches/10.2/java/engine/org/apache/derby/impl/store/access/heap/HeapCompressScan.java
URL: http://svn.apache.org/viewvc/db/derby/code/branches/10.2/java/engine/org/apache/derby/impl/store/access/heap/HeapCompressScan.java?view=diff&rev=540743&r1=540742&r2=540743
==============================================================================
--- db/derby/code/branches/10.2/java/engine/org/apache/derby/impl/store/access/heap/HeapCompressScan.java (original)
+++ db/derby/code/branches/10.2/java/engine/org/apache/derby/impl/store/access/heap/HeapCompressScan.java Tue May 22 13:45:14 2007
@@ -133,6 +133,12 @@
         int                     ret_row_count = 0;
         DataValueDescriptor[]   fetch_row     = null;
 
+        // only fetch maximum number of rows per "group" as the size of
+        // the array.  If more than one group is available on page, just
+        // leave the scan on the page and the next group will come from
+        // this page also.
+        int max_rowcnt = row_array.length;
+
         if (SanityManager.DEBUG)
         {
             SanityManager.ASSERT(row_array != null);
@@ -206,6 +212,7 @@
             while ((scan_position.current_slot + 1) <
                     scan_position.current_page.recordCount())
             {
+                // Allocate a new row to read the row into.
                 if (fetch_row == null)
                 {
@@ -221,6 +228,7 @@
                 // move scan current position forward.
                scan_position.positionAtNextSlot();
+               int restart_slot = scan_position.current_slot;
 
                 this.stat_numrows_visited++;
@@ -256,7 +264,7 @@
                         new_handle) == 1)
                 {
                     // raw store moved the row, so bump the row count but
-                    // postion the scan at previous slot, so next trip
+                    // position the scan at previous slot, so next trip
                     // through loop will pick up correct row.
                     // The subsequent rows will have been moved forward
                     // to take place of moved row.
@@ -274,6 +282,24 @@
                         fetch_row = null;
                     }
+                }
+
+                // Derby-2549. If ret_row_count reaches the limit of the buffer,
+                // then return the maximum number and come back into the same
+                // method to fetch the remaining rows. In this block we ensure
+                // that the scan_position is appropriate.
+                if (ret_row_count >= max_rowcnt)
+                {
+                    // filled group buffer, exit fetch loop and return to caller
+
+                    // save current scan position by record handle.
+                    scan_position.current_rh =
+                        scan_position.current_page.getRecordHandleAtSlot(
+                            restart_slot);
+
+                    scan_position.unlatch();
+
+                    return(ret_row_count);
                 }
             }

Modified: db/derby/code/branches/10.2/java/testing/org/apache/derbyTesting/functionTests/master/OnlineCompressTest.out
URL: http://svn.apache.org/viewvc/db/derby/code/branches/10.2/java/testing/org/apache/derbyTesting/functionTests/master/OnlineCompressTest.out?view=diff&rev=540743&r1=540742&r2=540743
==============================================================================
--- db/derby/code/branches/10.2/java/testing/org/apache/derbyTesting/functionTests/master/OnlineCompressTest.out (original)
+++ db/derby/code/branches/10.2/java/testing/org/apache/derbyTesting/functionTests/master/OnlineCompressTest.out Tue May 22 13:45:14 2007
@@ -78,3 +78,6 @@
 Executing test: begin test5: 10000 row test.
 Executing test: end test5: 10000 row test.
 Ending test: test5
+Beginning test: test7
+Executing test: delete rows case succeeded.
+Ending test: test7

Modified: db/derby/code/branches/10.2/java/testing/org/apache/derbyTesting/functionTests/tests/store/OnlineCompressTest.java
URL: http://svn.apache.org/viewvc/db/derby/code/branches/10.2/java/testing/org/apache/derbyTesting/functionTests/tests/store/OnlineCompressTest.java?view=diff&rev=540743&r1=540742&r2=540743
==============================================================================
--- db/derby/code/branches/10.2/java/testing/org/apache/derbyTesting/functionTests/tests/store/OnlineCompressTest.java (original)
+++ db/derby/code/branches/10.2/java/testing/org/apache/derbyTesting/functionTests/tests/store/OnlineCompressTest.java Tue May 22 13:45:14 2007
@@ -170,6 +170,147 @@
     }
 
     /**
+     * Create and load a table with large columns.
+     * <p>
+     * If create_table is set creates a test data table with indexes.
+     * Loads num_rows into the table.  This table defaults to 32k page size.
+     * <p>
+     *
+     *
+     * @param conn          Connection to use for sql execution.
+     * @param create_table  If true, create new table - otherwise load into
+     *                      existing table.
+     * @param tblname       table to use.
+     * @param num_rows      number of rows to add to the table.
+     * @param start_value   Starting number from which num_rows are inserted
+     *
+     * @exception  StandardException  Standard exception policy.
+     **/
+    protected void createAndLoadLargeTable(
+    Connection  conn,
+    boolean     create_table,
+    String      tblname,
+    int         num_rows,
+    int         start_value)
+        throws SQLException
+    {
+        if (create_table)
+        {
+            Statement s = conn.createStatement();
+
+            // Derby-606. Note that this table is currently only used by Test6.
+            // Test6 needs data be to spread over 2 AllocExtents
+            // and this table schema is chosen so that the required scenario
+            // is exposed in minimum test execution time.
+            s.execute(
+                "create table " + tblname +
+                "(keycol int, indcol1 int, indcol2 int, data1 char(24), data2 char(24), data3 char(24)," +
+                "data4 char(24), data5 char(24), data6 char(24), data7 char(24), data8 char(24)," +
+                "data9 char(24), data10 char(24), inddec1 decimal(8), indcol3 int, indcol4 int, data11 varchar(50))");
+            s.close();
+        }
+
+        PreparedStatement insert_stmt =
+            conn.prepareStatement(
+                "insert into " + tblname + " values(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)");
+
+        char[] data1_data  = new char[24];
+        char[] data2_data  = new char[24];
+        char[] data3_data  = new char[24];
+        char[] data4_data  = new char[24];
+        char[] data5_data  = new char[24];
+        char[] data6_data  = new char[24];
+        char[] data7_data  = new char[24];
+        char[] data8_data  = new char[24];
+        char[] data9_data  = new char[24];
+        char[] data10_data = new char[24];
+        char[] data11_data = new char[50];
+
+        for (int i = 0; i < data1_data.length; i++)
+        {
+            data1_data[i]  = 'a';
+            data2_data[i]  = 'b';
+            data3_data[i]  = 'c';
+            data4_data[i]  = 'd';
+            data5_data[i]  = 'e';
+            data6_data[i]  = 'f';
+            data7_data[i]  = 'g';
+            data8_data[i]  = 'h';
+            data9_data[i]  = 'i';
+            data10_data[i] = 'j';
+        }
+        for (int i = 0; i < data11_data.length; i++)
+        {
+            data11_data[i] = 'z';
+        }
+
+        String data1_str  = new String(data1_data);
+        String data2_str  = new String(data2_data);
+        String data3_str  = new String(data3_data);
+        String data4_str  = new String(data4_data);
+        String data5_str  = new String(data5_data);
+        String data6_str  = new String(data6_data);
+        String data7_str  = new String(data7_data);
+        String data8_str  = new String(data8_data);
+        String data9_str  = new String(data9_data);
+        String data10_str = new String(data10_data);
+        String data11_str = new String(data11_data);
+
+        int row_count = 0;
+        try
+        {
+            for (int i = start_value; row_count < num_rows; row_count++, i++)
+            {
+                insert_stmt.setInt(1, i);               // keycol
+                insert_stmt.setInt(2, i * 10);          // indcol1
+                insert_stmt.setInt(3, i * 100);         // indcol2
+                insert_stmt.setString(4, data1_str);    // data1_data
+                insert_stmt.setString(5, data2_str);    // data2_data
+                insert_stmt.setString(6, data3_str);    // data3_data
+                insert_stmt.setString(7, data4_str);    // data4_data
+                insert_stmt.setString(8, data5_str);    // data5_data
+                insert_stmt.setString(9, data6_str);    // data6_data
+                insert_stmt.setString(10, data7_str);   // data7_data
+                insert_stmt.setString(11, data8_str);   // data8_data
+                insert_stmt.setString(12, data9_str);   // data9_data
+                insert_stmt.setString(13, data10_str);  // data10_data
+                insert_stmt.setInt(14, i * 20);         // indcol3
+                insert_stmt.setInt(15, i * 200);        // indcol4
+                insert_stmt.setInt(16, i * 50);
+                insert_stmt.setString(17, data11_str);  // data11_data
+
+                insert_stmt.execute();
+            }
+        }
+        catch (SQLException sqle)
+        {
+            System.out.println(
+                "Exception while trying to insert row number: " + row_count);
+            throw sqle;
+        }
+
+        if (create_table)
+        {
+            Statement s = conn.createStatement();
+
+            s.execute(
+                "create index " + tblname + "_idx_keycol on " + tblname +
+                "(keycol)");
+            s.execute(
+                "create index " + tblname + "_idx_indcol1 on " + tblname +
+                "(indcol1)");
+            s.execute(
+                "create index " + tblname + "_idx_indcol2 on " + tblname +
+                "(indcol2)");
+            s.execute(
+                "create unique index " + tblname + "_idx_indcol3 on " + tblname +
+                "(indcol3)");
+            s.close();
+        }
+
+        conn.commit();
+    }
+
+    /**
      * Create and load a table with long columns and long rows.
      * <p>
     * If create_table is set creates a test data table with indexes.
@@ -924,9 +1065,8 @@
     *     c varchar(300)
     *
     * @param conn          Connection to use for sql execution.
-    * @param create_table  If true, create new table - otherwise load into
-    *                      existing table.
-    * @param tblname       table to use.
+    * @param schemaName    the schema to use.
+    * @param table_name    the table to use.
     * @param num_rows      number of rows to add to the table.
     *
     * @exception  StandardException  Standard exception policy.
@@ -1220,6 +1360,63 @@
     }
 
+    /**
+     * Test 7 - Online compress test for fetching more rows than buffer limit.
+     * <p>
+     * For smaller row size, if number of rows per page is more than max buffer
+     * size, then check if the remaining rows are also fetched for Compress
+     * Operation
+     * <p>
+     **/
+    private void test7(
+    Connection  conn,
+    String      test_name,
+    String      table_name)
+        throws SQLException
+    {
+        beginTest(conn, test_name);
+
+        Statement s = conn.createStatement();
+
+        s.execute("create table " + table_name + "(keycol int)");
+        s.close();
+        PreparedStatement insert_stmt =
+            conn.prepareStatement("insert into " + table_name + " values(?)");
+        try
+        {
+            for (int i = 0; i < 1200; i++)
+            {
+                insert_stmt.setInt(1, i);
+
+                insert_stmt.execute();
+            }
+        }
+        catch (SQLException sqle)
+        {
+            System.out.println(
+                "Exception while trying to insert a row");
+            throw sqle;
+        }
+        conn.commit();
+
+        // delete the front rows leaving the last 200.  Post commit may reclaim
+        // space on pages where all rows are deleted.
+        executeQuery(
+            conn, "delete from " + table_name + " where keycol < 1000", true);
+
+        conn.commit();
+
+        if (verbose)
+            testProgress("deleted first 1000 rows, now calling compress.");
+
+        callCompress(conn, "APP", table_name, true, true, true, true);
+
+        testProgress("delete rows case succeeded.");
+
+        executeQuery(conn, "drop table " + table_name, true);
+
+        endTest(conn, test_name);
+    }
 
     public void testList(Connection conn)
         throws SQLException
@@ -1229,6 +1426,7 @@
         test3(conn, "test3", "TEST3");
         // test4(conn, "test4", "TEST4");
         test5(conn, "test5", "TEST5");
+        test7(conn, "test7", "TEST7");
     }
 
     public static void main(String[] argv)
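[Editorial note for reviewers of this backport: the pattern the patch implements can be sketched outside Derby. A scan that hands rows back in fixed-size groups must save its position when the group buffer fills mid-page, so the next call resumes on the same page instead of skipping ahead. The following minimal, self-contained sketch uses hypothetical names (GroupScanSketch, fetchNextGroup) and simplifies pages to lists of ints rather than Derby's latched pages and record handles; it is an illustration of the control flow, not Derby code.]

```java
import java.util.ArrayList;
import java.util.List;

public class GroupScanSketch {

    // A "page" is just a list of row keys; a "table" is a list of pages.
    static List<List<Integer>> makeTable(int pages, int rowsPerPage) {
        List<List<Integer>> table = new ArrayList<>();
        int key = 0;
        for (int p = 0; p < pages; p++) {
            List<Integer> page = new ArrayList<>();
            for (int r = 0; r < rowsPerPage; r++) page.add(key++);
            table.add(page);
        }
        return table;
    }

    int currentPage = 0;   // scan position: which page
    int currentSlot = -1;  // scan position: slot on that page (-1 = before first)

    // Fetch at most buffer.length rows, resuming from the saved position.
    // Mirrors the fixed fetchRows(): if the group buffer fills while rows
    // remain on the current page, return early and leave the position on
    // that same page so the next call picks up where this one left off.
    int fetchNextGroup(List<List<Integer>> table, int[] buffer) {
        int maxRowcnt = buffer.length;  // like max_rowcnt = row_array.length
        int count = 0;
        while (currentPage < table.size()) {
            List<Integer> page = table.get(currentPage);
            while (currentSlot + 1 < page.size()) {
                currentSlot++;                      // positionAtNextSlot()
                buffer[count++] = page.get(currentSlot);
                if (count >= maxRowcnt) {
                    // filled group buffer: return to caller, keeping
                    // currentPage/currentSlot so the next call resumes here
                    return count;
                }
            }
            currentPage++;       // page exhausted, move to the next one
            currentSlot = -1;
        }
        return count;
    }

    public static void main(String[] args) {
        // 6 pages x 200 rows = 1200 rows with a 100-row buffer, i.e. more
        // than one buffer's worth of rows per page -- the case DERBY-2549
        // fixes. Every row comes back, in groups of 100.
        List<List<Integer>> table = makeTable(6, 200);
        GroupScanSketch scan = new GroupScanSketch();
        int[] buffer = new int[100];
        int total = 0, groups = 0, got;
        while ((got = scan.fetchNextGroup(table, buffer)) > 0) {
            total += got;
            groups++;
        }
        System.out.println(total + " rows in " + groups + " groups");
    }
}
```

With 200 rows per page and a 100-row buffer, each page yields two full groups; the early return that saves the slot is what keeps the second half of each page from being skipped, which is the scenario test7's 1200-row table exercises against the real engine.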