From: "Bergquist, Brett"
To: Derby Discussion <derby-user@db.apache.org>
Date: Wed, 14 Mar 2012 05:54:45 -0700
Subject: RE: Compress large table

The two procedures SYSCS_COMPRESS_TABLE and SYSCS_INPLACE_COMPRESS_TABLE for compacting are going to hold a lock on the table during the total time of the compression, which for 20 million rows is going to be a long time. Not the answer that you want to hear, but that is what it is.
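For reference, both procedures live in the SYSCS_UTIL schema and are invoked with CALL. A minimal sketch, with 'MYSCHEMA' and 'MYTABLE' as placeholders for your own names:

```sql
-- Full compression: copies the table and rebuilds its indexes.
-- Holds an exclusive table lock for the whole operation.
-- The third argument (SEQUENTIAL) is a SMALLINT flag: non-zero
-- rebuilds one conglomerate at a time, using less temp space.
CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('MYSCHEMA', 'MYTABLE', 1);

-- In-place variant: the three trailing SMALLINT flags enable
-- purging deleted rows, defragmenting, and truncating free pages
-- at the end of the file, respectively.
CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('MYSCHEMA', 'MYTABLE', 1, 1, 1);
```

The in-place variant can reclaim less space than the full rebuild, but it does not need a shadow copy of the table on disk.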
Have you tried querying SYSCS_DIAG.SPACE_TABLE to make sure that you have space to be reclaimed? You might find that compressing the table is not going to return much space in any case.

SELECT *
FROM TABLE (SYSCS_DIAG.SPACE_TABLE('MYSCHEMA', 'MYTABLE')) AS T2

Replace 'MYSCHEMA' and 'MYTABLE' with your particular ones.

A couple of thoughts: it might be that you could copy the rows from one table to a new table, then drop the original table and rename the new table back. Of course, this requires about twice the disk usage while the copy is being done, and you would need to be smart about making sure that any new rows written after you start the copy are also moved to the new table, but it might allow the old table to stay in use while you are building a new compressed table.

Hope this helps.

Brett

From: Stefan R. [mailto:elstefanito@gmail.com]
Sent: Tuesday, March 13, 2012 12:59 PM
To: derby-user@db.apache.org
Subject: Compress large table

Hi,

In one of our current projects we are using Derby and have some large db tables (over 20 million rows and some indexes).

We decided to clean this table with a scheduled task. Now, to be able to free some disk space, we want to compress the files using SYSCS_COMPRESS_TABLE. What would be the best option to use this function and keep the table lock as short as possible? Our app needs to keep running while the compression is taking place.

Do you have any suggestions?

Thank you,
Stefan
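The copy-and-rename approach described above could be sketched roughly as below. The table names are placeholders, any indexes, constraints, and grants must be recreated by hand, and handling rows written during the copy is up to the application; RENAME TABLE also has restrictions (for example, it cannot be used on tables referenced by foreign keys or views), so treat this as an outline rather than a recipe:

```sql
-- Create an empty copy with the same column definitions.
CREATE TABLE MYSCHEMA.MYTABLE_NEW AS
    SELECT * FROM MYSCHEMA.MYTABLE WITH NO DATA;

-- Copy the surviving rows; the new table is written compactly,
-- which is where the space savings come from.
INSERT INTO MYSCHEMA.MYTABLE_NEW SELECT * FROM MYSCHEMA.MYTABLE;

-- Recreate indexes and constraints on MYTABLE_NEW here, then swap.
RENAME TABLE MYSCHEMA.MYTABLE TO MYTABLE_OLD;
RENAME TABLE MYSCHEMA.MYTABLE_NEW TO MYTABLE;
DROP TABLE MYSCHEMA.MYTABLE_OLD;
```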