db-derby-user mailing list archives

From T K <sanokist...@yahoo.com>
Subject Re: Horrible performance - how can I reclaim table space?
Date Thu, 24 Sep 2009 01:49:52 GMT
BTW, I have to ask this question: how exactly do we define OFFLINE compression? I assume I
still bring the database up and call the compression stored proc from ij, but no one else
connects, correct?
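To make that concrete, here's roughly the ij session I have in mind, run while nothing else is connected (the database path and the APP/SOMETABLE names are just placeholders):

    connect 'jdbc:derby:/path/to/mydb';
    -- full compress: copies the data into new conglomerates and rebuilds the indexes
    call SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'SOMETABLE', 1);
    -- non-zero third argument = compress sequentially (slower, but uses less memory/disk)
    disconnect;
    exit;

Is that all "offline" amounts to here, or is there more to it?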




________________________________
From: T K <sanokistoka@yahoo.com>
To: Derby Discussion <derby-user@db.apache.org>
Sent: Wednesday, September 23, 2009 9:44:35 PM
Subject: Re: Horrible performance - how can I reclaim table space?


Yes, there is multi-threaded updating going on. Although I did try compression, I did NOT try
it offline.




________________________________
From: Brett Wooldridge <brett.wooldridge@gmail.com>
To: Derby Discussion <derby-user@db.apache.org>
Sent: Wednesday, September 23, 2009 9:39:31 PM
Subject: Re: Horrible performance - how can I reclaim table space?

Still, the fix is only for a multi-threaded update scenario.  I don't know the access pattern
of your application, so it may or may not help resolve your issue.  I would have expected offline
compression of the table to have fixed your issue.
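
If it turns out the compress isn't actually giving pages back, it might be worth checking what
the table is holding on to before and after you run it. On 10.3 I believe the old-style
SpaceTable diagnostic VTI is the way to do that from ij (SOMETABLE stands in for your real
table name):

    -- one row per conglomerate: the base table plus each index on it
    select conglomeratename, isindex, numallocatedpages, numfreepages, numunfilledpages
    from new org.apache.derby.diag.SpaceTable('SOMETABLE') as t;

A big NUMFREEPAGES that survives the compress would confirm space really isn't being reclaimed.
(For what it's worth, 8546 visited pages at the default 4K page size is around 33MB, which
lines up with the 32MB figure you mentioned.)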



On Thu, Sep 24, 2009 at 10:35 AM, T K <sanokistoka@yahoo.com> wrote:

Ouch... I have 10.3.3.0! I will consider the upgrade.

Thanks Brett.

>________________________________
>From: Brett Wooldridge <brett.wooldridge@gmail.com>
>To: Derby Discussion <derby-user@db.apache.org>
>Sent: Wednesday, September 23, 2009 9:31:51 PM
>Subject: Re: Horrible performance - how can I reclaim table space?
>
>If you are on 10.3, you might consider 10.3.3.1, as a space reclamation issue for large
>objects was resolved (http://issues.apache.org/jira/browse/DERBY-4050) between 10.3 and
>10.3.3.1. According to that defect, the upgraded version (10.3.3.1) will still not reclaim
>space lost prior to the update, so a full offline compression is required.
>
>-Brett
>
>
>
>On Thu, Sep 24, 2009 at 10:03 AM, T K <sanokistoka@yahoo.com> wrote:
>
>>We have a horrific performance issue with a table of 13 rows, each one containing a very
>>small blob, because the table is presumably full of dead rows and we are table-scanning;
>>here's part of the explain plan:
>>
>>                        Source result set:
>>                                Table Scan ResultSet for SOMETABLE at read committed isolation level using instantaneous share row locking chosen by the optimizer
>>                                        Number of columns fetched=4
>>                                        Number of pages visited=8546
>>                                        Number of rows qualified=13
>>                                        Number of rows visited=85040
>>                                        optimizer estimated cost:       787747.94
>>
>>So I assume I have over 85,000 dead rows in the table, and compressing it does not
>>reclaim the space. In fact, because we keep adding and deleting rows, the performance gets
>>worse by the hour, and according to the above plan, Derby has processed over 32MB of data
>>just to match 4 of the 13 rows. For the time being, I want to optimize this table scan
>>before I resort to indices and/or reusing rows. This is with Derby 10.3.
>>
>>Any thoughts?
>>
>>Thanks
>>

      