db-derby-user mailing list archives

From Brett Wooldridge <brett.wooldri...@gmail.com>
Subject Re: Horrible performance - how can I reclaim table space?
Date Thu, 24 Sep 2009 01:31:51 GMT
If you are on 10.3, you might consider upgrading to 10.3.3.1, as a space
reclamation issue for large objects
(http://issues.apache.org/jira/browse/DERBY-4050) was resolved between 10.3
and 10.3.3.1. According to that defect, the upgraded version will still not
reclaim space lost prior to the upgrade, so a full offline compression is
required.
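
For reference, the offline compression is the documented system procedure,
runnable from ij. A minimal sketch (APP as the schema name is an assumption
on my part; SOMETABLE is the table from your plan):

  -- Rebuilds the table and its indexes, returning free space to the OS.
  -- A non-zero third argument rebuilds indexes one at a time, which uses
  -- less memory but takes longer.
  CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'SOMETABLE', 1);

  -- Lighter-weight alternative: the three SMALLINT flags ask it to purge
  -- deleted rows, defragment remaining rows, and truncate free pages at
  -- the end of the file.
  CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('APP', 'SOMETABLE', 1, 1, 1);

Both lock the table while they run, so schedule them when the table is quiet.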
-Brett


On Thu, Sep 24, 2009 at 10:03 AM, T K <sanokistoka@yahoo.com> wrote:

> We have a horrific performance issue with a table of just 13 rows, each
> containing a very small blob, presumably because the table is full of dead
> rows and we are table-scanning. Here's part of the explain plan:
>
>     Source result set:
>         Table Scan ResultSet for SOMETABLE at read committed isolation
>         level using instantaneous share row locking chosen by the optimizer
>             Number of columns fetched=4
>             Number of pages visited=8546
>             Number of rows qualified=13
>             Number of rows visited=85040
>             optimizer estimated cost: 787747.94
>
> So I assume I have over 85,000 dead rows in the table, and compressing it
> does not reclaim the space. In fact, because we keep adding and deleting
> rows, the performance gets worse by the hour, and according to the above
> plan Derby has processed over 32MB of data (8,546 pages, presumably at the
> default 4KB page size) just to match 4 of the 13 rows. For the time being,
> I want to optimize this table scan before I resort to indices and/or
> reusing rows. This is with Derby 10.3.
>
> Any thoughts?
>
> Thanks
>
>
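
P.S. On the index question: an index on whatever column your query filters
on would let the optimizer fetch the handful of qualifying rows directly
instead of visiting all 8546 pages. A minimal sketch, with SOMECOL standing
in as a placeholder for your actual filter column:

  -- Lets the optimizer choose an index scan over the full table scan.
  CREATE INDEX SOMETABLE_SOMECOL_IDX ON SOMETABLE (SOMECOL);

That said, this only works around the dead space; reclaiming it with the
compression above is still the real fix.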
