db-derby-user mailing list archives

From T K <sanokist...@yahoo.com>
Subject Re: Horrible performance - how can I reclaim table space?
Date Thu, 24 Sep 2009 01:31:23 GMT
A select count(*) returns 13 rows as expected. According to the explain plan, though, there
are over 85,000 rows, which I presume is dead space (deleted rows). Derby does not support detached
blobs. According to the plan, it traversed all 85,000 rows (active or deleted), about 32MB of
data, sequentially, to find those 13 rows, of which 4 match the predicate.
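
(For reference, a minimal sketch of purging the deleted rows with Derby's built-in SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE procedure; the connection URL, schema, and table name below are placeholders, not our actual ones:)

    // Sketch: purge committed deleted rows so a table scan no longer visits them.
    // The JDBC URL and the APP/SOMETABLE names are placeholders.
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class PurgeDeletedRows {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection("jdbc:derby:myDB")) {
                // SYSCS_INPLACE_COMPRESS_TABLE(schema, table, purgeRows, defragmentRows, truncateEnd)
                try (CallableStatement cs = conn.prepareCall(
                        "CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE(?, ?, ?, ?, ?)")) {
                    cs.setString(1, "APP");        // schema (placeholder)
                    cs.setString(2, "SOMETABLE");  // table from the plan
                    cs.setShort(3, (short) 1);     // purge committed deleted rows
                    cs.setShort(4, (short) 1);     // defragment the remaining rows
                    cs.setShort(5, (short) 1);     // give back free pages at the end of the file
                    cs.execute();
                }
            }
        }
    }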

This is truly nuts!




________________________________
From: "derby@segel.com" <derby@segel.com>
To: Derby Discussion <derby-user@db.apache.org>
Sent: Wednesday, September 23, 2009 9:22:16 PM
Subject: RE: Horrible performance - how can I reclaim table space?

  
I’m confused.

How many rows are actually in the table?
Also, the blob is stored in the same table, not stored separately?
(Sorry, but I’m working in a couple of different databases, so I’m always forgetting whether Derby allows for detached blobs …)

But if your table really only has 13 rows, it will always do a sequential scan.


________________________________
 
From: T K [mailto:sanokistoka@yahoo.com]
Sent: Wednesday, September 23, 2009 8:04 PM
To: derby-user@db.apache.org
Subject: Horrible performance - how can I reclaim table space?
 
We have a horrific performance issue with a table of 13 rows, each one containing a very small blob, because the table is presumably full of dead rows and we are table-scanning; here's part of the explain plan:

                       
Source result set:
    Table Scan ResultSet for SOMETABLE at read committed isolation level using instantaneous share row locking chosen by the optimizer
        Number of columns fetched=4
        Number of pages visited=8546
        Number of rows qualified=13
        Number of rows visited=85040
        optimizer estimated cost: 787747.94
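
(In case it helps anyone reproduce this, a plan like the one above can be captured with Derby's runtime statistics; a minimal sketch follows, where the connection URL, query, and column name are placeholders:)

    // Sketch: capture runtime statistics (including the Table Scan ResultSet section)
    // for a statement. URL, table, and predicate are placeholders.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ShowPlan {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:derby:myDB");
                 Statement stmt = conn.createStatement()) {
                // Turn on runtime statistics for this connection.
                stmt.execute("CALL SYSCS_UTIL.SYSCS_SET_RUNTIMESTATISTICS(1)");

                // Run the statement whose plan we want to see, and drain the results.
                try (ResultSet rs = stmt.executeQuery(
                        "SELECT * FROM SOMETABLE WHERE SOMECOL = 4")) {
                    while (rs.next()) { /* just fetch everything */ }
                }

                // Fetch the textual statistics for the last statement executed.
                try (ResultSet rs = stmt.executeQuery(
                        "VALUES SYSCS_UTIL.SYSCS_GET_RUNTIMESTATISTICS()")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }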

So I assume I have over 85,000 dead rows in the table, and compressing it does not reclaim the space. In fact, because we keep adding and deleting rows, the performance gets worse by the hour, and according to the above plan, Derby has processed over 32MB of data just to match 4 of the 13 rows. For the time being, I want to optimize this table scan before I resort to indices and/or reusing rows. This is with Derby 10.3.
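
(For clarity, I haven't shown the exact compress call above; a minimal sketch of what a full table compress typically looks like in Derby, assuming the standard SYSCS_UTIL.SYSCS_COMPRESS_TABLE procedure, with placeholder URL/schema/table names:)

    // Sketch: full compress that rebuilds the table and its indexes and
    // returns unused space to the operating system. Names are placeholders.
    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class CompressTable {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection("jdbc:derby:myDB")) {
                // SYSCS_COMPRESS_TABLE(schema, table, sequential)
                try (CallableStatement cs = conn.prepareCall(
                        "CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE(?, ?, ?)")) {
                    cs.setString(1, "APP");        // schema (placeholder)
                    cs.setString(2, "SOMETABLE");  // table from the plan
                    cs.setShort(3, (short) 1);     // non-zero = sequential rebuild (less memory)
                    cs.execute();
                }
            }
        }
    }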

Any thoughts?

Thanks


      