db-derby-user mailing list archives

From <de...@segel.com>
Subject RE: Horrible performance - how can I reclaim table space?
Date Thu, 24 Sep 2009 01:22:16 GMT
I'm confused.

 

How many rows are actually in the table? And is the blob stored in the
same table rather than separately?

(Sorry, but I'm working in a couple of different databases, so I'm always
forgetting whether Derby allows for detached blobs.)

 

But if your table really only has 13 rows, it will always do a sequential
scan.
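
If it really is mostly dead space, Derby's SYSCS_DIAG.SPACE_TABLE
diagnostic should show it. A minimal check from ij, assuming the table
lives in the default APP schema (substitute your own schema name):

    -- Allocated vs. free pages for SOMETABLE and its indexes
    SELECT CONGLOMERATENAME, NUMALLOCATEDPAGES, NUMFREEPAGES,
           PAGESIZE, ESTIMSPACESAVING
    FROM TABLE (SYSCS_DIAG.SPACE_TABLE('APP', 'SOMETABLE')) AS T;

A large NUMFREEPAGES (or ESTIMSPACESAVING) relative to the table's size
would confirm the dead-row theory.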

 

  _____  

From: T K [mailto:sanokistoka@yahoo.com] 
Sent: Wednesday, September 23, 2009 8:04 PM
To: derby-user@db.apache.org
Subject: Horrible performance - how can I reclaim table space?

 

We have a horrific performance issue with a table of 13 rows, each one
containing a very small blob, because the table is presumably full of dead
rows and we are table-scanning; here's part of the explain plan:

    Source result set:
        Table Scan ResultSet for SOMETABLE at read committed isolation
          level using instantaneous share row locking chosen by the optimizer
            Number of columns fetched=4
            Number of pages visited=8546
            Number of rows qualified=13
            Number of rows visited=85040
            optimizer estimated cost: 787747.94

So I assume I have over 85,000 dead rows in the table, and compressing it
does not reclaim the space. In fact, because we keep adding and deleting
rows, the performance gets worse by the hour, and according to the above
plan, Derby has processed over 32MB of data just to match 4 of the 13 rows.
For the time being, I want to optimize this table scan before I resort to
indices and/or reusing rows. This is with Derby 10.3.
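
For reference, the compression attempts were along the lines of Derby's
standard system procedures ('APP' standing in for the actual schema):

    -- Full offline compression: rebuilds the table and its indexes and
    -- returns the freed space to the operating system. A non-zero third
    -- argument compresses sequentially (slower, but uses less memory).
    CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'SOMETABLE', 1);

    -- Cheaper in-place variant: purge deleted rows, defragment remaining
    -- rows, and truncate free pages at the end of the table.
    CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('APP', 'SOMETABLE', 1, 1, 1);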

Any thoughts?

Thanks

 

