db-derby-user mailing list archives

From "Bergquist, Brett" <BBergqu...@canoga.com>
Subject How to best constrain database space with many records being inserted and deleted
Date Tue, 06 Jul 2010 18:09:59 GMT
I have a situation where my system is adding about 600K records a day to a table and also
deleting about 600K records a day.  Basically it is keeping 30 days' worth of history of some
network management service level agreement data.  So each day about 600K new tests are performed
and recorded in the database, and each day, once past the 30-day mark, about 600K old records are
purged out.  On average there are about 18 million records in the table.
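
For illustration, the nightly purge is essentially a delete of everything older than 30 days;
the table and column names below are just placeholders for the real ones:

    DELETE FROM SLA_TEST_RESULT
    WHERE TEST_TIMESTAMP < {fn TIMESTAMPADD(SQL_TSI_DAY, -30, CURRENT_TIMESTAMP)};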

I have little to no downtime for database maintenance, maybe 2 hours per month at most.
What I am seeing is that the database keeps growing and does not seem to be reusing the deleted
space.  Should it be?  The records being inserted are exactly the same size as the records being
deleted.
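
If it helps, I assume I could look at how many allocated versus free pages the table has with the
SYSCS_DIAG.SPACE_TABLE diagnostic table function, something like the following (schema and table
names are placeholders):

    SELECT NUMALLOCATEDPAGES, NUMFREEPAGES, ESTIMSPACESAVING
    FROM TABLE(SYSCS_DIAG.SPACE_TABLE('APP', 'SLA_TEST_RESULT')) AS T
    WHERE ISINDEX = 0;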

I know that I could use SYSCS_UTIL.SYSCS_COMPRESS_TABLE to reclaim the space, but I am not so
interested in giving space back to the OS as in ensuring that the space freed by the deleted
records is reused.  I have tried reclaiming the space with that procedure, but it takes about
5 hours, which is just too long.
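
For reference, the call I tried was along these lines (schema and table names are placeholders,
and the third argument controls whether the rebuild is done sequentially):

    CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'SLA_TEST_RESULT', 1);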

I also see there is a SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE, but I am not sure what the
PURGE_ROWS option is going to do.  It sounds like something I want to look into, but the
documentation is not clear enough for me to tell whether that is what I need.
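
From the documentation I gather the call would look something like the following, with the three
SMALLINT flags enabling PURGE_ROWS, DEFRAGMENT_ROWS, and TRUNCATE_END respectively (again, the
schema and table names are just placeholders):

    CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('APP', 'SLA_TEST_RESULT', 1, 1, 1);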

Thanks in advance.

Brett
