Brett,
Thanks for the suggestion. Based on prior discussions on this mailing list, I had considered the option of partitioning the data chronologically and dropping (or truncating)
old tables instead of using a single table. However, that would be a significant change to an application that has been in use for over 10 years and that works well with most other databases (granted, databases like Oracle, MySQL, SQL Server etc. are heavier-weight database servers). In any case, we have been experimenting with Derby over the past few months since it is Java based and has an embedded option. Everything about Derby works fabulously well for our needs except the space reclamation part. I am hoping we can resolve it in some manner, even if the solution is not perfect.
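For anyone following along, these are the two standard compress invocations being discussed (the 'APP'/'MYTABLE' names are placeholders):

```sql
-- In-place compress: purges committed deleted rows, defragments, and
-- returns free space at the end of the file (the three trailing flags),
-- without taking the table fully offline.
CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('APP', 'MYTABLE', 1, 1, 1);

-- Full compress: rebuilds the table and its indexes and typically
-- reclaims more space, but requires an exclusive table lock.
CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'MYTABLE', 1);
```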

Thanks again for your thoughts,
Sundar


From: "Bergquist, Brett" <BBergquist@canoga.com>
To: Derby Discussion <derby-user@db.apache.org>
Sent: Thursday, November 3, 2011 10:19 AM
Subject: RE: SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE question

Just a heads up: I recently went through this exercise with a table that is always being inserted at one end and deleted at the other.  Network test results were being inserted at a rate of about 4 million rows per day, and the same number from earlier days needed to be purged.  I had major performance issues trying to delete rows while inserting rows; it turned out I could insert faster than I could delete.  On top of that, the space was not being reused efficiently enough, and I had no downtime in which to run the compress procedures.

What I ended up doing was data partitioning.  I created a table for each week of the year, used the Restricted VTI functions to build a "view" combining all of the tables with UNION back into one virtual table, and then used "truncate table" to purge a whole week of data in a few seconds.
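A minimal sketch of the layout (table and column names are hypothetical; Brett's actual "view" was built with VTI table functions rather than the plain UNION ALL view shown here, which is a simplified stand-in):

```sql
-- One hypothetical table per week of the year (RESULTS_W01 .. RESULTS_W52).
CREATE TABLE RESULTS_W01 (ID BIGINT PRIMARY KEY, RECORDED TIMESTAMP, VAL DOUBLE);
CREATE TABLE RESULTS_W02 (ID BIGINT PRIMARY KEY, RECORDED TIMESTAMP, VAL DOUBLE);
-- ... remaining weeks ...

-- Simplified stand-in for the virtual combined table.
CREATE VIEW RESULTS_ALL AS
    SELECT * FROM RESULTS_W01
    UNION ALL
    SELECT * FROM RESULTS_W02;

-- Purging a whole week is then near-instant and returns the space,
-- with no compress pass needed.
TRUNCATE TABLE RESULTS_W01;
```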

The Restricted VTI was used because it can limit the returned rows based on the incoming query constraints.  Not perfect, as it is a poor man's partitioning, but it does work.
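For reference, a Derby table function is registered and queried along these lines (the class and method names are hypothetical; the constraint push-down happens when the backing class implements org.apache.derby.vti.RestrictedVTI):

```sql
-- Register a hypothetical table function backed by a Java class.
CREATE FUNCTION ALL_RESULTS()
RETURNS TABLE (ID BIGINT, RECORDED TIMESTAMP, VAL DOUBLE)
LANGUAGE JAVA
PARAMETER STYLE DERBY_JDBC_RESULT_SET
READS SQL DATA
EXTERNAL NAME 'com.example.ResultsVTI.allResults';

-- With a RestrictedVTI implementation, Derby passes the WHERE
-- constraints into the function so it can skip irrelevant partitions.
SELECT * FROM TABLE(ALL_RESULTS()) T WHERE T.ID > 1000000;
```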

You might consider this route if you have no downtime to delete and compress.

Just some thoughts

Brett

-----Original Message-----
From: Sundar Narayanaswamy [mailto:sundar007@yahoo.com]
Sent: Saturday, October 29, 2011 1:02 AM
To: derby-user@db.apache.org
Subject: Re: SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE question


I have posted the issue to DERBY-5487. I have also attached the Java test
program.

The test does insert rows at one end of the primary key range and delete them
from the other end. Interestingly, I noticed that the primary key space is
reclaimed if I reuse the primary keys across the insert-delete loops. But my
application requires continuously increasing primary keys (no reuse).
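The per-iteration space figures Mike asks about below can be pulled from the SYSCS_DIAG.SPACE_TABLE diagnostic table function ('APP'/'MYTABLE' are placeholders):

```sql
-- One row for the base table plus one per index; watching
-- NUMALLOCATEDPAGES vs. NUMFREEPAGES across iterations shows whether
-- deleted space is actually being reclaimed or reused.
SELECT CONGLOMERATENAME, ISINDEX, NUMALLOCATEDPAGES, NUMFREEPAGES
FROM TABLE(SYSCS_DIAG.SPACE_TABLE('APP', 'MYTABLE')) T;
```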


Mike Matrigali wrote:
>
> Posting your test to a JIRA issue would be best.  It would be
> interesting to post the space table results after each
> insert/delete/compress iteration (or every 10, ...).
> When do you commit (every row or every 10000)?  Is it multi-threaded?
> Does your
> test always insert rows at one end of the index and delete them
> from the other end?  If so, it may be DERBY-5473 (a runtime issue,
> not a compress table issue).
>
>
