db-derby-user mailing list archives

From "John I. Moore, Jr." <softmo...@att.net>
Subject RE: Issue with large delete in derby
Date Fri, 31 Jan 2014 12:02:39 GMT
Here is a personal experience that could be related.

I once worked on an application that used a widely known commercial database (not Derby),
and a script that someone else wrote took several hours to perform a series of deletions
and insertions.  After examining the script, I realized that there was a single commit at
the end.  By adding a few commits at appropriate places where the database would remain
consistent for use by the application, we were able to achieve an order-of-magnitude
increase in performance.  Hope this helps.
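The approach above can be sketched in JDBC against an embedded Derby database. This is a minimal illustration, not code from the application described: the database URL, the LOG_ENTRIES table, and its ID key are all hypothetical names, and the chunk size would need tuning for a real workload.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class ChunkedDelete {

    /** Split [start, end) into half-open ranges no larger than chunkSize. */
    static List<long[]> chunkRanges(long start, long end, long chunkSize) {
        List<long[]> ranges = new ArrayList<>();
        for (long lo = start; lo < end; lo += chunkSize) {
            ranges.add(new long[] { lo, Math.min(lo + chunkSize, end) });
        }
        return ranges;
    }

    /** Delete rows in bounded chunks, committing after each chunk so the
     *  transaction (and the set of locks it holds) never grows without limit. */
    static void deleteInChunks(Connection conn, long maxId) throws SQLException {
        conn.setAutoCommit(false); // take control of transaction boundaries
        String sql = "DELETE FROM LOG_ENTRIES WHERE ID >= ? AND ID < ?";
        for (long[] r : chunkRanges(0, maxId, 10_000)) {
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setLong(1, r[0]);
                ps.setLong(2, r[1]);
                ps.executeUpdate();
            }
            conn.commit(); // releases the row locks held for this chunk
        }
    }

    public static void main(String[] args) throws SQLException {
        // Hypothetical embedded Derby URL; adjust for your environment.
        try (Connection conn = DriverManager.getConnection("jdbc:derby:sampleDB")) {
            deleteInChunks(conn, 1_000_000);
        }
    }
}
```

The key point is that each commit ends the current transaction, so locks are released and the log does not have to track one enormous unit of work; the trade-off is that the deletion is no longer atomic, so the chunk boundaries must be chosen where the data stays consistent for the application.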


John I. Moore, Jr.
SoftMoore Consulting

-----Original Message-----
From: Amitava Kundu1 [mailto:amitavakundu@in.ibm.com] 
Sent: Thursday, January 30, 2014 1:46 AM
To: derby-user@db.apache.org
Subject: Issue with large delete in derby

We are using embedded Derby in our product. This Derby database is used
as a regular RDBMS where a lot of inserts, deletes, and selects happen. There
are business entities, each occurrence of which can be 10 GB or larger in
size, e.g. huge log file data.
In our application we use cascade delete and also have referential
integrity constraints enabled.

This application runs on 64 bit Linux with 8 GB RAM allocated to JVM.
Similar time is observed in our development Windows box.

 It takes more than 3 hours to delete those entities. During this time all
the relevant tables stay locked and no other operation is feasible.

We'd like to know what options/strategies could be adopted for:
   Speeding up the delete process
   Performing other database activities in parallel

	Amitava Kundu
