db-torque-user mailing list archives

From "Husek, Paul" <Paul.Hu...@pfizer.com>
Subject RE: inefficient doDelete
Date Mon, 14 Mar 2005 14:44:30 GMT
Thomas,

Thanks for the answers.  What should I conclude from this behavior?  Are
people using Torque on big databases?  If Torque wants to load each record, I
suppose that works for me, but there needs to be a way to tell it not to load
all records at once.  On any large database (with a criteria that matches a
lot of records) this method becomes useless.  How are people handling this?

Thanks,

Paul

-----Original Message-----
From: Thomas Fischer [mailto:fischer@seitenbau.net] 
Sent: Friday, March 11, 2005 11:49 AM
To: Apache Torque Users List
Subject: RE: inefficient doDelete

Hi Paul,

on second thought, the reason for loading the datasets is probably that it
is the easiest way to do cascading deletes (meaning that if any object has a
foreign key pointing to the deleted item, that object is also deleted to
preserve referential integrity).
This could also be done using subselects, but a) that would be difficult to
implement and b) I am not sure whether all databases support subselects.
So this behaviour is not likely to change in forthcoming versions of
Torque.
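
For illustration (the REVIEW table and its BOOK_ID foreign key are made up),
such a cascading delete written with a subselect would look roughly like
this:

DELETE FROM REVIEW WHERE BOOK_ID IN
    (SELECT BOOK_ID FROM BOOK WHERE TYPE = 'HISTORY')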

There is also another way around the OutOfMemoryError: do the deletes
in chunks using Criteria.setLimit(). The problem there is that Torque does
not tell you how many records it has deleted, so you have to check by other
means whether there are still records left to delete.
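
Something like this should work (untested; the extra probe criteria and the
chunk size of 1000 are just one way to do it):

Criteria probe = new Criteria();
probe.add(BookPeer.TYPE, "HISTORY");
probe.setLimit(1);

Criteria chunk = new Criteria();
chunk.add(BookPeer.TYPE, "HISTORY");
chunk.setLimit(1000); // pick a size that fits comfortably into memory

// doDelete() does not report how many records it removed, so check
// for leftovers with a small select before each round
while (!BookPeer.doSelect(probe).isEmpty())
{
    BookPeer.doDelete(chunk);
}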

     Thomas

Thomas Fischer <fischer@seitenbau.net> wrote on 11.03.2005 17:25:27:

>
> "Husek, Paul" <Paul.Husek@pfizer.com> schrieb am 11.03.2005 17:01:45:
>
> > I've been using Torque for almost a year now and am very happy with
> > it.  Recently, though, I found something that confuses me.
> >
> > All along I've been deleting all History books like:
> >
> > Criteria c = new Criteria();
> > c.add(BookPeer.TYPE, "HISTORY");
> > BookPeer.doDelete(c);
> >
> > And it works fine.  But recently I tried this when there were over
> > 100,000 history books.  I was greeted with a Java "out of memory"
> > error.  Is Torque trying to load all records before deleting each of
> > them?
>
> Yes, it does. Here is the code from BasePeer.doDelete(Criteria criteria,
> Connection con):
>
> // fetch every matching record into memory
> tds.where(sqlSnippet);
> tds.fetchRecords();
> if (tds.size() > 1 && criteria.isSingleRecord())
> {
>     handleMultipleRecords(tds);
> }
> // then delete the fetched records one by one
> for (int j = 0; j < tds.size(); j++)
> {
>     Record rec = tds.getRecord(j);
>     rec.markToBeDeleted();
>     rec.save();
> }
>
> > Why would it?
>
> I am not sure about this. It seems that in its early days, Torque
> relied a lot on the village library, and this is the way village
> handles deletes. Not a convincing explanation, though. Perhaps some old
> Torque guru can think of another reason...
>
> > Is there a work around?
> >
>
> I can think of two things: either patch Torque or create the SQL
> yourself and use BasePeer.executeStatement(String).
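>
> For example (untested, and assuming executeStatement() returns the
> number of affected rows, as I believe it does), deleting the history
> books by hand-built SQL could look like this:
>
>     int deleted = BasePeer.executeStatement(
>         "DELETE FROM " + BookPeer.TABLE_NAME
>         + " WHERE " + BookPeer.TYPE + " = 'HISTORY'");
>
> Keep in mind that this bypasses whatever Torque does with the loaded
> records, so if other tables reference the deleted books you have to
> take care of those rows yourself.
>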
> I will ask on the dev list if anybody can think of a reason why the
> records are loaded before they are deleted. If nobody has a reason for
> it, chances are good that it will be changed.
>
>    Thomas

---------------------------------------------------------------------
To unsubscribe, e-mail: torque-user-unsubscribe@db.apache.org
For additional commands, e-mail: torque-user-help@db.apache.org

