db-torque-user mailing list archives

From Thomas Fischer <fisc...@seitenbau.net>
Subject RE: inefficient doDelete
Date Fri, 11 Mar 2005 16:48:59 GMT

Hi Paul,

on second thought, the reason for loading the datasets is probably that it
is the easiest way to do cascading deletes (meaning that, if any object has
a foreign key pointing to the deleted item, that object should also be
deleted to ensure referential integrity).
This could also be done using subselects, but I would guess that a) this is
difficult to implement and b) not all databases support subselects.
So this behaviour is not likely to be changed in forthcoming versions of
Torque.

There is also another way around the OutOfMemoryError: do the deletes in
chunks using criteria.setLimit(). The problem there is that Torque does not
tell you how many datasets it has deleted, so you have to check by other
means whether there are still datasets left to delete.
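
For illustration, a rough sketch of that approach (the chunk size of 1000 and
the extra doSelect() check are my own choices, not something Torque
prescribes; exception handling is omitted):

boolean recordsLeft = true;
while (recordsLeft)
{
    // delete at most 1000 matching books per round trip
    Criteria c = new Criteria();
    c.add(BookPeer.TYPE, "HISTORY");
    c.setLimit(1000);
    BookPeer.doDelete(c);

    // doDelete() does not report a row count, so check separately
    // whether any matching records are left
    Criteria check = new Criteria();
    check.add(BookPeer.TYPE, "HISTORY");
    check.setLimit(1);
    recordsLeft = !BookPeer.doSelect(check).isEmpty();
}

This still loads each chunk before deleting it, but the number of objects in
memory at any one time stays bounded.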

     Thomas

Thomas Fischer <fischer@seitenbau.net> wrote on 11.03.2005 17:25:27:

>
> "Husek, Paul" <Paul.Husek@pfizer.com> wrote on 11.03.2005 17:01:45:
>
> > I've been using Torque for almost a year now and am very happy with it.
> > Recently, though, I found something that confuses me.
> >
> > All along I've been deleting all History books like:
> >
> > Criteria c = new Criteria();
> > c.add(BookPeer.TYPE, "HISTORY");
> > BookPeer.doDelete(c);
> >
> > And it works fine.  But recently I tried this when there were over 100,000
> > history books.  I was greeted with a java "out of memory" error.  Is Torque
> > trying to load all records before deleting each of them?
>
> Yes, it does. Code from BasePeer.doDelete(Criteria criteria, Connection con):
>
> tds.where(sqlSnippet);
> tds.fetchRecords();
> if (tds.size() > 1 && criteria.isSingleRecord())
> {
>     handleMultipleRecords(tds);
> }
> for (int j = 0; j < tds.size(); j++)
> {
>     Record rec = tds.getRecord(j);
>     rec.markToBeDeleted();
>     rec.save();
> }
>
> > Why would it?
>
> I am not sure about this. It seems that in its early days, Torque relied
> a lot on the village library, and this is the way village handles deletes.
> Not a convincing explanation, though. Perhaps some old Torque guru can
> think of another reason...
>
> > Is there a work around?
> >
>
> I can think of two things: either patch Torque or create the SQL yourself
> and use executeStatement(String).
> I will ask on the dev list if anybody can think of a reason why the records
> are loaded before they are deleted. If nobody has a reason for it, chances
> are good that it will be changed.
>
>    Thomas
>
>
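
Just to illustrate the executeStatement(String) workaround mentioned in the
quoted message: a rough sketch, reusing the BookPeer constants from Paul's
example (BasePeer here is org.apache.torque.util.BasePeer; the exact table
and column constants depend on your generated schema, and exception handling
is omitted):

// build the DELETE by hand and run it as a single statement, so no
// records are loaded into memory; note that this bypasses the cascading
// deletes Torque would otherwise perform for you
BasePeer.executeStatement(
    "DELETE FROM " + BookPeer.TABLE_NAME
    + " WHERE " + BookPeer.TYPE + " = 'HISTORY'");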


---------------------------------------------------------------------
To unsubscribe, e-mail: torque-user-unsubscribe@db.apache.org
For additional commands, e-mail: torque-user-help@db.apache.org

