db-torque-user mailing list archives

From Brian McCallister <mccallis...@forthillcompany.com>
Subject datadump on big tables
Date Tue, 15 Apr 2003 20:29:08 GMT
Is there a way to increase the memory available to the datadump task?

I am running it on a pretty big table (really big table) and am hitting 
out of memory errors:

...
         at 
org.apache.velocity.texen.ant.TexenTask.execute(TexenTask.java:564)
         at org.apache.tools.ant.Task.perform(Task.java:319)
         at org.apache.tools.ant.Target.execute(Target.java:309)
         at org.apache.tools.ant.Target.performTasks(Target.java:336)
         at org.apache.tools.ant.Project.executeTarget(Project.java:1306)
         at 
org.apache.tools.ant.Project.executeTargets(Project.java:1250)
         at org.apache.tools.ant.Main.runBuild(Main.java:610)
         at org.apache.tools.ant.Main.start(Main.java:196)
         at org.apache.tools.ant.Main.main(Main.java:235)
Caused by: java.lang.OutOfMemoryError
--- Nested Exception ---
java.lang.OutOfMemoryError

on the ant task. I configured Ant to run with 512 megs, but profiling 
shows it is only using about 212 megs when the OOM error occurs - which 
leads me to believe that the datadump is secretly forking or some such.
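For what it's worth, this throwaway class (hypothetical, not part of Torque) is what I've been using to confirm how much heap a JVM actually gets under a given -Xmx setting:

```java
// Hypothetical throwaway class, not part of Torque: prints the maximum
// heap the current JVM will use, to sanity-check that an -Xmx setting
// actually took effect for the process in question.
public class HeapCheck {
    public static void main(String[] args) {
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxMb + " MB");
    }
}
```

If the datadump really is forking, a child JVM started this way would report its own (default) maximum heap rather than the 512 megs given to Ant.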

Anyone played with this much?

I am doing this with Torque 3.0 on OS X (JDK 1.4.1 release) against 
PostgreSQL 7.3.2 with the most recent org.postgresql.Driver.

Thanks,

Brian

