db-torque-user mailing list archives

From "Russell Simpkins" <RussellSimpk...@funnygarbage.com>
Subject RE: datadump on big tables
Date Wed, 16 Apr 2003 14:34:55 GMT
Brian,
I would suggest writing that code a bit differently to export large data to XML.  If you
look at the source code, it appears to be doing a select * from the table and then returning
ALL of the results in a QueryDataSet object...  That really should be written using streams
to work effectively.  I don't think anyone intended it to export a 1 gig table.  With the
implementation I saw, you will always run out of memory at some point.  If it were streamed,
you would only run out of disk space.
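Something along these lines, for instance - just a plain JDBC sketch, not Torque's actual
code, with the table name and connection details made up - keeps memory flat no matter how
big the table is.  (The postgres driver only streams on a cursor when autocommit is off and
the result set is forward-only, so whether setFetchSize really batches may depend on your
driver version.)

    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.Writer;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.Statement;

    public class StreamingXmlDump {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "secret");
            // Autocommit must be off for the driver to use a cursor and
            // fetch in batches rather than buffering every row in memory.
            conn.setAutoCommit(false);
            Statement stmt = conn.createStatement(
                    ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
            stmt.setFetchSize(500); // pull 500 rows at a time

            ResultSet rs = stmt.executeQuery("SELECT * FROM big_table");
            ResultSetMetaData meta = rs.getMetaData();
            int cols = meta.getColumnCount();

            Writer out = new BufferedWriter(new FileWriter("big_table.xml"));
            out.write("<dataset>\n");
            while (rs.next()) {
                // Each row is written out and forgotten; nothing accumulates.
                out.write("  <row>\n");
                for (int i = 1; i <= cols; i++) {
                    String value = rs.getString(i);
                    // Real code would escape &, < and > in the value here.
                    out.write("    <" + meta.getColumnName(i) + ">"
                            + (value == null ? "" : value)
                            + "</" + meta.getColumnName(i) + ">\n");
                }
                out.write("  </row>\n");
            }
            out.write("</dataset>\n");
            out.close();
            rs.close();
            stmt.close();
            conn.close();
        }
    }

With that shape the only thing that grows is the file on disk, which is exactly what you
want for a 1 gig table.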

-----Original Message-----
From: Brian McCallister [mailto:mccallister@forthillcompany.com]
Sent: Wednesday, April 16, 2003 10:06 AM
To: Turbine Torque Users List
Subject: Re: datadump on big tables


Followup to this problem...

Apparently it does run directly in the ant process, no fork - OS X has 
issues allocating more than 212 megs to the JVM, it seems.
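For the record, since the task runs in-process, the heap has to come from 
whatever the ant launcher itself was started with, something like:

    export ANT_OPTS=-Xmx1024m
    ant datadump

rather than any fork setting on the task itself.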

Moved it to Sun JDK 1.4.1_02 on Linux and allocated 1 gig per process 
to the JVM, and it still ran out of memory. Profiling it - it really does 
use the memory - 4 processes using as much memory as I will throw at 
them (1 gig per process in this case, some 4-odd gigabytes of total 
memory usage) - which is off the charts of absurdity; the database isn't 
*that* big - a postgres dump (format c, so pretty well optimized - 
bzipping the dump only shaves off a few percent) is only 70 megs.

Is the data dumper significantly different in 3.1?

-Brian

On Tuesday, April 15, 2003, at 04:29 PM, Brian McCallister wrote:

> Is there a way to increase the memory available to the datadump task?
>
> I am running it on a pretty big table (really big table) and am 
> hitting out of memory errors:
>
> ...
>         at org.apache.velocity.texen.ant.TexenTask.execute(TexenTask.java:564)
>         at org.apache.tools.ant.Task.perform(Task.java:319)
>         at org.apache.tools.ant.Target.execute(Target.java:309)
>         at org.apache.tools.ant.Target.performTasks(Target.java:336)
>         at org.apache.tools.ant.Project.executeTarget(Project.java:1306)
>         at org.apache.tools.ant.Project.executeTargets(Project.java:1250)
>         at org.apache.tools.ant.Main.runBuild(Main.java:610)
>         at org.apache.tools.ant.Main.start(Main.java:196)
>         at org.apache.tools.ant.Main.main(Main.java:235)
> Caused by: java.lang.OutOfMemoryError
> --- Nested Exception ---
> java.lang.OutOfMemoryError
>
> on the ant task. I configured ant to run with 512 megs, but profiling 
> it, it is only using 212 or so megs when the OOM error occurs - which 
> leads me to believe that the datadump is secretly forking or some such.
>
> Anyone played with this much?
>
> I am doing this with Torque 3.0 on OS X JDK 1.4.1 release against 
> postgres 7.3.2 with the most recent org.postgresql.Driver driver.
>
> Thanks,
>
> Brian


---------------------------------------------------------------------
To unsubscribe, e-mail: torque-user-unsubscribe@db.apache.org
For additional commands, e-mail: torque-user-help@db.apache.org

