Date: Wed, 16 Apr 2003 10:05:58 -0400
Subject: Re: datadump on big tables
From: Brian McCallister
To: "Turbine Torque Users List"

Followup to this problem...

Apparently it does run directly in the ant process, with no fork - OS X has issues allocating more than 212 MB to the JVM, it seems.

Moved it to Sun JDK 1.4.1_02 on Linux and allocated 1 GB per process to the JVM, and it still ran out of memory. Profiling shows it really does use the memory - four processes consuming as much as I throw at them (1 GB per process in this case, 4-odd GB total memory usage) - which is off the charts of absurdity; the database isn't *that* big. A postgres dump (type c, so fairly well optimized - bzipping the dump only shaves off a few percent) is only 70 MB.

Is the data dumper significantly different in 3.1?
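Since the task appears to run inside the Ant JVM itself rather than forking, any heap increase has to go to the Ant process via ANT_OPTS rather than to a forked child. A minimal sketch (assuming a Unix shell; the `-Xmx` value of 1024m is illustrative, not a recommendation):

```shell
# ANT_OPTS passes extra flags to the JVM that launches Ant itself.
# Because the datadump task runs in-process (no fork), this is the
# heap that actually matters, not any per-task memory setting.
export ANT_OPTS="-Xmx1024m"
```

Then run the datadump target from the same shell so the exported variable is in effect.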
-Brian

On Tuesday, April 15, 2003, at 04:29 PM, Brian McCallister wrote:

> Is there a way to increase the memory available to the datadump task?
>
> I am running it on a pretty big table (really big table) and am
> hitting out-of-memory errors:
>
> ...
>         at org.apache.velocity.texen.ant.TexenTask.execute(TexenTask.java:564)
>         at org.apache.tools.ant.Task.perform(Task.java:319)
>         at org.apache.tools.ant.Target.execute(Target.java:309)
>         at org.apache.tools.ant.Target.performTasks(Target.java:336)
>         at org.apache.tools.ant.Project.executeTarget(Project.java:1306)
>         at org.apache.tools.ant.Project.executeTargets(Project.java:1250)
>         at org.apache.tools.ant.Main.runBuild(Main.java:610)
>         at org.apache.tools.ant.Main.start(Main.java:196)
>         at org.apache.tools.ant.Main.main(Main.java:235)
> Caused by: java.lang.OutOfMemoryError
> --- Nested Exception ---
> java.lang.OutOfMemoryError
>
> on the ant task. I configured ant to run with 512 MB, but profiling
> shows it is only using about 212 MB when the OOM error occurs - which
> leads me to believe that the datadump is secretly forking or some such.
>
> Has anyone played with this much?
>
> I am doing this with Torque 3.0 on OS X JDK 1.4.1 against
> postgres 7.3.2 with the most recent org.postgresql.Driver driver.
>
> Thanks,
>
> Brian
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: torque-user-unsubscribe@db.apache.org
> For additional commands, e-mail: torque-user-help@db.apache.org