From: "Eric Vasilik"
To: xmlbeans-dev@xml.apache.org
Date: Wed, 10 Dec 2003 09:24:31 -0800
Subject: RE: high volume processing

The V2 store architecture will have this capability. Briefly, I am designing the V2 store with an abstracted store back end that will have multiple implementations. One will be an in-memory backend with requirements similar to those of the V1 store. Another will be a memory-mapped, file-based store that can handle instances too large to fit into memory. This is the backend that may interest you.
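
To make the idea concrete, here is a rough sketch of what such an abstraction could look like. This is purely illustrative; the interface and class names are invented and are not the actual V2 store API. The point is only that the same store code can sit on top of either a heap-backed buffer or a memory-mapped file (java.nio), and in the latter case the OS pages data in and out, so the whole instance never has to live on the Java heap.

    // Hypothetical sketch only -- NOT the actual V2 store API; all names are invented.
    // The idea: the store talks to a narrow backend interface, and the backend decides
    // whether the bytes live on the Java heap or in a memory-mapped file.

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.ByteBuffer;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    interface StoreBackend {
        void write(int offset, ByteBuffer src) throws IOException;
        void read(int offset, ByteBuffer dst) throws IOException;
    }

    // Heap-backed: roughly what a V1-style in-memory store needs.
    class InMemoryBackend implements StoreBackend {
        private final ByteBuffer buf;

        InMemoryBackend(int capacity) {
            buf = ByteBuffer.allocate(capacity);
        }

        public void write(int offset, ByteBuffer src) {
            ByteBuffer view = buf.duplicate();
            view.position(offset);
            view.put(src);
        }

        public void read(int offset, ByteBuffer dst) {
            ByteBuffer view = buf.duplicate();
            view.position(offset);
            view.limit(offset + dst.remaining());
            dst.put(view);
        }
    }

    // File-backed via java.nio memory mapping; the OS pages data in and out,
    // so the instance does not have to fit on the Java heap.
    class MappedFileBackend implements StoreBackend {
        private final MappedByteBuffer map;

        MappedFileBackend(String path, int size) throws IOException {
            FileChannel channel = new RandomAccessFile(path, "rw").getChannel();
            map = channel.map(FileChannel.MapMode.READ_WRITE, 0, size);
        }

        public void write(int offset, ByteBuffer src) {
            ByteBuffer view = map.duplicate();
            view.position(offset);
            view.put(src);
        }

        public void read(int offset, ByteBuffer dst) {
            ByteBuffer view = map.duplicate();
            view.position(offset);
            view.limit(offset + dst.remaining());
            dst.put(view);
        }
    }

A real backend would additionally need free-space management, the ability to grow beyond a single mapping (a MappedByteBuffer is limited to 2GB), and so on; the shape of the abstraction is the point here.
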
- Eric

-----Original Message-----
From: Matthias Kubik [mailto:KUBIK@de.ibm.com]
Sent: Wednesday, December 10, 2003 4:26 AM
To: xmlbeans-dev@xml.apache.org
Subject: high volume processing


Hi all,
I'm new to this list, as I could not find anything in the list archive that would indicate memory problems.
Now, here's what happened:
I was trying the easypo sample as described on the web site. After some script fixing (Linux) I finally got the sample to work.
As I have a requirement to process XML files that are 100MB+ in size, I had some expectations... that were not fulfilled.
It seems that even a 30MB file runs into an out-of-memory error. I know I could temporarily work around that by giving the JVM more memory, but this is not a solution. To me it looks like the whole DOM tree (if any) and the object hierarchy are kept in memory. I'd love to see something more "intelligent" there.
My question now is: will that be addressed in V2, or is it even a design goal? (I didn't find anything under project management, though.)

Thanks
 - matthias
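
(For anyone hitting the same wall: the temporary workaround mentioned above is raising the JVM's maximum heap at launch time with the -Xmx flag. The heap size, classpath, class name, and file name below are placeholders; substitute whatever you run against your own instance.)

    java -Xmx512m -cp xbean.jar:. com.example.LoadEasyPo big-instance.xml

This only buys headroom; as long as the V1 store materializes the whole instance in memory, a large enough document will still exhaust the heap, which is exactly what the memory-mapped backend described at the top of this thread is intended to address.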