xml-xmlbeans-dev mailing list archives

From "Eric Vasilik" <eric...@bea.com>
Subject RE: high volume processing
Date Wed, 10 Dec 2003 17:24:31 GMT
The V2 store architecture will have this capability.  Briefly, I am designing the V2 store
with an abstracted store back end that will have multiple implementations.  One will be an
in-memory backend with requirements similar to those of the V1 store.  Another will be a
memory-mapped, file-based store that can handle instances larger than will fit into memory.
That is the backend which may interest you.
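To make the idea concrete, here is a minimal sketch of what such an abstracted backend might look like. The interface and class names below are hypothetical illustrations, not the actual V2 store API: one implementation keeps the instance on the heap (as V1 does), the other backs it with a memory-mapped file so documents larger than the heap can be paged by the OS.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

// Hypothetical abstraction over the store's storage layer.
interface StoreBackend {
    void write(byte[] data) throws IOException;
    byte[] read() throws IOException;
}

// Keeps the whole instance on the heap, like the V1 store.
class InMemoryBackend implements StoreBackend {
    private byte[] buffer = new byte[0];
    public void write(byte[] data) { buffer = data.clone(); }
    public byte[] read() { return buffer.clone(); }
}

// Backs the instance with a memory-mapped file, so the document
// need not fit in the Java heap; the OS pages it in on demand.
class MappedFileBackend implements StoreBackend {
    private final Path file;
    MappedFileBackend(Path file) { this.file = file; }
    public void write(byte[] data) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.READ,
                StandardOpenOption.WRITE)) {
            ByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE, 0, data.length);
            map.put(data);
        }
    }
    public byte[] read() throws IOException {
        return Files.readAllBytes(file);
    }
}

public class BackendDemo {
    public static void main(String[] args) throws IOException {
        byte[] doc = "<purchase-order/>".getBytes(StandardCharsets.UTF_8);

        StoreBackend mem = new InMemoryBackend();
        mem.write(doc);
        System.out.println(new String(mem.read(), StandardCharsets.UTF_8));

        Path tmp = Files.createTempFile("store", ".bin");
        StoreBackend mapped = new MappedFileBackend(tmp);
        mapped.write(doc);
        System.out.println(new String(mapped.read(), StandardCharsets.UTF_8));
        Files.deleteIfExists(tmp);
    }
}
```

The point of the abstraction is that the code above the backend never cares where the bytes live, so swapping the in-memory implementation for the mapped one is transparent.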
- Eric

-----Original Message-----
From: Matthias Kubik [mailto:KUBIK@de.ibm.com]
Sent: Wednesday, December 10, 2003 4:26 AM
To: xmlbeans-dev@xml.apache.org
Subject: high volume processing

Hi all,
I'm new to this list; I could not find anything in the list archive that would indicate memory
problems.
Now, here's what happened: 

I was trying the easypo sample as described on the web site. After some script fixing (Linux)
I finally got the sample to work.
As I have a requirement to process XML files that are 100MB+ in size, I had some
expectations...that were not fulfilled.
It seems that even a 30MB file would run into an out-of-memory error. I know I could
temporarily work around that by giving the JVM more memory, but that is not a solution. To me
this looks like the whole DOM tree (if any) and the object hierarchy are kept in memory. I'd
love to see something more "intelligent" there.
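For reference, the temporary workaround mentioned above is the JVM's standard `-Xmx` option (e.g. `java -Xmx512m ...`), which raises the maximum heap. A small check like this shows the heap ceiling the process actually got; the 512m value is only illustrative, and it does not change the underlying whole-document-in-memory design.

```java
public class HeapCheck {
    public static void main(String[] args) {
        // Run with, e.g.:  java -Xmx512m HeapCheck
        // maxMemory() reports the largest heap the JVM will attempt to use.
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxMb + " MB");
    }
}
```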
My question now is: will that be addressed in V2, or is it even a design goal? (I didn't find
anything in the project mgt, tho.)

 - matthias
