cocoon-dev mailing list archives

From Joerg Heinicke <>
Subject Re: Avoiding OutOfMemory Errors by limiting data in pipeline
Date Mon, 28 Apr 2008 03:43:52 GMT
On 24.04.2008 16:08, Bruce Atherton wrote:
> Thanks for the response. About setting the buffer size, this looks like 
> it could be what I am looking for. A few questions:
> 1. Do I have to set the buffer size on each transformer and the 
> serializer as well as the generator? What about setting it on the pipeline?

It is on the pipeline and only there. You can set it on the map:pipe 
element in the map:components section, so that it applies to every 
pipeline of that type, or on any individual map:pipeline element in the 
map:pipelines section.
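For illustration, a sketch of where the setting might live in the sitemap. The parameter name outputBufferSize is the one discussed below; the exact element layout and the pipe src are from memory and may differ in your Cocoon version:

```xml
<!-- In map:components: applies to every pipeline of this type -->
<map:pipes default="caching">
  <map:pipe name="caching"
            src="org.apache.cocoon.components.pipeline.impl.CachingProcessingPipeline">
    <parameter name="outputBufferSize" value="65536"/>
  </map:pipe>
</map:pipes>

<!-- Or on an individual pipeline in map:pipelines -->
<map:pipeline type="caching">
  <map:parameter name="outputBufferSize" value="65536"/>
  <!-- matchers etc. -->
</map:pipeline>
```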

> 2. Does the full amount of the buffer automatically get allocated for 
> each request, or does it grow gradually based on the xml stream size?
> I have a lot of steps in the pipeline, so I am worried about the impact 
> of creating too many buffers even if they are relatively small. A 1 Meg 
> buffer might be too much if it is created for every element of every 
> pipeline for every request.

That's a very good question - and unfortunately the answer is the bad 
one: a buffer of the full configured size is allocated up front. That's 
why I want to bring this issue up on dev again: with my changes for 
COCOON-2168 [1] it's now not only a problem for applications with 
over-sized downloads, but potentially for everyone relying on Cocoon's 
default configuration. One idea would be to change our 
BufferedOutputStream implementation to take two parameters: one for the 
initial buffer size and one for the flush size. The flush threshold 
would be the configurable outputBufferSize; the initial buffer size does 
not need to be configurable, I think.
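Roughly, such a two-parameter stream could look like this (a sketch only; the class and method names are made up for this example and are not Cocoon's actual implementation): the buffer starts at the small initial size, grows on demand, and pushes to the wrapped stream only once the flush threshold is reached.

```java
import java.io.IOException;
import java.io.OutputStream;

// Sketch of the proposed two-parameter buffering: small initial
// allocation, gradual growth, flush to the underlying stream at a
// configurable threshold (the "outputBufferSize").
public class GrowingBufferedOutputStream extends OutputStream {

    private final OutputStream out;
    private final int flushSize; // threshold at which we write through
    private byte[] buffer;       // starts small, grows on demand
    private int count;           // bytes currently buffered

    public GrowingBufferedOutputStream(OutputStream out,
                                       int initialSize,
                                       int flushSize) {
        this.out = out;
        this.flushSize = flushSize;
        this.buffer = new byte[initialSize];
    }

    @Override
    public void write(int b) throws IOException {
        if (count >= flushSize) {
            flushBuffer(); // threshold reached: push buffered bytes out
        }
        if (count == buffer.length) {
            // grow gradually instead of allocating flushSize up front
            int newSize = Math.min(buffer.length * 2, flushSize);
            byte[] grown = new byte[newSize];
            System.arraycopy(buffer, 0, grown, 0, count);
            buffer = grown;
        }
        buffer[count++] = (byte) b;
    }

    private void flushBuffer() throws IOException {
        if (count > 0) {
            out.write(buffer, 0, count);
            count = 0;
        }
    }

    @Override
    public void flush() throws IOException {
        flushBuffer();
        out.flush();
    }
}
```

This way a request that only ever produces a few KB never pays for the full configured buffer, while the flush behaviour at the threshold stays the same as today.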

What do others think?

> On an unrelated note, is there some way to configure caching so that 
> nothing is cached that is larger than a certain size? I'm worried that 
> this might be a caching issue rather than a buffer issue.

Not that I'm aware of. Why do you think it's caching? The cache is at 
least configurable in terms of the number of cache entries, and I think 
also in terms of maximum cache size. But beyond a certain size the cache 
entries are written to disk anyway, so it's unlikely to result in a 
memory issue.

> How do you read the object graph from the heap dump? To tell you the 
> truth, I'm not sure. This is the hierarchy generated by the Heap 
> Analyzer tool from IBM, and is from a heap dump on an AIX box running 
> the IBM JRE. My guess as to the Object referencing the 
> ComponentsSelector is that the ArrayList is not generified, so the 
> analyzer doesn't know the actual type of the Object being referenced. 
> What the object actually is would depend on what 
> CachingProcessorPipeline put into the ArrayList. That is just a guess, 
> though. And I have no explanation for the link between 
> FOM_Cocoon$CallContext and ConcreteCallProcessor. Perhaps things were 
> different in the 2.1.9 release?

No serious changes since 2.1.9 which is rev 392241 [2].


