cocoon-dev mailing list archives

From Bruce Atherton <br...@callenish.com>
Subject Re: Avoiding OutOfMemory Errors by limiting data in pipeline
Date Thu, 08 May 2008 15:53:04 GMT
My only comment is that I think it would be good to allow the initial 
buffer size to be configurable. If you know the bulk of your responses 
are greater than 32K, then performing the ramp-up from 8K every time 
would be a waste of resources. For another web site, if most responses 
were smaller than 6K, then an 8K buffer would be perfect. Allowing
someone to tweak that based on their situation seems useful to me.

Not critical though, if it is hard to do. Allowing the buffer to scale 
is the important thing.

Joerg Heinicke wrote:
> On 27.04.2008 23:43, Joerg Heinicke wrote:
>
>>> 2. Does the full amount of the buffer automatically get allocated 
>>> for each request, or does it grow gradually based on the xml stream 
>>> size?
>>>
>>> I have a lot of steps in the pipeline, so I am worried about the 
>>> impact of creating too many buffers even if they are relatively 
>>> small. A 1 Meg buffer might be too much if it is created for every 
>>> element of every pipeline for every request.
>>
>> That's a very good question - with a negative answer: A buffer of 
>> that particular size is created initially. That's why I want to bring 
>> this issue up on dev again: With my changes for COCOON-2168 [1] it's 
>> now not only a problem for applications with over-sized downloads but 
>> potentially for everyone relying on Cocoon's default configuration. 
>> One idea would be to change our BufferedOutputStream implementation 
>> to take 2 parameters: one for the initial buffer size and one for the 
>> flush size. The flush threshold would be the configurable 
>> outputBufferSize, the initial buffer size does not need to be 
>> configurable I think.
>>
>> What do others think?
>
> No interest or no objections? :)
>
> Joerg
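
The two-parameter scheme Joerg describes (a small initial buffer plus a separate flush threshold) could be sketched roughly as below. Note this is a hypothetical illustration, not Cocoon's actual BufferedOutputStream: the class name, the doubling growth policy, and the constructor parameters are all assumptions for the sake of the example.

```java
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch: a buffered stream that starts with a small,
// configurable buffer and grows it gradually (doubling) up to a separate
// flush threshold, at which point buffered data is written through.
class GrowingBufferedOutputStream extends OutputStream {
    private final OutputStream out;
    private final int flushSize;  // threshold at which the buffer is flushed
    private byte[] buffer;        // starts at initialSize, grows on demand
    private int count;

    GrowingBufferedOutputStream(OutputStream out, int initialSize, int flushSize) {
        this.out = out;
        this.flushSize = flushSize;
        this.buffer = new byte[initialSize];
    }

    @Override
    public void write(int b) throws IOException {
        if (count == buffer.length) {
            if (buffer.length < flushSize) {
                // Grow gradually instead of allocating flushSize up front,
                // so small responses never pay for a large buffer.
                int newSize = Math.min(buffer.length * 2, flushSize);
                byte[] grown = new byte[newSize];
                System.arraycopy(buffer, 0, grown, 0, count);
                buffer = grown;
            } else {
                // Buffer has reached the flush threshold: write through.
                flush();
            }
        }
        buffer[count++] = (byte) b;
    }

    @Override
    public void flush() throws IOException {
        out.write(buffer, 0, count);
        count = 0;
        out.flush();
    }
}
```

With this shape, the flush threshold would map to the existing configurable outputBufferSize, while the initial size could stay at a small fixed default (or be made configurable too, per Bruce's suggestion), since a response smaller than the initial size only ever allocates that one small array.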

