cxf-dev mailing list archives

From "Glynn, Eoghan" <>
Subject RE: Flush Problem with HTTPConduit
Date Fri, 22 Sep 2006 06:30:35 GMT

> >I think Tom was referring to the outbound dispatch as 
> opposed to the inbound, i.e. the process of zipping up rather 
> than unzipping.
> >
> >In my naïve understanding of how data compression works, it 
> performs best on fairly decent chunks of data (i.e. a large 
> block size) as opposed to being drip-fed data in small 
> increments (e.g. individual XML elements). 
> >
> >Hence the motivation for caching all (or at least reasonable 
> chunks of) the payload before applying the compression.
> >  
> >
> You could still do it a chunk (100K?) at a time over a 
> large message - and this would alleviate the need to write to 
> disk or hold too much in memory.

Sure, I had in mind a block size of 64k (which is apparently the look-ahead used by GNU zip),
so 100K sounds like the right order of magnitude.

Certainly this zip block size and the AbstractCachedOutputStream threshold for dumping to
disk (currently 8k, a bit conservative IMO) could both be set to the same value, so that in
the Gzip case we'd never have to shunt data onto disk.
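For illustration, a minimal sketch of the chunk-at-a-time idea using plain java.util.zip (the class name and the 64k block size here are illustrative assumptions, not CXF's actual stream implementation):

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPOutputStream;

public class ChunkedGzip {

    // Hypothetical block size, matching the caching threshold discussed above.
    static final int BLOCK_SIZE = 64 * 1024;

    // Feed the payload to the GZIP stream one block at a time, so at most
    // one block need be buffered in memory before compression - no need to
    // shunt the whole message to disk first.
    static byte[] gzipInChunks(byte[] payload) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        // The second constructor arg sets the internal deflate buffer size.
        GZIPOutputStream gzip = new GZIPOutputStream(bos, BLOCK_SIZE);
        for (int off = 0; off < payload.length; off += BLOCK_SIZE) {
            int len = Math.min(BLOCK_SIZE, payload.length - off);
            gzip.write(payload, off, len);
        }
        gzip.finish();
        return bos.toByteArray();
    }
}
```

The point being that the deflater sees reasonably large writes either way, so compression ratio shouldn't suffer much relative to zipping the fully cached payload in one go.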

