commons-dev mailing list archives

From "Brett Henderson" <jaka...@bretth.com>
Subject RE: [codec] Streamable Codec Framework
Date Tue, 13 Jan 2004 01:01:54 GMT
> I suspect we are going to need something along the lines of a
> "Streamable Encoder/Decoder" for the multipart stuff. If we look at
> HttpClient's MultipartPostMethod, there is a framework for basically
> joining multiple sources (files, strings, byte arrays) together into
> an OutputStream which is multipart encoded. I want to attempt to
> maintain this strategy when isolating the code out of HttpClient and
> into the multipart sandbox project. I suspect that your Streamable
> Consumer/Producer stuff could also be advantageous for multipart
> encoding/decoding. At least I want to make sure we're not reinventing
> the wheel.

I'll try to look at the HttpClient code to get a feel for how it
hangs together.  From what I can gather my code should plug in fairly
cleanly.  My code doesn't mandate any particular IO interface; any
interface can be adapted in by implementing the relevant consumers
and producers.  I've tried to design the framework so that the
actual codec algorithms have no knowledge of the source or destination
of the data they process, which makes them far more generic and
greatly increases their usefulness.

> Specifically, I see we're going to need interfaces other than the
> existing codec ones because they pass around byte[] and Object when
> encoding/decoding. We need to maintain that the content will be
> streamed from its native data structure when it's consumed by a
> consumer (HttpClient MultipartPost for instance), or, when it is
> used to "decode", that the Objects produced are built efficiently
> off an InputStream (i.e. Files are immediately written to the
> FileSystem, Strings or byte[]s are maintained in memory).

My framework doesn't specify any particular type of data, although
byte-oriented processing is the only fleshed-out implementation at
the moment.  All it cares about is that a producer is available
to generate data from an external source and a matching consumer
is available to pass it to a destination.
Every producer must have a matching consumer, and a consumer can
also be called directly by clients.
Typically an engine (implementing both consumer and producer) will
sit in the middle, performing some kind of translation/encoding/decoding
on the data: it "consumes" input data and "produces" output data.
Using this structure, processing chains can be defined so that
multiple transforms can be performed on the same data, all in a
stream-oriented fashion.
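To make the shape concrete, here's a rough sketch of what I mean.
The names are illustrative only, not the actual API -- a terminal
consumer collects data, and an engine sits in front of it, transforming
chunks as they pass through without knowing where they come from or go:

```java
import java.io.ByteArrayOutputStream;

// Illustrative sketch only; these names are not the framework's API.
// A consumer accepts chunks of data and passes them to a destination.
interface ByteConsumer {
    void consume(byte[] buf, int off, int len); // accept a chunk
    void finish();                              // signal end of data
}

// Terminal consumer: collects everything it is given into memory.
class BufferConsumer implements ByteConsumer {
    private final ByteArrayOutputStream out = new ByteArrayOutputStream();
    public void consume(byte[] buf, int off, int len) { out.write(buf, off, len); }
    public void finish() { }
    byte[] toByteArray() { return out.toByteArray(); }
}

// Engine: consumes input and produces transformed output downstream.
// The transform here (ASCII upper-casing) is a stand-in for a real
// codec algorithm; the engine has no knowledge of source or destination.
class UpperCaseEngine implements ByteConsumer {
    private final ByteConsumer downstream;
    UpperCaseEngine(ByteConsumer downstream) { this.downstream = downstream; }
    public void consume(byte[] buf, int off, int len) {
        byte[] out = new byte[len];
        for (int i = 0; i < len; i++) {
            byte b = buf[off + i];
            out[i] = (b >= 'a' && b <= 'z') ? (byte) (b - ('a' - 'A')) : b;
        }
        downstream.consume(out, 0, len);
    }
    public void finish() { downstream.finish(); }
}
```

Because the engine only sees a downstream ByteConsumer, swapping the
in-memory sink for a file- or socket-backed one requires no change to
the codec itself.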

To cut a long story short, chains can be defined to access data
from streams/buffers/etc., perform the relevant translations (re-using
small in-memory buffers to minimise garbage collection) and pass
data on to output streams/buffers/etc.  Due to the stream support,
data of arbitrary size can be processed.
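As a rough analogy (using the standard java.io/java.util.zip classes
rather than my framework's own API), a deflate/inflate round trip shows
the same idea: each stage in the chain transforms data and forwards it,
and a small re-usable buffer keeps memory use constant no matter how
much data flows through:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

public class ChainDemo {
    static byte[] roundTrip(byte[] original) throws IOException {
        // Output chain: data -> deflater -> in-memory buffer.
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        DeflaterOutputStream deflate = new DeflaterOutputStream(compressed);
        deflate.write(original);
        deflate.close();

        // Input chain: in-memory buffer -> inflater -> data.
        InflaterInputStream inflate = new InflaterInputStream(
                new ByteArrayInputStream(compressed.toByteArray()));
        ByteArrayOutputStream restored = new ByteArrayOutputStream();
        byte[] buf = new byte[64]; // small re-usable buffer
        int n;
        while ((n = inflate.read(buf)) != -1) {
            restored.write(buf, 0, n);
        }
        return restored.toByteArray();
    }
}
```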

> 
> Either way, I'm currently "tidying" up a Maven project directory to
> be committed into the sandbox for the new multipart codec stuff.
> Once it's in place we could add your code to it as well.

Let me know if you want to import any of my code and I'll do
any necessary package reorganisation.


---------------------------------------------------------------------
To unsubscribe, e-mail: commons-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: commons-dev-help@jakarta.apache.org

