incubator-etch-dev mailing list archives

From scott comer <wer...@mac.com>
Subject Re: Big data transfer using etch
Date Tue, 14 Dec 2010 13:21:27 GMT
hi Nicolae!

if the data is easily chunked by you, such as a very large byte array, then the standard methods work just fine. for example, to transfer an image, sound, video, etc., you could easily implement an OutputStream to buffer up data and transmit it in chunks via etch, then reassemble it on the other side.
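as a rough illustration of the OutputStream idea, here is a minimal sketch. the ChunkSender interface and all names here are hypothetical, not part of the etch API; in practice send() would be a @oneway message on your service:

```java
import java.io.IOException;
import java.io.OutputStream;

// Sketch only: buffers bytes and hands fixed-size chunks to a sender
// callback. ChunkSender is an illustrative stand-in for an etch message.
public class ChunkingOutputStream extends OutputStream {
    public interface ChunkSender {
        void send(byte[] chunk) throws IOException;
    }

    private final ChunkSender sender;
    private final byte[] buf;
    private int count;

    public ChunkingOutputStream(ChunkSender sender, int chunkSize) {
        this.sender = sender;
        this.buf = new byte[chunkSize];
    }

    @Override
    public void write(int b) throws IOException {
        buf[count++] = (byte) b;
        if (count == buf.length)
            flushChunk();
    }

    @Override
    public void flush() throws IOException {
        if (count > 0)
            flushChunk();
    }

    private void flushChunk() throws IOException {
        // copy out only the filled portion, send it, and reset the buffer
        byte[] chunk = new byte[count];
        System.arraycopy(buf, 0, chunk, 0, count);
        sender.send(chunk);
        count = 0;
    }
}
```

the receiver would just append the chunks in arrival order to rebuild the original bytes.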

the problem comes when you have objects with complicated structure, such as rows from a db table. can you write 100 rows? 1000? it depends upon the data in each row.

the big message problem has no easy solution, but one approach that works might be this:

create a virtual stream of data by using the etch binary encoding to encode your large data structure, then chop the stream up, transmit the chunks, and reassemble them on the other side. you have to do the work yourself, but it isn't hard work and could serve as the basis for a real solution to the big message problem.
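the receive side of that scheme can be sketched like this. again, all names are made up for illustration; the etch binary encoding and decoding steps are assumed to happen elsewhere, and here we only splice the chunk bytes back together per transfer id:

```java
import java.io.ByteArrayOutputStream;
import java.util.HashMap;
import java.util.Map;

// Sketch only: reassembles chunks of an encoded structure, keyed by a
// caller-chosen transfer id. Not an etch API; a real version would also
// need timeouts and cleanup for abandoned transfers.
public class ChunkReassembler {
    private final Map<Long, ByteArrayOutputStream> transfers = new HashMap<>();

    public void startTransfer(long id) {
        transfers.put(id, new ByteArrayOutputStream());
    }

    public void addChunk(long id, byte[] chunk) {
        ByteArrayOutputStream out = transfers.get(id);
        if (out == null)
            throw new IllegalStateException("unknown transfer " + id);
        out.write(chunk, 0, chunk.length);
    }

    // returns the complete byte stream, ready to hand to the decoder
    public byte[] endTransfer(long id) {
        ByteArrayOutputStream out = transfers.remove(id);
        if (out == null)
            throw new IllegalStateException("unknown transfer " + id);
        return out.toByteArray();
    }
}
```

this relies on the transport delivering chunks for a given transfer in order, which etch's tcp transport gives you for messages on one connection.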

can you say some more about your application?

scott out

On 12/14/2010 3:57 AM, Nicolae Mihalache wrote:
> Hello,
>
> I'm considering replacing CORBA in an application and I found the etch
> project. It looks nice and seems to satisfy all my needs except the Big
> Message Problem, as described here:
> http://incubator.apache.org/etch/big-message-problem.html
>
> What would be nice is the ability to stream messages, a bit like
> @oneway but with guaranteed order and the possibility to receive an
> acknowledgement if a message has generated an exception.
> The functionality would be somewhat similar to standard TCP sockets:
> one pushes the data as fast as possible, and the write blocks if the
> network or the reader cannot sustain the throughput.
> From the API point of view it would look like:
> start transfer
> while(not finished):
>    id=push_data(new_chunk)
> end transfer
> -->  at this point all data is guaranteed to have been delivered
> If an exception is caught, the transfer is interrupted and one can get
> the id of the message that generated the exception.
>
>
> Strangely enough, this functionality, useful for file or big data
> transfer, is missing from all the RPC frameworks I've checked so far.
>
> It can be emulated to some extent with asynchronous calls, but one has to
> manually tune the number of back buffers depending on the network
> throughput and latency.
>
> Do you plan to implement such a thing in etch? Or would you accept such a feature?
>
>
> nicolae

