incubator-etch-dev mailing list archives

From Holger Grandy <Holger.Gra...@bmw-carit.de>
Subject RE: Big data transfer using etch
Date Tue, 14 Dec 2010 16:10:21 GMT
Hi Nicolae, 

Etch is fully symmetric: the server (whose name only tells you that it
listens for connections) can send messages to the client (who initiates the
connection and is therefore the "client") at any time. The same holds in the
other direction.

Etch has a listening thread on both sides which handles incoming messages,
and the runtime dispatches new incoming requests as well as answers to
previous calls. You don't have to manage those threads yourself. Your
client-directed call will be invoked from the Etch receiver thread (as long
as you don't annotate it with @AsyncReceiver, in which case each call gets
its own thread on the recipient side).

So, the @Direction(Client) method you mention will be a good solution for
your problem from my point of view.
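
To make that concrete, here is a rough sketch in plain Java (the names are
made up for illustration, they are not the actual generated Etch stubs): the
server's client-directed call simply sets a flag that your sending loop
checks between chunks.

// Hypothetical client-side sketch, illustrative names only.
public class UploadClient {

    private volatile boolean stopRequested = false;
    private volatile String stopReason = null;

    // Invoked from the Etch receiver thread when the server sends the
    // client-directed "stopTransfer" message.
    public void stopTransfer(String reason) {
        stopReason = reason;
        stopRequested = true;
    }

    // Runs in your own sending thread.
    public void sendFile(byte[][] chunks, DataSink server) throws Exception {
        for (byte[] chunk : chunks) {
            if (stopRequested) {
                throw new Exception("transfer aborted by server: " + stopReason);
            }
            server.pushChunk(chunk);   // imagined as a oneway call, returns immediately
        }
        server.endTransfer();          // imagined as a twoway call: blocks until the
                                       // server has processed everything, so a final
                                       // error can still surface here
    }

    // Illustrative stand-in for the Etch-generated remote server proxy.
    public interface DataSink {
        void pushChunk(byte[] data) throws Exception;
        void endTransfer() throws Exception;
    }
}

Since the flag is volatile, the sending thread sees the change made by the
receiver thread without any further synchronization.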

Cheers, 
Holger

> -----Original Message-----
> From: Nicolae Mihalache [mailto:xpromache@gmail.com]
> Sent: Dienstag, 14. Dezember 2010 16:06
> To: etch-dev@incubator.apache.org
> Subject: Re: Big data transfer using etch
> 
> Actually I'm just testing now. Oneway indeed works as you say, but
> there are two things I don't like:
> - oneway methods cannot throw exceptions
> - when I throw an unchecked exception (in Java), it does not seem to
>   propagate to the client.
> 
> I need the exceptions in order to tell the client to stop sending data
> if the server cannot handle it (because the filesystem is full or
> whatever).
> 
> I guess a solution then is to create a method in the other direction
> to tell the client to stop, together with some error message.
> But then the client has to be able to receive this message in a
> different thread and stop the sending thread. Will that work? Can the
> client receive messages while it is itself sending messages to the
> server?
> 
> 
> nicolae
> 
> 
> On Tue, Dec 14, 2010 at 3:51 PM, Holger Grandy
> <Holger.Grandy@bmw-carit.de> wrote:
> > Hello Nicolae,
> >
> > oneway messages are delivered using the same TCP transport as twoway
> > messages and are therefore delivered in order. The difference is that
> > there is no acknowledgement (or return value) sent after processing
> > the message on the other side.
> >
> > This should work well in your scenario. There is a significant
> > performance gain when using oneway calls in a fast sequence, one after
> > another, compared to twoway messages. I have experienced up to a
> > factor of 10 when using oneway. I am pretty sure you will come close
> > to the maximum bandwidth available when using oneways for your
> > scenario.
> >
> > Please tell us about your experiences!
> >
> > Regards,
> > Holger
> >
> >> -----Original Message-----
> >> From: Nicolae Mihalache [mailto:xpromache@gmail.com]
> >> Sent: Dienstag, 14. Dezember 2010 15:03
> >> To: etch-dev@incubator.apache.org
> >> Subject: Re: Big data transfer using etch
> >>
> >> Hello and thanks for the answer.
> >>
> >> My problem is not chunking the data; that I can do easily. The
> >> problem is how to send data efficiently at the maximum available
> >> throughput. If I use normal methods (called actions, as I learned in
> >> the meantime), I will be very limited by the latency. For example,
> >> the Europe-US round-trip is >100 ms, which will limit the speed to
> >> 10 messages/sec.
> >>
> >> If I use oneway methods (called events, as I also learned today),
> >> there is no guaranteed order of delivery. At least, that's what I
> >> thought based on my previous CORBA experience (in CORBA one has to
> >> set up a special single-threaded POA to guarantee ordered delivery,
> >> and even that has problems). Reading more through the etch mailing
> >> lists, I found out that oneway methods are actually delivered in
> >> order. And even better, if there is a problem with the delivery
> >> there is a notification mechanism. So it actually seems to work the
> >> way I want (but I haven't tested it yet).
> >>
> >> It's a pity that the documentation isn't better when it comes to
> >> threading and such. Reading the mailing lists helps a lot, so I
> >> started reading them all. I have now reached the thread from
> >> September, "Future of etch"...
> >>
> >> I'll come back with more questions after I do some tests.
> >>
> >> nicolae
> >>
> >> On Tue, Dec 14, 2010 at 2:21 PM, scott comer <wert1y@mac.com> wrote:
> >> > hi Nicolae!
> >> >
> >> > if the data is easily chunked by you, such as a very large byte
> >> > array, then the standard methods work just fine. for example, to
> >> > transfer an image, sound, video, etc. you could easily implement
> >> > an OutputStream to buffer up data and transmit it in chunks via
> >> > etch, reassemble on the other side, etc.
> >> >
> >> > the problem comes when you have objects with complicated
> >> > structure, such as rows from a db table. can you write 100 rows?
> >> > 1000? depends upon the data in each row.
> >> >
> >> > the big message problem has no easy solution, but one that works
> >> > might be this:
> >> >
> >> > create a virtual stream of data by using the etch binary encoding
> >> > to encode your large data structure, then chop the stream up,
> >> > transmit the chunks, and reassemble on the other side. you have to
> >> > do the work yourself, but it isn't hard work and could serve as a
> >> > basis for a real solution to the big message problem.
> >> >
> >> > can you say some more about your application?
> >> >
> >> > scott out
> >> >
> >> > On 12/14/2010 3:57 AM, Nicolae Mihalache wrote:
> >> >>
> >> >> Hello,
> >> >>
> >> >> I'm considering possibilities for replacing CORBA in an
> >> >> application and I found the etch project. It looks nice and seems
> >> >> to satisfy all my needs except the Big Message Problem, as
> >> >> described here:
> >> >> http://incubator.apache.org/etch/big-message-problem.html
> >> >>
> >> >> What would be nice is the ability to stream messages, a bit like
> >> >> @oneway but with guaranteed order and the possibility to receive
> >> >> an acknowledgement if a message has generated an exception. The
> >> >> functionality would be somewhat similar to standard TCP sockets:
> >> >> one pushes the data as fast as possible and the write blocks if
> >> >> the network or the reader cannot sustain the throughput.
> >> >> From the API point of view it would look like this:
> >> >>
> >> >> start transfer
> >> >> while (not finished):
> >> >>   id = push_data(new_chunk)
> >> >> end transfer
> >> >> --> at this point all data is guaranteed to have been delivered
> >> >>
> >> >> If an exception is caught, the transfer is interrupted and one can
> >> >> get the id of the message that generated the exception.
> >> >>
> >> >>
> >> >> Strangely enough, this functionality, useful for file or big data
> >> >> transfer, is missing from all the RPC frameworks I've checked so
> >> >> far.
> >> >>
> >> >> It can be emulated somewhat with asynchronous calls, but one has
> >> >> to manually tune the number of back buffers depending on the
> >> >> network throughput and latency.
> >> >>
> >> >> Do you plan to implement such a thing in etch? Or to accept such
> >> >> a feature?
> >> >>
> >> >>
> >> >> nicolae
> >> >
> >> >
> >
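
For the OutputStream-based chunking scott describes in his message above, a
minimal sketch could look like the following (again plain Java with made-up
names, not an actual Etch API): writes are buffered and each full chunk is
shipped through a small sink interface, to be reassembled on the other side.

import java.io.IOException;
import java.io.OutputStream;
import java.util.Arrays;

public class ChunkingOutputStream extends OutputStream {

    // Stand-in for whatever proxy the real service would provide.
    public interface ChunkSink {
        void pushChunk(byte[] data) throws IOException;  // imagined as a oneway call
        void endTransfer() throws IOException;           // imagined as a twoway call for the final ack
    }

    private final ChunkSink sink;
    private final byte[] buffer;
    private int filled = 0;

    public ChunkingOutputStream(ChunkSink sink, int chunkSize) {
        this.sink = sink;
        this.buffer = new byte[chunkSize];
    }

    @Override
    public void write(int b) throws IOException {
        buffer[filled++] = (byte) b;
        if (filled == buffer.length) {
            flush();
        }
    }

    @Override
    public void flush() throws IOException {
        if (filled > 0) {
            sink.pushChunk(Arrays.copyOf(buffer, filled));
            filled = 0;
        }
    }

    @Override
    public void close() throws IOException {
        flush();
        sink.endTransfer();  // blocks until the receiver has reassembled everything
    }
}

You would wrap this around your data source on the sending side and implement
the matching reassembly on the receiving side.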
