ignite-dev mailing list archives

Site index · List index
Message view « Date » · « Thread »
Top « Date » · « Thread »
From Andrey Kornev <andrewkor...@hotmail.com>
Subject RE: Continuous queries changes
Date Tue, 28 Jul 2015 00:08:33 GMT
I thought a transaction was a fair price to pay for such strong delivery guarantees. But if the
originally proposed approach works without requiring anything that is just the 2PC protocol
in disguise -- great!

It's not so much the performance hit that I was concerned about, but the amount of state that
each node will have to maintain and the non-triviality (in my opinion) of having the nodes
agree on which queue items can be safely discarded.

Thanks!
Andrey

> Date: Mon, 27 Jul 2015 16:02:49 -0700
> Subject: Re: Continuous queries changes
> From: alexey.goncharuk@gmail.com
> To: dev@ignite.incubator.apache.org
> 
> In my opinion the approach with the IgniteQueue does not work well: if the
> events were pushed to the IgniteQueue asynchronously, the same problem with
> missing events would exist, since a node might crash in the window when the
> update is already completed but the event is not yet added to the queue. If
> events were pushed to the IgniteQueue synchronously, it would mean adding a
> huge performance hit to each update, since queue.add() for this use case
> would require transactional semantics.
> 
> On the other hand, I do not see anything wrong with the backup queue - it
> should not add any performance drawbacks and the design suggested by
> Valentin/Yakov seems to guarantee the absence of duplicates.
> 
> 2015-07-27 15:19 GMT-07:00 Andrey Kornev <andrewkornev@hotmail.com>:
> 
> > Hi Val,
> >
> > I was hoping to be able to use the blocking IgniteQueue.take() to achieve
> > the desired "push" semantics.
> >
> > Andrey
> >
> > > Date: Mon, 27 Jul 2015 11:43:13 -0700
> > > Subject: Re: Continuous queries changes
> > > From: valentin.kulichenko@gmail.com
> > > To: dev@ignite.incubator.apache.org
> > >
> > > Andrey,
> > >
> > > I think your approach works, but it requires periodic polling from the
> > > queue. Continuous queries provide the ability to get push notifications
> > > for updates, which in my experience is critical for some use cases.
> > >
> > > -Val
> > >
> > > On Mon, Jul 27, 2015 at 7:35 AM, Andrey Kornev <andrewkornev@hotmail.com>
> > > wrote:
> > >
> > > > I wonder if the same result (guaranteed delivery of CQ notifications)
> > > > can be achieved entirely in the "user space" using the public Ignite
> > > > API only?
> > > >
> > > > For example:
> > > > - start a server-side CQ and have the listener push the notifications
> > > >   into an IgniteQueue.
> > > > - have the client connect to the queue and start receiving the
> > > >   notifications.
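A rough sketch of that "user space" approach, assuming a distributed queue
named "cq-events" and a cache named "myCache" (both names made up) and only
the delivery guarantees discussed elsewhere in this thread:

import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteQueue;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.configuration.CollectionConfiguration;

public class UserSpaceCqSketch {
    public static void main(String[] args) throws InterruptedException {
        Ignite ignite = Ignition.start();

        // Distributed queue shared by the server-side listener and the client.
        IgniteQueue<String> queue =
            ignite.queue("cq-events", 0 /* unbounded */, new CollectionConfiguration());

        IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");

        // Server side: the CQ listener forwards every notification into the queue.
        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
        qry.setLocalListener(events -> {
            for (CacheEntryEvent<? extends Integer, ? extends String> e : events)
                queue.put(e.getKey() + "=" + e.getValue());
        });
        cache.query(qry);

        // Client side: blocking take() gives push-like semantics without polling.
        cache.put(1, "one");
        System.out.println("Received: " + queue.take());
    }
}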
> > > >
> > > > Regards
> > > > Andrey
> > > >
> > > > > From: dsetrakyan@apache.org
> > > > > Date: Sun, 26 Jul 2015 22:15:09 -0700
> > > > > Subject: Re: Continuous queries changes
> > > > > To: dev@ignite.incubator.apache.org
> > > > >
> > > > > On Sat, Jul 25, 2015 at 8:07 AM, Andrey Kornev <andrewkornev@hotmail.com>
> > > > > wrote:
> > > > >
> > > > > > Val,
> > > > > >
> > > > > > I'm sorry for being obtuse. :)
> > > > > >
> > > > > > I was simply wondering if the queue is going to be holding all
> > > > > > unfiltered events per partition or will there be a queue per
> > > > > > continuous query instance per partition? Or, is it going to be
> > > > > > arranged some other way?
> > > > > >
> > > > >
> > > > > I believe that backup queues will have the same filters as primary
> > > > > queues.
> > > > >
> > > > >
> > > > > > Also, in order to know when it's ok to remove an event from the
> > > > > > backup queue, wouldn't this approach require maintaining a queue
> > > > > > for each connected client and having to deal with potentially
> > > > > > unbounded queue growth if a client struggles to keep up or simply
> > > > > > stops acking?
> > > > > >
> > > > >
> > > > > I think the policy for backups should be no different than for the
> > > > > primaries. As for slow clients, Ignite is capable of automatically
> > > > > disconnecting them:
> > > > > http://s.apache.org/managing-slow-clients
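Presumably this refers to the slow-client queue limit on the communication
SPI; a hedged configuration sketch, with an arbitrary limit value:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

public class SlowClientConfigSketch {
    public static void main(String[] args) {
        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();

        // If the outbound message queue for a client node grows past this limit,
        // the client is considered "slow" and gets disconnected.
        commSpi.setSlowClientQueueLimit(1000);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCommunicationSpi(commSpi);

        Ignite ignite = Ignition.start(cfg);
    }
}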
> > > > >
> > > > > > Isn't this feature getting Ignite into the murky waters of the
> > > > > > message brokers and guaranteed once-only message delivery with all
> > > > > > the complexity and overhead that come with it? Besides, in some
> > > > > > cases it doesn't really matter if some updates are missing, while
> > > > > > in others it is only necessary to be able to detect a missing
> > > > > > update. I wouldn't want to have to pay for something I don't
> > > > > > need...
> > > > > >
> > > > >
> > > > > I believe that the newly proposed approach will be optional and you
> > > > > will still be able to get event notifications in a non-fault-tolerant
> > > > > manner the old way.
> > > > >
> > > > >
> > > > > >
> > > > > > Thanks
> > > > > > Andrey
> > > > > >
> > > > > > > Date: Fri, 24 Jul 2015 23:40:15 -0700
> > > > > > > Subject: Re: Continuous queries changes
> > > > > > > From: valentin.kulichenko@gmail.com
> > > > > > > To: dev@ignite.incubator.apache.org
> > > > > > >
> > > > > > > Andrey,
> > > > > > >
> > > > > > > I mean the queue of update events that is collected on backup
> > > > > > > nodes and flushed to listening clients in case of a topology
> > > > > > > change.
> > > > > > >
> > > > > > > -Val
> > > > > > >
> > > > > > > On Fri, Jul 24, 2015 at 3:16 PM, Andrey Kornev <andrewkornev@hotmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Val,
> > > > > > > >
> > > > > > > > Could you please elaborate on what you mean by the "updates
> > > > > > > > queue" you plan to maintain on the primary and backup nodes?
> > > > > > > >
> > > > > > > > Thanks
> > > > > > > > Andrey
> > > > > > > >
> > > > > > > > > Date: Fri, 24 Jul 2015 17:51:48 +0300
> > > > > > > > > Subject: Re: Continuous queries changes
> > > > > > > > > From: yzhdanov@apache.org
> > > > > > > > > To: dev@ignite.incubator.apache.org
> > > > > > > > >
> > > > > > > > > Val,
> > > > > > > > >
> > > > > > > > > I have an idea on how to clean up the backup queue.
> > > > > > > > >
> > > > > > > > > 1. Our communication uses acks. So, you can determine [on the
> > > > > > > > > server node] whether the client received the update from the
> > > > > > > > > local server or not. I think you can easily change the
> > > > > > > > > existing code to get notifications on ack receipt (this way
> > > > > > > > > you don't need to introduce your own acks).
> > > > > > > > > 2. How do you know when to evict from the backup? Each
> > > > > > > > > message that the client acks corresponds to the per-partition
> > > > > > > > > long value you talked about above (great idea, btw!). Servers
> > > > > > > > > can exchange the per-partition long value that corresponds to
> > > > > > > > > the latest acked message, and that's how backups can safely
> > > > > > > > > evict from the queue.
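A hypothetical illustration of point 2 -- none of these class or method names
exist in Ignite; they only show the eviction rule: drop everything up to the
last per-partition counter acked by the listening clients.

import java.util.ArrayDeque;
import java.util.Deque;

/** Hypothetical per-partition backup queue; names are illustrative only. */
class BackupEventQueue {
    /** One queued entry: the per-partition counter plus the event payload. */
    static final class Event {
        final long counter;
        final Object payload;

        Event(long counter, Object payload) {
            this.counter = counter;
            this.payload = payload;
        }
    }

    private final Deque<Event> events = new ArrayDeque<>();

    /** Called on the backup when an update for this partition is applied. */
    synchronized void enqueue(long counter, Object payload) {
        events.addLast(new Event(counter, payload));
    }

    /**
     * Called when the servers exchange the latest per-partition counter that
     * clients have acked; everything at or below it is safe to discard.
     */
    synchronized void onAckedCounter(long lastAcked) {
        while (!events.isEmpty() && events.peekFirst().counter <= lastAcked)
            events.pollFirst();
    }
}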
> > > > > > > > >
> > > > > > > > > Let me know if you have questions.
> > > > > > > > >
> > > > > > > > > --Yakov
> > > > > > > > >
> > > > > > > > > 2015-07-23 8:53 GMT+03:00 Valentin Kulichenko <
> > > > > > > > > valentin.kulichenko@gmail.com>:
> > > > > > > > >
> > > > > > > > > > Igniters,
> > > > > > > > > >
> > > > > > > > > > Based on discussions with our users I came to the
> > > > > > > > > > conclusion that our continuous query implementation is not
> > > > > > > > > > good enough for use cases with strong consistency
> > > > > > > > > > requirements, because there is a possibility of losing
> > > > > > > > > > updates in case of a topology change.
> > > > > > > > > >
> > > > > > > > > > So I started working on
> > > > > > > > > > https://issues.apache.org/jira/browse/IGNITE-426
> > > > > > > > > > and I hope to finish it in a couple of weeks so that we can
> > > > > > > > > > include it in the next release.
> > > > > > > > > >
> > > > > > > > > > I have the following design in mind:
> > > > > > > > > >
> > > > > > > > > >    - Maintain an updates queue on backup node(s) in
> > > > > > > > > >    addition to the primary node.
> > > > > > > > > >    - If the primary node crashes, this queue is flushed to
> > > > > > > > > >    listening clients.
> > > > > > > > > >    - To avoid notification duplicates we will have a
> > > > > > > > > >    per-partition update counter. Once an entry in some
> > > > > > > > > >    partition is updated, the counter for this partition is
> > > > > > > > > >    incremented on both primary and backups. The value of
> > > > > > > > > >    this counter is also sent along with the update to the
> > > > > > > > > >    client, which also maintains a copy of this mapping. If
> > > > > > > > > >    at some moment it receives an update with a counter less
> > > > > > > > > >    than the one in its local map, this update is a
> > > > > > > > > >    duplicate and can be discarded.
> > > > > > > > > >    - Also need to figure out the best way to clean the
> > > > > > > > > >    backup queue when the topology is stable. Will be happy
> > > > > > > > > >    to hear any suggestions :)
> > > > > > > > > >
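A sketch of the client-side duplicate check from the third item in the list
above, assuming notifications for a given partition are delivered by one
thread at a time; the class and method names are made up for illustration.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical client-side filter; not actual Ignite code. */
class PartitionCounterFilter {
    /** Highest update counter seen so far, per partition. */
    private final Map<Integer, Long> lastSeen = new ConcurrentHashMap<>();

    /**
     * Returns true if the update should be delivered to the listener,
     * false if it is a duplicate (counter not greater than the last seen one).
     */
    boolean accept(int partition, long counter) {
        Long prev = lastSeen.get(partition);

        if (prev != null && counter <= prev)
            return false; // duplicate or older event -- discard

        lastSeen.put(partition, counter);
        return true;
    }
}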
> > > > > > > > > > To make all this work we also need to implement
> > > > > > > > > > thread-per-partition mode in the atomic cache, because
> > > > > > > > > > currently the update order on backup nodes can differ from
> > > > > > > > > > the primary node
> > > > > > > > > > (https://issues.apache.org/jira/browse/IGNITE-104).
> > > > > > > > > > I'm already working on this.
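A hypothetical illustration of what "thread-per-partition" ordering means --
all updates for the same partition are executed by the same single thread, so
backups apply them in the same order as the primary. This is not Ignite code;
the class name and striping scheme are assumptions for illustration.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class PartitionStripedExecutor {
    private final ExecutorService[] stripes;

    PartitionStripedExecutor(int threads) {
        stripes = new ExecutorService[threads];
        for (int i = 0; i < threads; i++)
            stripes[i] = Executors.newSingleThreadExecutor();
    }

    void execute(int partition, Runnable update) {
        // The same partition always maps to the same stripe, preserving order.
        stripes[Math.abs(partition % stripes.length)].execute(update);
    }
}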
> > > > > > > > > >
> > > > > > > > > > Feel free to share your thoughts!
> > > > > > > > > >
> > > > > > > > > > -Val
> > > > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > >
> > > >
> > > >
> >
> >