ignite-dev mailing list archives

From Andrey Kornev <andrewkor...@hotmail.com>
Subject RE: Continuous queries changes
Date Mon, 27 Jul 2015 22:56:43 GMT
Dmitriy,

What I had in mind was to have each client create a dedicated (replicated) queue for each
CQ it's about to start. Once started, the server-side CQ listener would then simply push the
notifications onto the queue. With such a setup, the CQ notifications should be able to survive
crashes of the server nodes as well as client node reconnects. It would also allow users to
handle queue growth in a more graceful (or application-specific) way than a client
disconnect.
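
To illustrate the idea, here is a minimal plain-Java model, with a bounded `BlockingQueue` standing in for the dedicated replicated `IgniteQueue` (class and method names are mine, not from the thread): the server-side CQ listener pushes into the per-subscription queue, the client drains it, and a full queue surfaces as a return value the application can react to instead of a forced disconnect.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical model: one dedicated queue per continuous query (CQ).
// In Ignite this would be a replicated IgniteQueue, which survives
// server-node crashes and client reconnects; a BlockingQueue stands in here.
public class PerSubscriptionQueue {
    private final BlockingQueue<String> queue;

    public PerSubscriptionQueue(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    // Called by the server-side CQ listener for every notification.
    // Returns false when the queue is full, letting the application
    // apply its own back-pressure policy instead of a client disconnect.
    public boolean push(String notification) {
        return queue.offer(notification);
    }

    // Called by the client; blocks until a notification is available.
    public String take() throws InterruptedException {
        return queue.take();
    }
}
```

The point of the model is only the decoupling: the producer (server listener) and consumer (client) interact exclusively through the queue, so either side can fail and reattach.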

Andrey

> From: dsetrakyan@apache.org
> Date: Mon, 27 Jul 2015 07:47:39 -0700
> Subject: Re: Continuous queries changes
> To: dev@ignite.incubator.apache.org
> 
> On Mon, Jul 27, 2015 at 7:35 AM, Andrey Kornev <andrewkornev@hotmail.com>
> wrote:
> 
> > I wonder if the same result (guaranteed delivery of CQ notifications) can
> > be achieved entirely in the "user space" using the public Ignite API only?
> >
> > For example:
> > - start a server-side CQ and have the listener push the notifications into
> > an IgniteQueue.
> > - have the client connect to the queue and start receiving the
> > notifications.
> >
> 
> Hm... Do you mean that in this approach we will have 1 CQ queue per server,
> instead of 1 queue per subscription, as we planned before?
> 
> 
> >
> > Regards
> > Andrey
> >
> > > From: dsetrakyan@apache.org
> > > Date: Sun, 26 Jul 2015 22:15:09 -0700
> > > Subject: Re: Continuous queries changes
> > > To: dev@ignite.incubator.apache.org
> > >
> > > On Sat, Jul 25, 2015 at 8:07 AM, Andrey Kornev
> > > <andrewkornev@hotmail.com> wrote:
> > >
> > > > Val,
> > > >
> > > > I'm sorry for being obtuse. :)
> > > >
> > > > I was simply wondering whether the queue is going to hold all
> > > > unfiltered events per partition, or whether there will be a queue
> > > > per continuous query instance per partition? Or is it going to be
> > > > arranged some other way?
> > > >
> > >
> > > I believe that backup queues will have the same filters as primary
> > > queues.
> > >
> > >
> > > > Also, in order to know when it's OK to remove an event from the
> > > > backup queue, wouldn't this approach require maintaining a queue for
> > > > each connected client and having to deal with potentially unbounded
> > > > queue growth if a client struggles to keep up or simply stops acking?
> > > >
> > >
> > > I think the policy for backups should be no different than for the
> > > primaries. As for slow clients, Ignite is capable of automatically
> > > disconnecting them:
> > > http://s.apache.org/managing-slow-clients
> > >
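
As a concrete illustration of the slow-client handling linked above, a configuration fragment along these lines caps the per-client outbound message queue so that clients falling too far behind are dropped by the server. Treat the exact property name (`slowClientQueueLimit` on `TcpCommunicationSpi`) as an assumption to verify against the Ignite docs for your version.

```java
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

// Configuration fragment (assumed API; verify against your Ignite version):
// a client node whose outbound queue on the server exceeds this limit
// is considered slow and gets disconnected.
public class SlowClientConfig {
    public static IgniteConfiguration create() {
        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
        commSpi.setSlowClientQueueLimit(1024); // messages buffered per client

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCommunicationSpi(commSpi);
        return cfg;
    }
}
```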
> > > > Isn't this feature getting Ignite into the murky waters of message
> > > > brokers and guaranteed exactly-once message delivery, with all the
> > > > complexity and overhead that come with it? Besides, in some cases it
> > > > doesn't really matter if some updates are missing, while in others it
> > > > is only necessary to be able to detect a missing update. I wouldn't
> > > > want to have to pay for something I don't need...
> > > >
> > >
> > > I believe that the newly proposed approach will be optional and you
> > > will still be able to get event notifications in a non-fault-tolerant
> > > manner, the old way.
> > >
> > >
> > > >
> > > > Thanks
> > > > Andrey
> > > >
> > > > > Date: Fri, 24 Jul 2015 23:40:15 -0700
> > > > > Subject: Re: Continuous queries changes
> > > > > From: valentin.kulichenko@gmail.com
> > > > > To: dev@ignite.incubator.apache.org
> > > > >
> > > > > Andrey,
> > > > >
> > > > > I mean the queue of update events that is collected on the backup
> > > > > nodes and flushed to the listening clients in case of a topology
> > > > > change.
> > > > >
> > > > > -Val
> > > > >
> > > > > On Fri, Jul 24, 2015 at 3:16 PM, Andrey Kornev
> > > > > <andrewkornev@hotmail.com> wrote:
> > > > >
> > > > > > Val,
> > > > > >
> > > > > > Could you please elaborate on what you mean by the "updates
> > > > > > queue" you plan to maintain on the primary and backup nodes?
> > > > > >
> > > > > > Thanks
> > > > > > Andrey
> > > > > >
> > > > > > > Date: Fri, 24 Jul 2015 17:51:48 +0300
> > > > > > > Subject: Re: Continuous queries changes
> > > > > > > From: yzhdanov@apache.org
> > > > > > > To: dev@ignite.incubator.apache.org
> > > > > > >
> > > > > > > Val,
> > > > > > >
> > > > > > > I have an idea on how to clean up the backup queue.
> > > > > > >
> > > > > > > 1. Our communication uses acks. So, you can determine [on the
> > > > > > > server node] whether the client received the update from the
> > > > > > > local server or not. I think you can easily change the existing
> > > > > > > code to get notifications on ack receipt (this way you don't
> > > > > > > need to introduce your own acks).
> > > > > > > 2. How do you know when to evict from a backup? Each message
> > > > > > > that the client acks corresponds to some per-partition long
> > > > > > > value you talked about above (great idea, btw!). Servers can
> > > > > > > exchange the per-partition long value that corresponds to the
> > > > > > > latest acked message, and that's how backups can safely evict
> > > > > > > from the queue.
> > > > > > >
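
Yakov's eviction rule can be modeled in a few lines of plain Java (class and method names are illustrative, not Ignite API): the backup keeps queued updates sorted by the per-partition counter, and whenever the servers exchange the latest counter acked by the client, everything up to and including it is evicted.

```java
import java.util.concurrent.ConcurrentSkipListMap;

// Hypothetical model of the ack-based backup-queue cleanup: queued
// updates are keyed by the per-partition update counter, so an acked
// counter translates directly into a safe eviction boundary.
public class BackupQueue {
    // counter -> queued update (one such queue per partition)
    private final ConcurrentSkipListMap<Long, String> pending =
        new ConcurrentSkipListMap<>();

    // Called on the backup for every update it observes.
    public void enqueue(long counter, String update) {
        pending.put(counter, update);
    }

    // Called when servers exchange the latest counter acked by the client:
    // everything up to and including it has been delivered and can go.
    public void onAcked(long ackedCounter) {
        pending.headMap(ackedCounter, true).clear();
    }

    public int size() {
        return pending.size();
    }
}
```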
> > > > > > > Let me know if you have questions.
> > > > > > >
> > > > > > > --Yakov
> > > > > > >
> > > > > > > 2015-07-23 8:53 GMT+03:00 Valentin Kulichenko
> > > > > > > <valentin.kulichenko@gmail.com>:
> > > > > > >
> > > > > > > > Igniters,
> > > > > > > >
> > > > > > > > Based on discussions with our users I came to the conclusion
> > > > > > > > that our continuous query implementation is not good enough
> > > > > > > > for use cases with strong consistency requirements, because
> > > > > > > > there is a possibility of losing updates in case of a
> > > > > > > > topology change.
> > > > > > > >
> > > > > > > > So I started working on
> > > > > > > > https://issues.apache.org/jira/browse/IGNITE-426
> > > > > > > > and I hope to finish it in a couple of weeks so that we can
> > > > > > > > include it in the next release.
> > > > > > > >
> > > > > > > > I have the following design in mind:
> > > > > > > >
> > > > > > > >    - Maintain updates queue on backup node(s) in addition
to
> > > > primary
> > > > > > node.
> > > > > > > >    - If primary node crushes, this queue is flushed
to
> > listening
> > > > > > clients.
> > > > > > > >    - To avoid notification duplicates we will have
a
> > per-partition
> > > > > > update
> > > > > > > >    counter. Once an entry in some partition is updated,
> > counter for
> > > > > > this
> > > > > > > >    partition is incremented on both primary and backups.
The
> > value
> > > > of
> > > > > > this
> > > > > > > >    counter is also sent along with the update to the
client,
> > which
> > > > also
> > > > > > > >    maintains the copy of this mapping. If at some
moment it
> > > > receives an
> > > > > > > > update
> > > > > > > >    with the counter less than in its local map, this
update is
> > a
> > > > > > duplicate
> > > > > > > > and
> > > > > > > >    can be discarded.
> > > > > > > >    - Also need to figure out the best way to clean
the backup
> > > > queue if
> > > > > > > >    topology is stable. Will be happy to hear any suggestions
:)
> > > > > > > >
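
The client-side half of the per-partition counter scheme described in the list above can be sketched in plain Java (names are illustrative): keep the highest counter seen per partition and drop any update whose counter does not exceed it.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical client-side duplicate filter: after a backup queue is
// flushed on primary failure, some updates may arrive twice; the
// per-partition counter identifies and discards the repeats.
public class DuplicateFilter {
    private final Map<Integer, Long> lastSeen = new HashMap<>();

    // Returns true if the update is new and should be delivered to the
    // application, false if it is a duplicate of an already-seen update.
    public boolean accept(int partition, long counter) {
        Long prev = lastSeen.get(partition);
        if (prev != null && counter <= prev)
            return false; // already received from the (former) primary
        lastSeen.put(partition, counter);
        return true;
    }
}
```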
> > > > > > > > To make all this work we also need to implement
> > > > > > > > thread-per-partition mode in the atomic cache, because
> > > > > > > > currently the order of updates on backup nodes can differ
> > > > > > > > from the primary node:
> > > > > > > > https://issues.apache.org/jira/browse/IGNITE-104. I'm
> > > > > > > > already working on this.
> > > > > > > >
> > > > > > > > Feel free to share your thoughts!
> > > > > > > >
> > > > > > > > -Val
> > > > > > > >
> > > > > >
> > > > > >
> > > >
> >