ignite-user mailing list archives

From Alexey Kukushkin <kukushkinale...@gmail.com>
Subject Re: How to do 'stream processing' and more questions of a Ignite newbie
Date Thu, 14 Dec 2017 12:08:21 GMT
1. To reduce operating costs in real enterprise environments, run a single
application-agnostic Ignite cluster and have multiple business-function-centric
applications configure themselves in the cluster on startup. You end up with a
single Ignite server configuration file containing the generic settings
(e.g. discovery mechanism, memory and persistence settings, whether to
support peer class loading), while application-specific configuration
defines each application's data model (cache configurations). The Operations
team then manages a single cluster while the apps stay decoupled from
each other, which is what reduces the operating cost.

In real enterprise environments the admins would create wrappers around
ignite.sh (or develop a script from scratch) to start and stop the cluster,
automatically applying enterprise-wide deployment, monitoring, error-handling,
logging, networking and other settings.
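As a sketch of that split, the shared server file would carry only generic settings and no cache definitions (the bean layout below uses real Ignite classes, but the specific choices are illustrative, not from this thread):

```xml
<!-- ignite-server.xml: generic, application-agnostic settings only -->
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- generic cluster-wide settings -->
    <property name="peerClassLoadingEnabled" value="true"/>
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <property name="ipFinder">
                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"/>
            </property>
        </bean>
    </property>
    <!-- note: no cacheConfiguration here - each application creates its
         own caches dynamically when it connects to the cluster -->
</bean>
```

Each application then defines its caches itself, e.g. by calling ignite.getOrCreateCache(...) with its own CacheConfiguration when it starts.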

2. Web Console lets you configure the cluster and data model by filling in
web forms (either from scratch or by automatically converting an existing
relational model). It is up to you whether you prefer editing the XML
yourself or filling in web forms and then generating the XML (and a Java
project skeleton) automatically. You might generate the initial project with
Web Console and then keep updating it by editing the raw XML.

3. I think you already answered your own question: you need to either enable
peer class loading or put the Interpolation class on the server classpath.
The reason is that, even though your simple server-side filter "e -> true"
does not capture any outer-class members, a Java lambda still carries an
implicit reference to the enclosing class "Interpolation", which has to be
deserialised on the server.
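You can see this with plain Java, no Ignite required: the serialized form of any serializable lambda records the class that declared it (the class and interface names below are hypothetical stand-ins for your Interpolation class and Ignite's serializable filter interface):

```java
import java.io.Serializable;
import java.lang.invoke.SerializedLambda;
import java.lang.reflect.Method;

public class LambdaCapture {
    // Stand-in for a serializable filter interface such as Ignite's
    public interface SerPredicate<T> extends Serializable { boolean test(T t); }

    public static String capturingClassOf(SerPredicate<?> lambda) throws Exception {
        // Serializable lambdas implement a synthetic writeReplace() that
        // returns a SerializedLambda describing their serialized form
        Method m = lambda.getClass().getDeclaredMethod("writeReplace");
        m.setAccessible(true);
        return ((SerializedLambda) m.invoke(lambda)).getCapturingClass();
    }

    public static void main(String[] args) throws Exception {
        SerPredicate<Object> filter = e -> true; // captures no fields at all...
        // ...yet its serialized form still names the enclosing class,
        // so the deserialising JVM must be able to load that class
        System.out.println(capturingClassOf(filter)); // prints "LambdaCapture"
    }
}
```

This is exactly why the server needs "Interpolation" on its classpath (or peer class loading) to deserialise your filter.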

Ignite supports Docker deployment
<https://apacheignite.readme.io/v2.1/docs/docker-deployment>. Did you try
following the instructions? Share your specific problems if you have any.
Also, as an application developer, you rarely need to care about the
deployment approach - that is normally for the enterprise Operations team to
decide. As I recommended above, treat the cluster as a shared "resource" that
your app connects to no matter how the cluster was deployed.
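For reference, the Docker docs boil down to running the official image with the configuration passed via an environment variable; a sketch as a compose file (the image name and CONFIG_URI variable come from the Ignite Docker docs, while the tag, paths and networking choice are illustrative):

```yaml
# docker-compose.yml - one Ignite server node from the official image
version: "2"
services:
  ignite:
    image: apacheignite/ignite:2.1.0
    environment:
      # URL or container path of the Spring XML configuration to apply
      - CONFIG_URI=/opt/ignite/config/ignite-server.xml
    volumes:
      - ./config:/opt/ignite/config
    network_mode: host   # simplest option for discovery between nodes
```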

4. Not sure I understood your question. I guess your concern is that you were
getting all events in the local listener, since your remote filter passes all
events through. You are right that you need to properly filter events on the
server side using the remote filter to minimise traffic.
You can run a continuous query from any kind of node - either a client or a
server node. The "local" listener runs on the node where the query
originated, while the remote filter runs on all the remote server nodes.
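The division of labour can be sketched in plain Java (this is not the Ignite ContinuousQuery API - the class and method names below are hypothetical models of the remote-filter/local-listener split):

```java
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Hypothetical model of a continuous query's two halves: the remote
// filter runs on every server node, the local listener on the node
// that originated the query.
public class ContinuousQuerySketch {
    static <E> List<E> serverSide(List<E> events, Predicate<E> remoteFilter) {
        // Only events passing the remote filter cross the network
        return events.stream().filter(remoteFilter).collect(Collectors.toList());
    }

    static <E> void clientSide(List<E> delivered, Consumer<E> localListener) {
        delivered.forEach(localListener);
    }

    public static void main(String[] args) {
        List<String> updates = List.of("A1", "B1", "A2");
        // A selective remote filter keeps traffic down: only "A*" events travel
        List<String> delivered = serverSide(updates, e -> e.startsWith("A"));
        clientSide(delivered, e -> System.out.println("local listener got " + e));
    }
}
```

With a filter like your "e -> true", serverSide forwards everything, which matches the behaviour you observed.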
