ignite-user mailing list archives

From kmandalas <Kyriakos.Manda...@IRIworldwide.com>
Subject Re: Distributed Closures VS Executor Service
Date Tue, 28 Mar 2017 10:45:22 GMT
Hello Christo,

After checking in more detail the options we have with Ignite, our first
use case, and the feedback we have received so far (e.g. the feedback from
Dmitry here), we conclude that in our case we should follow the MapReduce &
ForkJoin approach.

Some details about our use case:

- Spring-based web-app for Retailers
- The user selects some of their product categories (and other criteria,
both mandatory and optional), applies some metrics, and submits a request
for analytical calculations
- The application's server side queries the DB, finds the products belonging
to the selected categories, performs the calculations (applies the metrics to
them, etc.) and persists the results in the database for future reference

Now, as Dmitry said, because the data we send and produce are not reused,
and given their size and the network I/O limitations, caching is not
recommended here. This is why we are considering the MapReduce & ForkJoin
approach with the following twists:

- in the ComputeTask#map() method we simply break the list of submitted
category IDs into smaller lists and send each chunk, along with other
parameters, to a grid node. The parameters object in this case is very
"light": it contains only a couple of small arrays of primitives and 3-4
other primitive fields and Strings
- we will have to use @SpringResource transient Spring services and DAOs in
our ComputeJobAdapter, since it will now have to do the actual DB retrieval,
processing, and finally the writing to the DB. It will just do this for a
very small subset of the categories
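
The chunking step in map() can be sketched in plain Java. The Ignite wiring
(wrapping each chunk in a job and mapping it to a ClusterNode) is omitted
here, and the class and method names are illustrative, not part of any API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative helper for the ComputeTask#map() step: split the submitted
// category IDs into near-equal chunks, at most one per participating node.
public class CategoryChunker {

    public static List<List<Integer>> split(List<Integer> categoryIds, int nodeCount) {
        List<List<Integer>> chunks = new ArrayList<>();
        int chunkSize = (categoryIds.size() + nodeCount - 1) / nodeCount; // ceiling division
        for (int i = 0; i < categoryIds.size(); i += chunkSize)
            chunks.add(new ArrayList<>(
                categoryIds.subList(i, Math.min(i + chunkSize, categoryIds.size()))));
        return chunks;
    }
}
```

In map(), each chunk together with the light parameters object would then be
wrapped in a ComputeJobAdapter and assigned to one of the subgrid nodes.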

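A minimal sketch of such a job, assuming Ignite is on the classpath and a
bean named "productDao" exists in the node-local Spring context (the
ProductDao interface and all names here are illustrative, not real APIs):

```java
import java.util.List;
import org.apache.ignite.compute.ComputeJobAdapter;
import org.apache.ignite.resources.SpringResource;

public class CalculationJob extends ComputeJobAdapter {

    // Illustrative DAO interface; the real one would live in the web-app.
    public interface ProductDao {
        void processCategories(List<Integer> categoryIds);
    }

    // Injected from the node-local Spring context; transient so it is
    // never serialized and shipped along with the job.
    @SpringResource(resourceName = "productDao")
    private transient ProductDao productDao;

    private final List<Integer> categoryIds;

    public CalculationJob(List<Integer> categoryIds) {
        this.categoryIds = categoryIds;
    }

    @Override
    public Object execute() {
        // DB retrieval, metric calculation and result persistence happen
        // here, but only for this job's small chunk of category IDs.
        productDao.processCategories(categoryIds);
        return null;
    }
}
```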
Now, the things we are considering are:

- in case a Job fails due to a user-code exception (e.g. an unexpected
NullPointerException) or a database error when writing the results to the
DB: in the ComputeTask#result() method we see that we can identify these
cases and act accordingly, e.g. mark the calculation as failed and delete
possibly incomplete results.
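
A sketch of what that result() override might look like (the task's generic
arguments and the cleanup step are our assumptions; as far as we understand,
returning REDUCE makes Ignite skip the outstanding results and proceed
straight to reduce()):

```java
import java.util.List;
import org.apache.ignite.IgniteException;
import org.apache.ignite.compute.ComputeJobResult;
import org.apache.ignite.compute.ComputeJobResultPolicy;
import org.apache.ignite.compute.ComputeTaskAdapter;

// Sketch of failure detection in result(); map() and reduce() are left
// abstract, and the cleanup comment is an illustrative placeholder.
public abstract class CalculationTask extends ComputeTaskAdapter<List<Integer>, Void> {

    @Override
    public ComputeJobResultPolicy result(ComputeJobResult res, List<ComputeJobResult> rcvd)
        throws IgniteException {
        if (res.getException() != null) {
            // e.g. mark the calculation as failed and schedule deletion of
            // any partial results this job already wrote to the DB.
            return ComputeJobResultPolicy.REDUCE; // stop waiting, go to reduce()
        }
        return ComputeJobResultPolicy.WAIT; // keep waiting for the other jobs
    }
}
```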

*One question is*: if some Job fails while other Jobs belonging to the same
Task are still running (on the same or other grid nodes), is it possible to
cancel them and abort the whole Task without waiting for all of them to
complete?
*Another question is*: since we need the Spring beans to be instantiated
and available on all participating grid nodes, we do not have the option of
starting Ignite on each node of the cluster using ignite.sh, right? We will
need to start a web server and deploy the web-app everywhere, with its
Spring application context utilizing IgniteSpringBean.
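
For instance, we imagine the node would be started from the web-app's own
Spring context roughly like this (a Java-config sketch; the class name is
ours and the IgniteConfiguration details are omitted):

```java
import org.apache.ignite.IgniteSpringBean;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Sketch: start an embedded Ignite node from the web-app's Spring context.
// IgniteSpringBean starts the node when the application context initializes,
// so every node that deploys the web-app joins the cluster with its beans
// already available for @SpringResource injection.
@Configuration
public class IgniteConfig {

    @Bean
    public IgniteSpringBean igniteSpringBean() {
        IgniteSpringBean ignite = new IgniteSpringBean();
        ignite.setConfiguration(new IgniteConfiguration());
        return ignite;
    }
}
```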

Also, we would like your general opinion about the use case.

Thank you in advance.

View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Distributed-Closures-VS-Executor-Service-tp11192p11490.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
