ignite-user mailing list archives

From: Konstantin Boudnik <...@apache.org>
Subject: Re: Multiple grids in a cluster
Date: Thu, 23 Jul 2015 20:58:16 GMT
On Thu, Jul 23, 2015 at 01:31 PM, Valentin Kulichenko wrote:
>    Hi,
>    You're right, you should always start a node with Ignition.start() and get
>    the Ignite instance with Ignition.ignite() after that.
>    You can start several nodes in one process, and in this case you will have
>    to give a unique grid name to each node within the process. But this is
>    used mostly for unit testing, because it allows you to start the whole
>    cluster in one JVM and debug conveniently. Otherwise, the most common
>    deployment is one node per JVM with the default (null) name. You can start
>    one or several nodes per host depending on your use case.
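If I read that right, here's a minimal sketch of the pattern (my own
illustration, assuming the 1.x-era API where the instance name is set via
IgniteConfiguration.setGridName):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class TwoNodesOneJvm {
        public static void main(String[] args) {
            // Two nodes in the same JVM: each needs a unique grid name.
            Ignition.start(new IgniteConfiguration().setGridName("node-1"));
            Ignition.start(new IgniteConfiguration().setGridName("node-2"));

            // Retrieve a started node later by its grid name.
            Ignite node1 = Ignition.ignite("node-1");
            System.out.println("Topology size: " + node1.cluster().nodes().size());
        }
    }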

But a cache can be replicated between the clusters, right?


>    Any cache is available only to one cluster and can't be shared.
>    -Val
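To make sure I understand that scoping, a small sketch (cache and node
names are just illustrative, not from this thread):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;

    // A cache belongs to the cluster of the node that created it.
    Ignite grid = Ignition.ignite("node-1");
    IgniteCache<Integer, String> cache = grid.getOrCreateCache("myCache");
    cache.put(1, "one");
    // A node in a different cluster can't see "myCache"; it would have to
    // create its own, unrelated cache under the same name.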
>    On Jul 23, 2015 7:28 AM, "hueb1" <eric.hu@finra.org> wrote:
>      I'm new to Ignite and trying to understand the API of Ignition.ignite()
>      and Ignition.start(). I believe we'd always need to call Ignition.start(..)
>      first to initialize the data grid(s), and then call Ignition.ignite(..),
>      giving it the data grid name (or none for the default data grid). That
>      being said, is there an example of a configuration file that specifies
>      multiple data grids? Is it not good practice to have multiple data grids
>      running on the same set of hosts? Should we always just use the default
>      data grid?
>      Also, I'm assuming the distributed caches are scoped by data grid, yes?
>      You can't have multiple data grids access the same distributed cache?
>      --
>      View this message in context:
>      http://apache-ignite-users.70518.x6.nabble.com/Multiple-grids-in-a-cluster-tp692.html
>      Sent from the Apache Ignite Users mailing list archive at Nabble.com.
