ignite-commits mailing list archives

From agoncha...@apache.org
Subject [72/79] [abbrv] incubator-ignite git commit: # sprint-2 - added documentation.
Date Fri, 06 Mar 2015 06:49:24 GMT
# sprint-2 - added documentation.


Project: http://git-wip-us.apache.org/repos/asf/incubator-ignite/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-ignite/commit/af603c00
Tree: http://git-wip-us.apache.org/repos/asf/incubator-ignite/tree/af603c00
Diff: http://git-wip-us.apache.org/repos/asf/incubator-ignite/diff/af603c00

Branch: refs/heads/ignite-45
Commit: af603c00fa602538db47fc95761c57d5df82e75b
Parents: 5a2d62e
Author: Dmitiry Setrakyan <dsetrakyan@gridgain.com>
Authored: Thu Mar 5 17:11:43 2015 -0800
Committer: Dmitiry Setrakyan <dsetrakyan@gridgain.com>
Committed: Thu Mar 5 17:11:43 2015 -0800

----------------------------------------------------------------------
 docs/wiki/basic-concepts/async-support.md       |   75 +
 docs/wiki/basic-concepts/getting-started.md     |  218 +++
 docs/wiki/basic-concepts/ignite-life-cycel.md   |  105 ++
 docs/wiki/basic-concepts/maven-setup.md         |   68 +
 docs/wiki/basic-concepts/what-is-ignite.md      |   31 +
 docs/wiki/basic-concepts/zero-deployment.md     |   56 +
 docs/wiki/clustering/aws-config.md              |   42 +
 docs/wiki/clustering/cluster-config.md          |  176 ++
 docs/wiki/clustering/cluster-groups.md          |  210 +++
 docs/wiki/clustering/cluster.md                 |  128 ++
 docs/wiki/clustering/leader-election.md         |   59 +
 docs/wiki/clustering/network-config.md          |  101 ++
 docs/wiki/clustering/node-local-map.md          |   35 +
 docs/wiki/compute-grid/checkpointing.md         |  238 +++
 .../compute-grid/collocate-compute-and-data.md  |   29 +
 docs/wiki/compute-grid/compute-grid.md          |   56 +
 docs/wiki/compute-grid/compute-tasks.md         |  105 ++
 docs/wiki/compute-grid/distributed-closures.md  |  107 ++
 docs/wiki/compute-grid/executor-service.md      |   23 +
 docs/wiki/compute-grid/fault-tolerance.md       |   79 +
 docs/wiki/compute-grid/job-scheduling.md        |   69 +
 docs/wiki/compute-grid/load-balancing.md        |   59 +
 docs/wiki/data-grid/affinity-collocation.md     |   78 +
 docs/wiki/data-grid/automatic-db-integration.md |  102 ++
 docs/wiki/data-grid/cache-modes.md              |  237 +++
 docs/wiki/data-grid/cache-queries.md            |  164 ++
 docs/wiki/data-grid/data-grid.md                |   68 +
 docs/wiki/data-grid/data-loading.md             |   77 +
 docs/wiki/data-grid/evictions.md                |   86 +
 docs/wiki/data-grid/hibernate-l2-cache.md       |  173 ++
 docs/wiki/data-grid/jcache.md                   |   99 ++
 docs/wiki/data-grid/off-heap-memory.md          |  180 ++
 docs/wiki/data-grid/persistent-store.md         |  111 ++
 docs/wiki/data-grid/rebalancing.md              |  105 ++
 docs/wiki/data-grid/transactions.md             |  127 ++
 docs/wiki/data-grid/web-session-clustering.md   |  236 +++
 .../distributed-data-structures/atomic-types.md |   97 ++
 .../countdownlatch.md                           |   24 +
 .../distributed-data-structures/id-generator.md |   40 +
 .../queue-and-set.md                            |  116 ++
 .../distributed-events/automatic-batching.md    |   16 +
 docs/wiki/distributed-events/events.md          |  101 ++
 docs/wiki/distributed-file-system/igfs.md       |    1 +
 docs/wiki/distributed-messaging/messaging.md    |   73 +
 docs/wiki/http/configuration.md                 |   58 +
 docs/wiki/http/rest-api.md                      | 1646 ++++++++++++++++++
 docs/wiki/release-notes/release-notes.md        |   13 +
 docs/wiki/service-grid/cluster-singletons.md    |   94 +
 docs/wiki/service-grid/service-configuration.md |   33 +
 docs/wiki/service-grid/service-example.md       |   94 +
 docs/wiki/service-grid/service-grid.md          |   62 +
 51 files changed, 6380 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/basic-concepts/async-support.md
----------------------------------------------------------------------
diff --git a/docs/wiki/basic-concepts/async-support.md b/docs/wiki/basic-concepts/async-support.md
new file mode 100755
index 0000000..eaf84ef
--- /dev/null
+++ b/docs/wiki/basic-concepts/async-support.md
@@ -0,0 +1,75 @@
+All distributed methods on all Ignite APIs can be executed either synchronously or asynchronously. However, instead of having a duplicate asynchronous method for every synchronous one (like `get()` and `getAsync()`, or `put()` and `putAsync()`, etc.), Ignite chose a more elegant approach, where methods don't have to be duplicated.
+[block:api-header]
+{
+  "type": "basic",
+  "title": "IgniteAsyncSupport"
+}
+[/block]
+The `IgniteAsyncSupport` interface adds an asynchronous mode to many Ignite APIs. For example, `IgniteCompute`, `IgniteServices`, `IgniteCache`, and `IgniteTransactions` all extend the `IgniteAsyncSupport` interface.
+
+To enable asynchronous mode, call the `withAsync()` method.
+
+## Compute Grid Example
+The example below illustrates the difference between synchronous and asynchronous computations.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "IgniteCompute compute = ignite.compute();\n\n// Execute a job and wait for the result.\nString res = compute.call(() -> {\n  // Print hello world on some cluster node.\n\tSystem.out.println(\"Hello World\");\n  \n  return \"Hello World\";\n});",
+      "language": "java",
+      "name": "Synchronous"
+    }
+  ]
+}
+[/block]
+Here is how you would make the above invocation asynchronous:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "// Enable asynchronous mode.\nIgniteCompute asyncCompute = ignite.compute().withAsync();\n\n// Asynchronously execute a job.\nasyncCompute.call(() -> {\n  // Print hello world on some cluster node and wait for completion.\n\tSystem.out.println(\"Hello World\");\n  \n  return \"Hello World\";\n});\n\n// Get the future for the above invocation.\nIgniteFuture<String> fut = asyncCompute.future();\n\n// Asynchronously listen for completion and print out the result.\nfut.listenAsync(f -> System.out.println(\"Job result: \" + f.get()));",
+      "language": "java",
+      "name": "Asynchronous"
+    }
+  ]
+}
+[/block]
+## Data Grid Example
+Here is the data grid example for synchronous and asynchronous invocations.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "IgniteCache<String, Integer> cache = ignite.jcache(\"mycache\");\n\n// Synchronously store value in cache and get previous value.\nInteger val = cache.getAndPut(\"1\", 1);",
+      "language": "java",
+      "name": "Synchronous"
+    }
+  ]
+}
+[/block]
+Here is how you would make the above invocation asynchronous.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "// Enable asynchronous mode.\nIgniteCache<String, Integer> asyncCache = ignite.jcache(\"mycache\").withAsync();\n\n// Asynchronously store value in cache.\nasyncCache.getAndPut(\"1\", 1);\n\n// Get future for the above invocation.\nIgniteFuture<Integer> fut = asyncCache.future();\n\n// Asynchronously listen for the operation to complete.\nfut.listenAsync(f -> System.out.println(\"Previous cache value: \" + f.get()));",
+      "language": "java",
+      "name": "Asynchronous"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "@IgniteAsyncSupported"
+}
+[/block]
+Not every method on the Ignite APIs is distributed, and therefore not every method really requires asynchronous mode. To avoid confusion about which methods are distributed, i.e. can be asynchronous, and which are not, all distributed methods in Ignite are annotated with the `@IgniteAsyncSupported` annotation.
+[block:callout]
+{
+  "type": "info",
+  "body": "Note that, although it is not really needed, in async mode you can still get the future for non-distributed operations as well. However, this future will always be completed."
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/basic-concepts/getting-started.md
----------------------------------------------------------------------
diff --git a/docs/wiki/basic-concepts/getting-started.md b/docs/wiki/basic-concepts/getting-started.md
new file mode 100755
index 0000000..af488d0
--- /dev/null
+++ b/docs/wiki/basic-concepts/getting-started.md
@@ -0,0 +1,218 @@
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Prerequisites"
+}
+[/block]
+Apache Ignite was officially tested on:
+[block:parameters]
+{
+  "data": {
+    "h-0": "Name",
+    "h-1": "Value",
+    "0-0": "JDK",
+    "0-1": "Oracle JDK 7 and above",
+    "1-0": "OS",
+    "2-0": "Network",
+    "1-1": "Linux (any flavor),\nMac OSX (10.6 and up),\nWindows (XP and up),\nWindows Server (2008 and up)",
+    "2-1": "No restrictions (10G recommended)",
+    "3-0": "Hardware",
+    "3-1": "No restrictions"
+  },
+  "cols": 2,
+  "rows": 4
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Installation"
+}
+[/block]
+Here is a quick summary of how to install Apache Ignite:
+  * Download Apache Ignite as a ZIP archive from https://ignite.incubator.apache.org/
+  * Unzip the ZIP archive into an installation folder on your system
+  * Optionally, set the `IGNITE_HOME` environment variable to point to the installation folder, and make sure there is no trailing `/` in the path
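On Linux or Mac OSX, the optional `IGNITE_HOME` step above might look like the following sketch (the installation path is only a placeholder; substitute the folder you unzipped into):

```shell
# Placeholder path; point it at your actual Ignite installation folder.
export IGNITE_HOME=/opt/apache-ignite

# Note: no trailing slash in the path. Verify the variable is set:
echo "$IGNITE_HOME"
```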
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Start From Command Line"
+}
+[/block]
+An Ignite node can be started from the command line either with the default configuration or by passing a configuration file. You can start as many nodes as you like, and they will all automatically discover each other.
+
+##With Default Configuration
+To start a grid node with default configuration, open the command shell and, assuming you are in `IGNITE_HOME` (Ignite installation folder), just type this:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "$ bin/ignite.sh",
+      "language": "shell"
+    }
+  ]
+}
+[/block]
+and you will see output similar to this:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "[02:49:12] Ignite node started OK (id=ab5d18a6)\n[02:49:12] Topology snapshot [ver=1, nodes=1, CPUs=8, heap=1.0GB]",
+      "language": "text"
+    }
+  ]
+}
+[/block]
+By default, `ignite.sh` starts an Ignite node with the default configuration file: `config/default-config.xml`.
+
+##Passing Configuration File 
+To pass a configuration file explicitly from the command line, type `ignite.sh <path to configuration file>` from within your Ignite installation folder. For example:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "$ bin/ignite.sh examples/config/example-cache.xml",
+      "language": "shell"
+    }
+  ]
+}
+[/block]
+The path to the configuration file can be absolute, or relative to either `IGNITE_HOME` (the Ignite installation folder) or the `META-INF` folder in your classpath.
+[block:callout]
+{
+  "type": "success",
+  "title": "Interactive Mode",
+  "body": "To pick a configuration file in interactive mode, just pass the `-i` flag, like so: `ignite.sh -i`."
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Get It With Maven"
+}
+[/block]
+Another easy way to get started with Apache Ignite in your project is to use Maven 2 dependency management.
+
+Ignite requires only one mandatory dependency: `ignite-core`. Usually you will also need to add `ignite-spring` for Spring-based XML configuration and `ignite-indexing` for SQL querying.
+
+Replace `${ignite.version}` with the actual Ignite version.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<dependency>\n    <groupId>org.apache.ignite</groupId>\n    <artifactId>ignite-core</artifactId>\n    <version>${ignite.version}</version>\n</dependency>\n<dependency>\n    <groupId>org.apache.ignite</groupId>\n    <artifactId>ignite-spring</artifactId>\n    <version>${ignite.version}</version>\n</dependency>\n<dependency>\n    <groupId>org.apache.ignite</groupId>\n    <artifactId>ignite-indexing</artifactId>\n    <version>${ignite.version}</version>\n</dependency>",
+      "language": "xml"
+    }
+  ]
+}
+[/block]
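If you prefer to manage the version in one place, a `<properties>` entry such as the following can back the `${ignite.version}` placeholder (the version number here is only an illustrative example, not a recommendation):

```xml
<properties>
    <!-- Example value only; use the Ignite release you actually target. -->
    <ignite.version>1.0.0</ignite.version>
</properties>
```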
+
+[block:callout]
+{
+  "type": "success",
+  "title": "Maven Setup",
+  "body": "See [Maven Setup](/docs/maven-setup) for more information on how to include individual Ignite maven artifacts."
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "First Ignite Compute Application"
+}
+[/block]
+Let's write our first grid application, which will count the number of non-white-space characters in a sentence. As an example, we will take a sentence, split it into multiple words, and have every compute job count the number of characters in each individual word. At the end, we simply add up the results received from the individual jobs to get our total count.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "try (Ignite ignite = Ignition.start()) {\n  Collection<IgniteCallable<Integer>> calls = new ArrayList<>();\n\n  // Iterate through all the words in the sentence and create Callable jobs.\n  for (final String word : \"Count characters using callable\".split(\" \"))\n    calls.add(word::length);\n\n  // Execute collection of Callables on the grid.\n  Collection<Integer> res = ignite.compute().call(calls);\n\n  int sum = res.stream().mapToInt(Integer::intValue).sum();\n \n\tSystem.out.println(\"Total number of characters is '\" + sum + \"'.\");\n}",
+      "language": "java",
+      "name": "compute"
+    },
+    {
+      "code": "try (Ignite ignite = Ignition.start()) {\n    Collection<IgniteCallable<Integer>> calls = new ArrayList<>();\n \n    // Iterate through all the words in the sentence and create Callable jobs.\n    for (final String word : \"Count characters using callable\".split(\" \")) {\n        calls.add(new IgniteCallable<Integer>() {\n            @Override public Integer call() throws Exception {\n                return word.length();\n            }\n        });\n    }\n \n    // Execute collection of Callables on the grid.\n    Collection<Integer> res = ignite.compute().call(calls);\n \n    int sum = 0;\n \n    // Add up individual word lengths received from remote nodes.\n    for (int len : res)\n        sum += len;\n \n    System.out.println(\">>> Total number of characters in the phrase is '\" + sum + \"'.\");\n}",
+      "language": "java",
+      "name": "java7 compute"
+    }
+  ]
+}
+[/block]
+
+[block:callout]
+{
+  "type": "success",
+  "body": "Note that because of the [Zero Deployment](doc:zero-deployment) feature, when running the above application from your IDE, remote nodes will execute received jobs without explicit deployment.",
+  "title": "Zero Deployment"
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "First Ignite Data Grid Application"
+}
+[/block]
+Now let's write a simple set of mini-examples which will put and get values to/from a distributed cache, and perform basic transactions.
+
+Since we are using a cache in this example, we should make sure that it is configured. Let's use the example configuration shipped with Ignite that already has several caches configured:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "$ bin/ignite.sh examples/config/example-cache.xml",
+      "language": "shell"
+    }
+  ]
+}
+[/block]
+
+[block:code]
+{
+  "codes": [
+    {
+      "code": "try (Ignite ignite = Ignition.start(\"examples/config/example-cache.xml\")) {\n    IgniteCache<Integer, String> cache = ignite.jcache(CACHE_NAME);\n \n    // Store keys in cache (values will end up on different cache nodes).\n    for (int i = 0; i < 10; i++)\n        cache.put(i, Integer.toString(i));\n \n    for (int i = 0; i < 10; i++)\n        System.out.println(\"Got [key=\" + i + \", val=\" + cache.get(i) + ']');\n}",
+      "language": "java",
+      "name": "Put and Get"
+    },
+    {
+      "code": "// Put-if-absent which returns previous value.\nInteger oldVal = cache.getAndPutIfAbsent(\"Hello\", 11);\n  \n// Put-if-absent which returns boolean success flag.\nboolean success = cache.putIfAbsent(\"World\", 22);\n  \n// Replace-if-exists operation (opposite of getAndPutIfAbsent), returns previous value.\noldVal = cache.getAndReplace(\"Hello\", 11);\n \n// Replace-if-exists operation (opposite of putIfAbsent), returns boolean success flag.\nsuccess = cache.replace(\"World\", 22);\n  \n// Replace-if-matches operation.\nsuccess = cache.replace(\"World\", 2, 22);\n  \n// Remove-if-matches operation.\nsuccess = cache.remove(\"Hello\", 1);",
+      "language": "java",
+      "name": "Atomic Operations"
+    },
+    {
+      "code": "try (Transaction tx = ignite.transactions().txStart()) {\n    Integer hello = cache.get(\"Hello\");\n  \n    if (hello == 1)\n        cache.put(\"Hello\", 11);\n  \n    cache.put(\"World\", 22);\n  \n    tx.commit();\n}",
+      "language": "java",
+      "name": "Transactions"
+    },
+    {
+      "code": "// Lock cache key \"Hello\".\nLock lock = cache.lock(\"Hello\");\n \nlock.lock();\n \ntry {\n    cache.put(\"Hello\", 11);\n    cache.put(\"World\", 22);\n}\nfinally {\n    lock.unlock();\n} ",
+      "language": "java",
+      "name": "Distributed Locks"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Ignite Visor Admin Console"
+}
+[/block]
+The easiest way to examine the contents of the data grid, as well as perform a long list of other management and monitoring operations, is to use the Ignite Visor Command Line Utility.
+
+To start Visor, simply run:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "$ bin/ignitevisorcmd.sh",
+      "language": "shell"
+    }
+  ]
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/basic-concepts/ignite-life-cycel.md
----------------------------------------------------------------------
diff --git a/docs/wiki/basic-concepts/ignite-life-cycel.md b/docs/wiki/basic-concepts/ignite-life-cycel.md
new file mode 100755
index 0000000..3933abf
--- /dev/null
+++ b/docs/wiki/basic-concepts/ignite-life-cycel.md
@@ -0,0 +1,105 @@
+Ignite is JVM-based. A single JVM represents one or more logical Ignite nodes (most of the time, however, a single JVM runs just one Ignite node). Throughout the Ignite documentation we use the terms Ignite runtime and Ignite node almost interchangeably. For example, when we say that you can "run 5 nodes on this host", in most cases it technically means that you can start 5 JVMs on this host, each running a single Ignite node. Ignite also supports multiple Ignite nodes in a single JVM. In fact, that is exactly how most of the internal tests for Ignite itself run.
+[block:callout]
+{
+  "type": "success",
+  "body": "Ignite runtime == JVM process == Ignite node (in most cases)"
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Ignition Class"
+}
+[/block]
+The `Ignition` class starts individual Ignite nodes in the network topology. Note that a physical server (like a computer on the network) can have multiple Ignite nodes running on it.
+
+Here is how you can start a grid node locally with all defaults:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.start();",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+or by passing a configuration file:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.start(\"examples/config/example-cache.xml\");",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+The path to the configuration file can be absolute, or relative to either `IGNITE_HOME` (the Ignite installation folder) or the `META-INF` folder in your classpath.
+[block:api-header]
+{
+  "type": "basic",
+  "title": "LifecycleBean"
+}
+[/block]
+Sometimes you need to perform certain actions before or after the Ignite node starts or stops. This can be done by implementing the `LifecycleBean` interface and specifying the implementation bean in the `lifecycleBeans` property of `IgniteConfiguration` in the Spring XML file:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n    ...\n    <property name=\"lifecycleBeans\">\n        <list>\n            <bean class=\"com.mycompany.MyLifecycleBean\"/>\n        </list>\n    </property>\n    ...\n</bean>",
+      "language": "xml"
+    }
+  ]
+}
+[/block]
+A `LifecycleBean` can also be configured programmatically as follows:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "// Create new configuration.\nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Provide lifecycle bean to configuration.\ncfg.setLifecycleBeans(new MyLifecycleBean());\n \n// Start Ignite node with given configuration.\nIgnite ignite = Ignition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+An implementation of `LifecycleBean` may look like the following:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "public class MyLifecycleBean implements LifecycleBean {\n    @Override public void onLifecycleEvent(LifecycleEventType evt) {\n        if (evt == LifecycleEventType.BEFORE_NODE_START) {\n            // Do something.\n            ...\n        }\n    }\n}",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+You can inject Ignite instance and other useful resources into a `LifecycleBean` implementation. Please refer to [Resource Injection](/docs/resource-injection) section for more information.
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Lifecycle Event Types"
+}
+[/block]
+The following lifecycle event types are supported:
+[block:parameters]
+{
+  "data": {
+    "h-0": "Event Type",
+    "h-1": "Description",
+    "0-0": "BEFORE_NODE_START",
+    "0-1": "Invoked before Ignite node startup routine is initiated.",
+    "1-0": "AFTER_NODE_START",
+    "1-1": "Invoked right after Ignite node has started.",
+    "2-0": "BEFORE_NODE_STOP",
+    "2-1": "Invoked right before Ignite stop routine is initiated.",
+    "3-0": "AFTER_NODE_STOP",
+    "3-1": "Invoked right after Ignite node has stopped."
+  },
+  "cols": 2,
+  "rows": 4
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/basic-concepts/maven-setup.md
----------------------------------------------------------------------
diff --git a/docs/wiki/basic-concepts/maven-setup.md b/docs/wiki/basic-concepts/maven-setup.md
new file mode 100755
index 0000000..ad15cfd
--- /dev/null
+++ b/docs/wiki/basic-concepts/maven-setup.md
@@ -0,0 +1,68 @@
+If you are using Maven to manage dependencies of your project, you can import individual Ignite modules a la carte.
+[block:callout]
+{
+  "type": "info",
+  "body": "In the examples below, please replace `${ignite.version}` with actual Apache Ignite version you are interested in."
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Common Dependencies"
+}
+[/block]
+Ignite data fabric comes with one mandatory dependency on `ignite-core.jar`. 
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<dependency>\n    <groupId>org.apache.ignite</groupId>\n    <artifactId>ignite-core</artifactId>\n    <version>${ignite.version}</version>\n</dependency>",
+      "language": "xml"
+    }
+  ]
+}
+[/block]
+However, in many cases you may wish to have more dependencies, for example, if you want to use Spring configuration or SQL queries.
+
+Here are the most commonly used optional modules:
+  * ignite-indexing (optional, add if you need SQL indexing)
+  * ignite-spring (optional, add if you plan to use Spring configuration) 
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<dependency>\n    <groupId>org.apache.ignite</groupId>\n    <artifactId>ignite-core</artifactId>\n    <version>${ignite.version}</version>\n</dependency>\n<dependency>\n    <groupId>org.apache.ignite</groupId>\n    <artifactId>ignite-spring</artifactId>\n    <version>${ignite.version}</version>\n</dependency>\n<dependency>\n    <groupId>org.apache.ignite</groupId>\n    <artifactId>ignite-indexing</artifactId>\n    <version>${ignite.version}</version>\n</dependency>",
+      "language": "xml"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Importing Individual Modules A La Carte"
+}
+[/block]
+You can import Ignite modules a la carte, one by one. The only required module is `ignite-core`; all others are optional. All optional modules can be imported just like the core module, but with different artifact IDs.
+
+The following modules are available:
+  * `ignite-spring` (for Spring-based configuration support)
+  * `ignite-indexing` (for SQL querying and indexing)
+  * `ignite-geospatial` (for geospatial indexing)
+  * `ignite-hibernate` (for Hibernate integration)
+  * `ignite-web` (for Web Sessions Clustering)
+  * `ignite-schedule` (for Cron-based task scheduling)
+  * `ignite-log4j` (for Log4j logging)
+  * `ignite-jcl` (for Apache Commons logging)
+  * `ignite-jta` (for XA integration)
+  * `ignite-hadoop2-integration` (Integration with HDFS 2.0)
+  * `ignite-rest-http` (for HTTP REST messages)
+  * `ignite-scalar` (for Ignite Scala API)
+  * `ignite-slf4j` (for SLF4J logging)
+  * `ignite-ssh` (for starting grid nodes on remote machines)
+  * `ignite-urideploy` (for URI-based deployment)
+  * `ignite-aws` (for seamless cluster discovery on AWS S3)
+  * `ignite-aop` (for AOP-based grid-enabling)
+  * `ignite-visor-console`  (open source command line management and monitoring tool)
\ No newline at end of file
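Each optional module above is imported the same way as the core artifact, with only the artifact ID changing. For instance, a sketch for pulling in the Log4j logging module (`${ignite.version}` is assumed to be defined in your POM):

```xml
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-log4j</artifactId>
    <version>${ignite.version}</version>
</dependency>
```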

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/basic-concepts/what-is-ignite.md
----------------------------------------------------------------------
diff --git a/docs/wiki/basic-concepts/what-is-ignite.md b/docs/wiki/basic-concepts/what-is-ignite.md
new file mode 100755
index 0000000..5cdaf6b
--- /dev/null
+++ b/docs/wiki/basic-concepts/what-is-ignite.md
@@ -0,0 +1,31 @@
+Apache Ignite In-Memory Data Fabric is a high-performance, integrated and distributed in-memory platform for computing and transacting on large-scale data sets in real-time, orders of magnitude faster than possible with traditional disk-based or flash technologies.
+[block:image]
+{
+  "images": [
+    {
+      "image": [
+        "https://www.filepicker.io/api/file/lydEeGB6Rs9hwbpcQxiw",
+        "apache-ignite.png",
+        "1024",
+        "310",
+        "#ec945e",
+        ""
+      ],
+      "caption": ""
+    }
+  ]
+}
+[/block]
+##Features
+You can view Ignite as a collection of independent, well-integrated, in-memory components geared to improve the performance and scalability of your application. Some of these components include:
+
+  * [Advanced Clustering](doc:cluster)
+  * [Compute Grid](doc:compute-grid) 
+  * [Data Grid (JCache)](doc:data-grid) 
+  * [Service Grid](doc:service-grid)
+  * [Ignite File System](doc:igfs)
+  * [Distributed Data Structures](doc:queue-and-set) 
+  * [Distributed Messaging](doc:messaging) 
+  * [Distributed Events](doc:events) 
+  * Streaming & CEP
+  * In-Memory Hadoop Accelerator
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/basic-concepts/zero-deployment.md
----------------------------------------------------------------------
diff --git a/docs/wiki/basic-concepts/zero-deployment.md b/docs/wiki/basic-concepts/zero-deployment.md
new file mode 100755
index 0000000..08b4472
--- /dev/null
+++ b/docs/wiki/basic-concepts/zero-deployment.md
@@ -0,0 +1,56 @@
+The closures and tasks that you use for your computations may be of any custom class, including anonymous classes. In Ignite, the remote nodes will automatically become aware of those classes, and you won't need to explicitly deploy or move any .jar files to any remote nodes. 
+
+Such behavior is possible due to peer class loading (P2P class loading), a special **distributed ClassLoader** in Ignite for inter-node byte-code exchange. With peer class loading enabled, you don't have to manually deploy your Java or Scala code on each node in the grid and re-deploy it each time it changes.
+
+A code example like the one below would run on all remote nodes due to peer class loading, without any explicit deployment step.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "IgniteCluster cluster = ignite.cluster();\n\n// Compute instance over remote nodes.\nIgniteCompute compute = ignite.compute(cluster.forRemotes());\n\n// Print hello message on all remote nodes.\ncompute.broadcast(() -> System.out.println(\"Hello node: \" + cluster.localNode().id()));",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+Here is how peer class loading can be configured:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n    ...   \n    <!-- Explicitly enable peer class loading. -->\n    <property name=\"peerClassLoadingEnabled\" value=\"true\"/>\n    ...\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "IgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setPeerClassLoadingEnabled(true);\n\n// Start Ignite node.\nIgnite ignite = Ignition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+The peer class loading sequence works as follows:
+1. Ignite will check if the class is available on the local classpath (i.e. if it was loaded at system startup), and if it was, it will be returned. No class loading from a peer node will take place in this case.
+2. If the class is not locally available, then a request will be sent to the originating node to provide the class definition. The originating node will send the class byte-code definition, and the class will be loaded on the worker node. This happens only once per class: once a class definition is loaded on a node, it will never have to be loaded again.
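As a toy illustration only (plain Java, not Ignite's actual ClassLoader; the maps standing in for the local classpath and the peer node are hypothetical), the two-step sequence above might be sketched like this:

```java
import java.util.HashMap;
import java.util.Map;

public class PeerResolveSketch {
    // Stand-ins for the local classpath and a peer node's class definitions.
    static Map<String, String> localClasspath = new HashMap<>();
    static Map<String, String> peerDefinitions = new HashMap<>();

    // Definitions already fetched from the peer (loaded at most once per class).
    static Map<String, String> loadedFromPeer = new HashMap<>();
    static int peerRequests = 0;

    static String resolve(String className) {
        // Step 1: check the local classpath first; no peer request in this case.
        String def = localClasspath.get(className);
        if (def != null)
            return def;

        // Step 2: otherwise ask the originating node, but only once per class.
        def = loadedFromPeer.get(className);
        if (def == null) {
            peerRequests++;
            def = peerDefinitions.get(className);
            loadedFromPeer.put(className, def);
        }
        return def;
    }

    public static void main(String[] args) {
        localClasspath.put("LocalTask", "local-bytecode");
        peerDefinitions.put("RemoteClosure", "peer-bytecode");

        System.out.println(resolve("LocalTask"));     // served locally, no request
        System.out.println(resolve("RemoteClosure")); // fetched from the peer
        System.out.println(resolve("RemoteClosure")); // cached, no new request
        System.out.println("peer requests: " + peerRequests);
    }
}
```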
+[block:callout]
+{
+  "type": "warning",
+  "title": "Development vs Production",
+  "body": "It is recommended that peer-class-loading is disabled in production. Generally you want to have a controlled production environment without any magic."
+}
+[/block]
+
+[block:callout]
+{
+  "type": "warning",
+  "title": "Auto-Clearing Caches for Hot Redeployment",
+  "body": "Whenever you change class definitions for the data stored in cache, Ignite will automatically clear the caches for previous class definitions before peer-deploying the new data to avoid class-loading conflicts."
+}
+[/block]
+
+[block:callout]
+{
+  "type": "info",
+  "title": "3rd Party Libraries",
+  "body": "When utilizing peer class loading, you should be aware of the libraries that get loaded from peer nodes vs. libraries that are already available locally in the class path. Our suggestion is to include all 3rd party libraries into class path of every node. This way you will not transfer megabytes of 3rd party classes to remote nodes every time you change a line of code."
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/clustering/aws-config.md
----------------------------------------------------------------------
diff --git a/docs/wiki/clustering/aws-config.md b/docs/wiki/clustering/aws-config.md
new file mode 100755
index 0000000..2a99ac8
--- /dev/null
+++ b/docs/wiki/clustering/aws-config.md
@@ -0,0 +1,42 @@
+Node discovery on the AWS cloud usually proves to be more challenging. Amazon EC2, just like most other virtual environments, has the following limitations:
+* Multicast is disabled.
+* TCP addresses change every time a new image is started.
+
+Although you can use TCP-based discovery in the absence of multicast, you still have to deal with constantly changing IP addresses and constantly updating the configuration. This creates a major inconvenience and makes configurations based on static IPs virtually unusable in such environments.
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Amazon S3 Based Discovery"
+}
+[/block]
+To mitigate the problem of constantly changing IP addresses, Ignite supports automatic node discovery by utilizing an S3 store via `TcpDiscoveryS3IpFinder`. On startup, nodes register their IP addresses with the Amazon S3 store. This way, other nodes can try to connect to any of the IP addresses stored in S3 and initiate automatic grid node discovery.
+[block:callout]
+{
+  "type": "success",
+  "body": "Such approach allows to create your configuration once and reuse it for all EC2 instances."
+}
+[/block]
+
+
+Here is an example of how to configure Amazon S3 IP finder:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  ...\n  <property name=\"discoverySpi\">\n    <bean class=\"org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi\">\n      <property name=\"ipFinder\">\n        <bean class=\"org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder\">\n          <property name=\"awsCredentials\" ref=\"aws.creds\"/>\n          <property name=\"bucketName\" value=\"YOUR_BUCKET_NAME\"/>\n        </bean>\n      </property>\n    </bean>\n  </property>\n</bean>\n\n<!-- AWS credentials. Provide your access key ID and secret access key. -->\n<bean id=\"aws.creds\" class=\"com.amazonaws.auth.BasicAWSCredentials\">\n  <constructor-arg value=\"YOUR_ACCESS_KEY_ID\" />\n  <constructor-arg value=\"YOUR_SECRET_ACCESS_KEY\" />\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "TcpDiscoverySpi spi = new TcpDiscoverySpi();\n\nBasicAWSCredentials creds = new BasicAWSCredentials(\"yourAccessKey\", \"yourSecreteKey\");\n\nTcpDiscoveryS3IpFinder ipFinder = new TcpDiscoveryS3IpFinder();\n\nipFinder.setAwsCredentials(creds);\n\nspi.setIpFinder(ipFinder);\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override default discovery SPI.\ncfg.setDiscoverySpi(spi);\n \n// Start Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:callout]
+{
+  "type": "success",
+  "body": "Refer to [Cluster Configuration](doc:cluster-config) for more information on various cluster configuration properties."
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/clustering/cluster-config.md
----------------------------------------------------------------------
diff --git a/docs/wiki/clustering/cluster-config.md b/docs/wiki/clustering/cluster-config.md
new file mode 100755
index 0000000..ccbcdc5
--- /dev/null
+++ b/docs/wiki/clustering/cluster-config.md
@@ -0,0 +1,176 @@
+In Ignite, nodes can discover each other by using `DiscoverySpi`. Ignite provides `TcpDiscoverySpi` as a default implementation of `DiscoverySpi` that uses TCP/IP for node discovery. Discovery SPI can be configured for Multicast and Static IP based node discovery.
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Multicast Based Discovery"
+}
+[/block]
+`TcpDiscoveryMulticastIpFinder` uses Multicast to discover other nodes in the grid and is the default IP finder. You should not have to specify it unless you plan to override default settings. Here is an example of how to configure this finder via Spring XML file or programmatically from Java:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  ...\n  <property name=\"discoverySpi\">\n    <bean class=\"org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi\">\n      <property name=\"ipFinder\">\n        <bean class=\"org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder\">\n          <property name=\"multicastGroup\" value=\"228.10.10.157\"/>\n        </bean>\n      </property>\n    </bean>\n  </property>\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "TcpDiscoverySpi spi = new TcpDiscoverySpi();\n \nTcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();\n \nipFinder.setMulticastGroup(\"228.10.10.157\");\n \nspi.setIpFinder(ipFinder);\n \nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override default discovery SPI.\ncfg.setDiscoverySpi(spi);\n \n// Start Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Static IP Based Discovery"
+}
+[/block]
+For cases when Multicast is disabled, `TcpDiscoveryVmIpFinder` should be used with a pre-configured list of IP addresses. You are only required to provide at least one IP address, but it is usually advisable to provide 2 or 3 addresses of the grid nodes that you plan to start first, for redundancy. Once a connection to any of the provided IP addresses is established, Ignite will automatically discover all other grid nodes.
+[block:callout]
+{
+  "type": "success",
+  "body": "You do not need to specify IP addresses for all Ignite nodes, only for a couple of nodes you plan to start first."
+}
+[/block]
+
+Here is an example of how to configure this finder via Spring XML file or programmatically from Java:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  ...\n  <property name=\"discoverySpi\">\n    <bean class=\"org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi\">\n      <property name=\"ipFinder\">\n        <bean class=\"org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder\">\n          <property name=\"addresses\">\n            <list>\n              <value>1.2.3.4</value>\n              \n              <!-- \n                  IP Address and optional port range.\n                  You can also optionally specify an individual port.\n              -->\n              <value>1.2.3.5:47500..47509</value>\n            </list>\n          </property>\n        </bean>\n      </property>\n    </bean>\n  </property>\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "TcpDiscoverySpi spi = new TcpDiscoverySpi();\n \nTcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();\n \n// Set initial IP addresses.\n// Note that you can optionally specify a port or a port range.\nipFinder.setAddresses(Arrays.asList(\"1.2.3.4\", \"1.2.3.5:47500..47509\"));\n \nspi.setIpFinder(ipFinder);\n \nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override default discovery SPI.\ncfg.setDiscoverySpi(spi);\n \n// Start Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Multicast and Static IP Based Discovery"
+}
+[/block]
+You can use both Multicast and Static IP based discovery together. In this case, in addition to any addresses received via multicast, `TcpDiscoveryMulticastIpFinder` can also work with a pre-configured list of static IP addresses, just like the Static IP based discovery described above. Here is an example of how to configure the Multicast IP finder with static IP addresses:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  ...\n  <property name=\"discoverySpi\">\n    <bean class=\"org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi\">\n      <property name=\"ipFinder\">\n        <bean class=\"org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder\">\n          <property name=\"multicastGroup\" value=\"228.10.10.157\"/>\n           \n          <!-- list of static IP addresses-->\n          <property name=\"addresses\">\n            <list>\n              <value>1.2.3.4</value>\n              \n              <!-- \n                  IP Address and optional port range.\n                  You can also optionally specify an individual port.\n              -->\n              <value>1.2.3.5:47500..47509</value>\n            </list>\n          </property>\n        </bean>\n      </property>\n    </bean>\n  </property>\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "TcpDiscoverySpi spi = new TcpDiscoverySpi();\n \nTcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();\n \n// Set Multicast group.\nipFinder.setMulticastGroup(\"228.10.10.157\");\n\n// Set initial IP addresses.\n// Note that you can optionally specify a port or a port range.\nipFinder.setAddresses(Arrays.asList(\"1.2.3.4\", \"1.2.3.5:47500..47509\"));\n \nspi.setIpFinder(ipFinder);\n \nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override default discovery SPI.\ncfg.setDiscoverySpi(spi);\n \n// Start Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Amazon S3 Based Discovery"
+}
+[/block]
+Refer to [AWS Configuration](doc:aws-config) documentation.
+[block:api-header]
+{
+  "type": "basic",
+  "title": "JDBC Based Discovery"
+}
+[/block]
+You can use a database as a common shared storage of initial IP addresses. In this case, nodes will write their IP addresses to the database on startup. This is done via `TcpDiscoveryJdbcIpFinder`.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  ...\n  <property name=\"discoverySpi\">\n    <bean class=\"org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi\">\n      <property name=\"ipFinder\">\n        <bean class=\"org.apache.ignite.spi.discovery.tcp.ipfinder.jdbc.TcpDiscoveryJdbcIpFinder\">\n          <property name=\"dataSource\" ref=\"ds\"/>\n        </bean>\n      </property>\n    </bean>\n  </property>\n</bean>\n\n<!-- Configured data source instance. -->\n<bean id=\"ds\" class=\"some.Datasource\">\n  ...\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "TcpDiscoverySpi spi = new TcpDiscoverySpi();\n\n// Configure your DataSource.\nDataSource someDs = MySampleDataSource(...);\n\nTcpDiscoveryJdbcIpFinder ipFinder = new TcpDiscoveryJdbcIpFinder();\n\nipFinder.setDataSource(someDs);\n\nspi.setIpFinder(ipFinder);\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override default discovery SPI.\ncfg.setDiscoverySpi(spi);\n \n// Start Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Configuration"
+}
+[/block]
+The following configuration parameters can be optionally configured on `TcpDiscoverySpi`.
+[block:parameters]
+{
+  "data": {
+    "0-0": "`setIpFinder(TcpDiscoveryIpFinder)`",
+    "0-1": "IP finder that is used to share info about nodes IP addresses.",
+    "0-2": "`TcpDiscoveryMulticastIpFinder`\n\nProvided implementations can be used:\n`TcpDiscoverySharedFsIpFinder`\n`TcpDiscoveryS3IpFinder`\n`TcpDiscoveryJdbcIpFinder`\n`TcpDiscoveryVmIpFinder`",
+    "h-0": "Setter Method",
+    "h-1": "Description",
+    "h-2": "Default",
+    "h-3": "Default",
+    "0-3": "",
+    "1-0": "`setLocalAddress(String)`",
+    "1-1": "Sets local host IP address that discovery SPI uses.",
+    "1-3": "",
+    "1-2": "If not provided, by default a first found non-loopback address will be used. If there is no non-loopback address available, then `java.net.InetAddress.getLocalHost()` will be used.",
+    "2-0": "`setLocalPort(int)`",
+    "2-1": "Port the SPI listens to.",
+    "2-2": "47500",
+    "2-3": "",
+    "3-0": "`setLocalPortRange(int)`",
+    "3-1": "Local port range. \nLocal node will try to bind on first available port starting from local port up until local port + local port range.",
+    "3-2": "100",
+    "3-3": "100",
+    "4-0": "`setHeartbeatFrequency(long)`",
+    "4-1": "Delay in milliseconds between heartbeat issuing of heartbeat messages. \nSPI sends messages in configurable time interval to other nodes to notify them about its state.",
+    "4-3": "2000",
+    "4-2": "2000",
+    "5-0": "`setMaxMissedHeartbeats(int)`",
+    "5-1": "Number of heartbeat requests that could be missed before local node initiates status check.",
+    "5-3": "1",
+    "5-2": "1",
+    "6-0": "`setReconnectCount(int)`",
+    "6-1": "Number of times node tries to (re)establish connection to another node.",
+    "6-3": "2",
+    "6-2": "2",
+    "7-0": "`setNetworkTimeout(long)`",
+    "7-1": "Sets maximum network timeout in milliseconds to use for network operations.",
+    "7-2": "5000",
+    "7-3": "5000",
+    "8-0": "`setSocketTimeout(long)`",
+    "8-1": "Sets socket operations timeout. This timeout is used to limit connection time and write-to-socket time.",
+    "8-2": "2000",
+    "8-3": "2000",
+    "9-0": "`setAckTimeout(long)`",
+    "9-1": "Sets timeout for receiving acknowledgement for sent message. \nIf acknowledgement is not received within this timeout, sending is considered as failed and SPI tries to repeat message sending.",
+    "9-2": "2000",
+    "9-3": "2000",
+    "10-0": "`setJoinTimeout(long)`",
+    "10-1": "Sets join timeout. If non-shared IP finder is used and node fails to connect to any address from IP finder, node keeps trying to join within this timeout. If all addresses are still unresponsive, exception is thrown and node startup fails. \n0 means wait forever.",
+    "10-2": "0",
+    "10-3": "0",
+    "11-0": "`setThreadPriority(int)`",
+    "11-1": "Thread priority for threads started by SPI.",
+    "11-2": "0",
+    "11-3": "0",
+    "12-0": "`setStatisticsPrintFrequency(int)`",
+    "12-1": "Statistics print frequency in milliseconds. \n0 indicates that no print is required. If value is greater than 0 and log is not quiet then stats are printed out with INFO level once a period. This may be very helpful for tracing topology problems.",
+    "12-2": "true",
+    "12-3": "true",
+    "13-0": ""
+  },
+  "cols": 3,
+  "rows": 13
+}
+[/block]
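Since these setters are standard JavaBean properties, they can also be set from Spring XML in the same style as the IP finder examples above. Here is a minimal sketch (the port and timeout values are purely illustrative, and the property names are inferred from the setter names in the table):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  ...
  <property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
      <!-- Bind discovery to a custom port. -->
      <property name="localPort" value="48500"/>

      <!-- Give slow networks more time (milliseconds). -->
      <property name="networkTimeout" value="10000"/>

      <!-- Fail node startup if it cannot join within 60 seconds. -->
      <property name="joinTimeout" value="60000"/>
    </bean>
  </property>
</bean>
```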
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/clustering/cluster-groups.md
----------------------------------------------------------------------
diff --git a/docs/wiki/clustering/cluster-groups.md b/docs/wiki/clustering/cluster-groups.md
new file mode 100755
index 0000000..4e2c330
--- /dev/null
+++ b/docs/wiki/clustering/cluster-groups.md
@@ -0,0 +1,210 @@
+`ClusterGroup` represents a logical grouping of cluster nodes. 
+
+In Ignite all nodes are equal by design, so you don't have to start any nodes in a specific order, or assign any specific roles to them. However, Ignite allows users to logically group cluster nodes for any application-specific purpose. For example, you may wish to deploy a service only on remote nodes, or assign the role of "worker" to some nodes for job execution.
+[block:callout]
+{
+  "type": "success",
+  "body": "Note that `IgniteCluster` interface is also a cluster group which includes all nodes in the cluster."
+}
+[/block]
+You can limit job execution, service deployment, messaging, events, and other tasks to run only within some cluster group. For example, here is how to broadcast a job only to remote nodes (excluding the local node).
+[block:code]
+{
+  "codes": [
+    {
+      "code": "final Ignite ignite = Ignition.ignite();\n\nIgniteCluster cluster = ignite.cluster();\n\n// Get compute instance which will only execute\n// over remote nodes, i.e. not this node.\nIgniteCompute compute = ignite.compute(cluster.forRemotes());\n\n// Broadcast to all remote nodes and print the ID of the node \n// on which this closure is executing.\ncompute.broadcast(() -> System.out.println(\"Hello Node: \" + ignite.cluster().localNode().id());\n",
+      "language": "java",
+      "name": "broadcast"
+    },
+    {
+      "code": "final Ignite ignite = Ignition.ignite();\n\nIgniteCluster cluster = ignite.cluster();\n\n// Get compute instance which will only execute\n// over remote nodes, i.e. not this node.\nIgniteCompute compute = ignite.compute(cluster.forRemotes());\n\n// Broadcast closure only to remote nodes.\ncompute.broadcast(new IgniteRunnable() {\n    @Override public void run() {\n        // Print ID of the node on which this runnable is executing.\n        System.out.println(\">>> Hello Node: \" + ignite.cluster().localNode().id());\n    }\n}",
+      "language": "java",
+      "name": "java7 broadcast"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Predefined Cluster Groups"
+}
+[/block]
+You can create cluster groups based on any predicate. For convenience Ignite comes with some predefined cluster groups.
+
+Here are examples of some cluster groups available on `ClusterGroup` interface.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "IgniteCluster cluster = ignite.cluster();\n\n// Cluster group with remote nodes, i.e. other than this node.\nClusterGroup remoteGroup = cluster.forRemotes();",
+      "language": "java",
+      "name": "Remote Nodes"
+    },
+    {
+      "code": "IgniteCluster cluster = ignite.cluster();\n\n// All nodes on wich cache with name \"myCache\" is deployed.\nClusterGroup cacheGroup = cluster.forCache(\"myCache\");",
+      "language": "java",
+      "name": "Cache Nodes"
+    },
+    {
+      "code": "IgniteCluster cluster = ignite.cluster();\n\n// All nodes with attribute \"ROLE\" equal to \"worker\".\nClusterGroup attrGroup = cluster.forAttribute(\"ROLE\", \"worker\");",
+      "language": "java",
+      "name": "Nodes With Attributes"
+    },
+    {
+      "code": "IgniteCluster cluster = ignite.cluster();\n\n// Cluster group containing one random node.\nClusterGroup randomGroup = cluster.forRandom();\n\n// First (and only) node in the random group.\nClusterNode randomNode = randomGroup.node();",
+      "language": "java",
+      "name": "Random Node"
+    },
+    {
+      "code": "IgniteCluster cluster = ignite.cluster();\n\n// Pick random node.\nClusterGroup randomNode = cluster.forRandeom();\n\n// All nodes on the same physical host as the random node.\nClusterGroup cacheNodes = cluster.forHost(randomNode);",
+      "language": "java",
+      "name": "Host Nodes"
+    },
+    {
+      "code": "IgniteCluster cluster = ignite.cluster();\n\n// Dynamic cluster group representing the oldest cluster node.\n// Will automatically shift to the next oldest, if the oldest\n// node crashes.\nClusterGroup oldestNode = cluster.forOldest();",
+      "language": "java",
+      "name": "Oldest Node"
+    },
+    {
+      "code": "IgniteCluster cluster = ignite.cluster();\n\n// Cluster group with only this (local) node in it.\nClusterGroup localGroup = cluster.forLocal();\n\n// Local node.\nClusterNode localNode = localGroup.node();",
+      "language": "java",
+      "name": "Local Node"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Cluster Groups with Node Attributes"
+}
+[/block]
+The unique characteristic of Ignite is that all grid nodes are equal. There are no master or server nodes, and there are no worker or client nodes either. All nodes are equal from Ignite’s point of view; however, users can configure nodes to be masters and workers, or clients and data nodes.
+
+On startup, all cluster nodes automatically register all environment variables and system properties as node attributes. Users can also assign their own node attributes through configuration:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean class=\"org.apache.ignite.IgniteConfiguration\">\n    ...\n    <property name=\"userAttributes\">\n        <map>\n            <entry key=\"ROLE\" value=\"worker\"/>\n        </map>\n    </property>\n    ...\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "IgniteConfiguration cfg = new IgniteConfiguration();\n\nMap<String, String> attrs = Collections.singletonMap(\"ROLE\", \"worker\");\n\ncfg.setUserAttributes(attrs);\n\n// Start Ignite node.\nIgnite ignite = Ignition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:callout]
+{
+  "type": "success",
+  "body": "All environment variables and system properties are automatically registered as node attributes on startup."
+}
+[/block]
+
+[block:callout]
+{
+  "type": "success",
+  "body": "Node attributes are available via `ClusterNode.attribute(\"propertyName\")` method."
+}
+[/block]
+The following example shows how to get the nodes where the "ROLE" attribute has been set to "worker".
+[block:code]
+{
+  "codes": [
+    {
+      "code": "IgniteCluster cluster = ignite.cluster();\n\nClusterGroup workerGroup = cluster.forAttribute(\"ROLE\", \"worker\");\n\nCollection<GridNode> workerNodes = workerGroup.nodes();",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Custom Cluster Groups"
+}
+[/block]
+You can define dynamic cluster groups based on some predicate. Such cluster groups will always only include the nodes that pass the predicate.
+
+Here is an example of a cluster group over nodes that have less than 50% CPU utilization. Note that the nodes in this group will change over time based on their CPU load.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "IgniteCluster cluster = ignite.cluster();\n\n// Nodes with less than 50% CPU load.\nClusterGroup readyNodes = cluster.forPredicate((node) -> node.metrics().getCurrentCpuLoad() < 0.5);",
+      "language": "java",
+      "name": "custom group"
+    },
+    {
+      "code": "IgniteCluster cluster = ignite.cluster();\n\n// Nodes with less than 50% CPU load.\nClusterGroup readyNodes = cluster.forPredicate(\n    new IgnitePredicate<ClusterNode>() {\n        @Override public boolean apply(ClusterNode node) {\n            return node.metrics().getCurrentCpuLoad() < 0.5;\n        }\n    }\n));",
+      "language": "java",
+      "name": "java7 custom group"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Combining Cluster Groups"
+}
+[/block]
+You can combine cluster groups by nesting them within each other. For example, the following code snippet shows how to get the oldest node among the remote nodes by combining the remote group with the oldest group.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "// Group containing oldest node out of remote nodes.\nClusterGroup oldestGroup = cluster.forRemotes().forOldest();\n\nClusterNode oldestNode = oldestGroup.node();",
+      "language": "java"
+    }
+  ]
+}
+[/block]
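Conceptually, nesting cluster groups behaves like composing filters: each call narrows the node set further. Here is a minimal, Ignite-free sketch of `forRemotes().forOldest()` using a simplified `Node` stand-in (the class and the "lower id means older" convention are illustrative assumptions, not Ignite API):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.function.Predicate;

public class GroupComposition {
    // Simplified stand-in for ClusterNode: an id (lower = older) and a "remote" flag.
    static class Node {
        final int id;
        final boolean remote;

        Node(int id, boolean remote) {
            this.id = id;
            this.remote = remote;
        }
    }

    // Models forRemotes().forOldest(): first narrow to remote nodes,
    // then narrow again to the single oldest node among them.
    static Node oldestRemote(List<Node> cluster) {
        Predicate<Node> remotes = n -> n.remote;

        return cluster.stream()
            .filter(remotes)
            .min(Comparator.comparingInt(n -> n.id))
            .orElseThrow(() -> new IllegalStateException("No remote nodes"));
    }

    public static void main(String[] args) {
        List<Node> cluster = Arrays.asList(
            new Node(1, false),  // local node
            new Node(2, true),
            new Node(3, true));

        // Prints 2: the oldest (lowest-id) node among the remote ones.
        System.out.println(oldestRemote(cluster).id);
    }
}
```

The same narrowing happens dynamically in Ignite: as topology changes, the composed group always reflects the current set of nodes passing every filter in the chain.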
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Getting Nodes from Cluster Groups"
+}
+[/block]
+You can get to various cluster group nodes as follows:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "ClusterGroup remoteGroup = cluster.forRemotes();\n\n// All cluster nodes in the group.\nCollection<ClusterNode> grpNodes = remoteGroup.nodes();\n\n// First node in the group (useful for groups with one node).\nClusterNode node = remoteGroup.node();\n\n// And if you know a node ID, get node by ID.\nUUID myID = ...;\n\nnode = remoteGroup.node(myId);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Cluster Group Metrics"
+}
+[/block]
+Ignite automatically collects metrics about all cluster nodes. The cool thing about cluster groups is that Ignite automatically aggregates the metrics across all the nodes in a group and provides proper averages, minimums, and maximums within the group.
+
+Group metrics are available via `ClusterMetrics` interface which contains over 50 various metrics (note that the same metrics are available for individual cluster nodes as well).
+
+Here is an example of getting some metrics, including average CPU load and used heap, across all remote nodes:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "// Cluster group with remote nodes, i.e. other than this node.\nClusterGroup remoteGroup = ignite.cluster().forRemotes();\n\n// Cluster group metrics.\nClusterMetrics metrics = remoteGroup.metrics();\n\n// Get some metric values.\ndouble cpuLoad = metrics.getCurrentCpuLoad();\nlong usedHeap = metrics.getHeapMemoryUsed();\nint numberOfCores = metrics.getTotalCpus();\nint activeJobs = metrics.getCurrentActiveJobs();",
+      "language": "java"
+    }
+  ]
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/clustering/cluster.md
----------------------------------------------------------------------
diff --git a/docs/wiki/clustering/cluster.md b/docs/wiki/clustering/cluster.md
new file mode 100755
index 0000000..1ec34cb
--- /dev/null
+++ b/docs/wiki/clustering/cluster.md
@@ -0,0 +1,128 @@
+Ignite nodes can automatically discover each other. This helps to scale the cluster when needed, without having to restart the whole cluster. Developers can also leverage Ignite’s hybrid cloud support, which allows establishing connections between a private cloud and public clouds such as Amazon Web Services, providing them with the best of both worlds.
+[block:image]
+{
+  "images": [
+    {
+      "image": [
+        "https://www.filepicker.io/api/file/KBkahg31S4qWXEBjfoya",
+        "ignite_cluster.png",
+        "500",
+        "350",
+        "#f48745",
+        ""
+      ]
+    }
+  ]
+}
+[/block]
+##Features
+  * Pluggable Design via `IgniteDiscoverySpi`
+  * Dynamic topology management
+  * Automatic discovery on LAN, WAN, and AWS
+  * On-demand and direct deployment
+  * Support for virtual clusters and node groupings
+[block:api-header]
+{
+  "type": "basic",
+  "title": "IgniteCluster"
+}
+[/block]
+Cluster functionality is provided via `IgniteCluster` interface. You can get an instance of `IgniteCluster` from `Ignite` as follows:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\nIgniteCluster cluster = ignite.cluster();",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+Through `IgniteCluster` interface you can:
+ * Start and stop remote cluster nodes
+ * Get a list of all cluster members
+ * Create logical [Cluster Groups](doc:cluster-groups)
+[block:api-header]
+{
+  "type": "basic",
+  "title": "ClusterNode"
+}
+[/block]
+The `ClusterNode` interface has a very concise API and deals only with the node as a logical network endpoint in the topology: its globally unique ID, the node metrics, its static attributes set by the user, and a few other parameters.
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Cluster Node Attributes"
+}
+[/block]
+On startup, all cluster nodes automatically register all environment variables and system properties as node attributes. Users can also assign their own node attributes through configuration:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean class=\"org.apache.ignite.IgniteConfiguration\">\n    ...\n    <property name=\"userAttributes\">\n        <map>\n            <entry key=\"ROLE\" value=\"worker\"/>\n        </map>\n    </property>\n    ...\n</bean>",
+      "language": "xml"
+    }
+  ]
+}
+[/block]
+The following example shows how to get the nodes where the "ROLE" attribute has been set to "worker".
+[block:code]
+{
+  "codes": [
+    {
+      "code": "ClusterGroup workers = ignite.cluster().forAttribute(\"ROLE\", \"worker\");\n\nCollection<GridNode> nodes = workers.nodes();",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:callout]
+{
+  "type": "success",
+  "body": "All node attributes are available via `ClusterNode.attribute(\"propertyName\")` method."
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Cluster Node Metrics"
+}
+[/block]
+Ignite automatically collects metrics for all cluster nodes. Metrics are collected in the background and are updated with every heartbeat message exchanged between cluster nodes.
+
+Node metrics are available via `ClusterMetrics` interface which contains over 50 various metrics (note that the same metrics are available for [Cluster Groups](doc:cluster-groups)  as well).
+
+Here is an example of getting some metrics, including average CPU load and used heap, for the local node:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "// Local Ignite node.\nClusterNode localNode = cluster.localNode();\n\n// Node metrics.\nClusterMetrics metrics = localNode.metrics();\n\n// Get some metric values.\ndouble cpuLoad = metrics.getCurrentCpuLoad();\nlong usedHeap = metrics.getHeapMemoryUsed();\nint numberOfCores = metrics.getTotalCpus();\nint activeJobs = metrics.getCurrentActiveJobs();",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Local Cluster Node"
+}
+[/block]
+The local grid node is an instance of `ClusterNode` representing *this* Ignite node.
+
+Here is an example of how to get a local node:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "ClusterNode localNode = ignite.cluster().localNode();",
+      "language": "java"
+    }
+  ]
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/clustering/leader-election.md
----------------------------------------------------------------------
diff --git a/docs/wiki/clustering/leader-election.md b/docs/wiki/clustering/leader-election.md
new file mode 100755
index 0000000..3a6497d
--- /dev/null
+++ b/docs/wiki/clustering/leader-election.md
@@ -0,0 +1,59 @@
+When working in distributed environments, sometimes you need a guarantee that you will always pick the same node, regardless of cluster topology changes. Such nodes are usually called **leaders**.
+
+In many systems, electing cluster leaders usually has to do with data consistency and is generally handled by collecting votes from cluster members. Since data consistency in Ignite is handled by the data grid affinity function (e.g. [Rendezvous Hashing](http://en.wikipedia.org/wiki/Rendezvous_hashing)), picking leaders in the traditional sense for data consistency outside of the data grid is not really needed.
+
+However, you may still wish to have a *coordinator* node for certain tasks. For this purpose, Ignite lets you always automatically pick either the oldest or the youngest node in the cluster.
+[block:callout]
+{
+  "type": "warning",
+  "title": "Use Service Grid",
+  "body": "Note that for most *leader* or *singleton-like* use cases, it is recommended to use the **Service Grid** functionality, as it allows to automatically deploy various [Cluster Singleton Services](doc:cluster-singletons) and is usually easier to use."
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Oldest Node"
+}
+[/block]
+The oldest node has the property that it remains the same whenever new nodes are added. The only time the oldest node in the cluster changes is when it leaves the cluster or crashes.
+
+Here is an example of how to select a [Cluster Group](doc:cluster-groups) with only the oldest node in it.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "IgniteCluster cluster = ignite.cluster();\n\n// Dynamic cluster group representing the oldest cluster node.\n// Will automatically shift to the next oldest, if the oldest\n// node crashes.\nClusterGroup oldestNode = cluster.forOldest();",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Youngest Node"
+}
+[/block]
+The youngest node, unlike the oldest node, changes every time a new node joins the cluster. However, it may still come in handy, especially if you need to execute some task only on the newly joined node.
+
+Here is an example of how to select a [Cluster Group](doc:cluster-groups) with only the youngest node in it.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "gniteCluster cluster = ignite.cluster();\n\n// Dynamic cluster group representing the youngest cluster node.\n// Will automatically shift to the next oldest, if the oldest\n// node crashes.\nClusterGroup youngestNode = cluster.forYoungest();",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:callout]
+{
+  "type": "success",
+  "body": "Once the cluster group is obtained, you can use it for executing tasks, deploying services, sending messages, and more."
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/clustering/network-config.md
----------------------------------------------------------------------
diff --git a/docs/wiki/clustering/network-config.md b/docs/wiki/clustering/network-config.md
new file mode 100755
index 0000000..7611c50
--- /dev/null
+++ b/docs/wiki/clustering/network-config.md
@@ -0,0 +1,101 @@
+`CommunicationSpi` provides the basic plumbing to send and receive grid messages, and is utilized for all distributed grid operations, such as task execution, monitoring data exchange, distributed event querying, and others. Ignite provides `TcpCommunicationSpi` as the default implementation of `CommunicationSpi`, which uses TCP/IP to communicate with other nodes.
+
+To enable communication with other nodes, `TcpCommunicationSpi` adds the `TcpCommunicationSpi.ATTR_ADDRS` and `TcpCommunicationSpi.ATTR_PORT` local node attributes. At startup, this SPI tries to listen on the local port specified by the `TcpCommunicationSpi.setLocalPort(int)` method. If the local port is occupied, the SPI will automatically increment the port number until it can successfully bind. The `TcpCommunicationSpi.setLocalPortRange(int)` configuration parameter controls the maximum number of ports that the SPI will try before it fails. 
+[block:callout]
+{
+  "type": "info",
+  "body": "The port range comes in very handy when starting multiple grid nodes on the same machine, or even in the same VM. In this case, all nodes can be brought up without a single change in configuration.",
+  "title": "Local Port Range"
+}
+[/block]
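For example, the local port and port range might be configured as in the following sketch (the values shown simply restate the defaults from the table below):

```java
TcpCommunicationSpi commSpi = new TcpCommunicationSpi();

// Bind to port 47100 (the default); if it is occupied, try
// up to 100 subsequent ports before failing.
commSpi.setLocalPort(47100);
commSpi.setLocalPortRange(100);

IgniteConfiguration cfg = new IgniteConfiguration();

// Override default communication SPI.
cfg.setCommunicationSpi(commSpi);
```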
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Configuration"
+}
+[/block]
+The following configuration parameters can be optionally set on `TcpCommunicationSpi`:
+[block:parameters]
+{
+  "data": {
+    "h-0": "Setter Method",
+    "h-1": "Description",
+    "h-2": "Default",
+    "0-0": "`setLocalAddress(String)`",
+    "0-1": "Sets local host address for socket binding.",
+    "0-2": "Any available local IP address.",
+    "1-0": "`setLocalPort(int)`",
+    "2-0": "`setLocalPortRange(int)`",
+    "3-0": "`setTcpNoDelay(boolean)`",
+    "4-0": "`setConnectTimeout(long)`",
+    "5-0": "`setIdleConnectionTimeout(long)`",
+    "6-0": "`setBufferSizeRatio(double)`",
+    "7-0": "`setMinimumBufferedMessageCount(int)`",
+    "8-0": "`setDualSocketConnection(boolean)`",
+    "9-0": "`setSpiPortResolver(GridSpiPortResolver)`",
+    "10-0": "`setConnectionBufferSize(int)`",
+    "11-0": "`setSelectorsCount(int)`",
+    "12-0": "`setConnectionBufferFlushFrequency(long)`",
+    "13-0": "`setDirectBuffer(boolean)`",
+    "14-0": "`setDirectSendBuffer(boolean)`",
+    "15-0": "`setAsyncSend(boolean)`",
+    "16-0": "`setSharedMemoryPort(int)`",
+    "17-0": "`setSocketReceiveBuffer(int)`",
+    "18-0": "`setSocketSendBuffer(int)`",
+    "1-1": "Sets local port for socket binding.",
+    "1-2": "47100",
+    "2-1": "Controls maximum number of local ports tried if all previously tried ports are occupied.",
+    "2-2": "100",
+    "3-1": "Sets value for `TCP_NODELAY` socket option. Each socket accepted or created will use the provided value.\nThis should be set to true (default) to reduce request/response time during communication over the TCP protocol. In most cases we do not recommend changing this option.",
+    "3-2": "true",
+    "4-1": "Sets connect timeout used when establishing connection with remote nodes.",
+    "4-2": "1000",
+    "5-1": "Sets maximum idle connection timeout upon which a connection to a client will be closed.",
+    "5-2": "30000",
+    "6-1": "Sets the buffer size ratio for this SPI. As messages are sent, the buffer size is adjusted using this ratio.",
+    "6-2": "0.8 or `IGNITE_COMMUNICATION_BUF_RESIZE_RATIO` system property value, if set.",
+    "7-1": "Sets the minimum number of messages for this SPI, that are buffered prior to sending.",
+    "7-2": "512 or `IGNITE_MIN_BUFFERED_COMMUNICATION_MSG_CNT` system property value, if set.",
+    "8-1": "Sets flag indicating whether dual-socket connection between nodes should be enforced. If set to true, two separate connections will be established between communicating nodes: one for outgoing messages, and one for incoming. When set to false, single TCP connection will be used for both directions.\nThis flag is useful on some operating systems, when TCP_NODELAY flag is disabled and messages take too long to get delivered.",
+    "8-2": "false",
+    "9-1": "Sets port resolver for internal-to-external port mapping. In some cases network routers are configured to perform port mapping between external and internal networks and the same mapping must be available to SPIs in GridGain that perform communication over IP protocols.",
+    "9-2": "null",
+    "10-1": "This parameter is used only when `setAsyncSend(boolean)` is set to false. \n\nSets connection buffer size for synchronous connections. Increase buffer size if using synchronous send and sending large amount of small sized messages. However, most of the time this should be set to 0 (default).",
+    "10-2": "0",
+    "11-1": "Sets the count of selectors to be used in TCP server.",
+    "11-2": "`Math.min(4, Runtime.getRuntime().availableProcessors())`",
+    "12-1": "This parameter is used only when `setAsyncSend(boolean)` is set to false. \n\nSets connection buffer flush frequency in milliseconds. This parameter makes sense only for synchronous send when the connection buffer size is not 0. The buffer will be flushed once within the specified period if there are not enough messages to flush it automatically.",
+    "12-2": "100",
+    "13-1": "Switches between using NIO direct and NIO heap allocation buffers. Although direct buffers perform better, in some cases (especially on Windows) they may cause JVM crashes. If that happens in your environment, set this property to false.",
+    "13-2": "true",
+    "14-1": "Switches between using NIO direct and NIO heap allocation buffers usage for message sending in asynchronous mode.",
+    "14-2": "false",
+    "15-1": "Switches between synchronous and asynchronous message sending.\nThis should be set to true (default) if grid nodes send large amount of data over network from multiple threads, however this maybe environment and application specific and we recommend to benchmark the application in both modes.",
+    "15-2": "true",
+    "16-1": "Sets port which will be used by `IpcSharedMemoryServerEndpoint`. \nNodes started on the same host will communicate over IPC shared memory (only for Linux and MacOS hosts). Set this to -1 to disable IPC shared memory communication.",
+    "16-2": "48100",
+    "17-1": "Sets receive buffer size for sockets created or accepted by this SPI. If not provided, default is 0 which leaves buffer unchanged after socket creation (i.e. uses Operating System default value).",
+    "17-2": "0",
+    "18-1": "Sets send buffer size for sockets created or accepted by this SPI. If not provided, default is 0 which leaves the buffer unchanged after socket creation (i.e. uses Operating System default value).",
+    "18-2": "0"
+  },
+  "cols": 3,
+  "rows": 19
+}
+[/block]
+##Example 
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  ...\n  <property name=\"communicationSpi\">\n    <bean class=\"org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi\">\n      <!-- Override local port. -->\n      <property name=\"localPort\" value=\"4321\"/>\n    </bean>\n  </property>\n  ...\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "TcpCommunicationSpi commSpi = new TcpCommunicationSpi();\n \n// Override local port.\ncommSpi.setLocalPort(4321);\n \nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override default communication SPI.\ncfg.setCommunicationSpi(commSpi);\n \n// Start grid.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/clustering/node-local-map.md
----------------------------------------------------------------------
diff --git a/docs/wiki/clustering/node-local-map.md b/docs/wiki/clustering/node-local-map.md
new file mode 100755
index 0000000..e067e2e
--- /dev/null
+++ b/docs/wiki/clustering/node-local-map.md
@@ -0,0 +1,35 @@
+Often it is useful to share state between different compute jobs or different deployed services. For this purpose, Ignite provides a shared concurrent **node-local-map** available on each node.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "IgniteCluster cluster = ignite.cluster();\n\nConcurrentMap<String, Integer> nodeLocalMap = cluster.nodeLocalMap();",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+Node-local values are similar to thread locals in that they are not distributed and are kept only on the local node. Node-local data can be used by compute jobs to share state between executions, and by deployed services as well. 
+
+As an example, let's create a job which increments a node-local counter every time it executes on some node. This way, the node-local counter on each node will tell us how many times the job has executed on that cluster node. 
+[block:code]
+{
+  "codes": [
+    {
+      "code": "private IgniteCallable<Long> job = new IgniteCallable<Long>() {\n  @IgniteInstanceResource\n  private Ignite ignite;\n  \n  @Override \n  public Long call() {                  \n    // Get a reference to node local.\n    ConcurrentMap<String, AtomicLong> nodeLocalMap = ignite.cluster().nodeLocalMap();\n\n    AtomicLong cntr = nodeLocalMap.get(\"counter\");\n\n    if (cntr == null) {\n      AtomicLong old = nodeLocalMap.putIfAbsent(\"counter\", cntr = new AtomicLong());\n      \n      if (old != null)\n        cntr = old;\n    }\n    \n    return cntr.incrementAndGet();\n  }\n};",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+Now let's execute this job 2 times on the same node and make sure that the value of the counter is 2.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "ClusterGroup random = ignite.cluster().forRandom();\n\nIgniteCompute compute = ignite.compute(random);\n\n// The first time the counter on the picked node will be initialized to 1.\nLong res = compute.call(job);\n\nassert res == 1;\n\n// Now the counter will be incremented and will have value 2.\nres = compute.call(job);\n\nassert res == 2;",
+      "language": "java"
+    }
+  ]
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/compute-grid/checkpointing.md
----------------------------------------------------------------------
diff --git a/docs/wiki/compute-grid/checkpointing.md b/docs/wiki/compute-grid/checkpointing.md
new file mode 100755
index 0000000..0cc287a
--- /dev/null
+++ b/docs/wiki/compute-grid/checkpointing.md
@@ -0,0 +1,238 @@
+Checkpointing provides the ability to save an intermediate job state. It can be useful when long-running jobs need to store some intermediate state to protect against node failures. Then, on restart of a failed node, a job would load the saved checkpoint and continue from where it left off. The only requirement is that the checkpointed state implement the `java.io.Serializable` interface.
+
+Checkpoints are available through the following methods on the `ComputeTaskSession` interface:
+* `ComputeTaskSession.loadCheckpoint(String)`
+* `ComputeTaskSession.removeCheckpoint(String)`
+* `ComputeTaskSession.saveCheckpoint(String, Object)`
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Master Node Failure Protection"
+}
+[/block]
+One important use case for checkpoints that is not readily apparent is guarding against failure of the "master" node - the node that started the original execution. When the master node fails, Ignite no longer has anywhere to send the results of job execution, and thus the results will be discarded.
+
+To protect against this scenario, you can store the final result of a job execution as a checkpoint and have your logic re-run the entire task in case of a "master" node failure. In that case, the task re-run will be much faster since all jobs can start from their saved checkpoints.
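This pattern might be sketched as follows (`doExpensiveWork()` is a hypothetical computation, and the checkpoint key `"FINAL"` is an arbitrary example):

```java
IgniteCallable<Object> job = new IgniteCallable<Object>() {
    // Task session is injected by Ignite when the job is deployed.
    @TaskSessionResource
    private ComputeTaskSession ses;

    @Override public Object call() throws Exception {
        // Re-use the final result if this task is a re-run.
        Object result = ses.loadCheckpoint("FINAL");

        if (result == null) {
            result = doExpensiveWork(); // Hypothetical computation.

            // Persist the result so a task re-run can skip the work.
            ses.saveCheckpoint("FINAL", result);
        }

        return result;
    }
};
```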
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Setting Checkpoints"
+}
+[/block]
+Every compute job can periodically *checkpoint* itself by calling the `ComputeTaskSession.saveCheckpoint(...)` method.
+
+If a job saved a checkpoint, then at the beginning of its execution it should check whether a checkpoint is available and resume from the last saved state.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "IgniteCompute compute = ignite.compute();\n\ncompute.run(new IgniteRunnable() {\n  // Task session (injected on closure instantiation).\n  @TaskSessionResource\n  private ComputeTaskSession ses;\n\n  @Override \n  public void run() {\n    // Try to retrieve step1 result.\n    Object res1 = ses.loadCheckpoint(\"STEP1\");\n\n    if (res1 == null) {\n      res1 = computeStep1(); // Do some computation.\n\n      // Save step1 result.\n      ses.saveCheckpoint(\"STEP1\", res1);\n    }\n\n    // Try to retrieve step2 result.\n    Object res2 = ses.loadCheckpoint(\"STEP2\");\n\n    if (res2 == null) {\n      res2 = computeStep2(res1); // Do some computation.\n\n      // Save step2 result.\n      ses.saveCheckpoint(\"STEP2\", res2);\n    }\n\n    ...\n  }\n});",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "CheckpointSpi"
+}
+[/block]
+In Ignite, checkpointing functionality is provided by `CheckpointSpi` which has the following out-of-the-box implementations:
+[block:parameters]
+{
+  "data": {
+    "h-0": "Class",
+    "h-1": "Description",
+    "0-0": "[SharedFsCheckpointSpi](#file-system-checkpoint-configuration)\n(default)",
+    "0-1": "This implementation uses a shared file system to store checkpoints.",
+    "1-0": "[CacheCheckpointSpi](#cache-checkpoint-configuration)",
+    "1-1": "This implementation uses a cache to store checkpoints.",
+    "2-0": "[JdbcCheckpointSpi](#database-checkpoint-configuration)",
+    "2-1": "This implementation uses a database to store checkpoints.",
+    "3-1": "This implementation uses Amazon S3 to store checkpoints.",
+    "3-0": "[S3CheckpointSpi](#amazon-s3-checkpoint-configuration)"
+  },
+  "cols": 2,
+  "rows": 4
+}
+[/block]
+`CheckpointSpi` is provided in `IgniteConfiguration` and passed to the `Ignition` class at startup. 
+[block:api-header]
+{
+  "type": "basic",
+  "title": "File System Checkpoint Configuration"
+}
+[/block]
+The following configuration parameters can be used to configure `SharedFsCheckpointSpi`:
+[block:parameters]
+{
+  "data": {
+    "h-0": "Setter Method",
+    "h-1": "Description",
+    "h-2": "Default",
+    "0-0": "`setDirectoryPaths(Collection)`",
+    "0-1": "Sets directory paths to the shared folders where checkpoints are stored. The path can either be absolute or relative to the path specified by the `IGNITE_HOME` environment variable or system property.",
+    "0-2": "`IGNITE_HOME/work/cp/sharedfs`"
+  },
+  "cols": 3,
+  "rows": 1
+}
+[/block]
+
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\" singleton=\"true\">\n  ...\n  <property name=\"checkpointSpi\">\n    <bean class=\"org.apache.ignite.spi.checkpoint.sharedfs.SharedFsCheckpointSpi\">\n    <!-- Change to shared directory path in your environment. -->\n      <property name=\"directoryPaths\">\n        <list>\n          <value>/my/directory/path</value>\n          <value>/other/directory/path</value>\n        </list>\n      </property>\n    </bean>\n  </property>\n  ...\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "IgniteConfiguration cfg = new IgniteConfiguration();\n \nSharedFsCheckpointSpi checkpointSpi = new SharedFsCheckpointSpi();\n \n// List of checkpoint directories where all files are stored.\nCollection<String> dirPaths = new ArrayList<String>();\n \ndirPaths.add(\"/my/directory/path\");\ndirPaths.add(\"/other/directory/path\");\n \n// Override default directory path.\ncheckpointSpi.setDirectoryPaths(dirPaths);\n \n// Override default checkpoint SPI.\ncfg.setCheckpointSpi(checkpointSpi);\n \n// Starts Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Cache Checkpoint Configuration"
+}
+[/block]
+`CacheCheckpointSpi` is a cache-based implementation for checkpoint SPI. Checkpoint data will be stored in the Ignite data grid in a pre-configured cache. 
+
+The following configuration parameters can be used to configure `CacheCheckpointSpi`:
+[block:parameters]
+{
+  "data": {
+    "h-0": "Setter Method",
+    "h-1": "Description",
+    "h-2": "Default",
+    "0-0": "`setCacheName(String)`",
+    "0-1": "Sets cache name to use for storing checkpoints.",
+    "0-2": "`checkpoints`"
+  },
+  "cols": 3,
+  "rows": 1
+}
+[/block]
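By analogy with the other SPI examples on this page, a configuration sketch might look like this (the cache name `myCheckpointCache` is an assumed example):

```java
CacheCheckpointSpi checkpointSpi = new CacheCheckpointSpi();

// Store checkpoints in a pre-configured cache (name is an example).
checkpointSpi.setCacheName("myCheckpointCache");

IgniteConfiguration cfg = new IgniteConfiguration();

// Override default checkpoint SPI.
cfg.setCheckpointSpi(checkpointSpi);

// Start Ignite node.
Ignition.start(cfg);
```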
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Database Checkpoint Configuration"
+}
+[/block]
+`JdbcCheckpointSpi` uses a database to store checkpoints. All checkpoints are stored in a database table and are available from all nodes in the grid. Note that every node must have access to the database. A job state can be saved on one node and loaded on another (e.g., if a job is restarted on a different node after a node failure).
+
+The following configuration parameters can be used to configure `JdbcCheckpointSpi` (all are optional):
+[block:parameters]
+{
+  "data": {
+    "h-0": "Setter Method",
+    "h-1": "Description",
+    "h-2": "Default",
+    "0-0": "`setDataSource(DataSource)`",
+    "0-1": "Sets DataSource to use for database access.",
+    "0-2": "No value",
+    "1-0": "`setCheckpointTableName(String)`",
+    "1-1": "Sets checkpoint table name.",
+    "1-2": "`CHECKPOINTS`",
+    "2-0": "`setKeyFieldName(String)`",
+    "2-1": "Sets checkpoint key field name.",
+    "2-2": "`NAME`",
+    "3-0": "`setKeyFieldType(String)`",
+    "3-1": "Sets checkpoint key field type. The field should have corresponding SQL string type (`VARCHAR` , for example).",
+    "3-2": "`VARCHAR(256)`",
+    "4-0": "`setValueFieldName(String)`",
+    "4-1": "Sets checkpoint value field name.",
+    "4-2": "`VALUE`",
+    "5-0": "`setValueFieldType(String)`",
+    "5-1": "Sets checkpoint value field type. Note that the field should have a corresponding SQL BLOB type. The default value, BLOB, won't work for all databases. For example, for HSQLDB the type should be `longvarbinary`.",
+    "5-2": "`BLOB`",
+    "6-0": "`setExpireDateFieldName(String)`",
+    "6-1": "Sets checkpoint expiration date field name.",
+    "6-2": "`EXPIRE_DATE`",
+    "7-0": "`setExpireDateFieldType(String)`",
+    "7-1": "Sets checkpoint expiration date field type. The field should have corresponding SQL `DATETIME` type.",
+    "7-2": "`DATETIME`",
+    "8-0": "`setNumberOfRetries(int)`",
+    "8-1": "Sets number of retries in case of any database errors.",
+    "8-2": "2",
+    "9-0": "`setUser(String)`",
+    "9-1": "Sets checkpoint database user name. Note that authentication will be performed only if both user and password are set.",
+    "9-2": "No value",
+    "10-0": "`setPassword(String)`",
+    "10-1": "Sets checkpoint database password.",
+    "10-2": "No value"
+  },
+  "cols": 3,
+  "rows": 11
+}
+[/block]
+##Apache DBCP
+The [Apache DBCP](http://commons.apache.org/proper/commons-dbcp/) project provides various wrappers for data sources and connection pools. You can use these wrappers as Spring beans to configure this SPI from a Spring configuration file or code. Refer to the Apache DBCP project documentation for more information.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\" singleton=\"true\">\n  ...\n  <property name=\"checkpointSpi\">\n    <bean class=\"org.apache.ignite.spi.checkpoint.database.JdbcCheckpointSpi\">\n      <property name=\"dataSource\">\n        <ref bean=\"anyPooledDataSourceBean\"/>\n      </property>\n      <property name=\"checkpointTableName\" value=\"CHECKPOINTS\"/>\n      <property name=\"user\" value=\"test\"/>\n      <property name=\"password\" value=\"test\"/>\n    </bean>\n  </property>\n  ...\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "JdbcCheckpointSpi checkpointSpi = new JdbcCheckpointSpi();\n \njavax.sql.DataSource ds = ... // Set datasource.\n \n// Set database checkpoint SPI parameters.\ncheckpointSpi.setDataSource(ds);\ncheckpointSpi.setUser(\"test\");\ncheckpointSpi.setPassword(\"test\");\n \nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override default checkpoint SPI.\ncfg.setCheckpointSpi(checkpointSpi);\n \n// Start Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Amazon S3 Checkpoint Configuration"
+}
+[/block]
+`S3CheckpointSpi` uses Amazon S3 storage to store checkpoints. For information about Amazon S3 visit [http://aws.amazon.com/](http://aws.amazon.com/).
+
+The following configuration parameters can be used to configure `S3CheckpointSpi`:
+[block:parameters]
+{
+  "data": {
+    "h-0": "Setter Method",
+    "h-1": "Description",
+    "h-2": "Default",
+    "0-0": "`setAwsCredentials(AWSCredentials)`",
+    "0-1": "Sets AWS credentials to use for storing checkpoints.",
+    "0-2": "No value (must be provided)",
+    "1-0": "`setClientConfiguration(ClientConfiguration)`",
+    "1-1": "Sets AWS client configuration.",
+    "1-2": "No value",
+    "2-0": "`setBucketNameSuffix(String)`",
+    "2-1": "Sets bucket name suffix.",
+    "2-2": "default-bucket"
+  },
+  "cols": 3,
+  "rows": 3
+}
+[/block]
+
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\" singleton=\"true\">\n  ...\n  <property name=\"checkpointSpi\">\n    <bean class=\"org.apache.ignite.spi.checkpoint.s3.S3CheckpointSpi\">\n      <property name=\"awsCredentials\">\n        <bean class=\"com.amazonaws.auth.BasicAWSCredentials\">\n          <constructor-arg value=\"YOUR_ACCESS_KEY_ID\" />\n          <constructor-arg value=\"YOUR_SECRET_ACCESS_KEY\" />\n        </bean>\n      </property>\n    </bean>\n  </property>\n  ...\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "IgniteConfiguration cfg = new IgniteConfiguration();\n \nS3CheckpointSpi spi = new S3CheckpointSpi();\n \nAWSCredentials cred = new BasicAWSCredentials(YOUR_ACCESS_KEY_ID, YOUR_SECRET_ACCESS_KEY);\n \nspi.setAwsCredentials(cred);\n \nspi.setBucketNameSuffix(\"checkpoints\");\n \n// Override default checkpoint SPI.\ncfg.setCheckpointSpi(spi);\n \n// Start Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/compute-grid/collocate-compute-and-data.md
----------------------------------------------------------------------
diff --git a/docs/wiki/compute-grid/collocate-compute-and-data.md b/docs/wiki/compute-grid/collocate-compute-and-data.md
new file mode 100755
index 0000000..cb69d25
--- /dev/null
+++ b/docs/wiki/compute-grid/collocate-compute-and-data.md
@@ -0,0 +1,29 @@
+Collocating computations with data minimizes data serialization over the network and can significantly improve the performance and scalability of your application. Whenever possible, you should always make your best effort to colocate computations with the cluster nodes that cache the data to be processed.
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Affinity Call and Run Methods"
+}
+[/block]
+`affinityCall(...)` and `affinityRun(...)` methods co-locate jobs with the nodes on which data is cached. In other words, given a cache name and an affinity key, these methods try to locate the node on which the key resides in the specified Ignite cache, and then execute the job there. 
+[block:code]
+{
+  "codes": [
+    {
+      "code": "IgniteCache<Integer, String> cache = ignite.cache(CACHE_NAME);\n\nIgniteCompute compute = ignite.compute();\n\nfor (int key = 0; key < KEY_CNT; key++) {\n    // This closure will execute on the remote node where\n    // data with the 'key' is located.\n    compute.affinityRun(CACHE_NAME, key, () -> { \n        // Peek is a local memory lookup.\n        System.out.println(\"Co-located [key= \" + key + \", value= \" + cache.peek(key) +']');\n    });\n}",
+      "language": "java",
+      "name": "affinityRun"
+    },
+    {
+      "code": "IgniteCache<Integer, String> cache = ignite.cache(CACHE_NAME);\n\nIgniteCompute asyncCompute = ignite.compute().withAsync();\n\nList<IgniteFuture<?>> futs = new ArrayList<>();\n\nfor (int key = 0; key < KEY_CNT; key++) {\n    // This closure will execute on the remote node where\n    // data with the 'key' is located.\n    asyncCompute.affinityRun(CACHE_NAME, key, () -> { \n        // Peek is a local memory lookup.\n        System.out.println(\"Co-located [key= \" + key + \", value= \" + cache.peek(key) +']');\n    });\n  \n    futs.add(asyncCompute.future());\n}\n\n// Wait for all futures to complete.\nfuts.stream().forEach(IgniteFuture::get);",
+      "language": "java",
+      "name": "async affinityRun"
+    },
+    {
+      "code": "final IgniteCache<Integer, String> cache = ignite.cache(CACHE_NAME);\n\nIgniteCompute compute = ignite.compute();\n\nfor (int i = 0; i < KEY_CNT; i++) {\n    final int key = i;\n \n    // This closure will execute on the remote node where\n    // data with the 'key' is located.\n    compute.affinityRun(CACHE_NAME, key, new IgniteRunnable() {\n        @Override public void run() {\n            // Peek is a local memory lookup.\n            System.out.println(\"Co-located [key= \" + key + \", value= \" + cache.peek(key) +']');\n        }\n    });\n}",
+      "language": "java",
+      "name": "java7 affinityRun"
+    }
+  ]
+}
+[/block]
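The section above also mentions `affinityCall(...)`, which works the same way as `affinityRun(...)` but returns a value from the remote node. A sketch, assuming the same `CACHE_NAME` cache:

```java
IgniteCache<Integer, String> cache = ignite.cache(CACHE_NAME);

IgniteCompute compute = ignite.compute();

// This closure executes on the node where key 1 is cached
// and returns the locally stored value to the caller.
String val = compute.affinityCall(CACHE_NAME, 1, () -> cache.peek(1));

System.out.println("Co-located value: " + val);
```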
\ No newline at end of file

