ignite-user mailing list archives

From Nikolai Tikhonov <ntikho...@apache.org>
Subject Re: Ignite for Spark on YARN Deployment
Date Wed, 08 Jun 2016 16:30:35 GMT
Hi Hongmei Zong!

The client node started from IgniteContext can't find the server nodes. By
default the Ignite integration with YARN uses TcpDiscoveryVmIpFinder (unless
you specify another IP finder in your configuration).
In that case you should set the same IP finder in the configuration you pass
to IgniteContext. The following code snippet shows how to do it in Java:

import java.util.Arrays;

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

IgniteConfiguration cfg = new IgniteConfiguration();

// Static IP finder: list the hosts (and discovery port range) where the
// Ignite server nodes started by YARN can be reached.
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList(
    "your_address1:47500..47510",
    "your_address2:47500..47510",
    "your_address3:47500..47510"));

TcpDiscoverySpi spi = new TcpDiscoverySpi();
spi.setIpFinder(ipFinder);

cfg.setDiscoverySpi(spi);
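
In the Scala spark-shell the same configuration can be supplied through the
closure you pass to IgniteContext. A minimal sketch (the your_addressN
entries are placeholders for the hosts where YARN starts the Ignite
containers):

import java.util.Arrays

import org.apache.ignite.configuration.IgniteConfiguration
import org.apache.ignite.spark.IgniteContext
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder

// Same discovery configuration as the Java snippet above.
def igniteCfg(): IgniteConfiguration = {
  val ipFinder = new TcpDiscoveryVmIpFinder()
  ipFinder.setAddresses(Arrays.asList(
    "your_address1:47500..47510",
    "your_address2:47500..47510",
    "your_address3:47500..47510"))

  val spi = new TcpDiscoverySpi()
  spi.setIpFinder(ipFinder)

  val cfg = new IgniteConfiguration()
  cfg.setDiscoverySpi(spi)
  cfg
}

// sc is the SparkContext that spark-shell has already created.
val ic = new IgniteContext[Integer, Integer](sc, () => igniteCfg())

The address list should cover every host where YARN may start an Ignite
container (47500..47510 is the usual discovery port range used in the
examples).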


On Wed, Jun 8, 2016 at 6:31 PM, Hongmei Zong <hongmei@kayak.com> wrote:

> Hi Denis,
>
> I tried testing Ignite with the following steps:
>
> Background information:
> 1. Our Spark runs on a YARN deployment; there are three master hosts, many
> worker nodes, and three client nodes in the Spark cluster.
> 2. I installed Ignite on one of the client nodes and can launch the Ignite
> shell locally on that node. For testing purposes, I did not install Ignite
> on any master or worker nodes.
> 3. I logged into the client node where Ignite is installed and launched the
> Ignite YARN application with the following command:
>
> hadoop jar
> /u/hongmei/apache-ignite/libs/optional/ignite-yarn/ignite-yarn-1.6.0.jar
> /u/hongmei/apache-ignite/libs/optional/ignite-yarn/ignite-yarn-1.6.0.jar
> /u/hongmei/apache-ignite/config/cluster.properties
>
> I opened the UI console for the Ignite YARN application; it shows the
> application as RUNNING, with containers, CPU cores, and memory all
> allocated by YARN. The console shows the following entry:
>
> application_1464374946035_26956 (user hongmei, name "ignition", type YARN,
> queue default, started Wed Jun 8 10:21:04 -0400 2016): state RUNNING, final
> status UNDEFINED, 12 running containers, 12 allocated vcores, 34816 MB
> allocated memory.
>
> 4. I opened another terminal, logged into the same client node, and ran the
> following command to start the Spark shell with the Ignite libraries:
>
> /usr/bin/spark-shell \
>   --jars /u/hongmei/apache-ignite/libs/ignite-core-1.6.0.jar,\
> /u/hongmei/apache-ignite/libs/optional/ignite-spark/ignite-spark-1.6.0.jar,\
> /u/hongmei/apache-ignite/libs/cache-api-1.0.0.jar,\
> /u/hongmei/apache-ignite/libs/optional/ignite-log4j/ignite-log4j-1.6.0.jar,\
> /u/hongmei/apache-ignite/libs/optional/ignite-log4j/log4j-1.2.17.jar \
>   --packages org.apache.ignite:ignite-spark:1.6.0,org.apache.ignite:ignite-spring:1.6.0
>
> The Spark shell launched successfully, and I used these two commands to
> import the Ignite Spark classes:
>
> import org.apache.ignite.spark._
> import org.apache.ignite.configuration._
>
>
> Next I created an instance of IgniteContext with the following syntax:
>
> val ic = new IgniteContext[Integer, Integer](sc, () => new IgniteConfiguration())
>
>
> I got the following message:
>
> 16/06/08 10:28:27 WARN TcpDiscoverySpi: IP finder returned empty addresses
> list. Please check IP finder configuration and make sure multicast works on
> your network. Will retry every 2 secs.
>
> So I am stuck at this point.
>
> In the other scenario, when I run the command listed in step 4, the Spark
> shell cannot launch successfully: it stays with the status "Accepted" and
> never gets a chance to run.
>
> Any good suggestions? Is there anything wrong with my test procedure?
> Thanks in advance!
>
> Mei
>
>
> On Jun 7, 2016, at 10:26 AM, Denis Magda <dmagda@gridgain.com> wrote:
>
> Hi,
>
> I’m not an expert in this area; however, have you tried specifying a Spark
> master as the following documentation describes?
>
> https://apacheignite-fs.readme.io/docs/testing-integration-with-spark-shell#working-with-spark-shell
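>
> (For a YARN-managed cluster that would presumably mean passing the yarn
> master URL rather than the spark:// URL shown on that page, e.g. something
> like:)
>
> /usr/bin/spark-shell --master yarn --jars <your Ignite jars> \
>   --packages org.apache.ignite:ignite-spark:1.6.0,org.apache.ignite:ignite-spring:1.6.0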
>
> If you did try, please share the full logs and someone from the community
> will respond.
>
> —
> Denis
>
> On Jun 6, 2016, at 5:58 PM, Hongmei Zong <hongmei@kayak.com> wrote:
>
> Hi there,
>
> I would like to use "Ignite for Spark" to save the state of Spark jobs in
> memory so that it can be reused by later jobs. For shared deployment, the
> documentation only offers two ways to deploy an Ignite cluster: standalone
> deployment and Mesos deployment. But our Spark clusters run on YARN. My
> question is: is it possible to run Ignite for Spark on a YARN deployment?
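>
> The kind of usage I have in mind is roughly the following (the cache name
> and values here are just for illustration):
>
> import org.apache.ignite.spark._
> import org.apache.ignite.configuration._
>
> // Ignite context created from the spark-shell SparkContext.
> val ic = new IgniteContext[Integer, Integer](sc, () => new IgniteConfiguration())
>
> // Pairs written to the Ignite cache "sharedCache" stay in memory after
> // this job finishes, so a later job can read the same state back.
> val sharedRDD = ic.fromCache("sharedCache")
> sharedRDD.savePairs(sc.parallelize(1 to 100, 10).map(i => (Integer.valueOf(i), Integer.valueOf(i * i))))
>
> // In a later job / spark-shell session:
> //   ic.fromCache("sharedCache").count()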
>
> I downloaded and installed Ignite on my machine. Next, I referenced the
> link below for YARN deployment:
> http://apacheignite.gridgain.org/docs/yarn-deployment
>
> I created the cluster.properties file and ran the application using the
> following command:
>
> hadoop jar
> /u/hongmei/apache-ignite/libs/optional/ignite-yarn/ignite-yarn-1.6.0.jar
> /u/hongmei/apache-ignite/libs/optional/ignite-yarn/ignite-yarn-1.6.0.jar
> /u/hongmei/apache-ignite/config/cluster.properties
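>
> (For reference, a cluster.properties built from the template in the YARN
> deployment documentation looks roughly like this; the values below are
> placeholders, not the actual settings used here:)
>
> # Number of Ignite nodes (YARN containers) to start.
> IGNITE_NODE_COUNT=3
> # CPU cores per Ignite node.
> IGNITE_RUN_CPU_PER_NODE=2
> # Megabytes of RAM per Ignite node.
> IGNITE_MEMORY_PER_NODE=2048
> # Ignite version to run on the nodes.
> IGNITE_VERSION=1.6.0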
>
> From the YARN console, the Ignite YARN application looks fine: it shows
> RUNNING, and 16 containers are allocated to it.
>
> After this step, what should I do in order to run Spark with Ignite on a
> YARN deployment?
>
> Many Thanks!!!
>
> Mei
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Ignite-for-Spark-on-YARN-Deployment-tp5465.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
>
>
>
