ignite-user mailing list archives

From Denis Magda <dma...@gridgain.com>
Subject Re: Ignite - Spark integration
Date Thu, 16 Jun 2016 22:37:49 GMT
Hi, 

As I can see, you already got the answer in the following discussion:
How deploy Ignite workers in a Spark cluster <http://apache-ignite-users.70518.x6.nabble.com/How-deploy-Ignite-workers-in-a-Spark-cluster-tp5641.html>

Let’s keep discussing in one thread.
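For quick reference, below is a minimal sketch of the static discovery setup recommended earlier in this thread: wiring a TcpDiscoveryVmIpFinder into an IgniteConfiguration instead of relying on multicast. The addresses are taken from the log in this thread, and the port range assumes the default discovery ports; adjust both for your cluster.

```java
import java.util.Arrays;

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class StaticDiscoveryConfig {
    /** Builds an IgniteConfiguration that uses a static address list instead of multicast. */
    public static IgniteConfiguration create() {
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        // List the addresses of all cluster nodes. The port range covers
        // the default discovery ports (47500..47509).
        ipFinder.setAddresses(Arrays.asList(
            "192.168.1.36:47500..47509",
            "192.168.99.1:47500..47509"));

        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
        discoSpi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discoSpi);
        return cfg;
    }
}
```

Such a configuration can be returned from the IgniteOutClosure passed to JavaIgniteContext, so every Spark worker starts its Ignite node with the same static address list.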

—
Denis

> On Jun 13, 2016, at 12:40 PM, Paolo Di Tommaso <paolo.ditommaso@gmail.com> wrote:
> 
> Hi, 
> 
> Not sure that is the problem, because I'm deploying a local Ignite cluster and it works using multicast discovery.
> 
> However, I've tried using TcpDiscoveryVmIpFinder and providing the local addresses. It changes the warning message, but it continues to hang.
> 
> 12516 [tcp-client-disco-msg-worker-#4%null%] WARN  org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - Failed to connect to any address from IP finder (will retry to join topology every 2 secs): [/192.168.1.36:47500, /192.168.99.1:47500]
> 
> It looks to me that there's no Ignite daemon to connect to. I understood that an Ignite daemon is automatically launched in each Spark worker when using the embedded deployment mode (but I can't find any Ignite message in the Spark worker log).
> 
> 
> Have I missed something?
> 
> 
> Cheers, 
> Paolo
> 
> 
> 
> 
> On Mon, Jun 13, 2016 at 10:09 AM, Denis Magda <dmagda@gridgain.com> wrote:
> Hi Paolo,
> 
> The application hangs because the Ignite client node that is used by the Spark worker can't connect to the cluster:
> 
> 3797 [tcp-client-disco-msg-worker-#4%null%] WARN  org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi - IP finder returned empty addresses list. Please check IP finder configuration and make sure multicast works on your network. Will retry every 2 secs.
> 
> To fix the issue you have to use one of the IP finder implementations [1] that will let the cluster nodes find each other.
> One of the most common solutions is to use TcpDiscoveryVmIpFinder [2], listing the IPs of all the cluster nodes, and to set this IP finder on the IgniteConfiguration at node startup.
> 
> Also, you may want to refer to the following discussion [3], where the user also had an issue with IP finders initially.
> 
> [1] https://apacheignite.readme.io/docs/cluster-config
> [2] https://apacheignite.readme.io/v1.6/docs/cluster-config#static-ip-based-discovery
> [3] http://apache-ignite-users.70518.x6.nabble.com/Ignite-for-Spark-on-YARN-Deployment-tp5465.html
> 
> —
> Denis
> 
>> On Jun 12, 2016, at 4:24 PM, Paolo Di Tommaso <paolo.ditommaso@gmail.com> wrote:
>> 
>> Hi, 
>> 
>> I'm giving a try to the Spark integration provided by Ignite, using the embedded deployment mode described here <https://apacheignite-fs.readme.io/docs/installation-deployment>.
>> 
>> I've set up a local cluster made up of a master node and a worker node.
>> 
>> This is my basic Ignite-Spark application: 
>> 
>> public class JavaLaunchIgnite {
>> 
>>     static public void main(String... args) {
>>         // -- spark context
>>         SparkConf sparkConf = new SparkConf().setAppName("Spark-Ignite");
>>         JavaSparkContext sc = new JavaSparkContext(sparkConf);
>> 
>>         // -- ignite configuration
>>         IgniteOutClosure<IgniteConfiguration> cfg = new IgniteOutClosure<IgniteConfiguration>() {
>>             @Override public IgniteConfiguration apply() {
>>                 return new IgniteConfiguration();
>>             }};
>>         // -- ignite context
>>         JavaIgniteContext<Integer, Integer> ic = new JavaIgniteContext<Integer, Integer>(sc, cfg);
>>         final Ignite ignite = ic.ignite();
>>         ic.ignite().compute().broadcast(new IgniteRunnable() {
>>             @Override public void run() {
>>                 System.out.println(">>> Hello Node: " + ignite.cluster().localNode().id());
>>             }});
>> 
>>         ic.close(true);
>>         System.out.println(">>> DONE");
>>     }
>> }
>> 
>> However, when I submit it, it simply hangs. Using the Spark web console, I can see that the application is correctly deployed and running, but it never stops.
>> 
>> In the Spark worker node I can't find any log produced by Ignite (which is supposed to deploy an Ignite worker). See here <http://pastebin.com/KdEA0KUq>.
>> 
>> Instead, I can see the Ignite output in the spark-submit log. See here <http://pastebin.com/Ff6fxYBF>.
>> 
>> 
>> Does anybody have any clue why this app just hangs? 
>> 
>> 
>> Cheers,
>> Paolo
>> 
> 
> 

