ignite-user mailing list archives

From Paolo Di Tommaso <paolo.ditomm...@gmail.com>
Subject Re: How deploy Ignite workers in a Spark cluster
Date Wed, 15 Jun 2016 20:12:22 GMT
Hi,

I'm using a local Spark cluster made up of one master and one worker.

Using that version of the script the exception is not raised. But it is
confusing me even more, because the application run is not reported in the
Spark console. It looks like it is running on the master node. Does that make
sense? You can find the output produced at this link
<http://pastebin.com/hAN0iWr6>.

My goal is to deploy an Ignite worker on *each* Spark node available in the
Spark cluster, run a hybrid application based on Spark+Ignite, and
shut down the Ignite workers on completion.

What is the recommended approach to implement that?
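
To make it concrete, this is roughly the shape I have in mind (just a
sketch, not my actual app: the class name and the "sharedCache" cache name
are placeholders, and IgniteConfigProvider is the configuration closure from
your earlier example):

import org.apache.ignite.spark.JavaIgniteContext;
import org.apache.ignite.spark.JavaIgniteRDD;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class HybridApp {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setAppName("spark-ignite-hybrid"));

        // false here: Ignite nodes are started automatically inside the
        // Spark executors (embedded mode), as suggested earlier in the thread
        JavaIgniteContext<String, String> ic =
                new JavaIgniteContext<>(sc, new IgniteConfigProvider(), false);

        // the hybrid Spark + Ignite work goes here, e.g. through a shared IgniteRDD
        JavaIgniteRDD<String, String> sharedRdd = ic.fromCache("sharedCache");
        System.out.println("shared cache size: " + sharedRdd.count());

        // on completion, shut down the Ignite workers together with the app
        ic.close(true);
        sc.stop();
    }
}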


Thanks,
Paolo


On Wed, Jun 15, 2016 at 6:13 PM, Alexei Scherbakov <
alexey.scherbakoff@gmail.com> wrote:

> Your understanding is correct.
>
> How many nodes do you have?
>
> Please provide full logs from the started Ignite instances.
>
>
>
> 2016-06-15 18:34 GMT+03:00 Paolo Di Tommaso <paolo.ditommaso@gmail.com>:
>
>> OK, using `ic.close(false)` instead of `ic.close(true)` that exception is
>> no longer reported.
>>
>> However I'm a bit confused. The close argument is named
>> `shutdownIgniteOnWorkers`, so I was thinking it needs to be set to true
>> to shut down the Ignite daemons when the app terminates.
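>>
>> To spell out my current reading of it (just a guess, please correct me
>> if this is wrong):
>>
>> // shutdownIgniteOnWorkers = true: also stop the Ignite nodes running on the Spark workers
>> ic.close(true);
>>
>> // shutdownIgniteOnWorkers = false: only close this local context, leave the worker nodes running
>> ic.close(false);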
>>
>> How is that flag supposed to be used?
>>
>>
>> Cheers,
>> Paolo
>>
>>
>> On Wed, Jun 15, 2016 at 5:06 PM, Paolo Di Tommaso <
>> paolo.ditommaso@gmail.com> wrote:
>>
>>> The version is 1.6.0#20160518-sha1:0b22c45b and the following is the
>>> script I'm using.
>>>
>>>
>>> https://github.com/pditommaso/gspark/blob/master/src/main/groovy/org/apache/ignite/examples/JavaIgniteSimpleApp.java
>>>
>>>
>>>
>>> Cheers, p
>>>
>>>
>>> On Wed, Jun 15, 2016 at 5:00 PM, Alexei Scherbakov <
>>> alexey.scherbakoff@gmail.com> wrote:
>>>
>>>> I don't think it's OK.
>>>>
>>>> Which Ignite version do you use?
>>>>
>>>> 2016-06-15 15:35 GMT+03:00 Paolo Di Tommaso <paolo.ditommaso@gmail.com>
>>>> :
>>>>
>>>>> Great, now it works! Thanks a lot.
>>>>>
>>>>>
>>>>> I only get an NPE during the application shutdown (you can find the
>>>>> stack trace at this link <http://pastebin.com/y0EM7qXU>). Is this
>>>>> normal? And in any case, is there a way to avoid it?
>>>>>
>>>>>
>>>>> Cheers,
>>>>> Paolo
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Jun 15, 2016 at 1:25 PM, Alexei Scherbakov <
>>>>> alexey.scherbakoff@gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> To automatically start Ignite nodes you must pass false as the third
>>>>>> IgniteContext argument, like:
>>>>>>
>>>>>> // java
>>>>>> JavaSparkContext sc = ...
>>>>>> new JavaIgniteContext<>(sc, new IgniteConfigProvider(), false);
>>>>>>
>>>>>> or
>>>>>>
>>>>>> // scala
>>>>>> val sc: SparkContext = ...
>>>>>> new IgniteContext[String, String](sc, () ⇒ configurationClo(), false)
>>>>>>
>>>>>> 2016-06-15 13:31 GMT+03:00 Paolo Di Tommaso <
>>>>>> paolo.ditommaso@gmail.com>:
>>>>>>
>>>>>>> Hi all,
>>>>>>>
>>>>>>> I'm struggling to deploy an Ignite application in a Spark (local)
>>>>>>> cluster using the embedded deployment described at this link
>>>>>>> <https://apacheignite-fs.readme.io/docs/installation-deployment#embedded-deployment>.
>>>>>>>
>>>>>>>
>>>>>>> The documentation seems to suggest that Ignite workers are
>>>>>>> automatically instantiated at runtime when the Ignite app is
>>>>>>> submitted.
>>>>>>>
>>>>>>> Could you please confirm that this is the expected behaviour?
>>>>>>>
>>>>>>>
>>>>>>> In my tests, when the application starts it simply hangs,
>>>>>>> reporting this warning message:
>>>>>>>
>>>>>>> WARN  org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi  - Failed
>>>>>>> to connect to any address from IP finder (will retry to join topology
>>>>>>> every 2 secs): [/192.168.1.36:47500, /192.168.99.1:47500]
>>>>>>>
>>>>>>> It looks like there are no Ignite daemons to connect to. Also,
>>>>>>> inspecting the Spark worker log I'm unable to find any messages
>>>>>>> produced by Ignite. I was expecting instead to find the log messages
>>>>>>> produced by the Ignite daemon startup.
>>>>>>>
>>>>>>>
>>>>>>> Any idea what's wrong?
>>>>>>>
>>>>>>>
>>>>>>> Cheers,
>>>>>>> Paolo
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>>
>>>>>> Best regards,
>>>>>> Alexei Scherbakov
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> Best regards,
>>>> Alexei Scherbakov
>>>>
>>>
>>>
>>
>
>
> --
>
> Best regards,
> Alexei Scherbakov
>
