ignite-issues mailing list archives

From "Ilya Suntsov (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (IGNITE-1199) Spark integration: problem when start spark-shell with --jars
Date Wed, 17 Aug 2016 10:49:20 GMT

     [ https://issues.apache.org/jira/browse/IGNITE-1199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ilya Suntsov updated IGNITE-1199:
---------------------------------
    Priority: Critical  (was: Blocker)

> Spark integration: problem when start spark-shell with --jars 
> --------------------------------------------------------------
>
>                 Key: IGNITE-1199
>                 URL: https://issues.apache.org/jira/browse/IGNITE-1199
>             Project: Ignite
>          Issue Type: Bug
>          Components: general
>    Affects Versions: ignite-1.4
>         Environment: CentOS
> jdk 1.7
>            Reporter: Ilya Suntsov
>            Assignee: Alexey Goncharuk
>            Priority: Critical
>             Fix For: 1.8
>
>
> Steps to reproduce:
> 1. Start spark master, worker and ignite node with default config
> 2. Start spark-shell:
> {noformat}
> $ ./spark-shell --jars /home/gridgain/isuntsov/gridgain-community-fabric-1.3.2/libs/optional/ignite-spark/ignite-spark-1.3.2.jar,/home/gridgain/isuntsov/gridgain-community-fabric-1.3.2/libs/cache-api-1.0.0.jar,/home/gridgain/isuntsov/gridgain-community-fabric-1.3.2/libs/optional/ignite-log4j/ignite-log4j-1.3.2.jar,/home/gridgain/isuntsov/gridgain-community-fabric-1.3.2/libs/optional/ignite-log4j/log4j-1.2.17.jar --master spark://fosters-218:7077
> log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
> Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
> 15/07/30 02:57:13 INFO SecurityManager: Changing view acls to: gridgain
> 15/07/30 02:57:13 INFO SecurityManager: Changing modify acls to: gridgain
> 15/07/30 02:57:13 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(gridgain); users with modify permissions: Set(gridgain)
> 15/07/30 02:57:13 INFO HttpServer: Starting HTTP Server
> 15/07/30 02:57:13 INFO Server: jetty-8.y.z-SNAPSHOT
> 15/07/30 02:57:13 INFO AbstractConnector: Started SocketConnector@0.0.0.0:51608
> 15/07/30 02:57:13 INFO Utils: Successfully started service 'HTTP class server' on port 51608.
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/  '_/
>    /___/ .__/\_,_/_/ /_/\_\   version 1.3.1
>       /_/
> Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_75)
> Type in expressions to have them evaluated.
> Type :help for more information.
> 15/07/30 02:57:16 INFO SparkContext: Running Spark version 1.3.1
> 15/07/30 02:57:16 INFO SecurityManager: Changing view acls to: gridgain
> 15/07/30 02:57:16 INFO SecurityManager: Changing modify acls to: gridgain
> 15/07/30 02:57:16 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(gridgain); users with modify permissions: Set(gridgain)
> 15/07/30 02:57:16 INFO Slf4jLogger: Slf4jLogger started
> 15/07/30 02:57:16 INFO Remoting: Starting remoting
> 15/07/30 02:57:16 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@fosters-221:41342]
> 15/07/30 02:57:16 INFO Utils: Successfully started service 'sparkDriver' on port 41342.
> 15/07/30 02:57:16 INFO SparkEnv: Registering MapOutputTracker
> 15/07/30 02:57:16 INFO SparkEnv: Registering BlockManagerMaster
> 15/07/30 02:57:16 INFO DiskBlockManager: Created local directory at /tmp/spark-2630fb35-12f4-4e70-920f-30124a4b2657/blockmgr-5a7d7b6f-c296-4d16-82e3-14e020895ed8
> 15/07/30 02:57:16 INFO MemoryStore: MemoryStore started with capacity 265.4 MB
> 15/07/30 02:57:16 INFO HttpFileServer: HTTP File server directory is /tmp/spark-6721e343-162b-4d45-bc55-eaadf103719d/httpd-0dc803d3-4bc4-4d23-8983-66c298f88c7d
> 15/07/30 02:57:16 INFO HttpServer: Starting HTTP Server
> 15/07/30 02:57:16 INFO Server: jetty-8.y.z-SNAPSHOT
> 15/07/30 02:57:16 INFO AbstractConnector: Started SocketConnector@0.0.0.0:41602
> 15/07/30 02:57:16 INFO Utils: Successfully started service 'HTTP file server' on port 41602.
> 15/07/30 02:57:16 INFO SparkEnv: Registering OutputCommitCoordinator
> 15/07/30 02:57:17 INFO Server: jetty-8.y.z-SNAPSHOT
> 15/07/30 02:57:17 INFO AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
> 15/07/30 02:57:17 INFO Utils: Successfully started service 'SparkUI' on port 4040.
> 15/07/30 02:57:17 INFO SparkUI: Started SparkUI at http://fosters-221:4040
> 15/07/30 02:57:17 INFO SparkContext: Added JAR file:/home/gridgain/isuntsov/gridgain-community-fabric-1.3.2/libs/optional/ignite-spark/ignite-spark-1.3.2.jar at http://10.20.0.221:41602/jars/ignite-spark-1.3.2.jar with timestamp 1438250237070
> 15/07/30 02:57:17 INFO SparkContext: Added JAR file:/home/gridgain/isuntsov/gridgain-community-fabric-1.3.2/libs/cache-api-1.0.0.jar at http://10.20.0.221:41602/jars/cache-api-1.0.0.jar with timestamp 1438250237071
> 15/07/30 02:57:17 INFO SparkContext: Added JAR file:/home/gridgain/isuntsov/gridgain-community-fabric-1.3.2/libs/optional/ignite-log4j/ignite-log4j-1.3.2.jar at http://10.20.0.221:41602/jars/ignite-log4j-1.3.2.jar with timestamp 1438250237071
> 15/07/30 02:57:17 INFO SparkContext: Added JAR file:/home/gridgain/isuntsov/gridgain-community-fabric-1.3.2/libs/optional/ignite-log4j/log4j-1.2.17.jar at http://10.20.0.221:41602/jars/log4j-1.2.17.jar with timestamp 1438250237072
> 15/07/30 02:57:17 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@fosters-218:7077/user/Master...
> 15/07/30 02:57:17 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20150730025814-0010
> 15/07/30 02:57:17 INFO AppClient$ClientActor: Executor added: app-20150730025814-0010/0 on worker-20150730014728-fosters-221-58923 (fosters-221:58923) with 16 cores
> 15/07/30 02:57:17 INFO SparkDeploySchedulerBackend: Granted executor ID app-20150730025814-0010/0 on hostPort fosters-221:58923 with 16 cores, 512.0 MB RAM
> 15/07/30 02:57:17 INFO AppClient$ClientActor: Executor updated: app-20150730025814-0010/0 is now LOADING
> 15/07/30 02:57:17 INFO AppClient$ClientActor: Executor updated: app-20150730025814-0010/0 is now RUNNING
> 15/07/30 02:57:17 INFO NettyBlockTransferService: Server created on 56183
> 15/07/30 02:57:17 INFO BlockManagerMaster: Trying to register BlockManager
> 15/07/30 02:57:17 INFO BlockManagerMasterActor: Registering block manager fosters-221:56183 with 265.4 MB RAM, BlockManagerId(<driver>, fosters-221, 56183)
> 15/07/30 02:57:17 INFO BlockManagerMaster: Registered BlockManager
> 15/07/30 02:57:17 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
> 15/07/30 02:57:17 INFO SparkILoop: Created spark context..
> Spark context available as sc.
> 15/07/30 02:57:17 INFO SparkILoop: Created sql context (with Hive support)..
> SQL context available as sqlContext.
> 15/07/30 02:57:19 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@fosters-221:52601/user/Executor#-1088967613] with ID 0
> scala> 15/07/30 02:57:19 INFO BlockManagerMasterActor: Registering block manager fosters-221:41156 with 265.4 MB RAM, BlockManagerId(0, fosters-221, 41156)
> {noformat}
> 3. In the shell, import the Ignite Spark API (succeeds):
> {noformat}scala> import org.apache.ignite.spark._
> import org.apache.ignite.spark._{noformat}
> 4. Import the Ignite configuration package (fails):
> {noformat}scala> import org.apache.ignite.configuration._
> <console>:22: error: object configuration is not a member of package org.apache.ignite
>        import org.apache.ignite.configuration._{noformat}
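
Editor's note: the `import org.apache.ignite.configuration._` failure suggests that `ignite-core` (the jar that contains the `org.apache.ignite.configuration` package, including `IgniteConfiguration`) is absent from the `--jars` list above, which only ships ignite-spark, cache-api, and the log4j jars. A possible workaround, sketched under that assumption (the `ignite-core-1.3.2.jar` location in `libs/` is the usual fabric layout, not confirmed in this report):

```shell
# Assumption: adding ignite-core to --jars supplies org.apache.ignite.configuration.
IGNITE_LIBS=/home/gridgain/isuntsov/gridgain-community-fabric-1.3.2/libs
./spark-shell \
  --jars $IGNITE_LIBS/ignite-core-1.3.2.jar,$IGNITE_LIBS/optional/ignite-spark/ignite-spark-1.3.2.jar,$IGNITE_LIBS/cache-api-1.0.0.jar,$IGNITE_LIBS/optional/ignite-log4j/ignite-log4j-1.3.2.jar,$IGNITE_LIBS/optional/ignite-log4j/log4j-1.2.17.jar \
  --master spark://fosters-218:7077
```

Whether the integration should instead document the required jars or fail with a clearer message is for the assignee to decide.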



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
