Date: Fri, 25 Mar 2016 16:33:25 +0000 (UTC)
From: "Sean Owen (JIRA)"
To: issues@spark.apache.org
Subject: [jira] [Commented] (SPARK-13710) Spark shell shows ERROR when launching on Windows

    [ https://issues.apache.org/jira/browse/SPARK-13710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15212029#comment-15212029 ]

Sean Owen commented on SPARK-13710:
-----------------------------------

Although a couple of the thousands of tests are flaky and spurious failures are fairly common, I don't think you'd normally get lots of failures. It may be due to something more fundamental about the environment or what you're building. You can post some errors here for a look. You can also try opening a [WIP] PR to see how it tests against Jenkins.

> Spark shell shows ERROR when launching on Windows
> -------------------------------------------------
>
>                 Key: SPARK-13710
>                 URL: https://issues.apache.org/jira/browse/SPARK-13710
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Shell, Windows
>            Reporter: Masayoshi TSUZUKI
>            Priority: Minor
>
> On Windows, when we launch {{bin\spark-shell.cmd}}, it shows an ERROR message and a stack trace.
> {noformat}
> C:\Users\tsudukim\Documents\workspace\spark-dev3>bin\spark-shell
> [ERROR] Terminal initialization failed; falling back to unsupported
> java.lang.NoClassDefFoundError: Could not initialize class scala.tools.fusesource_embedded.jansi.internal.Kernel32
>         at scala.tools.fusesource_embedded.jansi.internal.WindowsSupport.getConsoleMode(WindowsSupport.java:50)
>         at scala.tools.jline_embedded.WindowsTerminal.getConsoleMode(WindowsTerminal.java:204)
>         at scala.tools.jline_embedded.WindowsTerminal.init(WindowsTerminal.java:82)
>         at scala.tools.jline_embedded.TerminalFactory.create(TerminalFactory.java:101)
>         at scala.tools.jline_embedded.TerminalFactory.get(TerminalFactory.java:158)
>         at scala.tools.jline_embedded.console.ConsoleReader.<init>(ConsoleReader.java:229)
>         at scala.tools.jline_embedded.console.ConsoleReader.<init>(ConsoleReader.java:221)
>         at scala.tools.jline_embedded.console.ConsoleReader.<init>(ConsoleReader.java:209)
>         at scala.tools.nsc.interpreter.jline_embedded.JLineConsoleReader.<init>(JLineReader.scala:61)
>         at scala.tools.nsc.interpreter.jline_embedded.InteractiveReader.<init>(JLineReader.scala:33)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>         at scala.tools.nsc.interpreter.ILoop$$anonfun$scala$tools$nsc$interpreter$ILoop$$instantiate$1$1.apply(ILoop.scala:865)
>         at scala.tools.nsc.interpreter.ILoop$$anonfun$scala$tools$nsc$interpreter$ILoop$$instantiate$1$1.apply(ILoop.scala:862)
>         at scala.tools.nsc.interpreter.ILoop.scala$tools$nsc$interpreter$ILoop$$mkReader$1(ILoop.scala:871)
>         at scala.tools.nsc.interpreter.ILoop$$anonfun$15$$anonfun$apply$8.apply(ILoop.scala:875)
>         at scala.tools.nsc.interpreter.ILoop$$anonfun$15$$anonfun$apply$8.apply(ILoop.scala:875)
>         at scala.util.Try$.apply(Try.scala:192)
>         at scala.tools.nsc.interpreter.ILoop$$anonfun$15.apply(ILoop.scala:875)
>         at scala.tools.nsc.interpreter.ILoop$$anonfun$15.apply(ILoop.scala:875)
>         at scala.collection.immutable.Stream$$anonfun$map$1.apply(Stream.scala:418)
>         at scala.collection.immutable.Stream$$anonfun$map$1.apply(Stream.scala:418)
>         at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1233)
>         at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1223)
>         at scala.collection.immutable.Stream.collect(Stream.scala:435)
>         at scala.tools.nsc.interpreter.ILoop.chooseReader(ILoop.scala:877)
>         at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$apply$mcZ$sp$2.apply(ILoop.scala:916)
>         at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:916)
>         at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:911)
>         at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:911)
>         at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
>         at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:911)
>         at org.apache.spark.repl.Main$.doMain(Main.scala:64)
>         at org.apache.spark.repl.Main$.main(Main.scala:47)
>         at org.apache.spark.repl.Main.main(Main.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:497)
>         at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:737)
>         at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:183)
>         at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:208)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:122)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
> Setting default log level to "WARN".
> To adjust logging level use sc.setLogLevel(newLevel).
> 16/03/07 13:05:32 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Spark context available as sc (master = local[*], app id = local-1457323533704).
> SQL context available as sqlContext.
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/  '_/
>    /___/ .__/\_,_/_/ /_/\_\   version 2.0.0-SNAPSHOT
>       /_/
>
> Using Scala version 2.11.7 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_40)
> Type in expressions to have them evaluated.
> Type :help for more information.
>
> scala> sc.textFile("README.md")
> res0: org.apache.spark.rdd.RDD[String] = README.md MapPartitionsRDD[1] at textFile at <console>:25
>
> scala> sc.textFile("README.md").count()
> res1: Long = 97
> {noformat}
> Spark-shell itself seems to work fine during my simple operation check.
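
For context on why the shell keeps working after the ERROR above: the trace comes out of the REPL's embedded jline TerminalFactory, which tries to initialize a native Windows terminal and, when that throws, falls back to a dumb "unsupported" terminal. Below is a minimal Scala sketch of that fallback pattern, not the actual jline source; the trait and class names are illustrative only.

{noformat}
// TerminalFallback.scala -- an illustrative sketch of jline's
// try-native-then-fall-back behavior; all names here are hypothetical.
object TerminalFallback {

  trait Terminal { def init(): Unit }

  // Stand-in for the native Windows terminal: in the real stack trace,
  // init() reaches the embedded jansi Kernel32 bindings, which is where
  // the NoClassDefFoundError originates.
  class WindowsTerminal extends Terminal {
    def init(): Unit =
      throw new NoClassDefFoundError("scala.tools.fusesource_embedded.jansi.internal.Kernel32")
  }

  // Stand-in for the fallback: no line editing or ANSI handling,
  // but initialization can never fail, so the REPL stays usable.
  class UnsupportedTerminal extends Terminal {
    def init(): Unit = ()
  }

  def create(): Terminal =
    try {
      val t = new WindowsTerminal
      t.init()
      t
    } catch {
      // NoClassDefFoundError is an Error, not an Exception,
      // so the recovery has to catch Throwable.
      case _: Throwable =>
        Console.err.println("[ERROR] Terminal initialization failed; falling back to unsupported")
        new UnsupportedTerminal
    }

  def main(args: Array[String]): Unit = {
    val term = create()                   // prints the ERROR line, then recovers
    println(term.getClass.getSimpleName)  // UnsupportedTerminal
  }
}
{noformat}

That is consistent with the transcript above: the session still reaches the scala> prompt, and the cost of the fallback is console niceties (history, line editing, colors) rather than functionality.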