spark-issues mailing list archives

From "Sean Owen (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (SPARK-17622) Cannot run create or load DF on Windows- Spark 2.0.0
Date Thu, 22 Sep 2016 03:56:21 GMT

     [ https://issues.apache.org/jira/browse/SPARK-17622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Sean Owen updated SPARK-17622:
------------------------------
    Target Version/s:   (was: 2.0.0)
       Fix Version/s:     (was: 1.6.2)
                          (was: 1.6.1)
         Component/s:     (was: Java API)
                      SparkR

This doesn't actually show the underlying error.

> Cannot run create or load DF on Windows- Spark 2.0.0
> ----------------------------------------------------
>
>                 Key: SPARK-17622
>                 URL: https://issues.apache.org/jira/browse/SPARK-17622
>             Project: Spark
>          Issue Type: Bug
>          Components: SparkR
>    Affects Versions: 2.0.0
>         Environment: windows 10
> R 3.3.1
> RStudio 1.0.20
>            Reporter: renzhi he
>              Labels: windows
>
> Under Spark 2.0.0 on Windows, when I try to load or create data with code like the
following, I get an error message and the functions fail to execute.
> |sc <- sparkR.session(master = "local", sparkConfig = list(spark.driver.memory = "2g"))|
> |df <- as.DataFrame(faithful)|
> Here is the error message:
> #Error in invokeJava(isStatic = TRUE, className, methodName, ...) :            
> #java.lang.reflect.InvocationTargetException
> #at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> #at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> #at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> #at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> #at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:258)
> #at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:359)
> #at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:263)
> #at org.apache.spark.sql.hive.HiveSharedState.metadataHive$lzycompute(HiveSharedState.scala:39)
> #at org.apache.spark.sql.hive.HiveSharedState.metadataHive(HiveSharedState.scala:38)
> #at org.apache.spark.sql.hive.HiveSharedState.externalCatalog$lzycompute(HiveSharedState.scala:46)
> #at org.apache.spark.sql.hive.HiveSharedSt
> However, under Spark 1.6.1 or 1.6.2, the equivalent code runs with no problem:
> |sc1 <- sparkR.init(master = "local", sparkEnvir = list(spark.driver.memory="2g"))|
> |sqlContext <- sparkRSQL.init(sc1)|
> |df <- as.DataFrame(sqlContext, faithful)|
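The stack trace fails inside Hive metastore client creation (`IsolatedClientLoader.createClient`), a code path Spark 2.0 enters by default when the first DataFrame is materialized. As a hedged workaround sketch, not part of the original report: `sparkR.session()` accepts an `enableHiveSupport` argument, and disabling it makes Spark use its in-memory catalog instead of the Hive metastore, sidestepping the failing code path. Whether that resolves this specific Windows failure is an assumption; a common alternative cause on Windows is a missing `winutils.exe`/`HADOOP_HOME` setup.

```r
library(SparkR)

# Sketch of a possible workaround (assumption: the failure is confined to the
# Hive metastore client path shown in the trace). enableHiveSupport = FALSE
# tells Spark to use its in-memory catalog, so
# IsolatedClientLoader.createClient is never invoked.
sc <- sparkR.session(master = "local",
                     sparkConfig = list(spark.driver.memory = "2g"),
                     enableHiveSupport = FALSE)

df <- as.DataFrame(faithful)  # the call that previously triggered the error
head(df)
```

This only changes which catalog backs the session; it does not fix the underlying Hive client instantiation on Windows.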



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

