spark-issues mailing list archives

From "Tim (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-23726) standalone quickstart fails loading files with Hadoop's java.net.ConnectException: Connection refused
Date Sat, 17 Mar 2018 23:05:00 GMT

    [ https://issues.apache.org/jira/browse/SPARK-23726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16403782#comment-16403782 ]

Tim commented on SPARK-23726:
-----------------------------

The error was due to locally defined HADOOP_HOME and HADOOP_CONF_DIR environment variables. Once I cleared those, Spark started working locally. This issue can be closed.
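
For anyone who hits the same error: when HADOOP_CONF_DIR points at a Hadoop config whose fs.defaultFS is hdfs://localhost:8020 (the NameNode default port), bare paths like "README.md" resolve against HDFS rather than the local disk, which matches the connection attempt in the trace below. A minimal sketch of how to confirm this and work around it from inside spark-shell (the install path below is illustrative, not from the report):

// Check which default filesystem Spark picked up from the Hadoop config.
// A clean standalone setup reports "file:///"; a leaked local Hadoop
// config typically reports something like "hdfs://localhost:8020".
sc.hadoopConfiguration.get("fs.defaultFS")

// Per-path workaround that sidesteps the env vars entirely: an explicit
// scheme means the path never consults fs.defaultFS. (Path is illustrative.)
val textFile = spark.read.textFile("file:///path/to/spark-2.3.0-bin-hadoop2.7/README.md")

The lasting fix is the one described above: {{unset HADOOP_HOME HADOOP_CONF_DIR}} before launching {{./bin/spark-shell --master local[2]}}, which restores the file:/// default.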

> standalone quickstart fails loading files with Hadoop's java.net.ConnectException: Connection refused
> ------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-23726
>                 URL: https://issues.apache.org/jira/browse/SPARK-23726
>             Project: Spark
>          Issue Type: Bug
>          Components: Input/Output
>    Affects Versions: 2.3.0
>         Environment: local mac with jvm "Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_121)"
>            Reporter: Tim
>            Priority: Blocker
>
> 1) downloaded latest 2.3.0 release from [https://www.apache.org/dyn/closer.lua/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz]
> 2) un-tar the archive and start up the Spark shell with
> {{./bin/spark-shell --master local[2]}}
> 3) once the console starts up, try to read in a file per [https://spark.apache.org/docs/latest/quick-start.html]
> scala> val textFile = spark.read.textFile("README.md")
>  
> This produces the following exception:
>  
> Macs-MBP:spark-2.3.0-bin-hadoop2.7 macuser$ ./bin/spark-shell --master local[2]
> 2018-03-17 18:43:33 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Setting default log level to "WARN".
> To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
> Spark context Web UI available at http://macs-mbp.fios-router.home:4040
> Spark context available as 'sc' (master = local[2], app id = local-1521326617762).
> Spark session available as 'spark'.
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/  '_/
>    /___/ .__/\_,_/_/ /_/\_\   version 2.3.0
>       /_/
>          
> Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_121)
> Type in expressions to have them evaluated.
> Type :help for more information.
> scala> val textFile = spark.read.textFile("README.md")
> 2018-03-17 18:43:41 WARN  FileStreamSink:66 - Error while looking for metadata directory.
> java.net.ConnectException: Call From Macs-MBP.fios-router.home/192.168.1.154 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1479)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1412)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>   at com.sun.proxy.$Proxy19.getFileInfo(Unknown Source)
>   at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>   at com.sun.proxy.$Proxy20.getFileInfo(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108)
>   at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
>   at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
>   at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1426)
>   at org.apache.spark.sql.execution.datasources.DataSource$.org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary(DataSource.scala:714)
>   at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$15.apply(DataSource.scala:389)
>   at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$15.apply(DataSource.scala:389)
>   at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
>   at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
>   at scala.collection.immutable.List.foreach(List.scala:381)
>   at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
>   at scala.collection.immutable.List.flatMap(List.scala:344)
>   at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:388)
>   at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
>   at org.apache.spark.sql.DataFrameReader.text(DataFrameReader.scala:691)
>   at org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:730)
>   at org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:700)
>   ... 49 elided
> Caused by: java.net.ConnectException: Connection refused
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>   at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
>   at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
>   at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
>   at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
>   at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1451)
>   ... 80 more
> scala>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

