hive-issues mailing list archives

From "pankhuri (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-9970) Hive on spark
Date Wed, 15 Jul 2015 14:25:04 GMT

    [ https://issues.apache.org/jira/browse/HIVE-9970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14628127#comment-14628127 ]

pankhuri commented on HIVE-9970:
--------------------------------

I am facing the same issue when trying to run Hive on Spark in YARN mode.

hive 1.2.0 / 1.1.0 -- tried with both
spark 1.3.1
hadoop 2.6.0

I have tried both the prebuilt Spark distribution and one built with the command below to remove any
Hive dependency:
./make-distribution.sh --name "hadoop2-without-hive" --tgz "-Pyarn,hadoop-provided,hadoop-2.6"
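A `NoSuchFieldError` like the one below usually means an older copy of the class is shadowing the new one on the classpath. A quick way to check whether a rebuilt assembly still bundles Hive's spark-client classes is to list its `org/apache/hive/` entries (a sketch; the jar path in `main` is just an example, pass your actual assembly jar):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class CheckAssembly {
    // Return all entries under org/apache/hive/ found in the given jar.
    // A "without hive" assembly should produce an empty list.
    static List<String> hiveEntries(String jarPath) throws Exception {
        List<String> hits = new ArrayList<>();
        try (ZipFile zf = new ZipFile(jarPath)) {
            zf.stream()
              .map(ZipEntry::getName)
              .filter(n -> n.startsWith("org/apache/hive/"))
              .forEach(hits::add);
        }
        return hits;
    }

    public static void main(String[] args) throws Exception {
        // e.g. java CheckAssembly lib/spark-assembly-1.3.1-hadoop2.6.0.jar
        for (String hit : hiveEntries(args[0])) {
            System.out.println(hit);
        }
    }
}
```

If this prints anything, the assembly's bundled Hive classes will be loaded ahead of the Hive 1.2.0/1.1.0 jars and can cause exactly this kind of field mismatch.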

15/07/15 17:44:09 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1436958102207_0007_000001
15/07/15 17:44:09 WARN conf.Configuration: mapred-default.xml:an attempt to override final
parameter: mapreduce.cluster.local.dir;  Ignoring.
15/07/15 17:44:09 WARN conf.Configuration: mapred-default.xml:an attempt to override final
parameter: mapreduce.cluster.temp.dir;  Ignoring.
15/07/15 17:44:09 WARN conf.Configuration: mapred-default.xml:an attempt to override final
parameter: mapreduce.cluster.local.dir;  Ignoring.
15/07/15 17:44:09 WARN conf.Configuration: mapred-default.xml:an attempt to override final
parameter: mapreduce.cluster.temp.dir;  Ignoring.
15/07/15 17:44:09 INFO spark.SecurityManager: Changing view acls to: root
15/07/15 17:44:09 INFO spark.SecurityManager: Changing modify acls to: root
15/07/15 17:44:09 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui
acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/07/15 17:44:09 INFO yarn.ApplicationMaster: Starting the user application in a separate
Thread
15/07/15 17:44:09 INFO yarn.ApplicationMaster: Waiting for spark context initialization
15/07/15 17:44:09 INFO yarn.ApplicationMaster: Waiting for spark context initialization ...

15/07/15 17:44:09 INFO client.RemoteDriver: Connecting to: Impetus-dsrv16:41364
15/07/15 17:44:09 ERROR yarn.ApplicationMaster: User class threw exception: SPARK_RPC_CLIENT_CONNECT_TIMEOUT
java.lang.NoSuchFieldError: SPARK_RPC_CLIENT_CONNECT_TIMEOUT
	at org.apache.hive.spark.client.rpc.RpcConfiguration.<clinit>(RpcConfiguration.java:46)
	at org.apache.hive.spark.client.RemoteDriver.<init>(RemoteDriver.java:146)
	at org.apache.hive.spark.client.RemoteDriver.main(RemoteDriver.java:556)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:480)
15/07/15 17:44:09 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason:
User class threw exception: SPARK_RPC_CLIENT_CONNECT_TIMEOUT)
15/07/15 17:44:19 ERROR yarn.ApplicationMaster: SparkContext did not initialize after waiting
for 100000 ms. Please check earlier log output for errors. Failing the application.
15/07/15 17:44:19 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with FAILED
(diag message: User class threw exception: SPARK_RPC_CLIENT_CONNECT_TIMEOUT)
15/07/15 17:44:19 INFO yarn.ApplicationMaster: Deleting staging directory .sparkStaging/application_1436958102207_0007
Please let me know if there is a resolution for this.
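One way to narrow this down (a hypothetical diagnostic; the class name to pass is the one from the stack trace) is to print which jar a class was actually loaded from, which exposes a stale copy on the classpath:

```java
// Hypothetical diagnostic: print the code source a class was loaded from.
public class WhichJar {
    // Returns the jar/path the class came from, or "bootstrap/jre" when the
    // class belongs to the JDK itself (getCodeSource() is null for those).
    static String location(String className) throws ClassNotFoundException {
        Class<?> c = Class.forName(className);
        java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
        return src == null ? "bootstrap/jre" : src.getLocation().toString();
    }

    public static void main(String[] args) throws Exception {
        // Inside the failing application you would pass
        // org.apache.hive.spark.client.rpc.RpcConfiguration here.
        String name = args.length > 0 ? args[0] : "java.lang.String";
        System.out.println(name + " -> " + location(name));
    }
}
```

If `RpcConfiguration` resolves to the Spark assembly rather than Hive's own spark-client jar, that mismatch would explain the `NoSuchFieldError: SPARK_RPC_CLIENT_CONNECT_TIMEOUT`.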

> Hive on spark
> -------------
>
>                 Key: HIVE-9970
>                 URL: https://issues.apache.org/jira/browse/HIVE-9970
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Amithsha
>
> Hi all,
> I recently configured Spark 1.2.0; my environment is Hadoop 2.6.0 and
> Hive 1.1.0. I tried Hive on Spark, and while executing an
> insert I get the following error.
> Query ID = hadoop2_20150313162828_8764adad-a8e4-49da-9ef5-35e4ebd6bc63
> Total jobs = 1
> Launching Job 1 out of 1
> In order to change the average load for a reducer (in bytes):
> set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
> set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
> set mapreduce.job.reduces=<number>
> Failed to execute spark task, with exception
> 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create
> spark client.)'
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.spark.SparkTask
> I have added the spark-assembly jar to the Hive lib directory,
> and also in the Hive console via the add jar command, followed by these steps:
> set spark.home=/opt/spark-1.2.1/;
> add jar /opt/spark-1.2.1/assembly/target/scala-2.10/spark-assembly-1.2.1-hadoop2.4.0.jar;
> set hive.execution.engine=spark;
> set spark.master=spark://xxxxxxx:7077;
> set spark.eventLog.enabled=true;
> set spark.executor.memory=512m;
> set spark.serializer=org.apache.spark.serializer.KryoSerializer;
> Can anyone suggest a fix?
> Thanks & Regards
> Amithsha



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
