hive-dev mailing list archives

From "Brock Noland (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-8836) Enable automatic tests with remote spark client.[Spark Branch]
Date Tue, 25 Nov 2014 23:51:13 GMT

    [ https://issues.apache.org/jira/browse/HIVE-8836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14225417#comment-14225417
] 

Brock Noland commented on HIVE-8836:
------------------------------------

One of the stuck processes:

{noformat}
2014-11-25 14:30:44,874 INFO  ql.Driver (SessionState.java:printInfo(828)) - Query ID = hiveptest_20141125143030_c56b5f72-8552-4122-930d-7df9dea96638
2014-11-25 14:30:44,874 INFO  ql.Driver (SessionState.java:printInfo(828)) - Total jobs = 1
2014-11-25 14:30:44,874 INFO  ql.Driver (SessionState.java:printInfo(828)) - Launching Job 1 out of 1
2014-11-25 14:30:44,875 INFO  ql.Driver (Driver.java:launchTask(1643)) - Starting task [Stage-1:MAPRED] in serial mode
2014-11-25 14:30:44,875 INFO  exec.Task (SessionState.java:printInfo(828)) - In order to change the average load for a reducer (in bytes):
2014-11-25 14:30:44,876 INFO  exec.Task (SessionState.java:printInfo(828)) -   set hive.exec.reducers.bytes.per.reducer=<number>
2014-11-25 14:30:44,876 INFO  exec.Task (SessionState.java:printInfo(828)) - In order to limit the maximum number of reducers:
2014-11-25 14:30:44,876 INFO  exec.Task (SessionState.java:printInfo(828)) -   set hive.exec.reducers.max=<number>
2014-11-25 14:30:44,876 INFO  exec.Task (SessionState.java:printInfo(828)) - In order to set a constant number of reducers:
2014-11-25 14:30:44,876 INFO  exec.Task (SessionState.java:printInfo(828)) -   set mapreduce.job.reduces=<number>
2014-11-25 14:30:44,876 INFO  spark.HiveSparkClientFactory (HiveSparkClientFactory.java:initiateSparkConf(105)) - load spark configuration from hive configuration (spark.master -> local-cluster[2,1,2048]).
2014-11-25 14:30:44,894 INFO  slf4j.Slf4jLogger (Slf4jLogger.scala:applyOrElse(80)) - Slf4jLogger started
2014-11-25 14:30:44,899 INFO  Remoting (Slf4jLogger.scala:apply$mcV$sp(74)) - Starting remoting
2014-11-25 14:30:44,907 INFO  Remoting (Slf4jLogger.scala:apply$mcV$sp(74)) - Remoting started; listening on addresses :[akka.tcp://d84d934e-e10e-4744-b42c-ed86a49ebbd3@10.227.4.181:56697]
2014-11-25 14:30:44,909 DEBUG client.SparkClientImpl (SparkClientImpl.java:startDriver(252)) - Running client driver with argv: /home/hiveptest/50.18.64.184-hiveptest-1/apache-svn-spark-source/itests/qtest-spark/../../itests/qtest-spark/target/spark/bin/spark-submit --properties-file /home/hiveptest/50.18.64.184-hiveptest-1/apache-svn-spark-source/itests/qtest-spark/target/tmp/spark-submit.8721943354350626566.properties --class org.apache.hive.spark.client.RemoteDriver /home/hiveptest/50.18.64.184-hiveptest-1/maven/org/apache/hive/hive-exec/0.15.0-SNAPSHOT/hive-exec-0.15.0-SNAPSHOT.jar --remote akka.tcp://d84d934e-e10e-4744-b42c-ed86a49ebbd3@10.227.4.181:56697/user/SparkClient-aa5b3525-a031-41d7-ab04-8a18b0aa3fcf
2014-11-25 14:30:48,058 INFO  client.SparkClientImpl (SparkClientImpl.java:onReceive(312)) - Received hello from akka.tcp://92e75da1-125e-4576-b63e-dc3166653dbe@10.227.4.181:35948/user/RemoteDriver
2014-11-25 14:30:48,059 DEBUG session.SparkSessionManagerImpl (SparkSessionManagerImpl.java:getSession(126)) - New session (1743e7e0-1fb3-4766-9249-cc138f88a2a7) is created.
2014-11-25 14:30:48,085 INFO  ql.Context (Context.java:getMRScratchDir(266)) - New scratch dir is file:/home/hiveptest/50.18.64.184-hiveptest-1/apache-svn-spark-source/itests/qtest-spark/target/tmp/scratchdir/hiveptest/cf1f8ff8-a03d-413c-9922-d7c5c3c25e18/hive_2014-11-25_14-30-44_827_2636220880717371319-1
2014-11-25 14:30:48,283 INFO  client.SparkClientImpl (SparkClientImpl.java:onReceive(329)) - Received result for fa96fc57-3999-4103-9bbd-a4bab346a324
2014-11-25 14:30:48,535 INFO  client.SparkClientImpl (SparkClientImpl.java:onReceive(329)) - Received result for 00716575-c666-4b2a-bffe-682780684df8
2014-11-25 14:47:28,067 INFO  transport.ProtocolStateActor (Slf4jLogger.scala:apply$mcV$sp(74)) - No response from remote. Handshake timed out or transport failure detector triggered.
2014-11-25 14:47:28,071 WARN  remote.ReliableDeliverySupervisor (Slf4jLogger.scala:apply$mcV$sp(71)) - Association with remote system [akka.tcp://92e75da1-125e-4576-b63e-dc3166653dbe@10.227.4.181:35948] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
{noformat}
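For context, the {{exec.Task}} hints echoed in the log above are Hive's standard reducer-tuning knobs; in a Hive session they would be applied with {{set}} commands like the following (the numeric values here are purely illustrative, not taken from this test run):

{noformat}
-- Target average input size per reducer, in bytes (e.g. 256 MB)
set hive.exec.reducers.bytes.per.reducer=268435456;
-- Upper bound on how many reducers Hive may launch
set hive.exec.reducers.max=99;
-- Or pin a fixed reducer count, bypassing Hive's estimate entirely
set mapreduce.job.reduces=4;
{noformat}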

> Enable automatic tests with remote spark client.[Spark Branch]
> --------------------------------------------------------------
>
>                 Key: HIVE-8836
>                 URL: https://issues.apache.org/jira/browse/HIVE-8836
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>    Affects Versions: spark-branch
>            Reporter: Chengxiang Li
>            Assignee: Rui Li
>              Labels: Spark-M3
>             Fix For: spark-branch
>
>         Attachments: HIVE-8836.1-spark.patch, HIVE-8836.2-spark.patch, HIVE-8836.3-spark.patch, HIVE-8836.4-spark.patch
>
>
> In a real production environment, the remote Spark client will usually be the one submitting Spark jobs for Hive, so we should enable automatic tests with the remote Spark client to make sure Hive features work with it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
