ambari-issues mailing list archives

From "Hudson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (AMBARI-21598) Spark Thrift Server stopped after express upgrade due to undefined port
Date Fri, 28 Jul 2017 18:27:02 GMT

    [ https://issues.apache.org/jira/browse/AMBARI-21598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16105440#comment-16105440
] 

Hudson commented on AMBARI-21598:
---------------------------------

SUCCESS: Integrated in Jenkins build Ambari-branch-2.5 #1762 (See [https://builds.apache.org/job/Ambari-branch-2.5/1762/])
AMBARI-21598. Spark Thrift Server stopped after upgrade due to undefined (adoroszlai: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=28e7fb579fb5f2051c0291c43bec418f72f8c193])
* (edit) ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/config-upgrade.xml
* (edit) ambari-server/src/main/resources/stacks/BigInsights/4.2/upgrades/nonrolling-upgrade-to-hdp-2.6.xml
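The commit adjusts the BigInsights 4.2 upgrade pack so the Spark Thrift Server port is defined after the stack upgrade. As a hypothetical sketch only (the `id`, config type `spark-hive-site-override`, property key `hive.server2.thrift.port`, and default value `10015` are assumptions here, not taken from the patch; see the linked commit for the actual change), a `config-upgrade.xml` definition of this kind might look like:

```xml
<!-- Hypothetical sketch of an Ambari config-upgrade.xml change definition.
     It sets the Spark Thrift Server port only when the property is empty,
     so an already-defined port is left untouched during the upgrade. -->
<definition xsi:type="configure"
            id="biginsights_spark_thrift_port"
            summary="Define Spark Thrift Server port if missing">
  <type>spark-hive-site-override</type>
  <set key="hive.server2.thrift.port" value="10015"
       if-type="spark-hive-site-override"
       if-key="hive.server2.thrift.port"
       if-value=""/>
</definition>
```

A definition like this is then referenced from the express (nonrolling) upgrade XML so it runs during the configuration-upgrade phase.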


> Spark Thrift Server stopped after express upgrade due to undefined port
> -----------------------------------------------------------------------
>
>                 Key: AMBARI-21598
>                 URL: https://issues.apache.org/jira/browse/AMBARI-21598
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-upgrade
>    Affects Versions: 2.5.2
>         Environment: Source Ambari Version: 2.2.2
> Target Ambari Version: ambari-2.5.2.0-189
> Source Stack: BigInsights-4.2.0.0
> Target Stack: HDP-2.6.2.0-124
>            Reporter: Pradarttana
>            Assignee: Doroszlai, Attila
>            Priority: Blocker
>             Fix For: 2.5.2
>
>         Attachments: AMBARI-21598.patch
>
>
> Steps to reproduce:
> 1. Install an IOP cluster (Ambari 2.2.0, BigInsights-4.2.0.0).
> 2. Upgrade Ambari from 2.2.0 to 2.5.2.0-189 (IOP cluster).
> 3. Remove IOP select.
> 4. Register the HDP stack HDP-2.6.2.0-124.
> 5. Run the express upgrade (EU).
> 6. After the express upgrade, Spark Thrift Server fails to start.
> Logs:
> {code}
> 17/07/28 03:32:18 INFO SparkUI: Stopped Spark web UI at http://natr66-tbus-iop420tofnsec-r6-4.openstacklocal:4040
> 17/07/28 03:32:18 INFO YarnClientSchedulerBackend: Interrupting monitor thread
> 17/07/28 03:32:18 INFO YarnClientSchedulerBackend: Shutting down all executors
> 17/07/28 03:32:18 INFO YarnClientSchedulerBackend: Asking each executor to shut down
> 17/07/28 03:32:18 INFO SchedulerExtensionServices: Stopping SchedulerExtensionServices
> (serviceOption=None,
>  services=List(),
>  started=false)
> 17/07/28 03:32:18 INFO YarnClientSchedulerBackend: Stopped
> 17/07/28 03:32:18 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
> 17/07/28 03:32:18 INFO MemoryStore: MemoryStore cleared
> 17/07/28 03:32:18 INFO BlockManager: BlockManager stopped
> 17/07/28 03:32:18 INFO BlockManagerMaster: BlockManagerMaster stopped
> 17/07/28 03:32:18 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
> 17/07/28 03:32:18 INFO SparkContext: Successfully stopped SparkContext
> 17/07/28 03:32:18 ERROR Utils: Uncaught exception in thread pool-7-thread-1
> java.lang.NullPointerException
>         at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$$anonfun$main$1.apply$mcV$sp(HiveThriftServer2.scala:123)
>         at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:267)
>         at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:239)
>         at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:239)
>         at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:239)
>         at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1817)
>         at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:239)
>         at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:239)
>         at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:239)
>         at scala.util.Try$.apply(Try.scala:161)
>         at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:239)
>         at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:218)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
> 17/07/28 03:32:18 INFO ShutdownHookManager: Shutdown hook called
> 17/07/28 03:32:18 INFO ShutdownHookManager: Deleting directory /tmp/spark-87997670-a290-4c52-a5f5-4ea0bbe87d4c
> 17/07/28 03:32:18 INFO ShutdownHookManager: Deleting directory /tmp/spark-f76d3d61-f7d5-4a0a-a50d-d3a1766f3f09
> 17/07/28 03:32:18 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
> 17/07/28 03:32:18 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
> {code} 
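The `NullPointerException` above is thrown inside a Spark shutdown hook (`HiveThriftServer2.scala:123`): the hook dereferences state that is only initialized on a successful start, so when startup aborts early (here, because the thrift port was left undefined by the upgrade) the hook hits null during JVM shutdown. A minimal sketch of that failure mode, not Spark's actual code (the field name `serverState` and the port-related comment are illustrative assumptions):

```java
// Sketch of the failure pattern: a shutdown hook assumes startup
// completed and dereferences state that was never initialized.
public class ShutdownHookSketch {
    // Stays null when startup aborts early (e.g. port undefined).
    static Object serverState = null;

    // Stand-in for the body of the shutdown hook.
    static String runHook() {
        try {
            // Mirrors the hook dereferencing uninitialized server state.
            return serverState.toString();
        } catch (NullPointerException e) {
            return "NPE in shutdown hook";
        }
    }

    public static void main(String[] args) {
        System.out.println(runHook());
    }
}
```

The exception itself is a symptom of the aborted startup; the fix addresses the root cause by ensuring the port is defined in the upgraded configuration.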



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
