Date: Tue, 30 Jun 2015 16:18:04 +0000 (UTC)
From: "Jayush Luniya (JIRA)"
To: dev@ambari.apache.org
Subject: [jira] [Created] (AMBARI-12220) HiveServer2 query fail after RU

Jayush Luniya created AMBARI-12220:
--------------------------------------

Summary: HiveServer2 query fail after RU
Key: AMBARI-12220
URL: https://issues.apache.org/jira/browse/AMBARI-12220
Project: Ambari
Issue Type: Bug
Components: ambari-server
Affects Versions: 2.1.0
Reporter: Jayush Luniya
Priority: Blocker
Fix For: 2.1.0

After a rolling upgrade, HiveServer2 queries fail with the following message:

{code}
ERROR : Job Submission failed with exception 'java.io.FileNotFoundException(File file:/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core-0.14.0.2.2.6.0-2800.jar does not exist)'
java.io.FileNotFoundException: File file:/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core-0.14.0.2.2.6.0-2800.jar does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:819)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:596)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
    at org.apache.hadoop.mapreduce.JobResourceUploader.copyRemoteFiles(JobResourceUploader.java:203)
    at org.apache.hadoop.mapreduce.JobResourceUploader.uploadFiles(JobResourceUploader.java:128)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:95)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:190)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:575)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:570)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:570)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:561)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:431)
    at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1653)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1412)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1054)
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:154)
    at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:71)
    at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:206)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:218)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
{code}

Looking at the HiveServer2 process command line:

{code}
/usr/jdk64/jdk1.7.0_67/bin/java -Xmx1024m -Dhdp.version=2.3.0.0-2434 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.3.0.0-2434 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2434/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.3.0.0-2434/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2434/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx1024m -XX:MaxPermSize=512m -Xmx1024m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.3.0.0-2434/hive/lib/hive-service-1.2.1.2.3.0.0-2434.jar org.apache.hive.service.server.HiveServer2 --hiveconf hive.aux.jars.path=file:///usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core-0.14.0.2.2.6.0-2800.jar,file:///usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar,file:///usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-pig-adapter-0.14.0.2.2.6.0-2800.jar,file:///usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-pig-adapter.jar,file:///usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-server-extensions-0.14.0.2.2.6.0-2800.jar,file:///usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-server-extensions.jar,file:///usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-streaming-0.14.0.2.2.6.0-2800.jar,file:///usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-streaming.jar -hiveconf hive.metastore.uris= -hiveconf hive.log.file=hiveserver2.log -hiveconf hive.log.dir=/var/log/hive
{code}
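For reference, here is a minimal sketch (a hypothetical helper, not part of Ambari or of any patch) that splits a couple of the hive.aux.jars.path entries from the command line above and reports which of them still exist on disk. On the upgraded host from this report, the versioned 0.14.0.2.2.6.0-2800 jars would come back MISSING (matching the FileNotFoundException above), while the unversioned symlinks would typically still resolve to the new stack's jars:

{code}
#!/usr/bin/env python
# check_aux_jars.py - hypothetical helper (not part of Ambari); assumes it runs
# on the HiveServer2 host and that hive.aux.jars.path contains only file:// URIs.
import os

# Two entries copied from the hive.aux.jars.path value above (list truncated).
aux_jars_path = (
    "file:///usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core-0.14.0.2.2.6.0-2800.jar,"
    "file:///usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar"
)

for uri in aux_jars_path.split(","):
    # Drop the file:// scheme to get a local filesystem path.
    path = uri[len("file://"):] if uri.startswith("file://") else uri
    status = "OK" if os.path.isfile(path) else "MISSING"
    print("%-7s %s" % (status, path))
{code}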
The hcatalog jars referenced by hive.aux.jars.path still carry the old version; those jars no longer exist after the upgrade.
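One possible direction, sketched here only as an illustration (the helper name and the digit-based filter are assumptions, not the actual patch): derive the aux jars list from the unversioned symlinks under /usr/hdp/current instead of hard-coding versioned file names, so the value stays valid across a rolling upgrade.

{code}
# Sketch only, assuming the layout shown above; this is not the actual Ambari fix.
# Build hive.aux.jars.path from the unversioned hcatalog jar symlinks under
# /usr/hdp/current, so the configured value does not embed a stack version that
# disappears on the next rolling upgrade.
import glob

HCATALOG_DIR = "/usr/hdp/current/hive-webhcat/share/hcatalog"

def build_aux_jars_path(hcatalog_dir=HCATALOG_DIR):
    # Keep only the unversioned names (no release digits in the file name),
    # e.g. hive-hcatalog-core.jar rather than hive-hcatalog-core-0.14.0.2.2.6.0-2800.jar.
    jars = sorted(
        jar for jar in glob.glob(hcatalog_dir + "/hive-hcatalog-*.jar")
        if not any(ch.isdigit() for ch in jar.rsplit("/", 1)[-1])
    )
    return ",".join("file://" + jar for jar in jars)

if __name__ == "__main__":
    print(build_aux_jars_path())
{code}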