Subject: Re: CREATE FUNCTION: How to automatically load extra jar file?
From: "Arthur.hk.chan@gmail.com" In-Reply-To: <86385502-DD0D-4ABB-A4A6-4459669BEC8E@hortonworks.com> Date: Sun, 11 Jan 2015 17:18:15 +0800 Cc: "Arthur.hk.chan@gmail.com" , user@hive.apache.org Message-Id: <16825705-0771-4F8B-A6F6-D6120B01E6A2@gmail.com> References: <1C574674-06CF-4B13-B34B-E89AA0605E05@gmail.com> <5dc63062.2bf48.14a99de4cab.Coremail.vic0777@163.com>, <34f03ac833fd4d868f33b6527844fe12@MBX1.impetus.co.in> <7A6866FB-5ECB-4A25-BA9F-E01938740BE8@gmail.com> <68B91EB2-929F-4A91-80A6-557B3D454468@gmail.com> <86385502-DD0D-4ABB-A4A6-4459669BEC8E@hortonworks.com> To: Jason Dere X-Mailer: Apple Mail (2.1878.6) X-Virus-Checked: Checked by ClamAV on apache.org --Apple-Mail=_872DDEE2-253D-4C72-8419-C5CB0213054F Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset=utf-8 Hi, 2015-01-04 08:57:12,154 ERROR [main]: DataNucleus.Datastore = (Log4JLogger.java:error(115)) - An exception was thrown while = adding/validating class(es) : Specified key was too long; max key length = is 767 bytes com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified key = was too long; max key length is 767 bytes at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native = Method) at = sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAcc= essorImpl.java:57) at = sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstr= uctorAccessorImpl.java:45) at = java.lang.reflect.Constructor.newInstance(Constructor.java:526) at com.mysql.jdbc.Util.handleNewInstance(Util.java:408) at com.mysql.jdbc.Util.getInstance(Util.java:383) at = com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1062) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4226) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4158) at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2615) at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2776) at = com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2834) at = com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2783) at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:908) at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:788) at = com.jolbox.bonecp.StatementHandle.execute(StatementHandle.java:254) at = org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatement(Abstra= ctTable.java:760) at = org.datanucleus.store.rdbms.table.TableImpl.createIndices(TableImpl.java:6= 48) at = org.datanucleus.store.rdbms.table.TableImpl.validateIndices(TableImpl.java= :593) at = org.datanucleus.store.rdbms.table.TableImpl.validateConstraints(TableImpl.= java:390) at = org.datanucleus.store.rdbms.table.ClassTable.validateConstraints(ClassTabl= e.java:3463) at = org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesVali= dation(RDBMSStoreManager.java:3464) at = org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.addClassTablesAnd= Validate(RDBMSStoreManager.java:3190) at = org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreMan= ager.java:2841) at = org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSche= maTransaction.java:122) at = org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager= .java:1605) at = org.datanucleus.store.AbstractStoreManager.addClass(AbstractStoreManager.j= ava:954) at = org.datanucleus.store.rdbms.RDBMSStoreManager.getDatastoreClass(RDBMSStore= Manager.java:679) at = org.datanucleus.store.rdbms.query.RDBMSQueryUtils.getStatementForCandidate= s(RDBMSQueryUtils.java:408) at = 
Hi,


2015-01-04 08:57:12,154 ERROR [main]: DataNucleus.Datastore (Log4JLogger.java:error(115)) - An exception was thrown while adding/validating class(es) : Specified key was too long; max key length is 767 bytes
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified key was too long; max key length is 767 bytes
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at com.mysql.jdbc.Util.handleNewInstance(Util.java:408)
    at com.mysql.jdbc.Util.getInstance(Util.java:383)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1062)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4226)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4158)
    at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2615)
    at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2776)
    at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2834)
    at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2783)
    at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:908)
    at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:788)
    at com.jolbox.bonecp.StatementHandle.execute(StatementHandle.java:254)
    at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatement(AbstractTable.java:760)
    at org.datanucleus.store.rdbms.table.TableImpl.createIndices(TableImpl.java:648)
    at org.datanucleus.store.rdbms.table.TableImpl.validateIndices(TableImpl.java:593)
    at org.datanucleus.store.rdbms.table.TableImpl.validateConstraints(TableImpl.java:390)
    at org.datanucleus.store.rdbms.table.ClassTable.validateConstraints(ClassTable.java:3463)
    at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:3464)
    at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.addClassTablesAndValidate(RDBMSStoreManager.java:3190)
    at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2841)
    at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:122)
    at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:1605)
    at org.datanucleus.store.AbstractStoreManager.addClass(AbstractStoreManager.java:954)
    at org.datanucleus.store.rdbms.RDBMSStoreManager.getDatastoreClass(RDBMSStoreManager.java:679)
    at org.datanucleus.store.rdbms.query.RDBMSQueryUtils.getStatementForCandidates(RDBMSQueryUtils.java:408)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.compileQueryFull(JDOQLQuery.java:947)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.compileInternal(JDOQLQuery.java:370)
    at org.datanucleus.store.query.Query.executeQuery(Query.java:1744)
    at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672)
    at org.datanucleus.store.query.Query.execute(Query.java:1654)
    at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:221)
    at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.<init>(MetaStoreDirectSql.java:121)
    at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:252)
    at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:223)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:58)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:497)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:475)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_all_databases(HiveMetaStore.java:1026)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
    at com.sun.proxy.$Proxy10.get_all_databases(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getAllDatabases(HiveMetaStoreClient.java:837)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
    at com.sun.proxy.$Proxy11.getAllDatabases(Unknown Source)
    at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1098)
    at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionNames(FunctionRegistry.java:671)
    at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionNames(FunctionRegistry.java:662)
    at org.apache.hadoop.hive.cli.CliDriver.getCommandCompletor(CliDriver.java:540)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:758)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:686)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
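(Side note on the log above: the DataNucleus "Specified key was too long; max key length is 767 bytes" error usually means the MySQL metastore schema was created with a multi-byte default character set. A common fix, assuming the metastore database is named "metastore" and has been backed up first, is:

mysql> ALTER DATABASE metastore CHARACTER SET latin1;

after which Hive can validate/create its metastore tables without hitting the 767-byte index limit.)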

Regards
Arthur



On 7 Jan, 2015, at 7:22 am, Jason Dere <jdere@hortonworks.com> wrote:

Does your hive.log contain any lines with "adding libjars:"?

Also search for any lines containing "_resources"; I would like to see the results of both searches.

For example, mine is showing the following line:

2015-01-06 14:53:28,115 INFO mr.ExecDriver (ExecDriver.java:execute(307)) - adding libjars: file:///tmp/d0ed1585-d9e6-4944-b985-225351574de0_resources/spatial-sdk-hive-1.0.3-SNAPSHOT.jar,file:///tmp/d0ed1585-d9e6-4944-b985-225351574de0_resources/esri-geometry-api.jar
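A quick way to run both searches from a shell, as a sketch (the log path assumes the default hive.log.dir of /tmp/<user>; adjust to your hive-log4j settings):

grep "adding libjars:" /tmp/$USER/hive.log
grep "_resources" /tmp/$USER/hive.log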

I wonder if your libjars setting for the map/reduce job is somehow getting sent without the "file:///", which might be causing Hadoop to interpret the path as an HDFS path rather than a local path.
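This is easy to see with the Hadoop shell: a path without a scheme is qualified against fs.defaultFS, while an explicit file:/// scheme keeps it local (the jar path here is made up for illustration):

hadoop fs -ls /tmp/x_resources/udf.jar            # no scheme: resolved against fs.defaultFS, i.e. HDFS
hadoop fs -ls file:///tmp/x_resources/udf.jar     # explicit scheme: the local file system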

On Jan 6, 2015, at 1:11 AM, Arthur.hk.chan <arthur.hk.chan@gmail.com> wrote:

Hi,

my hadoop's core-site.xml contains the following about tmp:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/hadoop_data/hadoop_data/tmp</value>
</property>



my hive-default.xml contains the following about tmp:

<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive-${user.name}</value>
  <description>Scratch space for Hive jobs</description>
</property>

<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/tmp/${user.name}</value>
  <description>Local scratch space for Hive jobs</description>
</property>



Is this related to a configuration issue, or is it a bug?

Please help!

Regards
Arthur


On 6 Jan, 2015, at 3:45 am, Jason Dere <jdere@hortonworks.com> wrote:

During query compilation Hive needs to instantiate the UDF class, so the JAR must be resolvable by the class loader; the JAR is therefore copied to a local temp location for use.
During map/reduce jobs the local JAR (like all jars added with the ADD JAR command) should then be added to the distributed cache. It looks like this is where the issue is occurring, but based on the path in the error message I suspect that either Hive or Hadoop is mistaking what should be a local path for an HDFS path.
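One way to verify what actually got submitted, as a sketch: the job conf written under the Hive scratch dir (a path like the -local-10003/jobconf.xml seen later in this thread) records the distributed-cache jars under the tmpjars property; "<query-id>" below is a placeholder, and if the XML is written on a single line grep -o may be needed instead:

grep -A1 '<name>tmpjars</name>' /tmp/hadoop/hive_<query-id>/-local-10003/jobconf.xml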

On Jan 4, 2015, at 10:23 AM, Arthur.hk.chan@gmail.com <arthur.hk.chan@gmail.com> wrote:

Hi,

A question: Why does it need to copy the JAR file to the temp folder? Why couldn't it use the file defined in USING JAR 'hdfs://hadoop/hive/nexr-hive-udf-0.2-SNAPSHOT.jar' directly?

Regards
Arthur


On 4 Jan, 2015, at 7:48 am, Arthur.hk.chan@gmail.com <arthur.hk.chan@gmail.com> wrote:

Hi,


A1: Are all of these commands (Steps 1-5) from the same Hive CLI prompt?
Yes

A2: Would you be able to check if such a file exists with the same path, on the local file system?
The file does not exist on the local file system.


Is there a way to set another "tmp" folder for Hive, or any suggestions to fix this issue?
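If the goal is just to move those temp locations, the scratch directories can be repointed in hive-site.xml; later Hive releases also expose hive.downloaded.resources.dir for the *_resources download directory (worth checking whether a 0.13.1 build has it). A sketch with illustrative paths:

<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/hadoop_data/hive_tmp/${user.name}</value>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/hadoop_data/hive_tmp/${user.name}_resources</value>
</property>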

Thanks!!

Arthur
 

On 3 Jan, 2015, at 4:12 am, Jason Dere <jdere@hortonworks.com> wrote:

The point of USING JAR as part of the CREATE FUNCTION statement is to avoid having to do the ADD JAR/aux-path steps to get the UDF to work.

Are all of these commands (Steps 1-5) from the same Hive CLI prompt?

hive> CREATE FUNCTION sysdate AS 'com.nexr.platform.hive.udf.UDFSysDate' using JAR 'hdfs://hadoop/hive/nexr-hive-udf-0.2-SNAPSHOT.jar';
converting to local hdfs://hadoop/hive/nexr-hive-udf-0.2-SNAPSHOT.jar
Added /tmp/69700312-684c-45d3-b27a-0732bb268ddc_resources/nexr-hive-udf-0.2-SNAPSHOT.jar to class path
Added resource: /tmp/69700312-684c-45d3-b27a-0732bb268ddc_resources/nexr-hive-udf-0.2-SNAPSHOT.jar
OK
One note: /tmp/69700312-684c-45d3-b27a-0732bb268ddc_resources/nexr-hive-udf-0.2-SNAPSHOT.jar here should actually be on the local file system, not on HDFS where you were checking in Step 5. During CREATE FUNCTION/query compilation, Hive makes a copy of the source JAR (hdfs://hadoop/hive/nexr-hive-udf-0.2-SNAPSHOT.jar) in a temp location on the local file system, where it is used by that Hive session.

The location mentioned in the FileNotFoundException (hdfs://tmp/5c658d17-dbeb-4b84-ae8d-ba936404c8bc_resources/nexr-hive-udf-0.2-SNAPSHOT.jar) has a different path than the local copy mentioned during CREATE FUNCTION (/tmp/69700312-684c-45d3-b27a-0732bb268ddc_resources/nexr-hive-udf-0.2-SNAPSHOT.jar). I'm not really sure why it is an HDFS path here either; I'm not too familiar with what goes on during the job submission process. But the fact that this HDFS path has the same naming convention as the directory used for downloading resources locally (***_resources) looks a little fishy to me. Would you be able to check if such a file exists with the same path, on the local file system?
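To check both places with the same path (paths copied from the messages above):

ls -l /tmp/5c658d17-dbeb-4b84-ae8d-ba936404c8bc_resources/nexr-hive-udf-0.2-SNAPSHOT.jar          # local file system
hadoop fs -ls /tmp/5c658d17-dbeb-4b84-ae8d-ba936404c8bc_resources/nexr-hive-udf-0.2-SNAPSHOT.jar  # HDFS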


On Dec 31, 2014, at 5:22 AM, Nirmal Kumar <nirmal.kumar@impetus.co.in> wrote:

Important: HiveQL's ADD JAR operation does not work with HiveServer2 and the Beeline client when Beeline runs on a different host. As an alternative to ADD JAR, Hive auxiliary path functionality should be used as described below.
Refer:
http://www.cloudera.com/content/cloudera/en/documentation/cloudera-manager/v4-8-0/Cloudera-Manager-Managing-Clusters/cmmc_hive_udf.html

Thanks,
-Nirmal
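The aux-path approach usually looks something like this (the directory is illustrative); jars placed there are on the classpath of every session without ADD JAR:

hive --auxpath /usr/local/hive/auxlib/nexr-hive-udf-0.2-SNAPSHOT.jar

For HiveServer2, the equivalent is typically setting HIVE_AUX_JARS_PATH (or hive.aux.jars.path) before starting the server.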

From: Arthur.hk.chan@gmail.com <arthur.hk.chan@gmail.com>
Sent: Tuesday, December 30, 2014 9:54 PM
To: vic0777
Cc: Arthur.hk.chan@gmail.com; = user@hive.apache.org
Subject: Re: CREATE FUNCTION: How to automatically load extra jar file?
 
Thank you.

Will this work for hiveserver2?

Arthur

On 30 Dec, 2014, at 2:24 pm, vic0777 <vic0777@163.com> wrote:

You can put it into $HOME/.hiverc like this: ADD JAR full_path_of_the_jar. Then, the file is automatically loaded when Hive is started.

Wantao
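Concretely, that is a one-line file in the home directory of whoever runs the CLI (jar path illustrative):

$ cat ~/.hiverc
ADD JAR /usr/local/hive/lib/nexr-hive-udf-0.2-SNAPSHOT.jar;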


At 2014-12-30 11:01:06, "Arthur.hk.chan@gmail.com" <arthur.hk.chan@gmail.com> wrote:
Hi,

I am using Hive 0.13.1 on Hadoop 2.4.1 and need to automatically load an extra JAR file into Hive for a UDF; below are my steps to create the UDF function. I have tried the following but still have no luck getting through.

Please help!!

Regards
Arthur

Step 1: (make sure the jar is in HDFS)
hive> dfs -ls hdfs://hadoop/hive/nexr-hive-udf-0.2-SNAPSHOT.jar;
-rw-r--r--   3 hadoop hadoop      57388 2014-12-30 10:02 hdfs://hadoop/hive/nexr-hive-udf-0.2-SNAPSHOT.jar

Step 2: (drop the function if it exists)
hive> drop function sysdate;
OK
Time taken: 0.013 seconds

Step 3: (create the function using the jar in HDFS)
hive> CREATE FUNCTION sysdate AS 'com.nexr.platform.hive.udf.UDFSysDate' using JAR 'hdfs://hadoop/hive/nexr-hive-udf-0.2-SNAPSHOT.jar';
converting to local hdfs://hadoop/hive/nexr-hive-udf-0.2-SNAPSHOT.jar
Added /tmp/69700312-684c-45d3-b27a-0732bb268ddc_resources/nexr-hive-udf-0.2-SNAPSHOT.jar to class path
Added resource: /tmp/69700312-684c-45d3-b27a-0732bb268ddc_resources/nexr-hive-udf-0.2-SNAPSHOT.jar
OK
Time taken: 0.034 seconds

Step 4: (test)
hive> select sysdate();
Automatically selecting local only mode for query
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/hadoop/hbase-0.98.5-hadoop2/lib/phoenix-4.1.0-client-hadoop2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
14/12/30 10:17:06 WARN conf.Configuration: file:/tmp/hadoop/hive_2014-12-30_10-17-04_514_2721050094719255719-1/-local-10003/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/12/30 10:17:06 WARN conf.Configuration: file:/tmp/hadoop/hive_2014-12-30_10-17-04_514_2721050094719255719-1/-local-10003/jobconf.xml:an attempt to override final parameter: yarn.nodemanager.loacl-dirs;  Ignoring.
14/12/30 10:17:06 WARN conf.Configuration: file:/tmp/hadoop/hive_2014-12-30_10-17-04_514_2721050094719255719-1/-local-10003/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
Execution log at: /tmp/hadoop/hadoop_20141230101717_282ec475-8621-40fa-8178-a7927d81540b.log
java.io.FileNotFoundException: File does not exist: hdfs://tmp/5c658d17-dbeb-4b84-ae8d-ba936404c8bc_resources/nexr-hive-udf-0.2-SNAPSHOT.jar
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1128)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:99)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:420)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.main(ExecDriver.java:740)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Job Submission failed with exception 'java.io.FileNotFoundException(File does not exist: hdfs://tmp/5c658d17-dbeb-4b84-ae8d-ba936404c8bc_resources/nexr-hive-udf-0.2-SNAPSHOT.jar)'
Execution failed with exit status: 1
Obtaining error information
Task failed!
Task ID:
  Stage-1
Logs:
/tmp/hadoop/hive.log
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

Step 5: (check the file)
hive> dfs -ls /tmp/69700312-684c-45d3-b27a-0732bb268ddc_resources/nexr-hive-udf-0.2-SNAPSHOT.jar;
ls: `/tmp/69700312-684c-45d3-b27a-0732bb268ddc_resources/nexr-hive-udf-0.2-SNAPSHOT.jar': No such file or directory
Command failed with exit code = 1
Query returned non-zero code: 1, cause: null