Subject: Getting ERROR 2118: Permission denied while running a Pig script using HCatalog
From: Kumar Jayapal
To: Hue-Users, user@hadoop.apache.org
Date: Tue, 29 Dec 2015 16:08:54 -0800

Hi,

When I run this simple Pig script from the Pig editor in Hue, I get a permission denied error. I can execute queries in Hive as the same user, so any idea why? We are using Sentry for authorisation.

Here is my Pig script:
LOAD_TBL_A = LOAD 'sandbox.suppliers' USING org.apache.hive.hcatalog.pig.HCatLoader();
STORE LOAD_TBL_A INTO '/tmp/pig_testing001/';

And here is the log from the run:

Apache Pig version 0.12.0-cdh5.4.5 (rexported) compiled Aug 12 2015, 14:17:24
Run pig script using PigRunner.run() for Pig version 0.8+
2015-12-30 00:00:42,435 [uber-SubtaskRunner] INFO org.apache.pig.Main - Apache Pig version 0.12.0-cdh5.4.5 (rexported) compiled Aug 12 2015, 14:17:24
2015-12-30 00:00:42,437 [uber-SubtaskRunner] INFO org.apache.pig.Main - Logging error messages to: /mnt/drive08-sdj/nm/usercache/edhadmsvc/appcache/application_1449847448721_0473/container_e64_1449847448721_0473_01_000001/pig-job_1449847448721_0473.log
2015-12-30 00:00:42,487 [uber-SubtaskRunner] INFO org.apache.pig.impl.util.Utils - Default bootup file /home/edhadmsvc/.pigbootup not found
2015-12-30 00:00:42,617 [uber-SubtaskRunner] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2015-12-30 00:00:42,617 [uber-SubtaskRunner] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2015-12-30 00:00:42,617 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://nameservice1
2015-12-30 00:00:42,623 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to map-reduce job tracker at: yarnRM
2015-12-30 00:00:42,627 [uber-SubtaskRunner] WARN org.apache.pig.PigServer - Empty string specified for jar path
2015-12-30 00:00:43,320 [uber-SubtaskRunner] INFO hive.metastore - Trying to connect to metastore with URI thrift://hmscdh01094p001.corp.costco.com:9083
2015-12-30 00:00:43,387 [uber-SubtaskRunner] INFO hive.metastore - Opened a connection to metastore, current connections: 1
2015-12-30 00:00:43,388 [uber-SubtaskRunner] INFO hive.metastore - Connected to metastore.
2015-12-30 00:00:43,658 [uber-SubtaskRunner] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
2015-12-30 00:00:43,750 [uber-SubtaskRunner] INFO org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, DuplicateForEachColumnRewrite, GroupByConstParallelSetter, ImplicitSplitInserter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, NewPartitionFilterOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter], RULES_DISABLED=[FilterLogicExpressionSimplifier, PartitionFilterOptimizer]}
2015-12-30 00:00:43,769 [uber-SubtaskRunner] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2015-12-30 00:00:43,772 [uber-SubtaskRunner] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.textoutputformat.separator is deprecated. Instead, use mapreduce.output.textoutputformat.separator
2015-12-30 00:00:43,856 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2015-12-30 00:00:43,921 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2015-12-30 00:00:43,921 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2015-12-30 00:00:44,006 [uber-SubtaskRunner] INFO org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
2015-12-30 00:00:44,044 [uber-SubtaskRunner] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.reduce.markreset.buffer.percent is deprecated. Instead, use mapreduce.reduce.markreset.buffer.percent
2015-12-30 00:00:44,044 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2015-12-30 00:00:44,044 [uber-SubtaskRunner] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.output.compress is deprecated. Instead, use mapreduce.output.fileoutputformat.compress
2015-12-30 00:00:44,266 [uber-SubtaskRunner] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2015-12-30 00:00:44,267 [uber-SubtaskRunner] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
2015-12-30 00:00:44,267 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - creating jar file Job4443028594885224634.jar
2015-12-30 00:00:47,524 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - jar file Job4443028594885224634.jar created
2015-12-30 00:00:47,524 [uber-SubtaskRunner] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.jar is deprecated. Instead, use mapreduce.job.jar
2015-12-30 00:00:47,550 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
2015-12-30 00:00:47,617 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
2015-12-30 00:00:47,618 [uber-SubtaskRunner] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker.http.address is deprecated. Instead, use mapreduce.jobtracker.http.address
2015-12-30 00:00:47,667 [JobControl] INFO org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider - Failing over to rm1199
2015-12-30 00:00:47,929 [communication thread] INFO org.apache.hadoop.mapred.TaskAttemptListenerImpl - Progress of TaskAttempt attempt_1449847448721_0473_m_000000_0 is : 1.0
2015-12-30 00:00:48,076 [JobControl] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
2015-12-30 00:00:48,103 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - Cleaning up the staging area /user/edhadmsvc/.staging/job_1449847448721_0474
2015-12-30 00:00:48,112 [JobControl] WARN org.apache.hadoop.security.UserGroupInformation - PriviledgedActionException as:edhadmsvc (auth:SIMPLE) cause:org.apache.pig.backend.executionengine.ExecException: ERROR 2118: Permission denied: user=edhadmsvc, access=EXECUTE, inode="/user/hive/warehouse":hive:hive:drwxrwx---
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:180)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:137)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6599)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4229)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:881)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getFileInfo(AuthorizationProviderProxyClientProtocol.java:526)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:822)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
2015-12-30 00:00:48,113 [JobControl] INFO org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob - PigLatin:script.pig got an error while submitting
org.apache.pig.backend.executionengine.ExecException: ERROR 2118: Permission denied: user=edhadmsvc, access=EXECUTE, inode="/user/hive/warehouse":hive:hive:drwxrwx---
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:180)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:137)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6599)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4229)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:881)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getFileInfo(AuthorizationProviderProxyClientProtocol.java:526)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:822)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:288)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:597)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:614)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1306)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1303)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1303)
    at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
    at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
    at java.lang.Thread.run(Thread.java:745)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:270)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=edhadmsvc, access=EXECUTE, inode="/user/hive/warehouse":hive:hive:drwxrwx---
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:180)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:137)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6599)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4229)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:881)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getFileInfo(AuthorizationProviderProxyClientProtocol.java:526)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:822)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1984)
    at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1128)
    at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1124)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1124)
    at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
    at org.apache.hadoop.fs.Globber.glob(Globber.java:252)
    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1644)
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:257)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
    at org.apache.hive.hcatalog.mapreduce.HCatBaseInputFormat.getSplits(HCatBaseInputFormat.java:157)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:274)
    ... 18 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=edhadmsvc, access=EXECUTE, inode="/user/hive/warehouse":hive:hive:drwxrwx---
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:180)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:137)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6599)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4229)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:881)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getFileInfo(AuthorizationProviderProxyClientProtocol.java:526)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:822)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
    at org.apache.hadoop.ipc.Client.call(Client.java:1468)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)
    at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy15.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1982)
    ... 30 more
2015-12-30 00:00:48,119 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_1449847448721_0474
2015-12-30 00:00:48,120 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases LOAD_TBL_A
2015-12-30 00:00:48,120 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: LOAD_TBL_A[1,13] C: R:
2015-12-30 00:00:48,123 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2015-12-30 00:00:53,133 [uber-SubtaskRunner] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2015-12-30 00:00:53,133 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_1449847448721_0474 has failed! Stop running all dependent jobs
2015-12-30 00:00:53,133 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2015-12-30 00:00:53,202 [uber-SubtaskRunner] INFO org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider - Failing over to rm1199
2015-12-30 00:00:53,207 [uber-SubtaskRunner] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Could not get Job info from RM for job job_1449847448721_0474. Redirecting to job history server.
2015-12-30 00:00:53,245 [uber-SubtaskRunner] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Could not get Job info from RM for job job_1449847448721_0474. Redirecting to job history server.
2015-12-30 00:00:53,260 [uber-SubtaskRunner] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
2015-12-30 00:00:53,262 [uber-SubtaskRunner] INFO org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:

HadoopVersion    PigVersion    UserId    StartedAt    FinishedAt    Features
2.6.0-cdh5.4.5    0.12.0-cdh5.4.5    edhadmsvc    2015-12-30 00:00:44    2015-12-30 00:00:53    UNKNOWN

Failed!
Failed Jobs:
JobId    Alias    Feature    Message    Outputs
job_1449847448721_0474    LOAD_TBL_A    MAP_ONLY    Message: org.apache.pig.backend.executionengine.ExecException: ERROR 2118: Permission denied: user=edhadmsvc, access=EXECUTE, inode="/user/hive/warehouse":hive:hive:drwxrwx---
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:180)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:137)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6599)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4229)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:881)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getFileInfo(AuthorizationProviderProxyClientProtocol.java:526)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:822)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:288)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:597)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:614)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1306)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1303)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1303)
    at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
    at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
    at java.lang.Thread.run(Thread.java:745)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:270)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=edhadmsvc, access=EXECUTE, inode="/user/hive/warehouse":hive:hive:drwxrwx---
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:180)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:137)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6599)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4229)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:881)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getFileInfo(AuthorizationProviderProxyClientProtocol.java:526)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:822)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1984)
    at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1128)
    at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1124)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1124)
    at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
    at org.apache.hadoop.fs.Globber.glob(Globber.java:252)
    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1644)
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:257)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
    at org.apache.hive.hcatalog.mapreduce.HCatBaseInputFormat.getSplits(HCatBaseInputFormat.java:157)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:274)
    ... 18 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=edhadmsvc, access=EXECUTE, inode="/user/hive/warehouse":hive:hive:drwxrwx---
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:180)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:137)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6599)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4229)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:881)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getFileInfo(AuthorizationProviderProxyClientProtocol.java:526)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:822)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
    at org.apache.hadoop.ipc.Client.call(Client.java:1468)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)
    at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy15.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1982)
    ... 30 more
    /tmp/pig_testing001,

Input(s):
Failed to read data from "sandbox.suppliers"

Output(s):
Failed to produce result in "/tmp/pig_testing001"

Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0

Job DAG:
job_1449847448721_0474

2015-12-30 00:00:53,262 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2015-12-30 00:00:53,269 [uber-SubtaskRunner] ERROR org.apache.pig.tools.grunt.GruntParser - ERROR 2244: Job failed, hadoop does not return any error message

Hadoop Job IDs executed by Pig: job_1449847448721_0474

Thanks
Jay
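P.S. For reference, what fails here is the HDFS EXECUTE (traverse) check on /user/hive/warehouse: with mode drwxrwx--- and owner hive:hive, only the hive user and members of the hive group can descend into the warehouse, so the HCatLoader split computation cannot stat the table directory as edhadmsvc. The snippet below is only a rough sketch of how one could inspect that directory's owner, group, and mode as the job user through the Hadoop FileSystem API; it assumes a SIMPLE-auth cluster (as the log shows), and the class name is made up for illustration.

// Illustrative sketch, not part of the original report: print the owner, group
// and permission bits of /user/hive/warehouse as seen by the job user.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class WarehousePermCheck {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml/hdfs-site.xml from the classpath.
        Configuration conf = new Configuration();
        Path warehouse = new Path("/user/hive/warehouse");

        // Run the check as the job's user from the log (edhadmsvc, auth:SIMPLE).
        UserGroupInformation ugi = UserGroupInformation.createRemoteUser("edhadmsvc");
        ugi.doAs((java.security.PrivilegedExceptionAction<Void>) () -> {
            FileSystem fs = FileSystem.get(conf);
            FileStatus st = fs.getFileStatus(warehouse);
            // drwxrwx--- grants the traverse (EXECUTE) bit only to the owner and
            // group, which is exactly the check the NameNode rejects in the log.
            System.out.println(st.getPermission() + " " + st.getOwner() + ":" + st.getGroup());
            return null;
        });
    }
}

Whether the right fix is adding edhadmsvc to the hive group or relying on Sentry's HDFS permission synchronization depends on how the cluster is set up; the sketch above only confirms what the NameNode is checking.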
Hi,

When I run this simple p= ig script from pig editor in hue I get permission denied error. I can execu= te queries in hive as the same user any idea why?

W= e are using sentry for authorisation.=C2=A0


Her= e is my pig script.


LOAD_TBL_A =3D = LOAD 'sandbox.suppliers' USING org.apache.hive.hcatalog.pig.HCatLoa= der();

STORE LOAD_TBL_A INTO '/tmp/pig_testing= 001/';




Apache Pig version 0.12.0-=
cdh5.4.5 (rexported) 
compiled Aug 12 2015, 14:17:24

Run pig scri= pt using PigRunner.run() for Pig version 0.8+
2015-12-30 00:00:42,435 [u= ber-SubtaskRunner] INFO org.apache.pig.Main - Apache Pig version 0.12.0-c= dh5.4.5 (rexported) compiled Aug 12 2015, 14:17:24
2015-12-30 00:00:42,4= 37 [uber-SubtaskRunner] INFO org.apache.pig.Main - Logging error messages= to: /mnt/drive08-sdj/nm/usercache/edhadmsvc/appcache/application= _1449847448721_0473/container_e64_1449847448721_0473_01_000001/pig-job_1449= 847448721_0473.log
2015-12-30 00:00:42,487 [uber-SubtaskRunner] INFO= org.apache.pig.impl.util.Utils - Default bootup file /home/edhadmsvc/.pigbootup not found
2015-12-30 00:00:= 42,617 [uber-SubtaskRunner] INFO org.apache.hadoop.conf.Configuration.depr= ecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtrac= ker.address
2015-12-30 00:00:42,617 [uber-SubtaskRunner] INFO org.apach= e.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2015-12-= 30 00:00:42,617 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.ex= ecutionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://nameservice1
2015-12-30 00:00:42,623 [uber-SubtaskRunner]= INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Co= nnecting to map-reduce job tracker at: yarnRM
2015-12-30 00:00:42,627 [u= ber-SubtaskRunner] WARN org.apache.pig.PigServer - Empty string specified= for jar path
2015-12-30 00:00:43,320 [uber-SubtaskRunner] INFO hive.me= tastore - Trying to connect to metastore with URI thrift://hmscdh01094p001.corp.costco.com:90= 83
2015-12-30 00:00:43,387 [uber-SubtaskRunner] INFO hive.metastore= - Opened a connection to metastore, current connections: 1
2015-12-30 = 00:00:43,388 [uber-SubtaskRunner] INFO hive.metastore - Connected to meta= store.
2015-12-30 00:00:43,658 [uber-SubtaskRunner] INFO org.apache.pig= .tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
= 2015-12-30 00:00:43,750 [uber-SubtaskRunner] INFO org.apache.pig.newplan.l= ogical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=3D[AddForEach, Colu= mnMapKeyPrune, DuplicateForEachColumnRewrite, GroupByConstParallelSetter, I= mplicitSplitInserter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, Me= rgeForEach, NewPartitionFilterOptimizer, PushDownForEachFlatten, PushUpFilt= er, SplitFilter, StreamTypeCastInserter], RULES_DISABLED=3D[FilterLogicExpr= essionSimplifier, PartitionFilterOptimizer]}
2015-12-30 00:00:43,769 [ub= er-SubtaskRunner] INFO org.apache.hadoop.conf.Configuration.deprecation -= fs.default.name is deprecated. Inst= ead, use fs.defaultFS
2015-12-30 00:00:43,772 [uber-SubtaskRunner] INFO = org.apache.hadoop.conf.Configuration.deprecation - mapred.textoutputforma= t.separator is deprecated. Instead, use mapreduce.output.textoutputformat.s= eparator
2015-12-30 00:00:43,856 [uber-SubtaskRunner] INFO org.apache.p= ig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concate= nation threshold: 100 optimistic? false
2015-12-30 00:00:43,921 [uber-Su= btaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceL= ayer.MultiQueryOptimizer - MR plan size before optimization: 1
2015-12-= 30 00:00:43,921 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.ex= ecutionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after opti= mization: 1
2015-12-30 00:00:44,006 [uber-SubtaskRunner] INFO org.apach= e.pig.tools.pigstats.ScriptState - Pig script settings are added to the jo= b
2015-12-30 00:00:44,044 [uber-SubtaskRunner] INFO org.apache.hadoop.c= onf.Configuration.deprecation - mapred.job.reduce.markreset.buffer.percent= is deprecated. Instead, use mapreduce.reduce.markreset.buffer.percent
2= 015-12-30 00:00:44,044 [uber-SubtaskRunner] INFO org.apache.pig.backend.ha= doop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce= .markreset.buffer.percent is not set, set to default 0.3
2015-12-30 00:0= 0:44,044 [uber-SubtaskRunner] INFO org.apache.hadoop.conf.Configuration.de= precation - mapred.output.compress is deprecated. Instead, use mapreduce.o= utput.fileoutputformat.compress
2015-12-30 00:00:44,266 [uber-SubtaskRun= ner] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.d= efaultFS
2015-12-30 00:00:44,267 [uber-SubtaskRunner] INFO org.apache.h= adoop.conf.Configuration.deprecation - m= apred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
2015-12-30 00:00:44,267 [u= ber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapR= educeLayer.JobControlCompiler - creating jar file Job4443028594885224634.j= ar
2015-12-30 00:00:47,524 [uber-SubtaskRunner] INFO org.apache.pig.bac= kend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - jar file J= ob4443028594885224634.jar created
2015-12-30 00:00:47,524 [uber-SubtaskR= unner] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.jar= is deprecated. Instead, use mapreduce.job.jar
2015-12-30 00:00:47,550 [= uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.map= ReduceLayer.JobControlCompiler - Setting up single store job
2015-12-30= 00:00:47,617 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.exec= utionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting= for submission.
2015-12-30 00:00:47,618 [uber-SubtaskRunner] INFO org.= apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker.http.add= ress is deprecated. Instead, use mapreduce.jobtracker.http.address
2015-= 12-30 00:00:47,667 [JobControl] INFO org.apache.hadoop.yarn.client.Configu= redRMFailoverProxyProvider - Failing over to rm1199
2015-12-30 00:00:47= ,929 [communication thread] INFO org.apache.hadoop.mapred.TaskAttemptListe= nerImpl - Progress of TaskAttempt attempt_1449847448721_0473_m_000000_0 is= : 1.0
2015-12-30 00:00:48,076 [JobControl] INFO org.apache.hadoop.conf= .Configuration.deprecation - mapred.input.dir is deprecated. Instead, use = mapreduce.input.fileinputformat.inputdir
2015-12-30 00:00:48,103 [JobCon= trol] INFO org.apache.hadoop.mapreduce.JobSubmitter - Cleaning up the sta= ging area job_1449847448721_0= 474" target=3D"_blank">/user/edhadmsvc/.staging/job_1449847448721_0474
2015-12-30 00:00:48,1= 12 [JobControl] WARN org.apache.hadoop.security.UserGroupInformation - Pr= iviledgedActionException as:edhadmsvc (auth:SIMPLE) cause:org.apache.pig.ba= ckend.executionengine.ExecException: ERROR 2118: Permission denied: user=3D= edhadmsvc, access=3DEXECUTE, inode=3D"/user= /hive/warehouse":hive:hive:drwxrwx---
at org.apache.hadoop.hdf= s.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAut= horizationProvider.java:257)
at org.apache.hadoop.hdfs.server.namenode.= DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238) at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.c= heckTraverse(DefaultAuthorizationProvider.java:180)
at org.apache.hadoo= p.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(Default= AuthorizationProvider.java:137)
at org.apache.hadoop.hdfs.server.nameno= de.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
at= org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSName= system.java:6599)
at org.apache.hadoop.hdfs.server.namenode.FSNamesyste= m.getFileInfo(FSNamesystem.java:4229)
at org.apache.hadoop.hdfs.server.= namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:881)
at o= rg.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProto= col.getFileInfo(AuthorizationProviderProxyClientProtocol.java:526)
at o= rg.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslator= PB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:822)
a= t org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$Client= NamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call= (ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(R= PC.java:1060)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java= :2044)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)<= br> at java.security.AccessController.doPrivileged(Native Method)
at ja= vax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.s= ecurity.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at or= g.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)

2015-12-30 = 00:00:48,113 [JobControl] INFO org.apache.hadoop.mapreduce.lib.jobcontrol.= ControlledJob - PigLatin:script.pig got an error while submitting
org.= apache.pig.backend.executionengine.ExecException: ERROR 2118: Permission de= nied: user=3Dedhadmsvc, access=3DEXECUTE, inode=3D"/user/hive/warehouse":hive:hive:drwxrwx---
at org.apac= he.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermissi= on(DefaultAuthorizationProvider.java:257)
at org.apache.hadoop.hdfs.ser= ver.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvide= r.java:238)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizat= ionProvider.checkTraverse(DefaultAuthorizationProvider.java:180)
at org= .apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermi= ssion(DefaultAuthorizationProvider.java:137)
at org.apache.hadoop.hdfs.= server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.jav= a:138)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPerm= ission(FSNamesystem.java:6599)
at org.apache.hadoop.hdfs.server.namenod= e.FSNamesystem.getFileInfo(FSNamesystem.java:4229)
at org.apache.hadoop= .hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:= 881)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderPro= xyClientProtocol.getFileInfo(AuthorizationProviderProxyClientProtocol.java:= 526)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerS= ideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.ja= va:822)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocol= Protos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolPr= otos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRp= cInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$= Server.call(RPC.java:1060)
at org.apache.hadoop.ipc.Server$Handler$1.ru= n(Server.java:2044)
at org.apache.hadoop.ipc.Server$Handler$1.run(Serve= r.java:2040)
at java.security.AccessController.doPrivileged(Native Meth= od)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.ap= ache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:16= 71)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInput= Format.getSplits(PigInputFormat.java:288)
at org.apache.hadoop.mapreduc= e.JobSubmitter.writeNewSplits(JobSubmitter.java:597)
at org.apache.hado= op.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:614)
at org.apa= che.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)<= br> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1306)
at org.apa= che.hadoop.mapreduce.Job$10.run(Job.java:1303)
at java.security.AccessC= ontroller.doPrivileged(Native Method)
at javax.security.auth.Subject.do= As(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation= .doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.mapreduce.Jo= b.submit(Job.java:1303)
at org.apache.hadoop.mapreduce.lib.jobcontrol.C= ontrolledJob.submit(ControlledJob.java:335)
at sun.reflect.NativeMethod= AccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessor= Impl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.Delegating= MethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java= .lang.reflect.Method.invoke(Method.java:606)
at org.apache.pig.backend.= hadoop23.PigJobControl.submit(PigJobControl.java:128)
at org.apache.pig= .backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
at java.lan= g.Thread.run(Thread.java:745)
at org.apache.pig.backend.hadoop.executio= nengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:270)<= br>Caused by: org.apache.hadoop.security.AccessControlException: Permission= denied: user=3Dedhadmsvc, access=3DEXECUTE, inode=3D"/user/hive/warehouse":hive:hive:drwxrwx---
at org.a= pache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermi= ssion(DefaultAuthorizationProvider.java:257)
at org.apache.hadoop.hdfs.= server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProv= ider.java:238)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthori= zationProvider.checkTraverse(DefaultAuthorizationProvider.java:180)
at = org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPe= rmission(DefaultAuthorizationProvider.java:137)
at org.apache.hadoop.hd= fs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.= java:138)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkP= ermission(FSNamesystem.java:6599)
at org.apache.hadoop.hdfs.server.name= node.FSNamesystem.getFileInfo(FSNamesystem.java:4229)
at org.apache.had= oop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.ja= va:881)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProvider= ProxyClientProtocol.getFileInfo(AuthorizationProviderProxyClientProtocol.ja= va:526)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServ= erSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB= .java:822)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProto= colProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtoco= lProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBu= fRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.R= PC$Server.call(RPC.java:1060)
at org.apache.hadoop.ipc.Server$Handler$1= .run(Server.java:2044)
at org.apache.hadoop.ipc.Server$Handler$1.run(Se= rver.java:2040)
at java.security.AccessController.doPrivileged(Native M= ethod)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org= .apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java= :1671)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Meth= od)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeCons= tructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccess= orImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.l= ang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache= .hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)<= br> at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteEx= ception.java:73)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSCli= ent.java:1984)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCal= l(DistributedFileSystem.java:1128)
at org.apache.hadoop.hdfs.Distribute= dFileSystem$18.doCall(DistributedFileSystem.java:1124)
at org.apache.ha= doop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
= at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFi= leSystem.java:1124)
at org.apache.hadoop.fs.Globber.getFileStatus(Globb= er.java:57)
at org.apache.hadoop.fs.Globber.glob(Globber.java:252)
= at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1644)
at = org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInput= Format.java:257)
at org.apache.hadoop.mapred.FileInputFormat.listStatus= (FileInputFormat.java:228)
at org.apache.hadoop.mapred.FileInputFormat.= getSplits(FileInputFormat.java:313)
at org.apache.hive.hcatalog.mapredu= ce.HCatBaseInputFormat.getSplits(HCatBaseInputFormat.java:157)
at org.a= pache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getS= plits(PigInputFormat.java:274)
... 18 more
Caused by: org.apache.had= oop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):= Permission denied: user=3Dedhadmsvc, access=3DEXECUTE, inode=3D"/user/hive/warehouse":hive:hive:drwxrwx--- at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.c= heckFsPermission(DefaultAuthorizationProvider.java:257)
at org.apache.h= adoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthor= izationProvider.java:238)
at org.apache.hadoop.hdfs.server.namenode.Def= aultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:1= 80)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvi= der.checkPermission(DefaultAuthorizationProvider.java:137)
at org.apach= e.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermiss= ionChecker.java:138)
at org.apache.hadoop.hdfs.server.namenode.FSNamesy= stem.checkPermission(FSNamesystem.java:6599)
at org.apache.hadoop.hdfs.= server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4229)
at org= .apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeR= pcServer.java:881)
at org.apache.hadoop.hdfs.server.namenode.Authorizat= ionProviderProxyClientProtocol.getFileInfo(AuthorizationProviderProxyClient= Protocol.java:526)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeP= rotocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideT= ranslatorPB.java:822)
at org.apache.hadoop.hdfs.protocol.proto.ClientNa= menodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientName= nodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Ser= ver$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.h= adoop.ipc.RPC$Server.call(RPC.java:1060)
at org.apache.hadoop.ipc.Serve= r$Handler$1.run(Server.java:2044)
at org.apache.hadoop.ipc.Server$Handl= er$1.run(Server.java:2040)
at java.security.AccessController.doPrivileg= ed(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)=
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInfor= mation.java:1671)
at org.apache.hadoop.ipc.Server$Handler.run(Server.ja= va:2038)

at org.apache.hadoop.ipc.Client.call(Client.java:1468)
= at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.h= adoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
= at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
at org.apache.had= oop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNa= menodeProtocolTranslatorPB.java:752)
at sun.reflect.GeneratedMethodAcce= ssor3.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImp= l.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Met= hod.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocati= onHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.h= adoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:10= 2)
at com.sun.proxy.$Proxy15.getFileInfo(Unknown Source)
at org.apa= che.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1982)
... 30 more<= br>2015-12-30 00:00:48,119 [uber-SubtaskRunner] INFO org.apache.pig.backen= d.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: <= a href=3D"https://huecdh01094p001.corp.costco.com:8888/jobbrowser/jobs/job_= 1449847448721_0474" target=3D"_blank" style=3D"color:rgb(51,139,184);text-d= ecoration:none;outline:0px">job_1449847448721_0474
2015-12-30 00:00:48,120 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases LOAD_TBL_A
2015-12-30 00:00:48,120 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: LOAD_TBL_A[1,13] C: R:
2015-12-30 00:00:48,123 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2015-12-30 00:00:53,133 [uber-SubtaskRunner] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2015-12-30 00:00:53,133 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_1449847448721_0474 has failed! Stop running all dependent jobs
2015-12-30 00:00:53,133 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2015-12-30 00:00:53,202 [uber-SubtaskRunner] INFO org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider - Failing over to rm1199
2015-12-30 00:00:53,207 [uber-SubtaskRunner] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Could not get Job info from RM for job job_1449847448721_0474. Redirecting to job history server.
2015-12-30 00:00:53,245 [uber-SubtaskRunner] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Could not get Job info from RM for job job_1449847448721_0474. Redirecting to job history server.
2015-12-30 00:00:53,260 [uber-SubtaskRunner] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
2015-12-30 00:00:53,262 [uber-SubtaskRunner] INFO org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:

HadoopVersion    PigVersion       UserId     StartedAt            FinishedAt           Features
2.6.0-cdh5.4.5   0.12.0-cdh5.4.5  edhadmsvc  2015-12-30 00:00:44  2015-12-30 00:00:53  UNKNOWN

Failed!

Failed Jobs:
JobId  Alias  Feature  Message  Outputs
job_1449847448721_0474  LOAD_TBL_A  MAP_ONLY  Message: org.apache.pig.backend.executionengine.ExecException: ERROR 2118: Permission denied: user=edhadmsvc, access=EXECUTE, inode="/user/hive/warehouse":hive:hive:drwxrwx---
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:180)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:137)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6599)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4229)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:881)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getFileInfo(AuthorizationProviderProxyClientProtocol.java:526)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:822)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)

at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:288)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:597)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:614)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1306)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1303)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1303)
at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
at java.lang.Thread.run(Thread.java:745)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:270)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=edhadmsvc, access=EXECUTE, inode="/user/hive/warehouse":hive:hive:drwxrwx---
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:180)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:137)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6599)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4229)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:881)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getFileInfo(AuthorizationProviderProxyClientProtocol.java:526)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:822)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1984)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1128)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1124)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1124)
at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
at org.apache.hadoop.fs.Globber.glob(Globber.java:252)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1644)
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:257)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
at org.apache.hive.hcatalog.mapreduce.HCatBaseInputFormat.getSplits(HCatBaseInputFormat.java:157)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:274)
... 18 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=edhadmsvc, access=EXECUTE, inode="/user/hive/warehouse":hive:hive:drwxrwx---
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkTraverse(DefaultAuthorizationProvider.java:180)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:137)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6599)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4229)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:881)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getFileInfo(AuthorizationProviderProxyClientProtocol.java:526)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:822)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)

at org.apache.hadoop.ipc.Client.call(Client.java:1468)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy15.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1982)
... 30 more
/tmp/pig_testin= g001,

Input(s):
Failed to read data from "sandbox.suppliers"

Output(s):
Failed to produce result in "/tmp/pig_testing001"

Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0

Job DAG:
job_1449847448721_0474


2015-12-30 00:00:53,262 [uber-SubtaskRunner] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2015-12-30 00:00:53,269 [uber-SubtaskRunner] ERROR org.apache.pig.tools.grunt.GruntParser - ERROR 2244: Job failed, hadoop does not return any error message
Hadoop Job IDs executed by Pig: job_1449847448721_0474
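From the trace, the part that fails looks like the HDFS EXECUTE (directory traverse) check on /user/hive/warehouse, which is owned hive:hive with mode drwxrwx---, so only the hive user and the hive group can descend into it. My understanding (please correct me if this is wrong) is that Hive queries work because they go through HiveServer2/Sentry as the hive user, while HCatLoader makes the MapReduce job list the table files in the warehouse directly as edhadmsvc, which is where it gets denied. These are the checks I would run from a shell to confirm; the table path is only a guess based on the default warehouse layout (<database>.db/<table>), so it may differ on our cluster:

    hdfs dfs -ls -d /user/hive/warehouse
    hdfs dfs -getfacl /user/hive/warehouse
    hdfs dfs -ls /user/hive/warehouse/sandbox.db/suppliers

Does this mean edhadmsvc needs to be in the hive group (or granted an HDFS ACL on the warehouse), or is there a Sentry HDFS permission-sync setting that should already be handling this?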



Thanks
Jay