From: Marcos Sousa
Date: Wed, 27 Mar 2013 17:02:08 -0300
Subject: Hadoop Mapreduce fails with permission management enabled
To: user@hadoop.apache.org

I enabled permission management in my Hadoop cluster, but I'm facing a problem submitting jobs with Pig. This is the scenario:

1 - I have a hadoop/hadoop user.

2 - I have a myuserapp/myuserapp user that runs the Pig scripts.

3 - We set up the path /myapp to be owned by myuserapp.

4 - We set pig.temp.dir to /myapp/pig/tmp (see the commands just below for how that tree was prepared).
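
For reference, this is roughly how the tree was prepared (a sketch, run as the HDFS superuser; hadoop fs -mkdir creates parent directories in Hadoop 1.x):

# create the Pig temp path and hand the whole tree to the app user
hadoop fs -mkdir /myapp/pig/tmp
hadoop fs -chown -R myuserapp:myuserapp /myapp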

But when Pig tries to run the jobs, we get the following error:


job_201303221059_0009    all_actions,filtered,raw_data    DISTINCT    Message: Job failed!
Error - Job initialization failed: org.apache.hadoop.security.AccessControlException:
org.apache.hadoop.security.AccessControlException: Permission denied: user=realtime,
access=EXECUTE, inode="system":hadoop:supergroup:rwx------

The Hadoop JobTracker requires this permission to start up its server.
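
The inode named "system" in the error looks like the JobTracker's mapred.system.dir. Its ownership and mode can be inspected with the commands below (the /tmp/hadoop-hadoop prefix is only a guess at our hadoop.tmp.dir layout; substitute whatever mapred.system.dir resolves to on your cluster):

# hypothetical location; adjust to your mapred.system.dir
hadoop fs -ls /tmp/hadoop-hadoop/mapred
hadoop fs -ls /tmp/hadoop-hadoop/mapred/system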

My hadoop-policy.xml looks like:


<property>
<name>security.client.datanode.protocol.acl</name>
<value>hadoop,myuserapp supergroup,myuserapp</value>
</property>
<property>
<name>security.inter.tracker.protocol.acl</name>
<value>hadoop,myuserapp supergroup,myuserapp</value>
</property>
<property>
<name>security.job.submission.protocol.acl</name>
<value>hadoop,myuserapp supergroup,myuserapp</value>
<property>

My hdfs-site.xml:


<property>
<name>dfs.permissions</name>
<value>true</value>
</property>

<property>
 <name>dfs.datanode.data.dir.perm</name>
 <value>755</value>
</property>

<property>
 <name>dfs.web.ugi</name>
 <value>hadoop,supergroup</value>
</property>

My core-site.xml:


...
<property>
<name>hadoop.security.authorization</name>
<value>true</value>
</property>
...

And finally, my mapred-site.xml:


...
<property>
 <name>mapred.local.dir</name>
 <value>/tmp/mapred</value>
</property>

<property>
 <name>mapreduce.jobtracker.jobhistory.location</name>
 <value>/opt/logs/hadoop/history</value>
</property>

<property>
<name>mapreduce.jobtracker.staging.root.dir</name&= gt;
<value>/user</value>
</property>

Is there a missing configuration? How can I deal with multiple users running jobs in a restricted HDFS cluster?
