From: Steve Lewis <lordjoe2000@gmail.com>
To: mapreduce-user@hadoop.apache.org
Date: Thu, 6 Dec 2012 11:35:00 -0800
Subject: File permissions in HDFS
I am running Hadoop jobs on a cluster whose jobtracker and file system are on a machine in my network called mycluster.

=A0conf.set("fs.de= fault.name","hdfs://mycluster:9000");=A0
=A0conf.set("mapred.job.tracker","mycluster:9001);=A0

If I set these values in the configuration as shown, set the configuration in a Tool, and call the Tool's run command, the job runs on the cluster.
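
In case it helps, the driver is trimmed down to roughly this (a rough sketch; MyJobTool stands in for my real Tool and the actual job setup is elided):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class MyJobTool extends Configured implements Tool {
        public int run(String[] args) throws Exception {
            // build and submit the MapReduce job here using getConf()
            return 0;
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // point the client at the remote cluster
            conf.set("fs.default.name", "hdfs://mycluster:9000");
            conf.set("mapred.job.tracker", "mycluster:9001");
            System.exit(ToolRunner.run(conf, new MyJobTool(), args));
        }
    }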

My problem is that even when in the configuration I say

    conf.set("user.name", "MyDesiredUser");

the job runs as the local user.

On my cluster the files and directories are created as the local user StevesPc\Steve.
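
For what it is worth, when I ask the client libraries which user they think is acting, I get the local account back rather than MyDesiredUser. I check it with a throwaway snippet like this (WhoAmI is just a scratch class):

    import java.io.IOException;
    import org.apache.hadoop.security.UserGroupInformation;

    public class WhoAmI {
        public static void main(String[] args) throws IOException {
            // prints the user the Hadoop client believes it is acting as
            UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
            System.out.println("running as: " + ugi.getUserName());
        }
    }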
In my cluster hdfs-site.xml has the property

<property>
    <name>dfs.permissions</name>
    <value>false</value>
    <final>true</final>
</property>

and in hadoop-policy.xml all properties are set to * as shown below
<property>
    <name>security.client.protocol.acl</name>
    <value>*</value>
</property>

On my cluster things work but on other clusters they do not. The documentation is not very clear on permissions and I would like a good discussion on this issue. Ideally I would want to set the user (and provide a password) for a Hadoop job but still run on the local box.
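
The closest thing I have found is UserGroupInformation.createRemoteUser plus doAs, roughly as sketched below (untested on my side; no password is involved, and MyDesiredUser / MyJobTool are just placeholders). Is that the intended way to pick the user for a job, or is there something better?

    import java.security.PrivilegedExceptionAction;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;
    import org.apache.hadoop.util.ToolRunner;

    public class SubmitAsUser {
        public static void main(String[] args) throws Exception {
            // everything inside doAs runs as MyDesiredUser
            // (simple authentication only - no password is checked)
            UserGroupInformation ugi = UserGroupInformation.createRemoteUser("MyDesiredUser");
            int status = ugi.doAs(new PrivilegedExceptionAction<Integer>() {
                public Integer run() throws Exception {
                    Configuration conf = new Configuration();
                    conf.set("fs.default.name", "hdfs://mycluster:9000");
                    conf.set("mapred.job.tracker", "mycluster:9001");
                    return ToolRunner.run(conf, new MyJobTool(), new String[0]);
                }
            });
            System.exit(status);
        }
    }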

-- Steven M. Lewis PhD
4221 105th Ave NE
Kirkland, WA 98033
206-384-1340 (cell)
Skype lordjoe_com

