From: Chris Collins
To: core-user@hadoop.apache.org
Subject: Re: client connect as different username?
Date: Thu, 12 Jun 2008 08:10:20 -0700

Thanks Nicholas, I read it yet again (OK, only the third time). Yes, it talks of whoami; I actually knew that from single-stepping the client too, but I was still stuck. It mentioned the POSIX model, which I had also kind of guessed from the javadocs. Doug's note clearly states that "no foo account need exist on the namenode" and that the only exception is the user that started the server. I didn't get that clarity from the permissions doc. Perhaps it could use an example for the case where there are users other than the one that started the server... I would have thought this was a common one.

In our office we dumped this on a bunch of Linux boxes that all share the same username, but all our developers are using Macs with their own usernames, and they don't expect to have their own accounts on the Linux boxes (cause we are lazy that way).

For instance, all it takes to let a Mac user with a login of bob access things under /bob is for me to go in as the superuser and do something like:

  hadoop dfs -mkdir /bob
  hadoop dfs -chown bob /bob

where bob literally doesn't exist on the HDFS box and was not mentioned prior to those two commands.
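And from bob's side it should then just work with the normal commands, something like this (untested off the top of my head; notes.txt is only an example file):

  # from bob's Mac, where `whoami` returns bob and no bob account
  # exists on any of the HDFS machines:
  hadoop dfs -put notes.txt /bob/notes.txt
  hadoop dfs -ls /bob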
On Jun 11, 2008, at 10:00 PM, s29752-hadoopuser@yahoo.com wrote:

> This information can be found in
> http://hadoop.apache.org/core/docs/current/hdfs_permissions_guide.html
>
> Nicholas
>
> ----- Original Message ----
>> From: Chris Collins
>> To: core-user@hadoop.apache.org
>> Sent: Wednesday, June 11, 2008 9:31:18 PM
>> Subject: Re: client connect as different username?
>>
>> Thanks Doug, should this be added to the permissions doc or to the
>> FAQ? See you in Sonoma.
>>
>> C
>>
>> On Jun 11, 2008, at 9:15 PM, Doug Cutting wrote:
>>
>>> Chris Collins wrote:
>>>> You are referring to creating a directory in HDFS? Because if I am
>>>> user chris and the HDFS only has user foo, then I can't create a
>>>> directory because I don't have perms, in fact I can't even connect.
>>>
>>> Today, users and groups are declared by the client. The namenode
>>> only records and checks against user and group names provided by the
>>> client. So if someone named "foo" writes a file, then that file is
>>> owned by someone named "foo" and anyone named "foo" is the owner of
>>> that file. No "foo" account need exist on the namenode.
>>>
>>> The one (important) exception is the "superuser". Whatever user
>>> name starts the namenode is the superuser for that filesystem. And
>>> if "/" is not world-writable, a new filesystem will not contain a
>>> home directory (or anywhere else) writable by other users. So, in a
>>> multiuser Hadoop installation, the superuser needs to create home
>>> directories and project directories for other users and set their
>>> protections accordingly before other users can do anything. Perhaps
>>> this is what you've run into?
>>>
>>> Doug
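PS, on the original subject line (connecting as a different username): the only client-side knob I'm aware of in this pre-security codebase is the hadoop.job.ugi property, which takes "user,group1,group2". This is an untested sketch from memory, so please check it against your Hadoop version before relying on it:

  # Assumption: hadoop.job.ugi overrides the whoami-derived identity on the
  # client, and FsShell accepts it through the -D generic option.
  hadoop dfs -D hadoop.job.ugi=bob,users -ls /bob
  hadoop dfs -D hadoop.job.ugi=bob,users -put notes.txt /bob/notes.txt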