Subject: Re: Permissions
From: Amal G Jose <amalgjos@gmail.com>
To: user@hadoop.apache.org
Date: Sat, 11 May 2013 01:02:30 +0530

After starting HDFS (i.e. the NameNode, SecondaryNameNode, and DataNodes), create an HDFS directory structure of the form /<hadoop.tmp.dir>/mapred/staging. Then give 777 permissions to staging, and change the ownership of the mapred directory to the mapred user. Once that is done, start the JobTracker and it will come up; otherwise it will not start.

The reason no datanodes are showing may be a firewall. Check whether the necessary ports are open.
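The steps above can be sketched as shell commands. This is only a sketch: it assumes hadoop.tmp.dir is /tmp/hadoop (check core-site.xml for your actual value), that you can sudo to the hdfs superuser, and that the host names namenode-host and datanode-host are placeholders for your own machines.

```shell
# Sketch of the steps above (assumes hadoop.tmp.dir = /tmp/hadoop;
# check the value in your core-site.xml before running).

# Create the staging path as the hdfs superuser. -p needs a Hadoop 2.x
# fs shell; on 1.x, create each level with a plain -mkdir instead.
sudo -u hdfs hadoop fs -mkdir -p /tmp/hadoop/mapred/staging

# Open up the staging directory, then hand mapred ownership of its parent.
sudo -u hdfs hadoop fs -chmod 777 /tmp/hadoop/mapred/staging
sudo -u hdfs hadoop fs -chown mapred /tmp/hadoop/mapred

# For the missing-datanodes symptom: from another node, verify the usual
# ports are reachable (50070 = NameNode web UI, 50010 = DataNode data
# transfer, 50030 = JobTracker web UI; adjust if your configs differ).
nc -zv namenode-host 50070
nc -zv datanode-host 50010
nc -zv namenode-host 50030
```

If `nc` reports a connection refused or times out on a port the daemon should be serving, the firewall (or a daemon that never started) is the likely cause.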
On Tue, Apr 30, 2013 at 2:28 AM, <rkevinburton@charter.net> wrote:

> I looked in the name node log and I get the following errors:
>
> 2013-04-29 15:25:11,646 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:mapred (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
>
> 2013-04-29 15:25:11,646 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.mkdirs from 172.16.26.68:45044: error: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
> org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
>
> When I created the file system I made the user hdfs the owner of the root folder (/). I am not sure how to have both the user mapred and hdfs have access to the root (which is what these errors seem to indicate).
>
> I get a page from port 50070, but when I try to browse the filesystem from the web UI I get an error that there are no nodes listening (I have 3 data nodes and 1 namenode). The browser indicates that there is nothing listening on port 50030, so it seems that the JobTracker is not up.