From: Hemanth Yamijala <hemanty@thoughtworks.com>
To: user@hadoop.apache.org
Date: Sat, 20 Apr 2013 15:32:12 +0530
Subject: Re: jobtracker is stopping because of permissions

/mnt/san1 is owned by aye:hadmin, and user mapred is trying to write to
this directory. Can you look at your core-, hdfs- and mapred-site.xml to
see where /mnt/san1 is configured as a value? That might make it clearer
what needs to be changed. I suspect this is one of the system directories
that the JobTracker has to manage on HDFS to run jobs.

Thanks
Hemanth

On Fri, Apr 19, 2013 at 11:42 AM, Mohit Vadhera
<project.linux.proj@gmail.com> wrote:
> Can anybody help me start the jobtracker service? It is urgent. It looks
> like a permission issue.
> What permission should be given, and on which directory? I am pasting
> the log below.
> The service starts and then stops.
>
> 2013-04-19 02:21:06,388 FATAL org.apache.hadoop.mapred.JobTracker:
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=mapred, access=WRITE, inode="/mnt/san1":aye:hadmin:drwxr-xr-x
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:135)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4547)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4518)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2880)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2844)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2823)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:639)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:417)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44096)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
>
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>
> Thanks,
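For what it's worth, the two steps above (find where /mnt/san1 is configured, then let mapred write there) can be sketched as shell commands. The config directory and the mapred.system.dir property are assumptions about a typical installation; adjust them to your cluster. The HDFS commands are shown as comments because they require superuser rights on a running cluster.

```shell
#!/bin/sh
# Step 1: find which *-site.xml file mentions /mnt/san1.
# HADOOP_CONF_DIR is an assumed location; override it for your install.
HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/etc/hadoop/conf}

grep -H 'mnt/san1' "$HADOOP_CONF_DIR"/core-site.xml \
                   "$HADOOP_CONF_DIR"/hdfs-site.xml \
                   "$HADOOP_CONF_DIR"/mapred-site.xml 2>/dev/null

# Step 2: if the match is the JobTracker system directory
# (e.g. mapred.system.dir pointing under /mnt/san1), give the
# mapred user write access on HDFS as the HDFS superuser, e.g.:
#
#   sudo -u hdfs hadoop fs -chown -R mapred /mnt/san1/mapred
#
# or, less restrictively, relax the mode on the parent:
#
#   sudo -u hdfs hadoop fs -chmod 775 /mnt/san1
```

Changing ownership of just the mapred subtree is usually preferable to opening up /mnt/san1 itself, since the error shows mapred only needs to create directories under it.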