hadoop-common-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: hadoop cluster ssh username
Date Tue, 06 Mar 2012 22:45:10 GMT

On Wed, Mar 7, 2012 at 4:10 AM, Pat Ferrel <pat@farfetchers.com> wrote:
> Thanks, #2 below gets me partway.
> I can start-all.sh and stop-all.sh from the laptop and can fs -ls but
> copying gives me:
> Maclaurin:mahout-distribution-0.6 pferrel$ fs -copyFromLocal
> wikipedia-seqfiles/ wikipedia-seqfiles/
> 2012-03-06 13:45:04.225 java[7468:1903] Unable to load realm info from
> SCDynamicStore
> copyFromLocal: org.apache.hadoop.security.AccessControlException: Permission
> denied: user=pferrel, access=WRITE, inode="user":pat:supergroup:rwxr-xr-x

This is a different issue now: it is about HDFS
permissions, not cluster start/stop.

Yes, you have files (or daemons) that were created as
username pat, while you are now accessing HDFS as pferrel (your local
user). There is no way to work around this; you will need to fix the
ownership with "hadoop fs -chmod/-chown" and the like. Alternatively,
if you do not need permissions at all, you can disable them: set
dfs.permissions to false in the NameNode's hdfs-site.xml and restart
the NN.
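As a sketch of the -chown fix above (the /user/pferrel path is an assumption
based on the "user=pferrel" in the error; adjust to your actual home
directory), run these as the HDFS superuser, i.e. the user the NameNode
daemon runs as ("pat" here):

```shell
# Create pferrel's HDFS home directory if it does not exist yet
hadoop fs -mkdir /user/pferrel

# Hand ownership to pferrel so he gets WRITE access there
hadoop fs -chown -R pferrel:supergroup /user/pferrel

# Verify the new owner shows up in the listing
hadoop fs -ls /user
```

After that, relative paths like wikipedia-seqfiles/ in a -copyFromLocal
resolve under /user/pferrel and the AccessControlException should go away.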

Harsh J
