hadoop-mapreduce-user mailing list archives

From Mohammad Tariq <donta...@gmail.com>
Subject Re: Permission problem
Date Tue, 30 Apr 2013 17:06:51 GMT
Sorry Kevin, I was away for a while. Are you good now?

Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com


On Tue, Apr 30, 2013 at 9:50 PM, Arpit Gupta <arpit@hortonworks.com> wrote:

> Kevin
>
> You will have to create a new account if you do not already have one.
>
> --
> Arpit
>
> On Apr 30, 2013, at 9:11 AM, Kevin Burton <rkevinburton@charter.net>
> wrote:
>
> I don’t see a “create issue” button or tab. If I need to log in, I am not
> sure which credentials to use, because everything I tried failed.
>
>
>
> [attached image: image001.png]
>
>
>
> *From:* Arpit Gupta [mailto:arpit@hortonworks.com]
>
> *Sent:* Tuesday, April 30, 2013 11:02 AM
> *To:* user@hadoop.apache.org
> *Subject:* Re: Permission problem
>
>
>
> Go to https://issues.apache.org/jira/browse/HADOOP and select "Create Issue".
>
>
>
> Set the affected version to the release you are testing and add a basic
> description.
>
>
>
> Here are the commands you should run.
>
>
>
> sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
>
> and
>
> sudo -u hdfs hadoop fs -chmod -R 777 /data
>
>
>
> The chmod is also for the directory on HDFS.
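>
> (As an illustrative check only, not part of the original advice: after running
> the two commands above, listings such as
>
> hadoop fs -ls -d /data
> hadoop fs -ls -d /data/hadoop/tmp
>
> should show drwxrwxrwx for both paths.)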
>
>
> --
> Arpit Gupta
>
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
> On Apr 30, 2013, at 8:57 AM, "Kevin Burton" <rkevinburton@charter.net>
> wrote:
>
>
>
> I am not sure how to create a jira.
>
>
>
> Again I am not sure I understand your workaround. You are suggesting that
> I create /data/hadoop/tmp on HDFS like:
>
>
>
> sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
>
>
>
> I don’t think I can chmod -R 777 on /data since it is a disk and, as I
> indicated, it is being used to store data other than that used by Hadoop.
> Even chmod -R 777 on /data/hadoop seems extreme, as there are dfs, mapred,
> and tmp folders there. Which of these local folders needs to be opened up? I
> would rather not open up all folders to the world if at all possible.
>
>
>
> *From:* Arpit Gupta [mailto:arpit@hortonworks.com]
> *Sent:* Tuesday, April 30, 2013 10:48 AM
> *To:* Kevin Burton
> *Cc:* user@hadoop.apache.org
> *Subject:* Re: Permission problem
>
>
>
> It looks like hadoop.tmp.dir is being used for both local and HDFS
> directories. Can you create a JIRA for this?
>
>
>
> What I recommended is that you create /data/hadoop/tmp on HDFS and chmod
> -R 777 /data.
>
>
>
> --
> Arpit Gupta
>
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
> On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <rkevinburton@charter.net>
> wrote:
>
>
>
>
> I am not clear on what you are suggesting to create on HDFS versus the local
> file system. As I understand it, hadoop.tmp.dir is on the local file system. I
> changed it so that the temporary files would be on a disk that has more
> capacity than /tmp. So you are suggesting that I create /data/hadoop/tmp on
> HDFS. I already have this created:
>
>
>
> Found 1 items
> drwxr-xr-x   - mapred supergroup          0 2013-04-29 15:45 /tmp/mapred
> kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
> Found 1 items
> drwxrwxrwt   - hdfs supergroup          0 2013-04-29 15:45 /tmp
>
>
>
> When you suggest that I ‘chmod -R 777 /data’, are you suggesting that I
> open up all the data to everyone? Isn’t that a bit extreme? First, /data is
> the mount point for this drive and there are other uses for this drive than
> Hadoop, so there are other folders. That is why there is /data/hadoop. As
> far as Hadoop is concerned:
>
>
>
> kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
> total 12
> drwxrwxr-x 4 hdfs   hadoop 4096 Apr 29 16:38 dfs
> drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
> drwxrwxrwx 3 hdfs   hadoop 4096 Apr 19 15:14 tmp
>
>
>
> dfs would be where the data blocks for the hdfs file system would go,
> mapred would be the folder for M/R jobs, and tmp would be temporary
> storage. These are all on the local file system. Do I have to make all of
> this read-write for everyone in order to get it to work?
>
>
>
> *From:* Arpit Gupta [mailto:arpit@hortonworks.com]
> *Sent:* Tuesday, April 30, 2013 10:01 AM
> *To:* user@hadoop.apache.org
> *Subject:* Re: Permission problem
>
>
>
> Ah, this is what mapred.system.dir defaults to:
>
>
>
> <property>
>   <name>mapred.system.dir</name>
>   <value>${hadoop.tmp.dir}/mapred/system</value>
>   <description>The directory where MapReduce stores control files.
>   </description>
> </property>
>
>
>
>
>
> So that's why it's trying to write to
> /data/hadoop/tmp/hadoop-mapred/mapred/system.
>
>
>
>
>
> So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name},
> then I would suggest that you create /data/hadoop/tmp on HDFS and chmod -R
> 777 /data, or you can remove hadoop.tmp.dir from your configs and let it be
> set to the default value of:
>
>
>
> <property>
>   <name>hadoop.tmp.dir</name>
>   <value>/tmp/hadoop-${user.name}</value>
>   <description>A base for other temporary directories.</description>
> </property>
>
>
>
> So to fix your problem you can do the above, or set mapred.system.dir to
> /tmp/mapred/system in your mapred-site.xml.
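>
> (For illustration only, a sketch of that explicit mapred-site.xml entry,
> not a verified config from this thread:
>
> <property>
>   <name>mapred.system.dir</name>
>   <value>/tmp/mapred/system</value>
>   <description>The directory where MapReduce stores control files.</description>
> </property>
>
> A jobtracker restart would typically be needed for the change to take effect.)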
>
>
> --
> Arpit Gupta
>
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
> On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <rkevinburton@charter.net>
> wrote:
>
>
>
>
>
> In core-site.xml I have:
>
>
>
> <property>
>   <name>fs.default.name</name>
>   <value>hdfs://devubuntu05:9000</value>
>   <description>The name of the default file system. A URI whose scheme and
>   authority determine the FileSystem implementation.</description>
> </property>
>
>
>
> In hdfs-site.xml I have:
>
>
>
> <property>
>   <name>hadoop.tmp.dir</name>
>   <value>/data/hadoop/tmp/hadoop-${user.name}</value>
>   <description>Hadoop temporary folder</description>
> </property>
>
>
>
>
>
> *From:* Arpit Gupta [mailto:arpit@hortonworks.com]
> *Sent:* Tuesday, April 30, 2013 9:48 AM
> *To:* Kevin Burton
> *Cc:* user@hadoop.apache.org
> *Subject:* Re: Permission problem
>
>
>
> Based on the logs your system dir is set to
>
>
>
> hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
>
>
>
>
>
> What are your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
>
>
>
>
>
>
> --
> Arpit Gupta
>
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
> On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rkevinburton@charter.net>
> wrote:
>
> Thank you.
>
>
>
> mapred.system.dir is not set. I am guessing that it is whatever the
> default is. What should I set it to?
>
>
>
> /tmp is already 777
>
>
>
> kevin@devUbuntu05:~$ hadoop fs -ls /tmp
> Found 1 items
> drwxr-xr-x   - hdfs supergroup          0 2013-04-29 15:45 /tmp/mapred
> kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
> Found 1 items
> drwxrwxrwt   - hdfs supergroup          0 2013-04-29 15:45 /tmp
>
>
>
> But notice that the mapred folder in the /tmp folder is 755.
>
> So I changed it:
>
>
>
> kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
> drwxrwxrwt   - hdfs supergroup          0 2013-04-29 15:45 /tmp
>
> kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
> drwxr-xr-x   - mapred supergroup          0 2013-04-29 15:45 /tmp/mapred
> drwxr-xr-x   - mapred supergroup          0 2013-04-29 15:45 /tmp/mapred/system
>
>
>
> I still get the errors in the log file:
>
>
>
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to operate on mapred.system.dir (hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system) because of permissions.
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This directory should be owned by the user 'mapred (auth:SIMPLE)'
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing out ...
> . . . . .
> org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> . . . . .
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> 2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> . . . . . .
>
>
>
>
>
> *From:* Arpit Gupta [mailto:arpit@hortonworks.com]
> *Sent:* Tuesday, April 30, 2013 9:25 AM
> *To:* user@hadoop.apache.org
> *Subject:* Re: Permission problem
>
>
>
> what is your mapred.system.dir set to in mapred-site.xml?
>
>
>
> By default it will write to /tmp on hdfs.
>
>
>
> So you can do the following:
>
> Create /tmp on HDFS and chmod it to 777 as user hdfs, and then restart the
> jobtracker and tasktrackers.
>
>
>
> In case it's set to /mapred/something, then create /mapred and chown it to
> user mapred.
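>
> (A rough sketch of those steps, assuming the hdfs and mapred users from this
> cluster and the paths discussed above:
>
> sudo -u hdfs hadoop fs -mkdir /tmp
> sudo -u hdfs hadoop fs -chmod 777 /tmp
>
> or, if mapred.system.dir points somewhere under /mapred:
>
> sudo -u hdfs hadoop fs -mkdir /mapred
> sudo -u hdfs hadoop fs -chown mapred /mapred
>
> followed by a restart of the jobtracker and tasktrackers.)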
>
>
>
>
> --
> Arpit Gupta
>
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
> On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rkevinburton@charter.net>
> wrote:
>
> To further complicate the issue, the log file
> (/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log)
> is owned by mapred:mapred, yet the name of the file seems to indicate some
> other lineage (hadoop,hadoop). I am out of my league in understanding the
> permission structure for Hadoop HDFS and MR. Ideas?
>
>
>
> *From:* Kevin Burton [mailto:rkevinburton@charter.net]
> *Sent:* Tuesday, April 30, 2013 8:31 AM
> *To:* user@hadoop.apache.org
> *Cc:* 'Mohammad Tariq'
> *Subject:* RE: Permission problem
>
>
>
> That is what I perceive as the problem. The HDFS file system was created
> with the user ‘hdfs’ owning the root (‘/’), but for some reason, for an M/R
> job the user ‘mapred’ needs to have write permission to the root. I don’t
> know how to satisfy both conditions. That is one reason that I relaxed the
> permissions to 775, so that the group would also have write permission, but
> that didn’t seem to help.
>
>
>
> *From:* Mohammad Tariq [mailto:dontariq@gmail.com]
> *Sent:* Tuesday, April 30, 2013 8:20 AM
> *To:* Kevin Burton
> *Subject:* Re: Permission problem
>
>
>
> Which user? "ls" shows "hdfs" and the log says "mapred".
>
>
> Warm Regards,
>
> Tariq
>
> https://mtariq.jux.com/
>
> cloudfront.blogspot.com
>
>
>
> On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rkevinburton@charter.net>
> wrote:
>
> I have relaxed it even further, so now it is 775:
>
>
>
> kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
> Found 1 items
> drwxrwxr-x   - hdfs supergroup          0 2013-04-29 15:43 /
>
>
>
> But I still get this error:
>
>
>
> 2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
>
>
>
>
>
> *From:* Mohammad Tariq [mailto:dontariq@gmail.com]
> *Sent:* Monday, April 29, 2013 5:10 PM
> *To:* user@hadoop.apache.org
> *Subject:* Re: Incompartible cluserIDS
>
>
>
> make it 755.
>
>
> Warm Regards,
>
> Tariq
>
> https://mtariq.jux.com/
>
> cloudfront.blogspot.com
