hadoop-mapreduce-user mailing list archives

From David Patterson <patt...@gmail.com>
Subject Re: Permission Denied
Date Mon, 02 Mar 2015 14:13:38 GMT
David,

Thanks for the information. I've issued those two commands in my hadoop
user's shell and still get the same error when I try to initialize Accumulo
in *its* shell:

2015-03-02 13:30:41,175 [init.Initialize] FATAL: Failed to initialize filesystem
   org.apache.hadoop.security.AccessControlException: Permission denied:
   user=accumulo, access=WRITE, inode="/accumulo":accumulo.supergroup:supergroup:drwxr-xr-x
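
(For what it's worth, listing the root directory as the hadoop user should show
the same owner/group/mode for /accumulo that the exception reports:

    hadoop fs -ls /
    # expect the /accumulo entry to show owner accumulo.supergroup,
    # group supergroup, and mode drwxr-xr-x
)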

My comment that I had 3 users was meant in a linux sense, not in a hadoop
sense. So (to borrow terminology from RDF or XML) is there something I have
to do in my hadoop setup (running under linux:hadoop) or my accumulo setup
(running under linux:accumulo) so that the accumulo I/O gets processed as
coming from someone in the hadoop:supergroup?
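
(If the answer is group membership, my guess -- assuming the default mapping,
which resolves a user's groups from the local linux groups on the namenode
host -- is that something like this would put linux:accumulo into supergroup:

    sudo groupadd supergroup                  # only if the group doesn't already exist
    sudo usermod -a -G supergroup accumulo    # add the accumulo linux user to it
)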


I tried running the accumulo init as the linux:hadoop user and it worked.
I'm not sure if any permissions/etc. were hosed by doing it there. I'll see.
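
(If running init as linux:hadoop leaves the resulting files owned by the wrong
HDFS user, I assume a recursive chown afterwards would put things back:

    hadoop fs -chown -R accumulo:supergroup /accumulo
)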

Thanks for your help.

(By the way, is it wrong or a bad idea to split the work into three
linux:users, or should it all be done in one linux:user space?)

Dave Patterson

On Sun, Mar 1, 2015 at 8:35 PM, dlmarion <dlmarion@comcast.net> wrote:

> hadoop fs -mkdir /accumulo
> hadoop fs -chown accumulo:supergroup /accumulo
>
>
>
> -------- Original message --------
> From: David Patterson <patterd@gmail.com>
> Date:03/01/2015 7:04 PM (GMT-05:00)
> To: user@hadoop.apache.org
> Cc:
> Subject: Re: Permission Denied
>
> David,
>
> Thanks for the reply.
>
> Taking the questions in the opposite order, my accumulo-site.xml does not
> have volumes specified.
>
> I edited the accumulo-site.xml so it now has
>   <property>
>     <name>instance.volumes</name>
>     <value>hdfs://localhost:9000/accumulo</value>
>     <description>comma separated list of URIs for volumes. example:
> hdfs://localhost:9000/accumulo</description>
>   </property>
>
> and got the same error.
>
> How can I precreate /accumulo ?
>
> Dave Patterson
>
> On Sun, Mar 1, 2015 at 3:50 PM, david marion <dlmarion@hotmail.com> wrote:
>
>>  It looks like / is owned by hadoop.supergroup and the perms are 755. You
>> could precreate /accumulo and chown it appropriately, or set the perms for
>> / to 775. Init is trying to create /accumulo in hdfs as the accumulo user
>> and your perms don't allow it.
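>>
>> For the second option, that would be roughly (run as the hdfs superuser,
>> i.e. your hadoop user):
>>
>>     hadoop fs -chmod 775 /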
>>
>>  Do you have instance.volumes set in accumulo-site.xml?
>>
>>
>> -------- Original message --------
>> From: David Patterson <patterd@gmail.com>
>> Date:03/01/2015 3:36 PM (GMT-05:00)
>> To: user@hadoop.apache.org
>> Cc:
>> Subject: Permission Denied
>>
>>        I'm trying to create an Accumulo/Hadoop/Zookeeper configuration
>> on a single (Ubuntu) machine, with Hadoop 2.6.0, Zookeeper 3.4.6 and
>> Accumulo 1.6.1.
>>
>>  I've got 3 userids for these components that are in the same group and
>> no other users are in that group.
>>
>>  I have zookeeper running, and hadoop as well.
>>
>> Hadoop's core-site.xml file has the hadoop.tmp.dir set to
>> /app/hadoop/tmp. The /app/hadoop/tmp directory is owned by the hadoop user
>> and has permissions that allow other members of the group to write
>> (drwxrwxr-x).
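>>
>> (For reference, the relevant core-site.xml entry, roughly as I have it, is:
>>
>>     <property>
>>       <name>hadoop.tmp.dir</name>
>>       <value>/app/hadoop/tmp</value>
>>     </property>
>> )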
>>
>>  When I try to initialize Accumulo, with bin/accumulo init, I get FATAL:
>> Failed to initialize filesystem.
>>  org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=accumulo, access=WRITE, inode="/":hadoop:supergroup:drwxr-xr-x
>>
>>  So, my main question is which directory do I need to give group-write
>> permission so the accumulo user can write as needed so it can initialize?
>>
>>  The second problem is that the Accumulo init reports
>> [Configuration.deprecation] INFO : fs.default.name is deprecated.
>> Instead use fs.defaultFS. However, the hadoop core-site.xml file contains:
>>     <name>fs.defaultFS</name>
>>     <value>hdfs://localhost:9000</value>
>>
>>  Is there somewhere else that this value (fs.default.name) is specified?
>> Could it be due to Accumulo having a default value and not getting the
>> override from hadoop because of the problem listed above?
>>
>>  Thanks
>>
>>  Dave Patterson
>>  patterd@gmail.com
>>
>
>
